
A world of dichotomies:

Empirically supported treatments or the common factors? Utilising evidence-based practice and practice-based evidence to mediate this discourse and improve practitioner outcomes

by Daryl Mahon

Psychotherapeutic discourse is often filled with provocative nomenclature and split into false dichotomies. The aim of the current paper is to review one such debate. On one side are those advocating for the utilisation of diagnosis-specific Empirically Supported Treatments (ESTs); that is, treatments, often based on protocols and manuals, that have been shown to be effective for specific issues in controlled trials. On the other side are proponents of the Common Factors (CF) approach to therapy; namely, those who argue that therapy works for reasons common to all approaches, and that this generally has little to do with theoretical orientation. This paper investigates current discourses by illuminating these dichotomies through a critical review of the current literature and the many factors that influence the therapy endeavour. Moreover, this review will provide evidence for different factors of psychotherapy theory and practice that have been shown to be effective. Finally, a conceptualisation of the common factors model and the American Psychological Association’s definition of evidence-based practice will be discussed in order to provide practitioners with a framework for making interventions that are situated within best practice. Practice-based evidence will be introduced to readers as a method of assessing the effectiveness of interventions, by focusing on the therapeutic alliance and outcome measurement.

Empirically supported treatments
The American Psychological Association defines evidence-based practice as “the integration of the best available research with clinical expertise in the context of patient characteristics, culture and preferences” (APA, 2006: 273); the concept was originally developed within the medical paradigm in order to improve outcomes. Nevertheless, in recent times evidence-based practice (EBP) has come to be understood as a psychosocial intervention that is supported by evidence of effectiveness in the literature. According to Laska, Gurman and Wampold (2014), in a survey of clinical psychology graduate students the majority identified EBP as synonymous with empirically supported treatments; this understanding was also prevalent among practitioners (Pagoto et al., 2007; Wachtel, 2010; Westen, Novotny & Thompson-Brenner, 2004). This narrative is furthered by EST proponents who postulate specific-ingredient therapies for specific disorders (e.g., Chambless & Crits-Christoph, 2006; Chambless & Hollon, 1998; Siev, Huppert & Chambless, 2009). Provocative nomenclature is utilised to support the narrative, with words such as ‘efficacy’ and ‘statistically significant’, while protocols and fidelity to manualised therapies are propagated as the gold standard.

The implicit message is that if you are not using these therapies then you are not ‘evidence-based’. However, meta-analyses comparing manualised versus non-manualised therapies do not support this contention (Truijens et al., 2018; Vinnars et al., 2005). Indeed, with these therapeutic ‘gold standards’ one would expect outcomes within the field to have progressed substantially over the decades, yet research suggests that outcomes have not improved in 58 years (Weisz et al., 2019).

Within the EST paradigm, the person of the therapist is not considered an important outcome variable; protocols and fidelity to theory and technique are said to compensate for differences between therapists in outcomes and effect sizes. This argument runs counter to that of Baldwin and Imel (2013), who contend that individual therapist attributes account for approximately 5-8% of the variance in outcome, while Wampold (2001) asserts that a mere 1% of outcome variance is attributable to theory and technique. In addition to these concerns, other research suggests that findings from EST trials do not always transfer into naturalistic settings, owing to the controls utilised to improve internal validity in randomised controlled trials. Real-world practice is often very different from research trials, in which client characteristics and therapist factors are kept consistent. Thus, this research evidence is often inconsistent with the average practitioner’s experience in real-life settings (Margison et al., 2000).

Common factors
Common factors refer to effective aspects of therapy that are shared by diverse schools of thought; they are non-specific. Those who advocate a common factors approach point to a large body of evidence from randomised controlled trials and meta-analyses showing equivalence in outcomes between bona fide treatments when compared (e.g., Smith & Glass, 1977; Stiles, Barkham, Mellor-Clark & Connell, 2008; Stiles, Barkham, Twigg, Mellor-Clark & Cooper, 2006; Wampold et al., 1997; Watts et al., 2013). Indeed, the latest fad of trauma-informed treatments would seem to be no more effective than other bona fide treatments (Imel & Wampold, 2008). Moreover, common factor advocates contend that when the specific active ingredients are removed from empirically supported treatments in dismantling studies, the approaches still show outcomes equal to the full-component therapy (e.g., Cahill et al., 1999; Cusack et al., 1999; Bell, Marcus & Goodlad, 2013).

In response to the proliferation of ESTs, common factor proponents put forward the argument that therapeutic outcomes are the result of factors common to all bona fide psychotherapeutic approaches. Indeed, theoretical orientation/techniques account for a minority percentage of variance in outcomes – circa 1% (Laska, Gurman & Wampold, 2014; Wampold et al., 2015). As Lambert contends: “It will not generally matter which kind of psychotherapy is offered as long as it is a bona fide theory-driven intervention” (2013: 43). The discourse within the common factor paradigm offers differentiated frameworks to conceptualise this phenomenon (e.g., Duncan et al., 2010; Rosenzweig, 1936; Wampold & Imel, 2015). Chambless and Crits-Christoph dispute the common factor proposition on what would seem a rigid adherence to philosophical science-based research:

Of all the aspects of psychotherapy that influence outcome, the treatment method is the only aspect in which psychotherapists can be trained, it is the only aspect that can be manipulated in a clinical experiment to test its worth, and, if proven valuable, it is the only aspect that can be disseminated to other psychotherapists.
                                                                                                  (2006: 199)

Nonetheless, the debate regarding whether therapy works through the activation of specific factors, or through the interdependent variables of common factors, remains, because we currently do not have the statistical power or methodologies needed to evidence causality (Cuijpers et al., 2019). However, dismantling studies and equivalent outcomes within the literature provide strong evidence against the specific-ingredient proposition.

Duncan et al. (2014) put forward the following conceptual framework (see Figure 1) for understanding the common factors and their interactions. It is this framework that the current paper utilises to provide common ground and bring together the dichotomies:

The percentages are best viewed as a defensible way to understand outcome variance but not as representing any ultimate truths. They are meta-analytic estimates of what each of the factors contributes to change. Because of the overlap among the common factors, the percentages for the separate factors will not add to 100%.
                                                                                                    (2014: 23)

Figure 1: Common factor model (Duncan et al., 2014).

Practice-based evidence
Outcomes have come under increasing scrutiny from academics, managed care providers and commissioning bodies in recent times. Swisher (2010: 4) explains the concept of practice-based evidence in the following way:

…the real, messy, complicated world is not controlled. Instead, real world practice is documented and measured, just as it occurs, “warts” and all. It is the process of measurement and tracking that matters, not controlling how practice is delivered.

Psychosocial interventions delivered within therapeutic settings are well established within the extant literature as having strong evidence of efficacy and effectiveness (Lambert, 2013; Lambert & Ogles, 2004). Meta-analytic studies conclude that recipients of such interventions greatly benefit when compared to non-treated individuals, with aggregated effect sizes ranging from 0.75-0.85 (Hansen, Lambert & Forman, 2002; Lambert, 2013; Wampold & Imel, 2015).

Nevertheless, the overall effectiveness of counselling and psychotherapy has not progressed in relation to client outcomes in over four decades, despite the emergence of hundreds of empirically supported treatments (Wampold et al., 1997; Weisz et al., 2019). This suggests that something other than specific therapy ingredients based on diagnosis-treatment paradigms is at play. This is further reinforced by longitudinal research in naturalistic settings suggesting that, on the whole, therapists became slightly less effective over time (Goldberg et al., 2016). Moreover, research illustrates that approximately 5-10% of those engaged in counselling and psychotherapy actually deteriorate while in treatment (Hansen & Lambert, 2003; Hansen, Lambert & Forman, 2002; Lambert & Ogles, 2004). More worryingly, this statistic is higher for young people, at approximately 24% (Nelson et al., 2013). Lambert (2017) postulates that 30% of patients fail to respond during clinical trials, and as many as 65% of patients in routine care leave treatment without a measured benefit.

Randomised controlled trials and meta-analyses within the literature on routine outcome measurement suggest that intentionally eliciting live feedback from clients within sessions can improve therapy outcomes, reduce dropout rates, and identify those at risk of deterioration or null outcomes (Harmon et al., 2007; Hawkins et al., 2004). Moreover, research suggests that when practitioners assess clients informally they do not accurately predict which clients will deteriorate, drop out or experience null outcomes (Østergård, Randa & Hougaard, 2018). Thus, the utilisation of such processes and procedures could serve to improve outcomes for practitioners and clients. This is further reinforced by studies which contend that the use of such feedback systems produces outcomes that are 2.5 times better than treatment as usual (e.g., Brattland et al., 2018), and that its use can cut the rates of those at risk of deterioration and dropout by 50% (Harmon et al., 2007; Hawkins et al., 2004).

According to Phelps, Eisman and Kohout (1998), despite the numerous measurement methodologies at the disposal of therapists, few clinicians utilise them, and outcome data collection is rare. Hatfield and Ogles (2004) conducted a national survey of psychologists and found that uptake of such instruments was limited by perceived barriers such as time, money and the practicalities of in-session use. Interestingly, this links to wider issues of under-utilisation of feedback data by therapists (Lambert, 2017; De Jong et al., 2012). Carlier and Van Eeden (2017) suggest that clinicians should be trained in administering and interpreting such measures, and in using feedback to discuss treatment progress, stagnation, decline and goal setting with clients.

In response to some of these concerns, Miller and Bargmann (2012) discuss two brief four-item instruments. The Outcome Rating Scale (ORS), based on a shortened version of the Outcome Questionnaire 45, captures data on client progress that can be aggregated in order to determine a therapist’s overall effectiveness. The Session Rating Scale (SRS), based on a shortened version of the Working Alliance Inventory (Horvath & Greenberg, 1989), assesses the quality of the therapeutic alliance, which is a key indicator of the effectiveness of therapy (Wampold, 2014).

Both instruments can be administered in different modes – individual, couple and group therapy; with adults, children and adolescents; and across differential clinical presentations. Moreover, each scale has clinical cut-off scores, linked to normative data, that depend on the client’s age. Taken together, these reliable, validated and psychometrically sound outcome measures make up the main components of a pan-theoretical approach, Feedback Informed Treatment (FIT), which has evidence-based practice status in the Substance Abuse and Mental Health Services Administration’s (SAMHSA) National Registry.

Evidence-based practice operationalised
Thus far we have examined one of the main debates within psychotherapy: common factors and empirically supported treatments pitted against one another, fighting for position as the most prominent method. However, this paper contends that such rivalry is based on a false dichotomy, as both aspects are interdependent and necessary for therapeutic change to occur. Therefore, this paper puts forward a framework based on the full utilisation of a common factor model and empirically supported treatments within the definition of evidence-based practice as conceptualised by the American Psychological Association (2006).

In order to achieve a fully integrative approach we must turn back to the evidence-based practice framework and the common factor model: evidence-based practice is “the integration of the best available research with clinical expertise in the context of patient characteristics, culture and preferences” (APA, 2006: 273).

So, to operationalise this framework, what must our practitioners do to integrate the best available evidence? The literature provides us with evidence of several factors that work in therapy; however, when it comes to the theoretical-orientation aspect of what works, the current debate splits opinion. What we can say is the following:

  • The average treated client is better off than approximately 80% of untreated people.
  • There is good evidence for bona fide therapies and their utilisation; however, the role of specific ingredients versus common factors as change agents may not be as important as the integration of both into a unified model mediated through the common factor framework.

To this end, practitioners are best placed to choose a therapy that best fits their worldview (allegiance effect); whose rationale they can explain to clients; and that offers a theoretical explanation of the client’s presenting issue with a set of corresponding techniques/rituals. However, as per the APA evidence-based practice definition, interventions must be acceptable to clients’ cultural values and preferences. Furthermore, consistent with the common factor model, the therapist’s approach must fit with the client’s idea of the presenting issues, how they arose, and possible treatment options and corresponding techniques. Evidence supports these factors as producing favourable outcomes mediated through the therapeutic alliance, client expectancy, instillation of hope, the placebo effect and practitioner allegiance to the therapy. Providing this within the scope of the clinician’s expertise means that the practitioner uses all the experience and knowledge garnered through education, clinical practice and ongoing research in conjunction with the person they are working with and that person’s worldview.

Finally, practitioners may be best served by utilising a feedback system to track clients’ progress, identify those at risk of deterioration and/or dropout, and identify those responding to interventions. In addition, data from such methods provide baseline outcome statistics that therapists can use to actively and intentionally improve in areas which need further development. Chow et al. refer to this method of therapist development as ‘deliberate practice’, stating the following:

Consistent with the literature on expertise and expert performance, the amount of time spent targeted at improving therapeutic skills was a significant predictor of client outcomes.
                                                                                                  (2015: 337)

Moreover, eliciting feedback in this manner not only invites clients to be full participants in the therapy endeavour; it also offers common ground between the internal validity of research trials of ESTs and the evidence-based practice of integrating ESTs into real-world practice to fit individual characteristics, preferences, values and the multitude of complexities humans bring to therapy.

Daryl Mahon BA Counselling & Psychotherapy, MA Leadership & Management, HDip Training & Education is a Feedback Informed Treatment (FIT) practitioner, supervisor and trainer of trainers. Contact outcomesmatter1@gmail.com

American Psychological Association, Presidential Task Force on Evidence-Based Practice. (2006). Evidence-based practice in psychology. American Psychologist, 61, 271–285. Retrieved 5 August 2019 from https://www.apa.org/pubs/journals/features/evidence-based-statement.pdf

Baldwin, S. A., & Imel, Z. E. (2013). Therapist effects: findings and methods. In M. J. Lambert (Ed.), Bergin and Garfield’s handbook of psychotherapy and behavior change. 6th ed. (pp. 258 –297). New York: Wiley.

Bell, E. C., Marcus, D. K., & Goodlad, J. K. (2013). Are the parts as good as the whole? A meta-analysis of component treatment studies. Journal of Consulting and Clinical Psychology, 81, 722–736.

Brattland, H., Koksvik, J.M., Burkeland, O., Gråwe, R.W., Klöckner, C., Linaker, O.M., Ryum, T., Wampold, B., Lara-Cabrera, M.L., & Iversen, V.C. (2018). The effects of routine outcome monitoring (ROM) on therapy outcomes in the course of an implementation process: A randomized clinical trial. Journal of Counseling Psychology, 65(5), 641-652.

Cahill, S.P., Carrigan, M.H., & Frueh, B. C. (1999). Does EMDR Work? And if so, Why? A Critical Review of Controlled Outcome and Dismantling Research. Journal of Anxiety Disorders, 13, 5-33. Retrieved 23 June 2019 from https://www.ncbi.nlm.nih.gov/pubmed/10225499

Carlier I.V.E., & Van Eeden W.A. (2017). Routine outcome monitoring in mental health care and particularly in addiction treatment: Evidence-based clinical and research recommendations. Journal of Addiction Research and Therapy, 8, 332. doi:10.4172/2155-6105.1000332

Chow, D.L., Miller, S.D., Seidel, J.A., Kane, R.T., Thornton, J.A., & Andrews, W.P. (2015). The role of deliberate practice in the development of highly effective psychotherapists. Psychotherapy, 52(3), 337-345.

Chambless, D. L., & Crits-Christoph, P. (2006). The treatment method. In J. C. Norcross, L. E. Beutler, & R. F. Levant (Eds.), Evidence–based practices in mental health (pp. 191–200), American Psychological Association.

Chambless, D. L., & Hollon, S. D. (1998). Defining empirically supported therapies. Journal of Consulting and Clinical Psychology, 66, 7–18. Retrieved 7 September 2019 from https://www.ncbi.nlm.nih.gov/pubmed/9489259

Cuijpers, P., Reijnders, M., & Huibers, J.H. (2019). The role of the common factors in psychotherapy outcomes. Annual Review of Clinical Psychology, 15, 207-231. Retrieved 9 September 2019 from https://www.annualreviews.org/doi/full/10.1146/annurev-clinpsy-050718-095424

Cusack, K., & Spates, C. R. (1999). The cognitive dismantling of eye movement desensitization and reprocessing (EMDR) treatment of posttraumatic stress disorder (PTSD). Journal of Anxiety Disorders. Retrieved 6 September 2019 from https://www.ncbi.nlm.nih.gov/pubmed/10225502

De Jong, K., Van Sluis, P., Nugter, M. A., Heiser, W. J., & Spinhoven, P. (2012). Understanding the differential impact of outcome monitoring: Therapist variables that moderate feedback effects in a randomized clinical trial. Psychotherapy Research, 22(4), 464–474.

Duncan, B. (2014). On becoming a better therapist: Evidence-based practice one client at a time (2nd Ed.). Washington, DC: American Psychological Association.

Duncan, B.L., Miller, S.D., & Sparks, J. (2004). The heroic client: A revolutionary way to improve effectiveness through client-directed, outcome-informed therapy (2nd Ed.). San Francisco: Jossey-Bass.

Duncan, B.L., Hubble, M.A., & Miller, S.D., (Eds.) (2010). The heart & soul of change: Delivering what works in therapy (2nd ed.). Washington, DC: American Psychological Association.

Goldberg, S. B., Rousmaniere, T., Miller, S. D., Whipple, J., Nielsen, S. L., Hoyt, W. T., & Wampold, B. E. (2016). Do psychotherapists improve with time and experience? A longitudinal analysis of outcomes in a clinical setting. Journal of Counseling Psychology, 63(1), 1-11.

Hansen, N.B., Lambert, M. J., & Forman, E.M. (2002). The psychotherapy dose-response effect and its implications for treatment delivery services. Clinical Psychology: Science and Practice, 9(3), 329-343. Retrieved 8 September 2019 from https://onlinelibrary.wiley.com/doi/abs/10.1093/clipsy.9.3.329

Hansen, N.B., & Lambert, M.J. (2003). An evaluation of the dose–response relationship in naturalistic treatment settings using survival analysis. Mental Health Services Research, 5, 1-22.

Harmon, S.C., Lambert, M.J., Smart, D.M., Hawkins, E., Nielsen, S.L., Slade, K., & Lutz, W. (2007). Enhancing outcome for potential treatment failures: Therapist-client feedback and clinical support tools. Psychotherapy Research, 17(4), 379–392.

Hatfield, D. R., & Ogles, B. M. (2004). The use of outcome measures by psychologists in clinical practice. Professional Psychology: Research and Practice, 35(5), 485-491.

Hawkins, E., Lambert, M., Vermeersch, D., Slade, K., & Tuttle, K. (2004). The therapeutic effects of providing patient progress information to therapists and patients. Psychotherapy Research, 14(3), 308-327.

Imel, Z. E., & Wampold, B. E. (2008). The common factors of psychotherapy. In S. D. Brown & R. W. Lent (Eds.), Handbook of counselling psychology (4th ed.). New York: Wiley.

Lambert, M. (2017). Maximizing psychotherapy outcome beyond evidence-based medicine. Psychotherapy and Psychosomatics, 86(2), 80-89.

Lambert, M. J. (2013). The efficacy and effectiveness of psychotherapy. In M. J. Lambert (Ed.), Bergin and Garfield’s Handbook of psychotherapy and behavior change 6th ed. (pp. 169 –218). New York: Wiley.

Laska, K. M., Gurman, A. S., & Wampold, B. E. (2014). Expanding the lens of evidence-based practice in psychotherapy: A common factors perspective. Psychotherapy, 51(4), 467-481.

Margison, F.R., Barkham, M., Evans, C., McGrath, G., Mellor-Clark, J., Audin, K., & Connell, J. (2000). Measurement and psychotherapy: Evidence-based practice and practice-based evidence. British Journal of Psychiatry, 177(2), 123-130. doi:10.1192/bjp.177.2.123

Miller, S. & Bargmann, S. (2012) The outcome and session rating scales: A brief overview. Integrating Science and Practice, 2(2), 28-31. Retrieved 9 September 2019 from http://citeseerx.ist.psu.edu/viewdoc/1363413C2D7D4D6AE9A7758BDFE8AD7?doi= 1&type=pdf

Nelson, P. L., Warren, J. S., Gleave, R. L., & Burlingame, G. M. (2013). Youth psychotherapy change trajectories and early warning system accuracy in a managed care setting. Journal of Clinical Psychology, 69(9), 880–895. Retrieved 9 September 2019 from https://psycnet.apa.org/record/2013-26552-002

Østergård, O.K., Randa, H., Hougaard, E. (2018). The effect of using the Partners for Change outcome management system as a feedback tool in psychotherapy: A systematic review and meta-analysis. Psychotherapy Research, 1-18. Retrieved 9 September 2019 from https://www.ncbi.nlm.nih.gov/pubmed/30213240

Pagoto, S. L., Spring, B., Coups, E. J., Mulvaney, S., Coutu, M. F., & Ozakinci, G. (2007). Barriers and facilitators of evidence-based practice perceived by behavioral science health professionals. Journal of Clinical Psychology, 63, 695–705.

Phelps, R., Eisman, E. J., & Kohout, J. (1998). Psychological practice and managed care: Results of the CAPP practitioner survey. Professional Psychology: Research and Practice, 29(1), 31-36. Retrieved 8 September 2019 from https://psycnet.apa.org/record/1997-38424-005

Rosenzweig, S. (1936). Some implicit common factors in diverse methods of psychotherapy. American Journal of Orthopsychiatry, 6(3), 412–415. Retrieved 8 September 2019 from https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1939-0025.1936.tb05248.x

Siev, J., Huppert, J. D., & Chambless, D. L. (2009). The dodo bird, treatment technique, and disseminating empirically supported treatments. The Behavior Therapist, 32, 69 –76.

Smith, M. L., & Glass, G. V. (1977). Meta-analysis of psychotherapy outcome studies. American Psychologist, 32(9), 752-760.

Stiles, W. B., Barkham, M., Twigg, E., Mellor-Clark, J., & Cooper, M. (2006). Effectiveness of cognitive-behavioural, person-centred and psychodynamic therapies as practised in UK National Health Service settings. Psychological Medicine, 36, 555–566.

Swisher, A.K. (2010). Practice-based evidence. Cardiopulmonary Physical Therapy Journal, 21(2), 4. Retrieved 6 September 2019 from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2879420/

Truijens, F., Zühlke-van Hulzen, L., & Vanheule, S. (2018). To manualize, or not to manualize: Is that still the question? A systematic review of empirical evidence for manual superiority in psychological treatment. Journal of Clinical Psychology, 75(3), 329-343.

Vinnars, B., Barber, J.P., Norén, K., Gallop, R., & Weinryb, R.M. (2005). Manualized supportive-expressive psychotherapy versus nonmanualized community-delivered psychodynamic therapy for patients with personality disorders: Bridging efficacy and effectiveness. American Journal of Psychiatry, 162(10), 1933-40.

Wachtel, P. (2010). Beyond “ESTs”: Problematic assumptions in the pursuit of evidence-based practice. Psychoanalytic Psychology, 27, 251–272. doi:10.1037/a0020532

Wampold, B. E., & Serlin, R. C. (2014). Meta-analytic methods to test relative efficacy. Quality & Quantity. International Journal of Methodology, 48, 755–765. Retrieved 8 September 2019 from https://psycnet.apa.org/record/2014-04445-013

Wampold, B. E., Imel, Z. E., Laska, K. M., Benish, S., Miller, S. D., Flückiger, C. & Budge, S. (2010). Determining what works in the treatment of PTSD. Clinical Psychology Review, 30, 923–933.

Wampold, B. E., Mondin, G. W., Moody, M., Stich, F., Benson, K., & Ahn, H. (1997). A meta-analysis of outcome studies comparing bona fide psychotherapies: Empirically, “All must have prizes”. Psychological Bulletin, 122, 203–215.

Wampold, B. E., & Imel, Z. E. (2015). The great psychotherapy debate: the evidence for what makes psychotherapy work 2nd ed. New York: Routledge.

Watts, B.V., Schnurr, P.P., Mayo, L., Young-Xu, Y., Weeks, W.B., & Friedman, M.J. (2013). Meta-analysis of the efficacy of treatments for posttraumatic stress disorder. Journal of Clinical Psychiatry, 74(6), 41-50.

Weisz, J.R., Kuppens, S., Ng, M.Y., Vaughn-Coaxum, R.A., Ugueto, A.M., Eckshtain, D., & Corteselli, K.A. (2019). Are psychotherapies for young people growing stronger? Tracking trends over time for youth anxiety, depression, attention-deficit/hyperactivity disorder, and conduct problems. Perspectives on Psychological Science, 14(2), 216-237.

Westen, D., Novotny, C. M., & Thompson-Brenner, H. (2004). The empirical status of empirically supported psychotherapies: Assumptions, findings, and reporting in controlled clinical trials. Psychological Bulletin, 130, 631–663.

The Irish Association of Humanistic
& Integrative Psychotherapy (IAHIP) CLG.

Cumann na hÉireann um Shíciteiripe Dhaonnachaíoch agus Chomhtháiteach
