
The Community Psychologist

Volume 51, Number 3, Summer 2018

The Education Connection

Edited by Laura Kohn-Wood, University of Miami

Council of Education: Sharing Our Work from the Mini-grant Initiative

Written by Dawn X. Henderson, North Carolina A&T State University

What happens when you devote funding to education initiatives? Well, you can support creative pedagogy, international collaborations, and much more. Since issuing its first call for proposals in 2015, the Council of Education (COE) Mini-Grant initiative has provided support to more than ten national and international initiatives. The COE mini-grant provides funding (up to $1,250) to support cross-program collaboration, development of a joint educational program or initiative, and recruitment. The funding aims to create opportunities for universities and programs to share educational resources and to enhance training and programming. While the amount is not large, the creativity of SCRA members is evident in the initiatives proposed over the past two years, which include international collaboration between universities, curriculum development, and training of undergraduate and graduate students in participatory research. In this article, we share some of these initiatives and anticipate that this work will inspire other SCRA members to apply for mini-grants and generate knowledge that enhances undergraduate and graduate education in community psychology. While this list is not exhaustive, we highlight some of the work:

  • Recruitment and Marketing: Dr. Shawn M. Badiako at the University of Maryland, Baltimore County developed a digital brochure to recruit students from undergraduate and master’s programs.
  • Enhanced Learning Communities: Dr. Nikke Harre and colleagues at the University of Auckland in New Zealand organized a Theories of Change Hui/Conference for activists and community groups. The event was offered free of charge, and additional sponsorship from Auckland City Council, Fulton Hogan Ltd, Envision NZ Ltd, and the University of Auckland helped support the development of a similar workshop in Ontario, Canada, as well as a virtual version. To learn more about this work, visit www.theoriesofchange.org
  • International Collaboration: Dr. David Livert of Pennsylvania State University and Dr. Ronni Greenwood of the University of Limerick in Ireland formed a collaboration using an online platform to examine local problems/issues. This project included secondary analyses of data, interviews with individuals involved in addressing the problem in their community, and the sharing of these findings through an international forum.
  • Undergraduate Education: Dr. Jesica Siham Fernandez and colleagues used funding to support an undergraduate course project designed to increase sociopolitical development and knowledge regarding sociocultural learning. Faculty and students designed a “healing justice” workshop and completed a photovoice (photo + narrative) project that examined self-preservation in the context of political activism. 

Without the support of the SCRA Executive Council and the Council of Education members and reviewers, the mini-grants would not be successful. We thank those SCRA members who continue to share their ideas and work to enhance undergraduate and graduate education. The mini-grant provides numerous ways for SCRA members to become involved in our work, whether through creative recruitment strategies or collaboration across universities. We encourage all members to see the mini-grant as an opportunity to support excellence and visibility in education in community research and action.

Read more at http://www.scra27.org/what-we-do/education/about-cep/

Understanding the Perceived Health of Graduate Community Psychology Programs and Its Relationships with Indicators of Sustainability, Diversity, and Rigor: Findings from the 2016 Survey of Graduate Programs in Community Psychology

Written by Mason G. Haber, Judge Baker Children’s Center, Harvard Medical School; Laura Kohn-Wood, Department of Education & Psychological Studies, School of Education and Human Development, University of Miami; and Members of the SCRA Council on Education

Introduction

This is the second report of findings related to Community Psychology (CP) training from the 2016 Society for Community Research and Action (SCRA) Council of Education (CoE) Survey of Graduate Programs in Community Psychology. Our previous report (see Haber et al., 2017a) presented findings related to training in Competencies for Community Psychology Practice (“Practice Competencies”; Connell et al., 2013) and Competencies for Community Research (“Research Competencies”; Christens et al., 2015) and examined the challenges, breadth, and levels of training available in these competency areas. In this report, we focus on a broader range of indicators reflecting the “health” of CP graduate training and CP training programs, including program directors’ perceptions of overall health, indicators of programs’ sustainability, and indicators of the diversity of opportunities available across programs in both research and practice. A secondary focus is to provide additional context for findings from the first report related to the levels, or rigor, of training available in Research and Practice Competencies. Specifically, rather than comparing data from master’s- versus doctoral-level training programs as in the prior report, we compare training rigor by the nature of programs’ institutional settings (e.g., Carnegie classification of the university) and by perceived career outcomes of students (research versus practice). The report also integrates the results of the two reports by considering relationships between the indicators of rigor from the first report and the indicators of sustainability and diversity in the current report. We believe that considering overall health perceptions in the context of all three types of health indicators in the two reports – sustainability, diversity, and rigor – provides a relatively comprehensive account of the health of training programs based on the 2016 survey data. As we subsequently describe, there is some indication that these types of indicators are not always associated with one another or with program health overall, so all need to be considered. The overlapping but separable contributions made by sustainability, diversity, and rigor in understanding the health of training programs are depicted in Figure 1.


Figure 1. Three Possible Components of the Health of Graduate Training in Community Psychology

We hope our findings provide insights into the overall health of CP training programs as perceived by their program directors in the 2016 Survey, the variability in perceived health among programs, and possible indicators of specific types of program health. We also suggest possible uses of these indicators for tracking and improving the health and quality of CP training programs in the future. Findings may also help to advance recent discussions on the listserv regarding critical sustainability issues (e.g., the pros and cons of ranking programs for improving the status of CP programs within their academic institutions) and regarding increasing external awareness of CP, an issue that was also addressed in multiple ways by the most recent SCRA Strategic Plan.

Method

Program Sample

For specific information regarding the sample, procedure, and survey items, see Haber et al. (2017a). Briefly, the sampling frame included graduate programs in community psychology identified by several means (formal SCRA affiliation, listserv requests, and program listings in the Integrated Postsecondary Education Data System [IPEDS]; cf. https://nces.ed.gov/ipeds/). To ensure inclusion criteria were met, a short screener was used to determine the extent to which training emphasized community psychology versus other disciplinary perspectives. The sampling process identified 56 programs, from each of which a contact person (typically the program director) was invited to participate. Recruitment efforts yielded 52 programs with completed survey data assessing research and practice competencies and challenges (a 93% response rate), 50 of which also provided all data on the indicators summarized in the current report (89%).

Health Indicators

The 2016 survey assessed overall health by asking respondents to provide an overall “grade” of their programs’ performance (i.e., “A” through “F”). For purposes of correlational analyses, the reported grade was transformed into a numeric code with a range truncated at “C” due to the small number of programs giving themselves grades of D or F (i.e., “C” through “F” = 1; “B” = 2; “A” = 3). Survey indicators of sustainability, defined as the capacity of programs to continue to offer their currently available training in community psychology, included the number of reported “difficulties” from the prior three years from a set identified by the survey (e.g., difficulties retaining faculty; see Figure 2 for the complete list), along with indicators of program size, competitiveness, and level of funding available to students. Diversity of training opportunities was examined by asking program directors to rank four possible career destinations in order of their likelihood among their students (“Academic,” “Professional Research,” “Community Practice,” and “Clinical” professions) and, as a proxy for level of focus on, as well as resources for, research, Carnegie Classification (cf. http://carnegieclassifications.iu.edu/). Rigor was assessed through reports on levels of training in sets of “Specific Research” and “Specific Practice” competencies from the Research and Practice Competencies (e.g., for Specific Practice, “Program Development, Implementation, and Management”; for Specific Research, “Mixed Methods Designs”). Each competency area was rated on a scale from 1 to 4 (i.e., 1 = none, 2 = exposure, 3 = experience, and 4 = expertise). Crucially, as we discuss later, “experience” was defined as meaning that “most students gain a basic ability to use” the competency. Definitions and descriptive data for the competencies are shared in our first report (Haber et al., 2017a); the current report considers their relationships with other indicators.
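
To make the grade coding concrete, below is a minimal sketch of the recoding described above, assuming the data sit in a pandas data frame; the column names and example values are our illustrative assumptions, not the survey’s actual variables.

```python
import pandas as pd

# Hypothetical extract of the survey data; column names and values are
# illustrative assumptions, not the survey's actual variable names.
df = pd.DataFrame({
    "overall_grade": ["A", "B", "C", "B", "A", "D"],
    "num_difficulties": [0, 2, 4, 1, 0, 5],
    "specific_practice_level": [3, 2, 2, 3, 4, 1],  # 1=none ... 4=expertise
})

# Recode letter grades to numbers, truncating the range at "C" as described:
# "C" through "F" -> 1, "B" -> 2, "A" -> 3.
grade_codes = {"A": 3, "B": 2, "C": 1, "D": 1, "F": 1}
df["grade_numeric"] = df["overall_grade"].map(grade_codes)
```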

Findings

Descriptive statistics on indicators for the current report are provided below. Given that these were not distributed normally, Spearman’s rho was used to assess their interrelationships (i.e., of program grades with difficulties, and of grades and difficulties with indicators of sustainability, diversity, and rigor).
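
As an illustration of this correlational approach, the sketch below continues the hypothetical data frame from the Method section and uses scipy’s standard implementation of Spearman’s rho; it is not the authors’ actual analysis code.

```python
from scipy.stats import spearmanr

# Spearman's rho is rank-based, so it does not assume normality, which
# suits the skewed, ordinal indicators described above.
rho, p_value = spearmanr(df["grade_numeric"], df["num_difficulties"])
print(f"grades vs. difficulties: rho = {rho:.2f}, p = {p_value:.3f}")

# The same call relates grades or difficulties to any other indicator,
# e.g., rigor of training in Specific Practice Competencies.
rho2, p2 = spearmanr(df["num_difficulties"], df["specific_practice_level"])
```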

Perceived Health, Difficulties, and Competencies

Findings show a relatively flat distribution of reports across the commonly reported overall grades (“A” through “C”; see Figure 2), suggesting that there is no “average” program. Roughly equal numbers of directors graded their programs as excelling (“A”), adequate (“B”), or struggling or failing (“C” and “D”).


Figure 2. Letter grades and types and numbers of difficulties reported by directors (%; N = 50) 

Of the 68% of programs identifying at least one difficulty, there was substantial variation in the number and types of difficulties reported (these data are summarized in detail in our first report). The number of difficulties was moderately to strongly associated with the overall grade (rs = .39, p < .01). The difficulty with the strongest association with grades was programs’ status “in the larger department, college, or university.” Program grades were not associated with rigor of training in competencies (rs for Specific Practice and Specific Research Competencies = -.20 and .04, respectively; ns), though as reported previously in Haber et al. (2017a), difficulties were associated with rigor in Specific Practice Competencies.

Objective Indicators and Relationships with Perceived Health and Competencies

Over the academic years 2014 through 2016, findings indicated that most programs (61.70%) were relatively small, admitting five or fewer students each year, and had relatively competitive admissions (M = 18.92%, Mdn = 14.63%). However, a minority were much larger and/or had less competitive admissions, with roughly a fifth reporting admissions rates of 40% or higher (22.22%) and a fifth reporting enrolling over 10 students a year on average (20.83%). At the opposite extreme, a substantial minority of programs (27.66%) had no admissions in at least one of the last three years.


Table 1. Applications, Admissions, and Enrollment for Academic Years 2014-2016 (N = 50)

With respect to student funding, the distribution of the percentage of students with some funding was bimodal: most programs (n = 37, or 74%) reported covering at least some tuition and stipend costs for most (i.e., 75% or more) of their students, whereas the remaining programs (n = 13, or 26%) reported that fewer than 30% of their students were funded. None of these indicators were related to overall program grades or perceived difficulties.

Objective descriptive information about training programs is shown in Table 2, and relationships between these and other indicators are shown in Table 3.


Table 2. Ratings of Importance of Criteria for Student Selection (%; N = 50)



Table 3. Relationships among Program Health Indicators (Spearman’s rho)

Diversity Indicators and Relationships with Grades, Difficulties, and Competencies

Carnegie Classification and Relationships. While research-intensive or “Very High Research” status was the modal Carnegie classification for graduate community psychology training programs, almost half (46.3%) fell into other categories. Notably, programs at “Very High Research” institutions tended to report fewer difficulties and higher grades (rs = -.39, p < .01, and .38, p < .05, respectively; see Table 3). “Very High Research” status was not significantly related to rigor of training in Practice and Research Competencies (rs = -.26 and .10, ns).

Levels of Priority Attached to Student Characteristics. In evaluating students, program directors were most likely to rate as “High” or “Very High” the importance of academic performance (88%) and interest in social change (80%). Of the remaining qualities, those most likely to be rated “High” or “Very High” were research experience (66% of directors) and fit with faculty interests (68% of directors). Program directors were split on the importance of GRE scores, with 24% rating their importance as “High” or “Very High,” 36% as “Somewhat High,” and 16% as “Somewhat Low” or “Very Low.” Clinical work was consistently given low priority, with only 8% of directors rating it “High” or “Somewhat High.”

Student Professional Trajectories. Most respondents (51.9%) indicated that academic settings were one of the top two of the four post-training outcomes for students, with almost all (84%) reporting academic settings as being among the top three. Approximately half of respondents (50.0%) ranked “professional research” in the top two and half (53.8%) “community practice.” Programs ranking “community practice” in the top two generally ranked it first, whereas programs ranking “professional research” in the top two were split between ranking it first or second among the professional trajectories. Relatively few respondents (n = 9, or 17.3%) reported both professional research and practice as being in the top two outcomes for their students. (Even broadening the definition of a “research” career to include the “Academic” category, this figure rises only to 44%.) Programs ranking post-graduation research trajectories highly tended to rate the rigor of their training in Research Competencies more highly; those ranking practice trajectories more highly gave higher ratings to their training in the Practice Competencies. Although not related to program grades, rankings were related to the number of difficulties reported, with programs ranking community practice in the top two reporting more difficulties, consistent with findings in our first report. Note that the coefficient in the current report is non-parametric and therefore slightly different from (higher than) the corresponding figure in our first report.


Figure 3. Programs Ranking Academic, Research, and Practice Among Top Two Goals (%; N = 50)

Implications

What do findings say about the overall health of community psychology training?

Overall, our results suggest that characterizing the health of programs in a general fashion is a challenge, and perhaps misguided. Findings make clear that there is no “typical” community psychology program, either with respect to broad perceptions of health (the overall letter grade) or with respect to indicators of specific dimensions of health. Thus, other than providing some reassurance that “good” community psychology training is available, these results may have limited implications for assessing the health of programs in a general way. Instead, it would seem more helpful to use these findings to characterize problems faced by some types of programs (e.g., practice-focused programs), or to apply this knowledge to target and tailor interventions to particular “troubled” programs that may be showing signs of difficulty on a variety of indicators.

Relationships between Sustainability, Rigor, and Overall Health

Overall health (i.e., grades) was related to some sustainability indicators but not others. Specifically, grades were related to the overall count of difficulties reported by programs and to difficulties with the status of programs in their respective departments, colleges, and universities. However, neither grades nor difficulties were related to the objective indicators more typically employed in program surveys as proxies for sustainability or rigor (e.g., competitive admissions). Thus, it appears that impressions of specific types of difficulties may better reflect overall perceived health than such objective indicators. Indicators of subjective difficulties should be retained in subsequent iterations of the survey to track the types of issues that may drive program directors to form relatively negative impressions of their programs. One objective indicator that did appear to predict overall impressions was Carnegie Classification. The richer resources that may be present in “Very High Research” settings due to their prestige, external grant funding, and so on may help to ensure a certain level of success for programs, though it is worth noting that the converse may also be true – weak programs may struggle to receive support from universities in which other programs and departments have relatively greater prestige and resources (and which may, consequently, not be inclined to continue to support struggling programs). This idea would appear to be supported by the stronger association of program grades with difficulties with “status in university” relative to other types of difficulties.

Overall grades and specific difficulties were mostly unrelated to rigor as captured by levels of training in practice and research competencies, or had relatively weak or paradoxical relationships with it (e.g., the inverse relationship of number of difficulties with practice competency training; Haber et al., 2017a). Thus, although one would expect at least some overlap between subjective health perceptions and rigor, they appear distinguishable to a substantial degree. From the perspective of program directors, programs may be perceived as troubled while at the same time being seen as providing rigorous research and practice training.

Diversity of Focus and Health

In these analyses, “diversity” was primarily represented as a focus on practice versus research, as captured in program directors’ responses about the professional trajectories of their students following training and in the Carnegie classifications of programs’ university settings. Relating these data to levels of training in practice and research competencies can provide some insight into whether graduate community psychology training programs are preparing students for their subsequent careers. Relating them to indicators of overall health and sustainability can contribute to an understanding of whether the difficulties faced by certain programs relate to the types of preparation or goals characterizing their training. Findings clearly indicate that practice-focused programs reported training to higher levels of competence in practice areas, and research-focused programs reported training to higher levels of competence in research areas. This indication of specialization lends credence to the perspective that programs need not “serve all masters.”

With respect to differences of opinion about whether the levels of competency achieved are adequate for meeting training objectives, discussed in the first report and in multiple sessions at the 2017 biennial (e.g., Haber et al., 2017b), the proverbial jury is still out. Levels of competency training across both research and practice domains averaged below the “experience” level, regardless of whether they aligned with students’ career trajectories (i.e., subsequent research- versus practice-focused career paths), meaning that, on average, program directors indicated that training provided in a given competency area did not result in students acquiring a basic capacity to use the competency. If the goal of training programs is to provide students with skills that they can readily apply upon graduation, regardless of the type and level of post-graduate support received, levels of training consistently below the “experience” level (i.e., the level of application) are troubling. Conversely, if the goal of graduate training is merely to set the stage for continued, intensive training and mentorship beyond graduate school, it is possible that “exposure” levels of training are adequate for students to achieve their future career objectives. It will be important for the field to tackle the issues of specialization and quality given the equivocal data on rigor, as well as findings in our first report (Haber et al., 2017a) suggesting that trying to train students in an overly broad range of domains may be associated with challenges for some training programs.

Although findings suggest diversity of focus among programs in favoring research versus practice career destinations, one area in which programs did not vary greatly was their emphasis on social change interests in evaluating prospective students. We believe that this is positive news for the field, as our commitment to social change is one of the key qualities differentiating us from most other subdisciplines of psychology, as well as from other disciplines and their subdisciplines (cf. Prilleltensky, 2001; Trickett, 2009).

Future Directions

Based on these data, we recommend that subsequent surveys continue to track program health. The indicators of “difficulty” included in the 2016 survey appear to correspond well with overall health perceptions. Thus, these and other indicators discussed in the present and prior reports may provide reasonable proxies for overall health or potential problems, and in turn may be helpful in identifying programs that could need outside assistance if they are to continue to thrive. These data also provide important context for furthering the discussion of program excellence, whether specialization or increasing competency levels is the preferred aim. Assessing training is an essential pursuit of a mature field and a critical activity for an academic discipline. We look forward to the potential of this second report of findings from the CoE 2016 Survey of Graduate Programs in Community Psychology to provide context for further discussion and action to advance the quality of graduate training in community psychology.

References

Christens, B. D., Connell, C. M., Faust, V., Haber, M. G., & the Council of Education Programs (2015). Progress report: Competencies for community research and action. The Community Psychologist, 48(4), 3-9.

Connell, C. M., Lewis, R. K., Cook, J., Meissen, G., Wolf, T., Johnson-Hakim, S., … Taylor, S. (2013). Special report – Graduate training in community psychology practice competencies: Responses to the 2012 survey of graduate programs in community psychology. The Community Psychologist, 46(4).

Haber, M. G., Neal, Z., Christens, B., Faust, V., Jackson, L., Kohn-Wood, L., & the Council on Education, Society for Community Research and Action (2017a). The 2016 Survey of Graduate Programs in Community Psychology: Findings on Competencies for Community Research & Practice and challenges of training programs. The Community Psychologist, 50(2). http://www.scra27.org/publications/tcp/tcp-past-issues/tcpspring2017/council-education/

Haber, M. G., Neal, Z., Faust, V., Christens, B., Legler, R., Connell, C., & the members of the Society for Community Research & Action Council on Education (2017b). Mo’ competencies, mo’ problems?: Responses to the 2016 survey of graduate programs in community psychology. Symposium presented at the 16th Biennial Conference of the Society for Community Research and Action, Ottawa, ON, Canada.

Prilleltensky, I. (2001). Value-based praxis in community psychology: Moving toward social justice and social action. American Journal of Community Psychology, 29, 747-778.

Trickett, E. (2009). Community psychology: Individuals and interventions in community context. Annual Review of Psychology, 60, 395-419.