Council of Education Programs


The Community Psychologist

Volume 50 Number 2
Spring 2017

Edited by Raymond Legler

rlegler@nl.edu
National Louis University

The 2016 Survey of Graduate Programs in Community Psychology: Findings on Competencies in Research & Practice and Challenges of Training Programs

Written by Mason G. Haber, Department of Psychiatry, University of Massachusetts Medical School & Judge Baker Children’s Center; Zachary Neal, Michigan State University; Brian Christens, Victoria Faust, and Lisa Jackson, University of Wisconsin-Madison; Laura Kohn Wood, University of Miami; Taylor Bishop Scott, University of North Carolina at Charlotte; Raymond Legler, National Louis University; and the Members of the Society for Community Research and Action Council of Education

Introduction

This report presents findings related to training in competencies from the 2016 Society for Community Research and Action (SCRA) Council of Education (COE) Survey of Graduate Programs in Community Psychology, including the Competencies for Community Psychology Practice (“Practice Competencies”; Connell et al., 2013) and Competencies for Community Research (“Research Competencies”; Christens et al., 2015). The report also examines findings on training programs’ challenges and possible relationships between these challenges and the breadth and types of competency training they offer. Subsequent reporting will present findings related to other aspects of the survey, including questions regarding composition of faculty and students and career paths of students post-graduation. The 2016 COE survey is the eighth such survey since 1987 (cf. Connell et al., 2013). The last COE survey (Connell et al., 2013) was the first to include the Practice Competencies. These competencies were developed by the SCRA Practice Council and COE to help establish community psychology practice as a legitimate area of study and professional focus, distinct from but of comparable importance to academic scholarship (Dalton & Wolfe, 2012). As shown in Table 1, they include five Foundational Competencies,1 said to undergird all areas of practice, as well as 11 Specific Competencies in three practice areas. The practice competencies are now widely disseminated and used as bases for resources and tools for educators, programs, and professional training activities in community psychology (cf. Scott & Wolfe, 2015).

The success of the practice competencies stimulated interest in whether a similar and complementary set of competencies could be used to help refine training in community psychology research. In 2014, the COE (at that time, the Council of Educational Programs [CEP]) began working on developing research competencies. This work included semistructured interviews with 19 senior and early career SCRA researchers, focusing on the research skills most important to the training of community researchers and to designing and conducting impactful studies, subsequently described in a 2015 issue of TCP (Christens et al., 2015). In a similar manner to the practice competencies, the research competencies include five Foundational Research Competencies, or competencies underlying all areas of research, as well as Specific Research Competencies in the areas of research design, analysis, and theory (Table 2), with large numbers of specific skills in each of these categories, reflecting the heterodoxy of community research (Tebes et al., 2014). Questions regarding the extent of training in these competencies were included in the 2016 Survey of Graduate Programs for the first time. Aims of the report include examining: 1) the extent of training in practice and research competencies and challenges reported by programs; 2) differences in competencies and challenges by program degree level (master’s vs. Ph.D.) and type (standalone vs. other); and 3) the ways in which training in research and practice competency areas relates to challenges.

Methods
To prepare for the 2016 survey, the COE compiled contact information on all graduate programs in community psychology affiliated with SCRA, and sought to identify any unaffiliated programs via requests to the SCRA listserv. A short preliminary survey was used to help ensure that the additional programs identified through the listserv were adequately focused on community psychology training to qualify for inclusion. These efforts identified 56 programs, whose contact person (typically the program director) received an email invitation to participate in the COE survey. Following a series of follow-up email messages and phone calls, 52 programs completed the survey (93% response rate). For each individual research and practice competency, the survey asked respondents to indicate the level of training their program provides, on a 4-point scale (1 = None, 2 = Exposure, 3 = Experience, 4 = Expertise). The survey also asked respondents to identify any challenges their program had experienced in the past year from a list of six, including challenges related to attracting and recruiting high-quality student applicants, faculty recruitment and retention, and the status and regard for their programs in their respective departments and universities. Table 3 reports means and standard deviations among all respondents for practice and research competency areas and program challenges, as well as correlations among these variables.
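As a rough illustration of how summary statistics like those in Table 3 might be assembled, the sketch below aggregates hypothetical program-level ratings on the 1-4 scale described above and correlates them with a hypothetical challenge count; the column names and values are invented for illustration and are not the survey data.

```python
import pandas as pd

# Hypothetical program-level data: each row is one program's mean rating
# (1 = None ... 4 = Expertise) in a competency area, plus a count of
# challenges endorsed. All values below are invented for illustration.
programs = pd.DataFrame({
    "foundational_practice": [3.4, 3.1, 3.8, 2.9, 3.5],
    "foundational_research": [3.0, 2.8, 3.3, 2.7, 3.1],
    "specific_practice":     [2.9, 2.5, 3.2, 2.4, 3.0],
    "specific_research":     [2.6, 2.4, 2.9, 2.3, 2.7],
    "n_challenges":          [1, 2, 0, 3, 1],
})

# Means and standard deviations for each variable (cf. Table 3).
print(programs.agg(["mean", "std"]).T)

# Pearson correlations among competency areas and the challenge count.
print(programs.corr().round(2))
```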

Findings
Overall Levels of Training & Challenges
Overall Research vs. Practice Foundational & Specific Competencies. Figure 1A compares averages of: 1) research versus practice Foundational Competencies, the basic competencies that enhance all aspects of research or practice (e.g., “Ethical Reflective Practice”, “Research Questions & Leverage Points”; see definitions in Tables 1 & 2), as well as 2) Specific Competencies, or particular types of skills in research (e.g., Mixed Methods, SEM) versus practice (e.g., “Community Leadership and Mentoring”, “Public Policy Analysis”). These averages represent differences in basic research and practice training skills, versus the more specific applications of these skills, respectively. Among the Foundational Practice Competencies, ratings were, on average, between the level of “Experience” (3) (i.e., “Most students gain a basic ability to use this competency”) and “Expertise” (4) (i.e., “Most students gain an advanced ability to use this competency”). Somewhat lower ratings were found among Foundational Research Competencies, closer to the “Experience” (3) level (paired t = 3.72, p < .01). Among Specific Practice and Specific Research competencies, practice averages also surpassed those for research (paired t = 2.54, p < .05).
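The paired comparisons reported above can be run with standard statistical tools; a minimal sketch using scipy, with invented per-program ratings standing in for the actual survey data, might look like the following.

```python
from scipy import stats

# Hypothetical per-program mean ratings on the 1-4 scale (invented values).
foundational_practice = [3.4, 3.1, 3.8, 2.9, 3.5, 3.2]
foundational_research = [3.0, 2.8, 3.3, 2.7, 3.1, 2.9]

# Paired t-test: each program contributes one rating to each competency area.
t, p = stats.ttest_rel(foundational_practice, foundational_research)
print(f"paired t = {t:.2f}, p = {p:.3f}")
```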

Comparisons among specific research & practice competency types
Figures 1B and 1C show comparisons among specific practice and specific research areas, respectively. Among practice areas, competencies focusing on programs, or “Community Program Development”, were rated more highly (i.e., closer to the “Experience” level [3]) than those focusing on work with a broader range of stakeholders and settings, including those falling in the areas of “Community & Organizational Capacity Building” (paired t vs. Community Program Development: -2.4, p < .05) and “Community & Social Change” (paired t vs. Community Program Development: -2.79, p < .01). Among specific research competency areas, mean scores on both Research Design and Research Theory were greater than those for Research Analysis (Design vs. Analysis paired t = 6.04, p < .001; Theory vs. Analysis paired t = 4.99, p < .001).

Challenges Reported by Programs. A majority of programs (35 of 52, or 67.3%) reported experiencing one or more challenges, with almost three quarters of these (25 of 35, or 71.4%) reporting problems with attracting student applicant pools of adequate size, quality, or both. Over half (20 of 35) reported problems with attracting or retaining high-quality faculty. Fewer respondents (10 of 52, or 19.2%) reported challenges related to their programs’ status in their departments or universities.

Program Group Differences
No significant differences in the extent of research and practice training were detected between standalone and other community psychology programs, though some were detected between master’s and Ph.D. programs. Table 4 shows results of independent samples t-tests comparing program degree types. Respondents from both master’s and Ph.D. programs indicated training students to similar degrees of competence in both research and practice Foundational Competencies, and in specific practice competencies. In specific research areas, respondents from Ph.D. programs gave higher ratings for two of the three areas (i.e., Research Design and Analysis, but not Research Theory).

Relationships of Competencies to Challenges
As shown in Table 3, only training in specific practice competencies was related to the number of challenges reported by programs, such that respondents who reported engaging in more training in specific practice areas also reported experiencing more problems. The strength of this association varied greatly between master’s and Ph.D. programs, however, with a stronger association between specific practice training and challenges in master’s programs (r = .60, p < .05) than in Ph.D. programs (r = .14, ns).

Discussion
Historically, SCRA graduate program surveys have tended to focus on broad descriptive indicators of program health or performance, such as numbers of students, faculty, degrees conferred, or professional outcomes. The attempt to comprehensively assess training in specific competencies for practice and research in the 2016 survey is a first for the field. Given the newness of this work, the findings should be interpreted tentatively, but they suggest potentially important avenues for discussion. Several sessions at the upcoming SCRA Biennial in Ottawa focusing on 2016 survey results, including a program directors meeting, a roundtable, and a symposium, will provide opportunities to examine more detailed findings from the 2016 survey, as well as to compare this iteration of the survey to similar findings from the 2012 survey (e.g., on the Practice Competencies). To help drive these discussions, brief comments on possible implications of the findings, below, are accompanied by questions for further consideration or empirical investigation. We hope that TCP readers will consider these prior to the conference and bring their ideas to these sessions.

Is there a need to strengthen training in practices focusing on community-level action (i.e., activities beyond working with specific programs or interventions)? Overall, levels of training reported for practice competencies exceeded those for research competencies. This result should be encouraging to those concerned with placing community practice on an equal footing with research in community training programs. Respondents indicated providing more training in competencies focusing on program-level work than in those concerned with agency capacity building, neighborhoods, etc. Descriptions of community psychology emphasize multiple levels of ecological analysis as one of the defining characteristics of the field. But can community psychology consider multiple levels (beyond those of individuals and programs) when many of its training programs provide only “Exposure” to, rather than “Experience” with, competencies that would support such work?

Should certain programs (e.g., master’s level) narrow the focus of the specific research design, analysis, or theoretical competencies addressed by their training? Generally, graduate programs appear to be addressing the broad range of research skills and perspectives examined by the survey fairly well. Both master’s and Ph.D. programs are providing a strong grounding in the Foundational Research Competencies, but average Research Design and Analysis ratings were lower for master’s than for Ph.D. programs, generally indicating an “Exposure” level of training, or merely acquainting students with a competency rather than providing a basic ability to use it. Would master’s students be better served by a more in-depth focus on a subset of research areas?

Mo’ competencies, mo’ problems – it depends? The breadth of perspectives and skills encompassed by the Practice and Research Competencies may be unavoidable given the pluralistic nature of the field. Attempting to train students to higher levels of expertise in the full array of areas covered by the Practice and Research Competencies, however, could strain programs’ resources. This could result in challenges for recruiting students and faculty who can learn or teach skillfully in all of these areas, or decrease support for the caliber of work necessary to protect programs’ statuses in their departments and universities. Do survey results support this concern? Our findings suggest that the answer may be determined partly by the nature of the program involved, as Ph.D. programs failed to show evidence of such strain (i.e., no association of training and challenge levels), but a very strong (r = .6) relationship of this kind was shown among master’s programs. Findings suggest further exploration of this issue may be important to ensuring quality and sustainability of training, especially in master’s level settings.

We hope our readers consider these questions (and pose others), and we look forward to sharing more of the survey results and engaging in discussions of their implications with you at the upcoming Biennial. See you in Ottawa!

Footnote
1 In the original publication of the practice competencies (Connell et al., 2013), these are referred to as Foundational Principles (rather than Foundational Competencies). Since the Foundational Principles are also considered competencies, they are referred to as such in the current paper for economy of presentation (i.e., rather than Foundational Practice Principles, they are referred to as Foundational Practice Competencies).

References
Christens, B. D., Connell, C. M., Faust, V., Haber, M. G., & the Council of Education Programs (2015). Progress report: Competencies for community research and action. The Community Psychologist, 48(4), 3-9.

Connell, C. M., Lewis, R. K., Cook, J., Meissen, G., Wolf, T., Johnson-Hakim, S., … Taylor, S. (2013). Special report – Graduate training in community psychology practice competencies: Responses to the 2012 survey of graduate programs in community psychology. The Community Psychologist, 46, 5-8.

Dalton, J., & Wolfe, S. M. (2012). Joint column: Education connection and the community practitioner: Competencies for community psychology practice. The Community Psychologist, 45(4), 7-14.

Scott, V. C., & Wolfe, S. M. (Eds.) (2015). Community psychology: Foundations for practice. Thousand Oaks, CA: Sage. 

CEP-Sponsored Agent-based Modeling Workshop

Written by Jennifer A. Lawlor and Zachary Neal

Michigan State University

Simulation modeling has recently gained traction in community psychology as a research tool. One approach to simulation modeling is particularly useful for incorporating context: agent-based modeling (ABM). While ABM is becoming more common in community psychology, there are few training opportunities available for community psychologists to learn how to incorporate it into their work. To address this, we offered a comprehensive one-day workshop to introduce community psychologists to agent-based modeling, with support from a Council of Education Programs Mini-Grant and from the Michigan State University Psychology Department. Here, we provide some additional information about this approach to research, report on our event, and discuss next steps for disseminating training about ABM.

What is agent-based modeling?
Agent-based models are simulation models that allow researchers to explore a number of scenarios using a computer. In particular, this approach can be used to examine situations where agents (which can be defined as any type of entity, individual or organization) interact with each other or their environment and generate macro-level phenomena (Neal & Lawlor, 2015). For example, a researcher working with an intervention might want to examine how a different recruitment strategy, in which participants use word of mouth to enroll others in a program, might affect long-term program outcomes. Modeling this process allows the researcher to gain some insight into how this change might influence outcomes without engaging in trial and error in the community setting. Thus, researchers can save resources and community members’ time by using simulation modeling as a first check to see how a problem is operating or how an intervention might influence it. This approach has also been used in collaborative contexts with communities to build participatory models to represent problems of interest to them, making it a great fit for the research orientation that many community psychologists take with their work.
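To make the word-of-mouth recruitment example concrete, here is a minimal agent-based sketch in Python (the workshop described below used NetLogo); the agent rules, parameter names, and probabilities are invented solely for illustration.

```python
import random

def simulate_enrollment(n_agents=200, n_steps=30, contacts_per_step=3,
                        word_of_mouth_prob=0.10, seed=42):
    """Toy word-of-mouth model: each step, every enrolled agent talks to a few
    random contacts, and each unenrolled contact enrolls with a fixed probability."""
    random.seed(seed)
    enrolled = [False] * n_agents
    enrolled[0] = True  # a single seed participant starts in the program

    for _ in range(n_steps):
        newly_enrolled = set()
        currently_enrolled = [i for i, e in enumerate(enrolled) if e]
        for _agent in currently_enrolled:
            for contact in random.sample(range(n_agents), contacts_per_step):
                if not enrolled[contact] and random.random() < word_of_mouth_prob:
                    newly_enrolled.add(contact)
        for contact in newly_enrolled:
            enrolled[contact] = True
    return sum(enrolled)

# Compare two hypothetical scenarios by varying how persuasive word of mouth is.
for prob in (0.05, 0.20):
    print(f"word_of_mouth_prob={prob}: "
          f"{simulate_enrollment(word_of_mouth_prob=prob)} of 200 enrolled")
```

Running the two scenarios side by side is the kind of low-cost comparison described above: the researcher can see how sensitive long-term enrollment is to the recruitment assumption before trying anything in the field.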

The Workshop
We offered a one-day workshop broken up into several parts: (1) learning about ABM and appropriate uses for it, (2) building a basic model as a group, and (3) modifying our model to test interventions. First, we discussed ABM methodology and explored a few simple agent-based models to establish a background understanding of how they work and what kinds of problems they are useful for addressing. Next, we generated a model of a zombie apocalypse as a group. While this may seem silly, it is an easy metaphor that can be extended to think about problems community psychologists address, like the spread of an illness in a community or the diffusion of information. This provided an opportunity for participants to learn basic programming in NetLogo, a free resource for building agent-based models that is popular in ABM research (Lawlor & Neal, 2016; Neal, 2015). After generating a basic common model, participants had the opportunity to work independently and in groups to create interventions to address the zombie apocalypse. They implemented solutions in their models to see what would be most helpful for saving humanity from zombie-ism, and everyone had a chance to present their findings at the end of the workshop. Solutions ranged from vaccinating against zombie-ism to sending in military reinforcements to protect the population from those who were infected. Running the models helped demonstrate how different solutions to the problem could generate a variety of outcomes.

[Figure: An example of a zombie apocalypse simulation; gray agents are zombies, black agents are humans.]
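As a rough Python analogue of the NetLogo exercise (the workshop's actual models are linked in the Learn More section below), the sketch that follows runs a zombie outbreak with and without a hypothetical vaccination intervention; all rules, parameters, and values are invented.

```python
import random

def zombie_outbreak(n_humans=500, n_zombies=5, n_steps=50,
                    infect_prob=0.3, vaccinated_frac=0.0, seed=1):
    """Toy outbreak model: each step, every zombie meets one random agent;
    an unvaccinated human it meets becomes a zombie with probability infect_prob."""
    random.seed(seed)
    # Agent states: 'H' = human, 'Z' = zombie, 'V' = vaccinated (immune) human.
    agents = ['Z'] * n_zombies + ['H'] * n_humans
    for i in range(n_zombies, n_zombies + int(vaccinated_frac * n_humans)):
        agents[i] = 'V'

    for _ in range(n_steps):
        zombie_indices = [i for i, state in enumerate(agents) if state == 'Z']
        for _zombie in zombie_indices:
            target = random.randrange(len(agents))
            if agents[target] == 'H' and random.random() < infect_prob:
                agents[target] = 'Z'
    return sum(state != 'Z' for state in agents)  # agents who stay uninfected

# Compare no intervention with vaccinating half the population in advance.
for frac in (0.0, 0.5):
    print(f"vaccinated_frac={frac}: "
          f"{zombie_outbreak(vaccinated_frac=frac)} of 505 agents uninfected")
```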

Reflections & Next Steps
We were thrilled to have participants from a variety of universities around the Midwest and from a diversity of programs promoting psychological and community-based research. Surveys from the workshop indicate that many participants felt it was worthwhile and helpful in finding ways to use this approach in their own work. In particular, participants reported benefitting from generating their own model interventions. One participant said, “I enjoyed the hands on experience of building a novel zombie model. It was nice that at the end of the workshop, I had some facility with Netlogo syntax.” Another indicated that the intervention development process “gave me a greater breadth of coding than the rest of the day, which I think will be helpful going forward.” However, the workshop provided only an introductory overview of ABM, and further training opportunities will be critical to give participants the depth of knowledge they may want or need to use ABM in their own work. To support further exploration and use of ABM, we are taking a few actions. First, we are planning a series of happy hour meet-ups for local ABM users to gather, discuss their projects, and work through problems in their own models. We also intend to have an informal gathering at the upcoming SCRA Biennial to bring together community psychologists with interests in simulation modeling. Finally, we are exploring options for offering continued training opportunities to community psychologists, both to reach a wider audience and to provide more depth in content for those who want to go beyond the beginner level. We are very grateful to the Council of Education Programs for making it possible to offer this first workshop with mini-grant funds, and we’re excited to continue facilitating this type of educational programming in the future!

Learn More
If you would like to learn more, you can access the materials from our workshop, including PowerPoint slides and example models, at https://msusimulation.wordpress.com/. For an introduction to ABM for community-based research, you can also take a look at our recent chapter in Jason and Glenwick’s Handbook of Methodological Approaches to Community-Based Research (Neal & Lawlor, 2015).

References
Lawlor, J. A., & Neal, Z. P. (2016). Networked community change: Understanding community systems change through the lens of social network analysis. American Journal of Community Psychology, 426–436. http://doi.org/10.1002/ajcp.12052

Neal, Z. P. (2015). Making big communities small: Using network science to understand the ecological and behavioral requirements for community social capital. American Journal of Community Psychology, 55(3–4), 369–380.

Neal, Z. P., & Lawlor, J. A. (2015). Agent-based models. In L. Jason & D. Glenwick (Eds.), Handbook of methodological approaches to community-based research. New York, NY: Oxford University Press.