Guidance for substance misuse prevention practitioners on using SAMHSA’s Strategic Prevention Framework (SPF) to select effective programs and practices. It covers the importance of strategic planning, building a logic model, and identifying priority risk and protective factors. Prevention planners are encouraged to collaborate with key stakeholders and invest in program implementation and continuous improvement.
SAMHSA’s Strategic Prevention Framework (SPF) is a proven strategic planning model comprising five steps:
Step 1. Assessment involves gathering and using data to identify a priority problem, factors influencing this problem, and resources and readiness to address it.
Step 2. Capacity involves building resources and readiness to address the priority problem and its associated factors.
Step 3. Planning involves developing a comprehensive plan that details prevention priorities, programs and practices selected to address them, and anticipated outcomes.
Step 4. Implementation involves moving the prevention plan into action by fine-tuning selected programs and practices and delivering them as intended.
Step 5. Evaluation involves examining how programs and practices are working and using lessons learned to improve them and the plan overall.
These five steps are typically presented in circular form (Figure 1) because the SPF process is iterative and dynamic; planners often cycle back to earlier steps and engage in multiple steps simultaneously. For example, they may need to adjust their comprehensive plan if ongoing assessment efforts reveal shifting prevention priorities or build additional capacity to support a specific program or practice once it is underway. In addition, the overall SPF process is guided by two principles that should be integrated into each step:
Cultural competence, which is the ability of an individual or an organization to interact effectively with members of diverse population groups
Sustainability, which is the capacity of a community to produce and maintain positive prevention outcomes after initial funding ends and over time
Together, these principles dictate that all prevention efforts must be informed by and responsive to the unique cultures of those involved, and that individuals, families, and communities should continue to reap the health-related benefits of prevention efforts over time. Successful completion of each step and integration of both principles require the active participation of and collaboration among diverse community stakeholders. These individuals and institutions may change as a prevention initiative evolves, but the need for prevention partners will remain constant.
Figure 1. SAMHSA’s STRATEGIC PREVENTION FRAMEWORK
Successful movement through the early steps of the SPF generates the information that planners need to set prevention priorities, identify intended outcomes, and build a logic model to inform the selection of programs and practices. Each of these functions is described below.
Every community struggles with multiple substance use-related problems, but no community can address them all—at least not all at once. Setting clear priorities requires understanding which problems are most important for a community to address first, and which problems a community is most capable of changing. By engaging in a thorough assessment of local prevention needs and capacity, and in a collaborative prioritization process, planners can identify their community’s priority problem. This begins to focus their prevention initiative.
But a community cannot address a substance use-related problem directly; it must work through the underlying factors that influence this problem. For this reason, planners also need to identify the priority risk and protective factors they intend to address in order to influence their priority problem. This requires an understanding of which risk and protective factors are present and most urgent at the local level, and which of these factors the community is in a strong position to change.
For example, an assessment of prevention needs and capacity may reveal a community’s priority problem to be the nonmedical use of prescription drugs (NMUPD) among youth. This assessment may further reveal that the best way to address this problem is by reducing two priority risk factors: perception among youth that prescription drugs are safer than other drugs and youth access to prescription drugs; and by strengthening two priority protective factors: positive familial bonds and parental disapproval of prescription drug misuse.
By setting clear prevention priorities, planners begin to articulate what their community intends to accomplish through its prevention efforts. To inform the selection and evaluation of programs and practices to address these priorities, planners must take this a step further and specify the outcomes their community intends to achieve.
Multiple Factors, Multiple Levels
Risk factors are associated with an increased likelihood that a person will experience a problem. Protective factors are associated with a decreased likelihood. Both types of factors operate at different levels of a person’s experience.
When complete, a logic model for prevention reveals a community’s plan for addressing its priority substance use-related problem. Some communities may only have the capacity to support a single prevention program or practice; these communities can be strategic about selecting the one that is likely to have the greatest impact. But when and where possible, there is added value in taking a comprehensive approach to prevention. This type of plan includes multiple programs and practices designed to address different risk and protective factors in different community settings—including homes, schools, health care facilities, neighborhoods, and more.
The best candidates for inclusion in a community’s comprehensive prevention plan are programs and practices with strong conceptual fit, practical fit, and evidence of effectiveness.
Conceptual fit is the degree to which a program or practice is a good match for the job that needs to be done; for example, a saw is a good match for the job of cutting a piece of wood—better than a hammer or screwdriver.
Practical fit is the degree to which a program or practice is a good match for the people involved and the community overall; for example, a handsaw is a good match for someone who wants to cut wood but who cannot afford or comfortably operate a power saw.
Evidence of effectiveness is the proof that a program or practice can (or cannot) do the job that needs to be done; for example, watching someone use a handsaw to cut through wood is evidence of that specific saw’s effectiveness.
Figure 3 presents a process for identifying best-fit programs and practices—that is, those with strong conceptual fit, practical fit, and evidence of effectiveness. Each step in this process is described below.
Figure 3. IDENTIFYING BEST-FIT PREVENTION PROGRAMS AND PRACTICES
Communities want to establish prevention programs and practices that work. To maximize the chances of this happening, planners should limit their initial search to those for which there is some evidence of potential to produce positive prevention outcomes. By focusing on these evidence-based programs and practices (EBPPs), planners will have concrete information to support their decision-making process. They can find this information in systematic reviews of the effectiveness of available EBPPs and, as needed, individual evaluation studies of EBPPs.
Many groups with expertise in and commitment to evidence-based prevention conduct systematic reviews of the effectiveness of available programs and practices. These reviews are excellent sources of information for planners searching for EBPPs that may be a good fit for their communities. Findings from such reviews can be found in searchable online databases and publications from federal agencies, other prevention and public health organizations (e.g., national nonprofits, university-based research centers), and peer-reviewed journals.
Expertly conducted systematic reviews offer prevention planners a valuable snapshot of information across multiple evaluation studies of EBPPs. However, some planners may not find an option with strong conceptual and practical fit for their communities in these reviews. If this happens, planners may want to look closely at findings from evaluation studies of individual prevention programs and practices. This information can be found in peer-reviewed journals as well as in reports written by those involved in the implementation and evaluation of programs and practices at the local level (e.g., evaluation reports for funding agencies and to support prevention planning, doctoral theses).
No matter how much evidence of effectiveness exists for an EBPP, it will only be appropriate for a community if it is actually the right fit. There are two types of fit: conceptual and practical.
To determine conceptual fit, or how well-suited a program or practice is for doing a specific job, planners can look closely at their community’s logic model for prevention. An EBPP with strong conceptual fit is one that:
Directly addresses the community’s priority substance use-related problem as well as one or more priority risk and protective factors associated with that problem
Has been shown to produce positive outcomes among members of the community’s focus population(s)
Prevention planners who are unable to find any EBPPs with strong conceptual fit and strong practical fit may want to adjust their search criteria and/or process. For example:
If there are no EBPPs that directly address their priority problem and factors, planners may want to consider options that address the same priority factors (e.g., parental disapproval) for a different priority problem (e.g., underage drinking rather than NMUPD).
If there are no EBPPs that address their priorities among members of their focus population(s), planners may want to consider options that address a different priority problem among their focus population(s) (e.g., alcohol misuse rather than opioid misuse among Native American adults).
If there are no EBPPs that their community is willing and able to support at this time, planners may want to work on building capacity for prevention prior to continuing their search.
As mentioned earlier, an EBPP with strong conceptual fit is one with evidence that it directly addresses local priorities and can produce intended outcomes. But not all evidence is created equal. Considering the strength of an EBPP’s evidence of effectiveness involves closely examining how the evidence was gathered and determining how much confidence it deserves.
Evidence of effectiveness falls along a continuum, from strong to weak. The stronger the evidence, the more confidence it deserves. Strong evidence that an EBPP is, or is not, effective comes from strong evaluation studies; the more scientifically rigorous, numerous, and varied the studies, the more compelling the evidence.
The following criteria are often used to assess the strength of evaluation evidence:
Research design describes the approach and structure of the research study. Its purpose is to ensure the study yields information that can answer the research question both meaningfully and unambiguously. As scientific rigor of the research design increases, so too does confidence in the information that is gathered and shared.
Internal validity is the degree to which a program or practice can be considered responsible for producing the outcomes measured in an evaluation study. As scientific rigor of the research design increases, so too does confidence in the internal validity of the results.
Independent replication is the degree to which a program or practice found to produce results with one set of participants consistently produces the same results when rigorously implemented and evaluated by independent practitioners or researchers with other similar sets of participants.
External and ecological validity is the degree to which a program or practice found to produce results with one set of participants consistently produces the same results when rigorously implemented and evaluated with other different sets of participants (external validity) and under real-world conditions (ecological validity).
For example, a program for suburban White parents would have very strong evidence of effectiveness for this population if rigorous independent evaluations with members of this population under real-world conditions all demonstrate consistently positive results. However, this program would only have strong evidence of effectiveness for members of a different population—such as rural Latino parents—if it achieved similar results for this population through a similarly rigorous set of evaluation studies.
Prevention planners can use the criteria presented above to understand common categorizations of EBPPs in resources such as federal registries and reports. For example:
EBPPs with the strongest and most favorable evidence of effectiveness are typically referred to as well-supported, model, or exemplary.
EBPPs with weaker yet still favorable evidence of effectiveness are typically referred to as supported, promising, or emerging.
EBPPs with insufficient empirical evidence to draw meaningful conclusions about their effectiveness are typically referred to as inconclusive or undetermined.
EBPPs with unfavorable evidence of effectiveness are typically referred to as unsupported (strong evidence that they do not produce desired outcomes) or harmful (any evidence, regardless of scientific rigor, that they produce negative outcomes).
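For planners comparing designations across several registries and reports, the tiers above can be captured as a simple ordering. The following is an illustrative sketch only; the tier names follow the common categorizations described above, but the function and ranking scheme are assumptions for illustration, not part of any federal registry:

```python
# Ordered from strongest favorable evidence to harmful; tier names follow
# the common registry designations described in the text above.
EVIDENCE_TIERS = [
    "well-supported/model/exemplary",  # strongest, most favorable evidence
    "supported/promising/emerging",    # weaker yet still favorable evidence
    "inconclusive/undetermined",       # insufficient empirical evidence
    "unsupported",                     # strong evidence of no desired outcomes
    "harmful",                         # any evidence of negative outcomes
]

def stronger_evidence(designation_a: str, designation_b: str) -> str:
    """Return whichever designation sits in the more favorable evidence tier."""
    rank = {tier: i for i, tier in enumerate(EVIDENCE_TIERS)}
    return min(designation_a, designation_b, key=lambda d: rank[d])
```

A comparison like this only orders labels; it is no substitute for reading how the underlying evidence was gathered, as the criteria above emphasize.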
What Does Rigor Look Like? Different research designs possess different degrees of scientific rigor. An experimental design is typically considered the most rigorous. In this design, study participants are randomly assigned to an intervention (i.e., program or practice) group or to a control group. Results from both groups are compared, both before and after the intervention. This ability to compare groups enables the researchers to isolate and identify any effects produced by the intervention and rule out other possible explanations for these effects.
An ideal EBPP is one with strong evidence of effectiveness in addressing a community’s unique prevention priorities, as well as strong fit for the community overall. A community that finds an EBPP with these ideal characteristics can usually adopt it outright and implement it with fidelity—that is, with strict adherence to its original design. This course of action increases the community’s chances of reproducing the positive outcomes this EBPP produced elsewhere.
There are many important ways to promote implementation fidelity and effectiveness. These are described later in this section, in Maximizing Potential: Building Supports for Successful Implementation.
While implementation fidelity is strongly associated with effectiveness, some departures from an EBPP’s original design and delivery are inevitable. According to Janevic et al. (2016), “The need to modify evidence-based interventions when they are implemented in new practice settings is somewhere between common and universal” (p. 1). Durlak and DuPre (2008) acknowledge this as well; in addition to emphasizing the importance of fidelity, they state that it is unrealistic to expect perfect implementation in real-world settings and that positive outcomes can be achieved even at implementation levels well below 100 percent. In fact, some changes, or adaptations, can even improve the potential of an EBPP to produce positive outcomes—in particular, those adaptations that are carefully planned and executed.
Planned adaptations can help improve an EBPP’s potential effectiveness by addressing recognized deficiencies related to fit. For example, if prevention planners find an EBPP that was designed to address their community’s priority problem among members of a different focus population, they might consider ways to improve its cultural fit—that is, the relevance of the language, attitudes, beliefs, values, and experiences reflected in the EBPP’s design. When planning adaptations of an EBPP, it is important to strive to retain its core components—that is, the specific elements that are required and responsible for producing positive outcomes.
Fidelity and Effectiveness
According to Durlak and DuPre (2008), “The difference favoring programs with apparently better as opposed to poorer implementation is profound, and has resulted in mean effect sizes [differences between groups] that are two to three times higher, and, under ideal circumstances, may be up to 12 times higher” (p. 330). These findings underscore the importance of implementation fidelity in prevention.
What Core Components?
Making meaningful changes while retaining core components seems like an ideal way to balance the need for real-world fit and high fidelity—but core components are not always readily apparent. If you are unsure about the core components of your selected EBPP, seek guidance from the original developer(s) and/or others who have used and evaluated it.
The following guidelines can help communities make adaptations that retain core components and boost, rather than compromise, effectiveness:
Preserve the setting. It may be unrealistic or impossible to make an EBPP designed for one setting (e.g., schools) appropriate for a different setting (e.g., health clinics).
Maintain the dosage, including the number, length, and spacing of sessions. Sufficient participant exposure may be essential for effectiveness.
Add new content if the need for content changes arise, rather than subtract existing content. This will prevent the removal of core content.
Make any design or delivery changes with intention and care. Work closely with the original developers (if implementing a program), members of and leaders from your community’s focus population, and other experts in prevention and program evaluation to execute adaptations—including the addition of new content.
For example, in one American Indian community looking to offer a parenting program to prevent youth substance use, community members worked with university researchers to culturally adapt an evidence-based program that was originally developed for Latino parents. They selected this program because it reflected many of the risk and protective factors prioritized by all involved, including supporting youth in different cultural environments. Together, they incorporated American Indian cultural values, worldviews on parenting, and family challenges specific to Native experience as well as cultural elements like storytelling that are common across diverse tribal communities. Participants in the adapted program reported increases in their parenting skills and Native cultural identity and decreases in negative behaviors among their children (Kulis, Ayers, & Baker, 2015).
EBPPs: Even when no EBPPs are the right fit for a community, prevention planners can still benefit from looking closely at those with some relevance to their community’s priority problem and/or focus population(s). Doing so can help them understand the wide range of options, explore varied best practices, and crystallize their thinking about what will—or will not—work well for their community.
Associations: Prevention planners can also consult with groups at the local, state, regional, and national levels dedicated to advancing best practices in addressing substance use-related problems and/or supporting specific cultural populations. These groups include professional associations focused on specific prevention strategies, such as the National Association of Drug Court Professionals, and/or specific disciplines, such as the National Association of Addiction Treatment Providers. These groups also include diverse cultural centers and associations.
For example, members of a university-tribal partnership in the Pacific Northwest developed an innovation to promote cultural identity and prevent substance misuse among tribal youth based on a cultural practice known as the Canoe Journey. This intertribal tradition, which has included nearly 100 tribes in a given year, includes the formation of Canoe Families within each tribe composed of youth, their families and extended families, and other tribal and nontribal community members. Each Canoe Family meets throughout the year, participating in drug-free cultural events and fundraising efforts to support the annual Canoe Journey. Many tribal youth and Canoe Family participants refer to the Canoe Journey as their most highly valued cultural best practice for prevention (Hawkins, Cummins, & Marlatt, 2004).
Virtual Communities Prevention planners can also consult with fellow planners working in other communities to develop innovative prevention and public health interventions. Virtual communities of practice, such as NNEDshare from the National Network to Eliminate Disparities in Behavioral Health, offer valuable opportunities for information-sharing and support.
Adoption, Adaptation, and Innovation: Distinct Yet Overlapping Paths It is important to keep in mind that adoption, adaptation, and innovation are not mutually exclusive. For example, minor adaptations are common when adopting an EBPP; adaptations can turn an EBPP into something new and innovative; and developers of a new program or practice want their innovation to be adopted.
The most carefully selected or crafted prevention effort will only work well if it is implemented, from the very start and over time, with the same degree of care. So whether adopting, adapting, or innovating a new program or practice, prevention planners and communities will need to invest in its implementation and continual improvement to ensure its success. Specifically, they can consider important factors and take decisive actions in each of the following areas:
Provider selection: The providers responsible for implementing a new program or practice should be committed to its delivery, qualified and confident in their ability to implement it, a good cultural match for the focus population, and willing to learn—before and throughout its implementation.
Provider preparation and support: Essential learning opportunities for providers include pre- and in- service trainings to promote the knowledge and skills needed to implement the program or practice as intended, as well as ongoing consultation and coaching to provide on-the-job support and assistance.
Process and outcome evaluation: By closely monitoring the delivery of a program or practice, communities can make sure it is being implemented as intended and improved as needed. By assessing program or practice outcomes, communities can determine whether it is working as intended and worth investing in and continuing over time. By sharing this information, communities can help build the evidence base for programs and practices—thereby contributing to the prevention literature and giving other communities more valuable information to support prevention planning.
Organizational leadership and prevention champions: Strong, dedicated leaders can foster an organizational culture supportive of change, including the use of new prevention programs and practices; help keep all involved coordinated and energized; and proactively remove on- site implementation barriers. These leaders, along with other prevention champions, can also work with systems beyond the implementation site to ensure the continuation of policies, funding, and other supports conducive to implementation and continual improvement.
Implementation guidance: According to the Centers for Disease Control and Prevention (as cited in Puddy & Wilkins, 2011, p. 19), “Implementation guidance includes any and all services and/or materials that aid in the implementation of a prevention strategy in a different setting, including but not limited to: training, coaching, technical assistance, support materials, organization/system change
Best-fit prevention programs and practices are those with strong conceptual fit, practical fit, and evidence of effectiveness. This checklist includes some key considerations in each of these areas. While not exhaustive, this checklist can help prevention planners create useful snapshots of program and practice viability for their communities. Planners can complete one checklist for each program or practice they review, then compare strength of fit and evidence of effectiveness across multiple programs and practices.
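The compare-across-programs step could be sketched as a simple tally of how many checklist items each candidate meets. This is a hypothetical illustration only; the class, field names, and scoring approach are assumptions for the sketch, not part of the checklist itself:

```python
from dataclasses import dataclass

@dataclass
class ChecklistSnapshot:
    """One completed checklist per program or practice under review."""
    name: str
    conceptual_fit_items_met: int  # items on priority problem, factors, populations
    practical_fit_items_met: int   # items on acceptability, resources, settings
    evidence_items_met: int        # items on study design and outcomes

    def total(self) -> int:
        """Total number of checklist items this candidate met."""
        return (self.conceptual_fit_items_met
                + self.practical_fit_items_met
                + self.evidence_items_met)

def rank_candidates(snapshots: list[ChecklistSnapshot]) -> list[ChecklistSnapshot]:
    """Order candidates by how many checklist items each met, strongest first."""
    return sorted(snapshots, key=ChecklistSnapshot.total, reverse=True)
```

A raw count treats every item as equally important, which real planning rarely does; planners would still weigh, for instance, a missing conceptual-fit item more heavily than a missing resource item before choosing.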
Name of program/practice:
Source(s) of information used to complete this checklist:
This program/practice directly addresses the substance use-related problem our community has prioritized.
This program/practice addresses one or more of the risk or protective factors our community has prioritized.
This program/practice has produced positive outcomes among members of our focus population(s) for prevention efforts.
Evidence exists of its impact on our community’s anticipated short- and/or long-term prevention outcome(s).
Implementation guidance and materials (e.g., training and technical assistance) are available that detail its content, specify its requirements, and can aid in its implementation. Please note: This guidance can help you consider the items below.
This program/practice is acceptable to members of our focus population(s), those who will be responsible for its implementation, and others with relevant decision-making power in our community.
This program/practice is acceptable to other stakeholders in our community, including those who may not be directly involved in or affected by its implementation but are invested in our priority problem and/or focus population(s).
Our community has, or can secure, the resources needed to meet this program’s/practice’s requirements for use—including funds for materials and training, time and space, and access to qualified staff and evaluators as well as intended participants.
This program/practice is well suited to the intended implementation site and supports/enhances other prevention efforts at this site and in the broader community.
If your information sources include a systematic review of multiple evaluation studies, complete this item. If not, skip to the next set of items.
Systematic review designation: Based on the strength of evidence supporting its effectiveness in producing intended outcomes, the following best describes the reviewers’ categorization of this program/practice:
Well-supported/model/exemplary (strongest favorable evidence of effectiveness); supported/promising/emerging (weaker yet still favorable evidence); inconclusive/undetermined (insufficient evidence to draw conclusions); or unsupported/harmful (unfavorable evidence of effectiveness)
If your information sources include one or more evaluation studies, complete the following items.
The study evaluated the same version of this program/practice, as delivered according to its original design. Please note: This can increase confidence that the program/practice evaluated is the one you are considering.
The study used a scientifically rigorous research design (e.g., an experimental design with random assignment). Please note: This can increase confidence that the program/practice itself was responsible for producing outcomes of interest (i.e., internal validity).
The study demonstrated positive effects on our community’s anticipated short- and/or long-term prevention outcomes.