Posts Tagged ‘evaluation’

Employee feedback: the gift that keeps on giving

It happened again. The phone rang a few weeks ago on a Friday afternoon at 4:30. It was an executive director who was at her limit with an employee who had worked at the organization for more than two decades. The caller told me this employee had never performed very well and asked whether she could fire her. We talked for a bit. I asked the ED whether she had ever given feedback to this employee. Did the employee receive regular performance appraisals or just general information about job performance? The ED paused and said, “No,” and agreed to have a conversation with this employee in which she would create job goals and give specific and concrete examples of what constitutes excellent performance.

I happened to see this ED two weeks ago and asked about her problem employee. The ED said that much to her surprise, the employee was doing great. When the ED asked the employee what accounted for the improved performance, the employee said she finally understood what was expected of her. The difference that good feedback makes cannot be overstated.

Performance management is the process of giving feedback to help employees progress toward achieving predetermined goals in their job and for the organization. Usually performance management is a human resource system where individual employees receive a regularly scheduled performance appraisal from their supervisors. Often the performance appraisal is done on an annual basis.

Use it or lose it

Organizations spend a lot of time designing the perfect form. Yet, the most perfect form doesn’t do anything if it isn’t used. Supervisors and employees alike generally report that they don’t like the process. Supervisors complain about the time it takes, the discomfort of giving feedback and the fact that in this day of tight budgets, pay increases don’t usually follow. Employees don’t like the process because feedback often comes too late to correct a problem, and supervisors forget to recognize the good work employees generally do.

Good performance management, whether in the form of feedback or a formal performance appraisal, first helps employees know they are valued. Second, they learn which behaviors they should continue and which they should stop. Some statistics suggest that almost 50% of the time, terminated employees say they didn’t know what was expected of them. And employees often report that they are motivated by appreciation for a job well done. A performance appraisal process can embed what the manager does to clarify expectations and to give positive feedback.

What might a good evaluation form include?

Nonprofits frequently ask whether there is a “best” form. There is no one right form, but a few considerations for the form’s construction include:

Does the form measure what is important to the organization?

Are the evaluation criteria job related?

Do the criteria reflect the highest priorities of the organization, department and job?

Do employees and supervisors regard the form as relevant?

Do supervisors and managers understand and buy into the purpose of the form?

Does it include:

Important identification information

Purpose

Instructions for completion

Defined performance criteria

Performance levels/ratings

Specific performance examples supporting ratings for each criterion

Space for employee comments

Signatures with dates (employee, supervisor, higher level of manager and/or human resources)

Keep it simple so it’s easy to use

Most human resource books, as well as some websites, include sample forms. Wikipedia references a long, thorough publication from the Department of the Interior that serves as a guidebook and also includes a sample form. Again, the point of the form is to use it. Don’t make the form too long or complicate it with ratings calculations; that will keep even the most diligent supervisor from using it.

Once a form is designed that is appropriate for your organization, your culture and systems, the hard part of a good performance appraisal is your feedback session with the employee.

The most productive sessions will include the supervisor:

Providing a comfortable and uninterrupted setting, indicating how important these discussions are.

Being candid and truthful.

Describing the specific expectations and how the employee did or did not meet them.

Focusing on the job, not the person, by using specific examples of what leads the supervisor to give the particular feedback. Make sure the feedback describes job-specific behaviors to support comments.

Asking the employee for his or her input or comments.

Being an active listener.

Regular evaluations prevent panic

In smaller organizations, conducting regular feedback sessions can be done without an elaborate form. In larger organizations, a unified approach with a consistent form is a good idea. When done on an ongoing basis, performance appraisals are much more than just another human resources “to do.” Evaluations can acknowledge good employees and help retain them, as well as serve as a corrective tool to limit poor performance. In addition, performance appraisals are good documentation if the employee/employer relationship goes badly. And they can help you avoid those panicky calls late on Friday afternoons.

See also:

Winning with a Culture of Recognition

Nine Minutes on Monday: The Quick and Easy Way to Go from Manager to Leader



Online surveys: Capture data beyond the vocal minority

Too often, marketing is a one-way street, an avenue for organizations to merely “talk at” their members and supporters. Smart marketers, heads of development and executive directors understand that it’s just as important to know what their constituencies are thinking about all the issues that impact their favorite causes. Ignoring constituent feedback, or only sporadically collecting input with no real plan or intent to put that feedback to use, is a risky proposition that can cause you to become disconnected from your most vital audiences and severely impact your organization’s mission.

 

Online surveys are an effective tool for gathering feedback from the widest range of your constituents, not just the most vocal minority. Spending time with volunteers, talking with members and connecting at events are great ways to stay in touch. However, those casual conversations are too often seen as a replacement for understanding the thoughts and concerns of those you serve. An online survey program lets you accurately understand the point of view and opinions of every constituent who spares you the few minutes it takes to complete a survey. Additionally, the anonymity of an online survey draws many members to offer feedback they might not otherwise share.

 

Determine your focus

 

The first step in creating a successful survey program is to decide what you would like to learn from your members. A regular series of surveys can be a roadmap for your communications, but only if you know where to turn. Are you looking to expand your membership, get closer to your donors, increase donations or inspire more volunteers? A survey can help determine your direction. Some popular topics for regular surveys include:

 

Membership satisfaction

Donor loyalty

Membership needs assessment

Fundraising feedback

Post-event attendee satisfaction

     

    Once you’ve identified the main objective for your survey, the next step is to write your questions. Use the following list as a guide to help you craft your survey questions, and you’ll be well on your way to creating an effective survey that delivers results you can act on immediately.

     

    1. Write questions that are easy to understand and to the point. The goal is to write a question that your members can easily understand, without having to reread it. Use simple language and phrase the question as if you were talking to a friend.

     

    2. Reduce ambiguity. Avoid words and phrases that are left to the survey participant’s interpretation. Words like most, numerous, many, several, etc. mean different things to different people. You want to use words that are more commonly understood, such as almost all, a majority of, almost none and a few, to get better results.

     

    3. Limit the number of ranking options. When you ask your respondents to rank items in order of preference or importance, try not to surpass six items. Asking them to rank a long list can result in an abandoned survey. If your list is longer, think about breaking it into two questions to help ensure completed surveys.

     

    4. Avoid questions that could have two meanings. It’s easy to do this without realizing that you’re actually asking for one answer to more than one question. Here’s an example: “How much would you be willing to donate or spend on an auction item at an event?” This type of question is problematic because it asks the respondent to give one answer for two different questions. In this case, someone might be willing to spend more money on an auction item than a straight donation (or vice versa). By asking two different questions, you will get a much more accurate answer.

     

5. Offer an “out” for questions that don’t apply. Some members can’t or won’t answer certain questions because they don’t have the experience or aren’t really sure how they want to respond. For these situations, you should offer a “Does not apply” or “Don’t know” option for them to select.

     

6. Have some fun! Of course, the goal of your survey is to gain valuable information from your membership to better the actions of your organization. But your survey does not have to be all business. Include at least one question that will help you to understand the personalities and other interests of your membership. This, in turn, may help you determine future events or topics for your member communications. For example, consider asking what kind of social gathering members prefer – a wine tasting, an art gallery reception or an outdoor event.

     

    Once you’ve completed the questions in your survey, match them against the list of best practices above and keep in mind your original reason for gathering feedback. Focus on the information you hope to gain from your members, and eliminate questions that don’t lead to answers supporting your main goals. It’s also worth it to have a colleague review your questions for tone and ease of reading, and let this person test the survey before sending it out to make sure it works properly.

     

Once you begin receiving regular feedback from a broader segment of your constituency, you’ll begin to understand how you can best serve everyone, including those who care and matter but aren’t vocal enough to make their opinions known otherwise. Incorporating the feedback you receive into action will foster a sense of belonging in your members, increasing overall involvement and excitement in your organization.

     

    See also:

     

    Citizen Marketers: When People Are the Message


    Level Best: How Small and Grassroots Nonprofits Can Tackle Evaluation and Talk Results

     

     



    An exercise to evaluate grant proposals

Recently, the nonprofit consulting firm Amanda Johnston Consulting released its Grant Writer’s Toolkit, an educational resource that provides strategies, worksheets, and case studies for grant writers who are passionate about winning more funding for their organizations. Amanda Johnston is generously sharing excerpts from this resource with CausePlanet readers. This is the first: an exercise designed to help you evaluate your grant proposals. Find out more about her toolkit at Amanda Johnston Consulting.

    Use the scale below to score various components of your proposal narrative. This tool will help you identify the strengths and weaknesses in your application. Add the points from each section to see your total score.

    Use the following scale:

    1 Totally Agree

    2 Agree

    3 Neutral

    4 Disagree

    5 Totally Disagree

     

    ______ Proposal demonstrates a real need/problem.

    It does so by using data, case studies, interviews, focus group results, media attention, etc.

    ______ Proposal demonstrates timeliness and/or urgency.

    It does so by showing that the investment in this work is urgent, pressing; uses recent data, events, press attention.

    ______ Proposal provides clear and tangible outcomes.

    It does so by specifically explaining the objectives and desired outcomes for the project. Examples include improved client status, greater public awareness, new or improved systems, etc.

    ______ Proposal uses sound methodologies.

    It does so by using methods, approaches and strategies that are realistic, effective and outcome-oriented.

    ______ Proposal demonstrates organizational credibility.

    It does so by clearly explaining and demonstrating that the organization has strength in this type of work, name recognition, a track record of achievements, a unique position and/or providing letters of support.

    ______ Proposal clearly explains staffing for the project.

    It does so by appropriately allocating human resources to this project specifically, including internal staff, use of consultants, advisory committee, etc.

    ______ Proposal clearly explains participation for the project.

    It does so by identifying the stakeholders, partners, clients, beneficiaries and funder representatives who will participate in the planning, implementation and/or evaluation of the project.

______ Proposal clearly explains collaboration efforts for the project.

    It does so by including new partnerships and an overall collaborative approach.

    ______ Proposal demonstrates creativity and uniqueness.

    It does so by using a concept that is innovative and not redundant with other projects.

______ Proposal demonstrates the project’s multicultural/intergenerational efforts.

It does so by providing a clear recognition of the value of diversity and using a multicultural and/or intergenerational approach.

    ______ Proposal has a solid evaluation plan.

    It does so by clearly explaining who, what, where, when, why and how the project will be evaluated.

    ______ Proposal demonstrates a project dissemination plan.

    It does so by explaining how the results of the project will be effectively disseminated – through peer review journals, press events, mailings, websites, etc.

    ______ Proposal demonstrates project replicability.

    It does so by clearly explaining how the proposed model can be replicated in other organizations.

    ______ Proposal demonstrates sustainability.

    It does so by clearly explaining how the project is sustainable, including what funding has been awarded and what further funding is being pursued.

    ______ Proposal explains in-kind contributions.

It does so by clearly explaining what in-kind contributions have been awarded or what further in-kind contributions are being pursued (funding, staffing, equipment, office space, etc.).

    ______ Proposal explains the organization’s use of technology.

    It does so by demonstrating the organization’s willingness to use the most up-to-date and emerging technologies.

    ______ Proposal demonstrates overall value of the project.

    It does so by explaining how the overall value of the project (relationship of benefits to cost) is high. The overhead rate is reasonable and competitive.

    ______ Proposal demonstrates how the project fits with the funder.

    It does so by clearly explaining how the project fits with funder priorities and parameters. It is based on research and pre-submission conversations with the funder.

    ______ Proposal demonstrates clarity, organization and completeness.

It does so by ensuring the proposal content is well organized with a logical progression of ideas. The writing style is concise and the funding guidelines are strictly followed.

    ______ Proposal is presented in a visually appealing manner.

    It does so by including graphics/charts/tables, exhibits, marketing materials. Proposal is neat and orderly.

     

    Total Score:

    ______/100

    Reviewer: ______________________

    Comments: _____________________
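If you tally the rubric in a spreadsheet or a short script, the arithmetic is simple: 20 criteria, each rated 1 (Totally Agree) through 5 (Totally Disagree), summed to a total out of 100. Here is a minimal Python sketch of that tally; the example ratings are hypothetical, and note that as the scale is written, lower totals indicate stronger proposals.

```python
# Tally the 20-criterion proposal rubric. Each criterion is rated
# 1 (Totally Agree) through 5 (Totally Disagree), so totals run
# from 20 to 100 and, as the scale is written, lower is stronger.

def total_score(ratings):
    """Validate that each rating is on the 1-5 scale, then sum them."""
    if len(ratings) != 20:
        raise ValueError("expected ratings for all 20 criteria")
    for r in ratings:
        if r not in (1, 2, 3, 4, 5):
            raise ValueError(f"rating {r} is not on the 1-5 scale")
    return sum(ratings)

# Hypothetical ratings for illustration: mostly "Agree" with a few weak spots.
ratings = [1, 2, 2, 1, 3, 2, 2, 4, 3, 2, 2, 5, 4, 3, 2, 2, 1, 1, 2, 2]
print(f"Total Score: {total_score(ratings)}/100")
```

A reviewer comparing several proposals can then rank them by total, or look at the individual high (Disagree) ratings to see exactly where each narrative needs strengthening.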

    © 2013, Amanda Johnston All Rights Reserved

    Excerpt from the Grant Writer’s Toolkit

    Available at: www.amandajohnstonconsulting.com

    See also:

    Storytelling for Grantseekers: A Guide to Creative Nonprofit Fundraising

    The Ultimate Insider’s Guide to Winning Foundation Grants

    How to Write Fundraising Materials That Raise More Money


    Creating a new habit: Incorporating program evaluation into your daily operations

    You are already busy enough. In fact, you’re busy running your programs. You don’t want to steal time away from actually doing the work and spend it on evaluation. Let’s face it: evaluation takes staff time, some expertise and money.

At the same time, you know that evaluation is at the very least a necessary evil. I’ve been hearing this comment repeatedly: “More and more funders are demanding information about outcomes, not just outputs.” And, in your heart of hearts, you know it can be a force for good if you use it to improve your program.

    So, what to do? Simply start asking yourself two key questions on a regular basis. For any and all of your programs, new or ongoing, ask yourself:

    What are we trying to achieve with this program?

    What will I see and hear that will indicate to me whether we’re achieving what we want to achieve?

    Let’s take an example from my home life. If I were planning a yard sale and I asked myself the first key question (What did I want to achieve?), I would tell you that I wanted to

    a) get rid of all my useless stuff,

    b) make a little spending money,

    c) accomplish (a) and (b) without driving myself nuts and wearing myself out in the process.

    If you then asked me the second key question (What would I see and hear that would indicate that I was achieving my desired outcomes?), I would say that at the end of the day, I would see an empty yard and a full cash box, and I would not be exhausted from the process. As far as quantifiable outcomes, I might tell you that 80% of my stuff would be gone from the yard and there would be $50 in the cash box.

So, what is the significance of these two key questions, and why are they so powerful when it comes to incorporating program evaluation into your day-to-day operations? Because program evaluation is, above all, more than using surveys, interviews and focus groups to measure outputs, outcomes and impacts. Program evaluation is a mindset, a manner of thinking in evaluative terms. When you start asking yourself about achievements and predicting what you will see and hear that will help you understand your achievements, you are not just thinking about program evaluation; you are actually doing it.

    At this point you may be wondering, “What good is a mindset when my funders are asking for numbers and touching stories about how we are changing lives? And more numbers?”

    There are two advantages of having and practicing a mindset:

    Without it, program evaluation is pure tedium, and besides, you’re probably doing it wrong and thus wasting your resources.

    With a mindset of evaluative thinking, you are on the right track to the reliable numbers and valid touching stories that your funders want.

    To start practicing right away, ask yourself the two key questions in the following situations:

    at a staff meeting as the season’s work begins.

    at a board meeting when considering which program(s) to cut and which to grow.

    with a funder who shows you an RFP for a potential program which may be a good fit for your organization.

    with your colleagues in social situations while you’re brainstorming ways to change the world for the better.

    by yourself on a weekend when you can’t help wondering why a particular program feels stale and another one animates you whenever you think about it.

    I guarantee that if you begin asking yourself these questions on a regular basis (this means at least a couple of times each week), evaluation practices will naturally follow. Without any additional drudgery on your part, you’ll find yourself doing things like…

debriefing programs with your staff by asking specific questions such as, “What did you see and hear, and what does that tell you about whether we’ve achieved what we wanted to achieve?” This question is even more powerful than the common (and still excellent) question, “What worked, what didn’t work and what should we do differently next time?”

    building a budget for a proposed program that allocates 11 or 12 hours per week of your program director’s time instead of 10 hours, so that he or she has some additional time to think evaluatively.

    designing and distributing quick surveys at your events. There’s a strong possibility that you already do this, and it’s equally likely that your current survey isn’t asking the questions that get at what you really want to know. Once you’ve asked the two key questions about any program, you’ll naturally refer back to those questions and your surveys will become significantly more useful.

    Before you know it and without any sense of resistance or undue burden, you’ll be doing exactly what you need to do to make your funders happy and gather useful information to improve your program.

So, start incorporating program evaluation into your day-to-day operations today. Place the two key questions on your computer desktop so that you’ll see them every day. Then, take a minute to focus on one of your programs and begin your new habit by asking yourself, “What are we trying to achieve with this program?”

    Resources:

    If you would like to participate in an online Google Doc forum to help build the habit of asking yourself the two key questions, go to http://www.maggiemiller.org/, then go to the Links Page, then follow the link to the Google Doc.

    See also:

    Level Best

    Leap of Reason


    The grant proposal: one document – several audiences

No matter how sophisticated your grant seeking process is or your foundation relationships are, you seldom have the chance to ask program officers or foundation board members to tell it like it is. And if they do, how often will the answer be filtered for their own purposes? I asked former foundation CEO and featured author Martin Teitel about the proposal screening process.

    CausePlanet: What would grant seekers find most surprising about how their proposals are handled once submitted?

Martin Teitel: It’s often the case that incoming proposals are moved up the staff hierarchy, from bottom to top. So the people who are most distant from actual decision-making do the greatest amount of screening. Picture the process as funnel-shaped: proposals are rejected, in many cases, at each level as they move along. This fact is one of the reasons writing proposals is so difficult: you have to entice the first readers, so you can stand out from the throng. But the same document then needs to impress a steely-eyed program officer who will push hard against the details. And the proposal might eventually have to wow a foundation board. One document – several distinct audiences. Writers of successful proposals should give themselves great big pats on the back for making it through this thicket. And by the same token, people who worked hard for a long time, only to have their proposal rejected by a form letter, should try not to take it personally, because getting through the proposal mill is a thorny combination of chance and arcane skill.

    You can read the complete interview in our Page to Practice summary feature of “The Ultimate Insider’s Guide to Winning Foundation Grants” by former foundation CEO Martin Teitel this week at CausePlanet. Or, you can learn more about this book and others at www.emersonandchurch.com.

    CausePlanet subscribers: Don’t forget to register for the author interview on Wednesday, August 29 at 11 a.m. CST.

    See also:

    The Foundation: A Great American Secret; How Private Wealth is Changing the World

    Leap of Reason: Managing to Outcomes in an Era of Scarcity

    Level Best: How Small and Grassroots Nonprofits  Can Tackle Evaluation and Talk Results


    Request free copies of “Leap of Reason” for your board and funders

Rarely do I have the opportunity to tell my readers they can request free print copies of books we feature. This is one of those opportunities. I asked Leap of Reason author Mario Morino about his advice for making the case for overhead support in our CausePlanet interview. His answer will give you a glimpse of what the rest of the book delivers.

    CausePlanet: What advice do you have for nonprofit leaders who want to make a case for overhead support so they can engage in more meaningful information gathering to drive relevant outcomes?

    Mario Morino: Great question. I don’t want to sound self-serving, but I would encourage them to write to us at info@leapofreason.org for free print copies of Leap of Reason they can distribute to their boards and key funders. Here are some relevant passages from the book they might want to bookmark and highlight for their key stakeholders:

    Page 2: “The cold reality is that in our present era of unsustainable debts and deficits, our nation simply will not be able to justify huge subsidies for social-sector activities and entities without more assurance that they’re on track to realize results. Public funders—and eventually private funders as well—will migrate away from organizations with stirring stories alone, toward well-managed organizations that can also demonstrate meaningful, lasting impact.”

    Page 41: “The magnitude of the combined hit—greatly reduced funding and increased need—will require organizations to literally reinvent themselves. Incremental responses will be insufficient. I agree wholeheartedly with Dr. Carol Twigg, President and CEO of the National Center for Academic Transformation, who concludes, ‘We will have to produce significantly better outcomes at a declining per-unit cost of producing these outcomes, while demand for our services will be increasing.’”

Page 42: “We need to be much clearer about our aspirations, more intentional in defining our approaches, more rigorous in gauging our progress, more willing to admit mistakes, more capable of quickly adapting and improving—all with an unrelenting focus and passion for improving lives. It’s no longer good enough to make the case that we’re addressing real needs. We need to prove that we’re making a real difference.”

    Email info@leapofreason.org for your free copies of Leap of Reason or learn more by visiting our summary library.

    See also:

    Level Best

    Nonprofit Sustainability


    Look at “what and why” instead of “how”

Leap of Reason is a bold and wise look at a persistent problem in the nonprofit sector by one of our leading philanthropists. Managing to outcomes requires nonprofit leaders to take a candid look at what and why they measure instead of how. No one is left out of the equation in Morino’s analysis. Whether you represent government, business, or nonprofit, you’ll find Morino’s insights deeply provocative. While it’s impossible to predict how dismantled our economy will be in the coming years, we can ensure nonprofits are more durable than ever by making our outcomes indispensable through purposeful and enlightening measurement.

    In our CausePlanet interview, I asked Mario Morino about the set of conditions organizations must possess before they can successfully manage to outcomes. Here’s what he had to say:

    CausePlanet: You explain the real challenge in managing to outcomes is that organizations need a set of prerequisites: an engaged board, leadership with conviction, clarity of purpose and a supportive performance culture. These conditions appear to be best tackled at the top. Have you seen boards and CEOs successfully self-diagnose their level of engagement or conviction?

    Mario Morino: I agree with your premise. The top of the organization must value high performance and lead the way on the changes required to get there. That’s not to say you can’t get an initial spark from elsewhere in the organization. I’ve seen that happen a number of times. But if the top leadership doesn’t help to kindle that spark, leading by its own example, then the fire for performance will die out quickly.

And yes, I have seen boards and CEOs self-diagnose their challenges and make the leap of reason! I’ve seen it up close quite a few times. For example, I saw this at the Lawrence School in Northeast Ohio, where I serve on the board and as an advisor, and some know me as “the parent from hell.” Lou Salza, a brilliant, passionate new headmaster, and a highly committed board chair, Susan Karas, led a fundamental rethink and reinvention. I describe Lou’s role in Lawrence’s transformation in my recent speech, “Relentless: Investing in Leaders Who Stop at Nothing in Pursuit of Greater Social Impact.” What I should have also pointed out was the important role Susan played and what happens when you have this kind of passionate, focused leadership leading the charge.

    I’ve also seen rethinking and reinvention in organizations that did not have an infusion of new leadership, such as:

    Camden Coalition of Healthcare Providers
    Friendship Public Charter School
    Maya Angelou Public Charter School
    Roca
    Saint Luke’s Foundation
    Share Our Strength
    Year Up
    Youth Villages
    The SEED School and others.

    Watch for more highlights of our interview with author and philanthropist, Mario Morino, next week.


    Hire from within for your best evaluation team

    One of your funders wants to see hard data, and another set of stakeholders needs touching stories that aren’t just cherry-picked anecdotes. You know you should evaluate your program. In fact, you genuinely want to know what’s working in your program and what needs to be tweaked. But you have a slightly queasy feeling in your stomach because despite your best intentions, you simply don’t have the time or expertise to conduct extensive program evaluation. Here’s the good news: you may already have the resources to do program evaluation right in your office.

    Take a minute to list all your key staff, volunteers and hands-on-type board members.

    Here’s the next step: See if any of them match any of these descriptions:

    Clear thinker: someone who really “gets” the program.

    Go getter: someone who doesn’t mind face-to-face interaction with perfect strangers and who knows how to just be aggressive enough to engage people without turning them off.

    Accurate and tolerates tedium: a person who doesn’t mind doing slightly monotonous work and does it accurately.

    Sees number patterns: someone who can look at a set of numbers and see patterns.  (You know these people when you see them in action.)

    Sees comment patterns: this person can look at a set of comments and see what they have in common.

    Good writer: a person with writing skills.

    Champion: a person who believes in your program and has the ear of your stakeholders.

    Here are five other requirements for people who help you with evaluation:

    1. They have to be able to commit some amount of time to their work and maintain that commitment.
    2. They have to be willing to put aside their own opinions about the program.
    3. They have to be teachable.
    4. They have to be able to maintain confidentiality.
    5. They have to understand the limits of their evaluation job so they don’t run amok and make outlandish suggestions to your board just because they’re involved in evaluation.

    Now, here’s a simple outline of a soup-to-nuts evaluation process:

    Phase One: Planning for Evaluation (a two-step process)

    Step 1: Develop a Logic Model (a 1-page description of your program’s activities, intended outcomes and how you think they relate to each other)
    Step 2: Develop an Evaluation Plan (a list of the evaluation tasks you’ll do, such as surveys, focus groups, interviews, etc., that will help you understand your progress toward your intended outcomes)

    Phase Two: Doing Evaluation (a six-step process)

    For each evaluation task you do,

    Step 1: Design the survey (for example)
    Step 2: Distribute and collect it
    Step 3: Enter the data
    Step 4: Analyze and synthesize the data
    Step 5: Write up a report
    Step 6: Use the findings

    Here are six hints about who can help with each task:

    1. Clear thinker can help you with your logic model.
    2. Go-getter can help you distribute and collect surveys. With training, he/she can also conduct interviews (if he/she is removed enough from the program to be impartial).
    3. The accurate-and-tedium-tolerant member is just the person you need to do data entry.
    4. People who can see number and comment patterns will LOVE helping you with data analysis.
    5. The writer can help you write your report.
    6. And when it’s time to use your data, you’ll need your champion.

    When you recruit people with these characteristics, keep in mind that people (staff and volunteers) love doing what they’re good at in service of things they believe in. And most people, especially teachable ones, like learning new things. So, while they’ll be doing you a favor, you’re also giving them a great opportunity.

    Here are three caveats.

    1. There’s an additional skill-set which is very specialized: facilitation. If you have a facilitator on your team, lucky you. Put him or her to work running a focus group (if he or she is removed enough from the program to be impartial).
    2. If you have stakeholders who should be involved but don’t have any of the characteristics listed above, it may be a good idea to find an appropriate role for them so they don’t feel left out.
    3. Even with all these great resources, it’s handy to bring in an evaluation expert who can help you anticipate costs and timing and help with designing your evaluation plan and instruments.

    Yes, there’s a lot to think about. But you are already far ahead of where you were 10 minutes ago! Even if you’re not ready to jump into evaluation yet, you can be confident you won’t be diving in alone when it’s time to take the plunge.

    See also:

    Leap of Reason: Managing Outcomes in an Era of Scarcity

    Level Best: How Small and Grassroots Organizations Can Tackle Evaluation and Talk Results



    Raise the bar beyond evaluations; seek alignment

    This week, we posted a terrific article called “Evaluating the executive director” by Jan Masaoka. While I consider myself lucky to have served on boards where the ED review was faithfully executed every year, I know many colleagues who have experienced otherwise. Unfortunately, many executive directors go “un-reviewed” for long periods of time, according to Masaoka. She also shares that the most important reason to conduct a review of your executive director is to get on the same page with the board.

    This month’s Page to Practice™ book summary called The Three Laws of Performance by Zaffron and Logan speaks to the same point. Performance is heightened when a team shares the same view of their circumstances. The inherent challenge is that our life experiences alter how things occur to us individually. Additionally, our performance has a tendency to fulfill what the authors call a personal “default future” unless we make the effort to calibrate our perceptions with coworkers and rewrite a future everyone wants.

    For example, early in the book, Zaffron and Logan explain how the newly appointed CEO, Brad Mills, “transforms an impossible situation” at Lonmin (a publicly traded company in South Africa) by changing how the situation “occurred” to thousands of his employees. The authors explain that, “the Three Laws of Performance is the relationship between how a situation occurs and the actions that are naturally correlated. By ‘occur’ we don’t merely mean how a situation is perceived. We also include the significance and meaningfulness that comes with the experience of the situation. The breakthrough comes from using these ideas to shift how situations occur, allowing for powerful new actions to naturally emerge. In real life situations, people can’t try to remember what actions to take. Life is like a tennis ball coming over a net at 100 miles per hour. For a professional tennis player, the movement of the ball occurs as ‘hittable.’ For most people, it would occur as a blur. Shifting how situations occur for people is akin to having a tennis ball that used to occur as a blur occur as hittable.”

    For those of you who sit on a nonprofit board, consider the importance of evaluating your ED to gain mutual alignment. By doing so, you’ll also be ensuring that your ED’s performance will circumvent a “default future” and follow the future you rewrite together.


    How strong is your case for support?


    Authors Marcia Festen and Marianne Philbin appropriately title their book “Level Best” because the phrase means to make your very best effort, and nonprofits can aspire to this level when their performance is backed by solid program evaluation. “Solid evaluation is the first step toward increasing effectiveness and, in turn, successfully marketing and documenting your work,” say the authors.

    Level Best demystifies the evaluation process and offers a practical five-step framework that enables more confident decision-making, sound planning and increased credibility (to the community and funders). We asked Festen and Philbin why evaluating impact seems like a daunting prospect to nonprofits and we also asked where to start. Here’s what they had to say:


    CausePlanet: Your book does an excellent job of clarifying how to go about evaluating nonprofit programs. What makes evaluation such a tricky proposition to begin with?

    Festen and Philbin: Nonprofits often begin the evaluation process prompted by specific pressures from board members or funders who dream of or demand answers to questions that are way beyond the scope of what an evaluation can reveal. Evaluation is not research. You can evaluate whether or not a program purporting to teach teenagers about safe sex provided useful materials in accessible language or attracted the intended audience, but you’re never going to know what the program participants actually did on prom night. And even if you could, it would be another leap to be able to definitively claim that your program, as opposed to a thousand other factors, was the key influence that shaped their behavior. As our fellow consultant Susie Pratt has said, “Evaluation at best is about providing evidence; it is not about providing proof.”

    CausePlanet: The “flow of nonprofit work and the nature of evaluation” is a terrific way to look at the three components of a nonprofit’s work that can be evaluated. Is there one that stands out as an easier place to start?

    Festen and Philbin: What you choose to evaluate depends on what you want to learn. You may want to look at what you do and how to do it better, or what happened as a result of your work, or both. Generally, the fundamental pattern of how nonprofits function is: there is work, there are results, and, over time, there is impact. At any given time, you may want to evaluate one or more of these three dimensions. Evaluating true “impact” — that is, the cumulative influence of multiple outcomes over time — tends to be beyond the scope of any single evaluation. So in that regard, in answer to your question, evaluating the work you do (your process) or its results (your immediate outcomes) is easier than evaluating impact, which is a long-haul proposition, and can be pretty subjective.

    Learn more about Festen and Philbin’s book or our Page to Practice book summary of Level Best.

