No long-term business endeavor can be sustained without regular, structured assessment. Emergency response is no different; in fact, accurate assessment may matter even more there. Emergency response does not generate income, nor is generating income its goal. It therefore differs from private business in that no simple bottom-line fiscal evaluation exists.
A private business can quite possibly turn a profit without running at peak efficiency; the goal is often to make money, not to provide the absolute best product. An emergency response organization's goal is to do its best. It is evaluated not only on what it does, but also on its capability to provide expertise and care that may never be needed. How do you assess an organization's ability to do something it does not do regularly, if at all? How do you assess its ability to do something it has never done?
Capability Assessment for Readiness and Emergency Response
The answer to those questions is in what the Federal Emergency Management Agency calls Capability Assessment for Readiness or CAR. While it may be called by different names, the emergency response community uses capability assessments as the standard for determining the readiness of responders.
Most people will say that our responders are doing their best. This is true, in the sense that they are doing their best within the systems they are given to operate in. Would they do better if they used systems that were more accurately assessed, modernized regularly, and equipped with the best that current technology has to offer to make them safer and more efficient? What if they were striving to reach higher goals on a consistent basis?
Why would responders not be aiming to reach higher goals? Perhaps they are being told, “You’re doing fine. You passed your assessment.”
The issue is most certainly not with the effort of our responders, but rather the manner in which they are assessed. Accurate assessments would change the objectives and give our responders the tools they need to continue to improve and be as safe as they deserve to be.
Our responders are put in dangerous situations on a consistent basis. Unlike in business, they are required to make life or death decisions in an instant. They are well certified and trained, and use the equipment they have to the best of their abilities. But they are still injured and killed far too often. Police, Fire, EMS, Hazardous Materials Technicians and many others accept the risks associated with their jobs as a part of daily life.
Whether they are being given the best chance to do their jobs as safely and efficiently as possible depends on our ability to provide adequate and accurate assessments.
The Problems with Internal Emergency Response Assessments
Most public response agencies and jurisdictions do internal assessments. They use the tools and standards set out by agencies such as FEMA, the International Association of Emergency Managers (IAEM) and the National Fire Protection Association (NFPA). While there are positive aspects to these assessments, this kind of assessment also has many faults. First and foremost, internal assessments are often nothing more than a regulatory compliance check.
Meeting regulations is a good thing; it sets a minimum. But responders can do so much more. Best practices commonly far exceed minimum regulatory standards. Standards are developed over time, so they lag behind current technology, and they are usually median-level objectives. At best, meeting standards puts you in the middle of the pack.
Internal assessments also have another common aspect that can limit the effectiveness of the team in the long run. They very often do nothing but measure the ability to meet internal standards. If you continue to measure yourself against internal standards only, you will fall behind the curve. You will be meeting outdated goals and objectives.
The most troubling aspect of organizational assessments, however, is that they tend to become a chance for middle management to prove its capability to senior management. The objective should be to evaluate capabilities accurately by identifying shortcomings and creating corrective actions. Instead, it is not at all uncommon for managers to prepare responders to excel in assessment activities: scenarios are practiced beforehand, and personnel are scheduled so that more responders than normal are available. The goal is positive results. The results are false positives.
What ensues is an inaccurate assessment that overstates the response capabilities. Gaps that exist will not be filled. Personnel will not get training they need. They will not get tools or equipment they need. Senior management is happy that their responders are so capable. Middle management will be happy they are such capable leaders.
The Problems with Third Party Emergency Response Assessments
Outside, or third-party, assessments can alleviate some of those issues; however, they come with issues of their own:
- Outside assessors lack the knowledge of the organization that internal assessors can have.
- Outside assessments can also stress meeting minimum statutory or regulatory standards rather than best practices.
- Outside assessors often speak only with management-level personnel.
- Outside assessors often do not get information from ground-level responders who have first-hand knowledge of the organization's capabilities.
Many consulting firms that perform assessments lack the focus needed to serve specific agencies well. They assess a wide range of disciplines using templates that may not measure the true needs and capabilities of each. Fire response differs from medical response, which differs from public safety response and other types of emergency response.
So How Do You Assess Emergency Response Preparedness?
In his paper, The Problem of Measuring Emergency Preparedness: The Need for Assessing "Response Reliability" as Part of Homeland Security Planning, Brian A. Jackson makes the point that to truly assess response capabilities, we should consider not only what a response agency is capable of, but also how reliably it can respond in a given manner. An agency might be able to meet specific response criteria, but there is a difference between meeting them 30 percent of the time and 80 percent of the time. Maybe the agency can meet them only on daytime shifts, Monday through Friday. There can be many reasons an organization can meet specific criteria only at certain times; staffing differences, traffic, and weather are just a few of the variables.
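Jackson's reliability idea can be made concrete with a small sketch. Everything below is invented for illustration: the incident records, the shift split, and the 8-minute response criterion are assumptions, not data from any real agency or from Jackson's paper.

```python
# Hypothetical sketch: estimating "response reliability" from incident logs.
# Each record is (weekday?, daytime?, response_time_minutes) -- all invented.
incidents = [
    (True,  True,  6.5),
    (True,  True,  7.0),
    (True,  False, 11.2),
    (False, True,  9.8),
    (False, False, 12.5),
    (True,  True,  5.9),
    (False, True,  8.1),
]

CRITERION_MINUTES = 8.0  # assumed target response time

def reliability(records):
    """Fraction of incidents that met the response-time criterion."""
    if not records:
        return 0.0
    met = sum(1 for _, _, t in records if t <= CRITERION_MINUTES)
    return met / len(records)

overall = reliability(incidents)
weekday_day = reliability([r for r in incidents if r[0] and r[1]])

print(f"Overall reliability: {overall:.0%}")
print(f"Weekday daytime reliability: {weekday_day:.0%}")
```

With these invented numbers, the agency meets the criterion on every weekday daytime call but on fewer than half of all calls overall, which is exactly the gap between "capable" and "reliable" that Jackson highlights.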
So what is the answer to the dilemma of emergency response capability assessment?
One thing most agree on is that a comprehensive cycle of drills, training, and exercises is required. Since we don't know when and where emergencies will happen, it is difficult to observe actual responses during an assessment. Large organizations that respond to a high number of incidents can use actual incidents; for most agencies, an analysis of recent incidents can be included if accurate incident reports are available. Smaller organizations need to rely on drills and exercises, but the results of these should still be critically evaluated.
Accurate Emergency Response Capabilities Assessments
There are some key elements to obtaining an accurate capability assessment. The main consideration in deciding how to perform an assessment is to start by asking why you are performing it.
The reason is to determine accurately the extent to which your organization can respond to the situations it might face.
There are many levels of situations to consider, each requiring a separate approach: how does the organization operate in routine emergencies, what are its capabilities during major emergencies, and how will it perform in a worst-case scenario?
These measurements must be compared to minimum standards, organizational goals, regulatory requirements, best practices, and the newest technologies. Given all this information for comparison, an organization can decide on a course of action to reach where it wants to fall on the scale from meeting minimum standards to the best that modern technology can afford.
Assessments should be highly focused. There are so many different parameters specific to each discipline and each jurisdiction. Individual attention to detail is important. All levels of personnel should be included.
Too many times I have seen mid-level managers and above involved in an assessment of what responders can do, without the representation of a single responder! Several responders should have the opportunity to give honest, anonymous information about what they can and cannot do. Assessment reports tend to be very dry and very predictable; it is sometimes hard to believe that someone paid for that opinion. Thank you, Mr. Obvious.
Taking the SMART Approach to Emergency Response Assessments
Assessments should be SMART (Simple, Measurable, Attainable, Realistic and Timely). This is a slight modification of the traditional SMART objectives: I changed the first component.
Rather than have "S" represent "Specific," I prefer to fold specificity into the other components. More important than being specific, assessment results should be Simple. Simple here means focused, and focus is what makes results meaningful.
The M component is Measurable. Specific parameters should be reported in measurable formats. (I told you that specific was incorporated.)
The A is Attainable. In the assessment world, attainable has a dual meaning. First, measure capabilities against specific metrics that can actually be attained; the Mayberry FD will not be able to attain all the same capabilities that the FDNY will. Second, make the objectives gradual, so they can be attained on a consistent basis, building success on success.
The R is for Realistic. The assessment should reflect what the organization can do on a consistent basis. Set goals that the organization has the budget for.
Finally, T is for Timely. Set deadlines for improvement on specific objectives. Not a single deadline, but many. If you are going to drive from New York to San Francisco, you don't set out with just one final destination; you route your trip through cities along the way and set a time to reach each one. If you leave on Monday and wish to arrive by Saturday, you might want to be in St. Louis by Wednesday and Denver by Friday. Short-term goals let morale build as each goal is reached and celebrated. Success is built on success.
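The milestone idea above can be sketched as a small deadline tracker. The objectives and dates are invented examples, not recommendations for any particular agency.

```python
# Hypothetical sketch: tracking graduated assessment objectives against
# intermediate deadlines, like waypoints on the New York-to-San Francisco
# drive. All objectives and dates are invented.
from datetime import date

milestones = [
    ("Complete hazmat refresher training for all shifts", date(2024, 3, 31)),
    ("Meet response-time criterion 70% of the time",      date(2024, 6, 30)),
    ("Meet response-time criterion 85% of the time",      date(2024, 12, 31)),
]

def overdue(milestones, today):
    """Return the objectives whose deadline has already passed."""
    return [objective for objective, due in milestones if due < today]

# Checking progress mid-year shows which waypoints have come and gone.
for objective in overdue(milestones, date(2024, 7, 1)):
    print("Deadline passed:", objective)
```

Reviewing such a list at each waypoint, rather than once at the end, is what lets success build on success: each deadline reached is a goal that can be celebrated before the next one comes due.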
Any emergency response assessment should result in tools for meaningful change: quantifiable results that meet the SMART test, and a roadmap to graduated improvement.