Military Assessments: A Cursory Overview

Michael Shepard
9 min read · Feb 16, 2021

In early 2019, I wrote about the value of assessing and reviewing in the context of journaling and, in general, just living your day-to-day life. The main point of that article was that the Bullet Journal system — or any journaling system, for that matter — only works when it involves a consistent and thorough assessment and review process. For the Bullet Journal, as its creator Ryder Carroll notes, “Migrating content is a cornerstone of the Bullet Journal. The purpose of Migrating is to distill the things that are truly worth the effort, so we can become aware of our own patterns and habits, and to separate the signal from the noise.”

As I’ve mentioned before, I’m a planner in the Army. I use the planning tools and methodologies created by and for the Army and the Department of Defense to do my job. As a planner, my role is to focus my critical and creative thinking toward systematically understanding a situation, developing a desired outcome, hammering out a path toward that outcome through a maze of tangible and conceptual obstacles, and conveying that information clearly and succinctly to a given audience.

Eisenhower famously said, "Plans are worthless, but planning is everything." Plans are worthless because, in the end, the forces working against us, whether an actual enemy in military operations, other cars on the highway during a morning commute, or something as seemingly trivial as which way the wind blows, affect our environment in ways and from directions beyond our control. I wrote more about the value of the act of planning in that earlier article; here and now, I want to focus briefly on the value of assessing.

This article focuses on assessments made by organizations, using the Army as a proxy for any given organization. While the elements that address staff or organizational decisions are less directly applicable to individuals, readers can adapt this article to fit individual assessment processes as well.

What are Assessments?

Before we look at why assessing is important, it's critical to note what assessing is and what it's not. For those involved in military planning, note that this treatment sets aside, as ancillary, the intersection of assessments and indicators with intelligence preparation of the operational environment, as well as the relationships indicators have with elements of joint or Army planning, such as their links to tasks, effects, objectives, and end state.

U.S. Army Publishing Directorate, Field Manual 6–0, Commander and Staff Organization and Operations

Military plans, much like all plans for all purposes, are predicated on facts and assumptions about the environment within which we operate, what we have to get the job done, and things that are working against us in completing our mission. This information serves as the basis for developing how (the Ways) we will use our resources (the Means) to accomplish our objectives (the Ends) and is associated with indicators, which are the items to be assessed.

Let me be pedantic and provide a DOD Dictionary definition first, if only to show that assessments can seem more difficult at first glance than they really are.

Assessment
1. A continuous process that measures the overall effectiveness of employing capabilities during military operations.
2. Determination of the progress toward accomplishing a task, creating a condition, or achieving an objective.
3. Analysis of the security, effectiveness, and potential of an existing or planned intelligence activity.
4. Judgment of the motives, qualifications, and characteristics of present or prospective employees or “agents.”

Despite these definitions, assessments are NOT some esoteric arcane knowledge that only grand wizards know and use effectively. Instead, assessments are simple, straightforward analytical actions continuously performed to help determine if we're doing the right things in the right way. They help organizations determine where they are, what is happening and why, what those events imply, and what they should do next.

Below, the Army's Operations Process reflects when and where assessments occur. Hint: it's constant.

U.S. Army Publishing Directorate, Army Doctrine Reference Publication 5–0, The Operations Process

So what do assessments comprise, and how are they developed?

The Components of Assessments

As can be inferred from Eisenhower's famous quote, a plan's worth degrades during operations because it can never account for the nearly infinite variables that affect an operation. To that end, assessments are critical to all operations: they ensure that organizations remain adaptive and flexible in changing environments while continuing to pursue their stated objectives.

Indicators are the central elements of assessments. Here, again, definitions make more complex what is simple. An indicator is defined as “a specific piece of information that infers the condition, state, or existence of something, and provides a reliable means to ascertain performance or effectiveness.” In short, an indicator is information we need to know and evaluate to determine progress. While there are a lot of ways to depict indicators in plans, I prefer a simple, short, direct narrative description of what is to be assessed.

When I develop indicators, I pair each with a Metric and a Target, terms I borrowed from the civilian business world and its Key Performance Indicators (KPIs) system of assessments. Though the terms differ, they conceptually align with DOD doctrine and its use of "units of measurement," "scale," and "upper and lower bounds." Each indicator has an Information Category, shown below, which captures whether you're looking for qualitative or quantitative information and whether its data and corresponding analysis are objective or subjective. In my plans, a Metric is the result of determining the indicator's ideal measurement based on this Information Category. For example, how many cups of water a marathon water station hands out is an objective, quantitative indicator with a Metric of # of cups handed out (and perhaps also # or % of cups remaining) to inform logistical support efforts.

Targets are simply the desired outcome for each indicator. Sometimes it makes sense to have a Target… sometimes it doesn't. This depends primarily on the indicator being assessed. In the water station example, it is more useful for the station to focus on whether it is handing water out (a yes/no Metric with a Target of yes) than to try to hand out a specific number of cups, particularly since the attendants cannot control who among the runners takes or refuses water.

U.S. Department of Defense, Joint Publication 5–0, Joint Planning

Note: For those more steeped in data science, doctrine also covers Information Types, including Nominal, Ordinal, Interval, and Ratio. These are not covered in this article.
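To make this structure concrete, here is a minimal sketch in Python of how an indicator, its Metric, its Information Category, and an optional Target might be recorded. The field names and types are my own illustration for this article, not doctrine or any official tool.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Data(Enum):
    QUANTITATIVE = "quantitative"
    QUALITATIVE = "qualitative"

class Analysis(Enum):
    OBJECTIVE = "objective"
    SUBJECTIVE = "subjective"

@dataclass
class Indicator:
    """One assessment indicator paired with its Metric and (optional) Target."""
    description: str              # short narrative of what is to be assessed
    metric: str                   # how the indicator is measured
    data: Data                    # quantitative vs. qualitative information
    analysis: Analysis            # objective vs. subjective analysis
    target: Optional[str] = None  # desired outcome, if one makes sense

# The water station example from above:
cups_handed_out = Indicator(
    description="Water distributed to runners at the marathon water station",
    metric="# of cups handed out",
    data=Data.QUANTITATIVE,
    analysis=Analysis.OBJECTIVE,
    target=None,  # runners can refuse water, so a fixed count target isn't useful
)
```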

For the purposes of this article, there are two overarching categories of indicators: Measures of Performance (MOPs) and Measures of Effectiveness (MOEs). I'll forgo the dictionary definitions and simply say that Measures of Performance are indicators of our actions. The driving question here is, "are we doing X?" Continuing with the water station example, some Measures of Performance include the distribution of water to runners and the resupply of cups and water to the station when supplies are low.

A Measure of Effectiveness, on the other hand, is an indicator that describes the effects of our actions, with the driving question being, "by doing X, are we achieving the desired effect (or the Target)?" Some example MOEs include the number of dehydration-related medical responses after our water station and the constant availability of both water and cups at the station. Note that, for ease of explanation, my indicators were generally quantitative and objective. In reality, different operations and tasks require different indicators, something that becomes easier to ascertain and develop with experience and with the help of subject matter experts during planning and assessment criteria development.
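Continuing the same hypothetical sketch, the MOP/MOE split amounts to tagging each indicator with the driving question it answers. The classification below mirrors the water station examples above; again, the structure is illustrative, not doctrinal.

```python
from enum import Enum

class IndicatorType(Enum):
    MOP = "Are we doing X?"                    # Measure of Performance
    MOE = "Is doing X achieving the effect?"   # Measure of Effectiveness

# The water station indicators from above, classified by type:
indicators = [
    ("Water is being distributed to runners", IndicatorType.MOP),
    ("Cups and water are resupplied when stocks run low", IndicatorType.MOP),
    ("# of dehydration-related medical responses after the station", IndicatorType.MOE),
    ("Water and cups are constantly available at the station", IndicatorType.MOE),
]

for description, kind in indicators:
    print(f"{kind.name} ({kind.value}): {description}")
```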

The final doctrinal element of assessments to describe is the requirement that all assessment indicators be Relevant, Observable, Responsive, and Resourced (a simple screening sketch follows the list below). Those familiar may liken these requirements to those of SMART Objectives: Specific, Measurable, Attainable, Relevant, and Time-Based.

  • Relevance is the validity of an indicator’s relationship to the desired effect, objective, or end state. Although this article does not address the planning elements to which this requirement relates, it is sufficient to say that an indicator’s relevance correlates directly with its utility in determining the organization’s progress toward its goals.
  • Observable indicators are those that can be detected and analyzed. Sometimes relevant indicators cannot be observed, whether because of resource shortfalls or because the information is simply impossible to obtain, as when an organization wants to track what people do with its publication after they download it.
  • Responsive indicators are those that reflect change in a timely manner. A relevant indicator may take far too long to change to be useful for guiding organizational decisions, or it may change too quickly for each change to be captured, leaving decisions supported by erroneous data.
  • Resourced indicators are those that an organization has the time and resources to observe. For example, if a water station has two attendants, both individuals can serve as resources to monitor the distribution and stock levels at that station. They cannot, however, monitor any other stations without having other resources, such as a radio, car, or some other means of gathering that data.
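Here is the simple screening sketch promised above: a hypothetical yes/no check against the four requirements, under the assumption that an indicator failing any one of them should be reworked or discarded.

```python
from dataclasses import dataclass

@dataclass
class IndicatorScreen:
    """A hypothetical yes/no screen against the four doctrinal requirements."""
    relevant: bool    # tied to the desired effect, objective, or end state
    observable: bool  # can actually be detected and analyzed
    responsive: bool  # reflects change quickly enough to guide decisions
    resourced: bool   # the organization can afford to collect it

    def passes(self) -> bool:
        return all((self.relevant, self.observable,
                    self.responsive, self.resourced))

# Screening the "what do readers do with our publication after downloading
# it?" indicator from the Observable bullet above:
downstream_use = IndicatorScreen(relevant=True, observable=False,
                                 responsive=False, resourced=False)
print(downstream_use.passes())  # False -> rework or discard this indicator
```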

Tips on Putting it All Together

  • Metrics cannot be developed appropriately without an understanding of the organization’s desired end state or goals, its current situation relative to those goals, and the tasks required to get the organization from where it is to where it wants to be. For military planners, this information is developed during Design, Mission Analysis, and COA Development.
  • For each task, ensure its indicators are relevant, observable, responsive, and resourced, as well as coupled with appropriate metrics and targets.
  • Recognize the role indicators and their associated metrics play in driving organizational decisions (see the sketch after this list). Using the water station example, assessing whether a table is handing out water (a yes/no metric with a yes target) does not inform decisions to resupply that table; assessments of the # or % of cups and the # of liters of water remaining do.
  • Pay attention to an indicator’s data collection and resource requirements. If an organization wants to count how many downloads its new report received from its website, that’s likely a low demand on the organization. If it also wants to track how each downloading individual or organization uses the report, the resource requirements for that data collection effort become dramatically larger (and likely impossible to meet).
  • Understand that certain data collection techniques will yield incomplete data sets, and acknowledge the organizational risk inherent in incomplete and potentially biased data. For example, if a theme park relies on attendees to fill out a questionnaire, the park has data from only a small percentage of its patrons, many of whom likely have a strong reason, positive or negative, for providing that feedback. Decision making based on this data should account for that bias.
  • Some indicators can provide input to multiple Measures of Performance or Effectiveness. While these should develop organically rather than being forced, they can permit a discerning staff officer/assessment criteria developer to discover and implement efficiencies in data collection by reducing the number of disparate collection requirements for the organization.
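To illustrate the decision-driving tip above, here is a toy resupply rule for the water station. The threshold and counts are invented for the example; the point is only that a quantity-remaining metric can trigger a decision where a yes/no metric cannot.

```python
# A toy resupply-decision rule for the water station, assuming (hypothetically)
# that the station tracks its remaining stock as simple counts.
RESUPPLY_THRESHOLD = 0.20  # request resupply below 20% of starting stock

def needs_resupply(remaining: int, starting: int) -> bool:
    """Return True when remaining stock falls below the resupply threshold."""
    return remaining / starting < RESUPPLY_THRESHOLD

# A yes/no "are we handing out water?" metric can never trigger this decision;
# a quantity-remaining metric can:
print(needs_resupply(remaining=150, starting=1000))  # True -> request resupply
print(needs_resupply(remaining=600, starting=1000))  # False -> keep serving
```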

Parting Thoughts

In this article, I provided a cursory review of assessments and their components from a doctrinal perspective, noting where I chose not to go into doctrinal detail. Hopefully this article gave the layperson a sense of what the critical elements of assessments are, how they’re used, and why they matter, as well as why assessments themselves are important.

For further reading, I would refer those interested to joint and Army doctrine, if applicable, as well as to Measure What Matters by John Doerr and Key Performance Indicators by Bernard Marr. For the practitioner, though, developing effective assessments comes with practice and… well… a meta-assessment of your indicators. After developing criteria and seeing them in action during the assessment of whatever project, mission, or task your organization is performing, do not neglect to keep tabs on the assessment criteria themselves: note whether they accurately captured useful information, and record improvements for the next round of criteria development.


Michael Shepard

Army strategist, writing infrequently about random topics.