The Insider Threat Program Maturity Framework, released by the National Insider Threat Task Force (NITTF) earlier this month, is designed to enhance the 2012 National Insider Threat Policy and Minimum Standards. It succeeds in some respects, but leaves important gaps elsewhere.
The framework’s stated objectives are to enable U.S. government departments and agencies “to increase the effectiveness of program functionality, garner greater benefit from Insider Threat Program (InTP) resources, procedures, and processes, and tightly integrate InTP procedures and objectives with their distinct missions and challenges.”
The 14-page document, which is segmented into 19 ‘maturity elements’ (MEs) organized according to the Minimum Standards, reveals few specific tactics that could be adopted to meet the stated objectives. Moreover, the framework does not:
- Provide different levels of maturity, unlike most maturity models.
- Identify quantitative measures to use when gauging whether a program has met the aspirations stated in each ME.
- Show contrasts between lower and higher levels of maturity.
- Enable departments and agencies to improve programs. (Such enablement would include specific, measurable and realistic recommendations — perhaps in the form of a roadmap — and not just high-level direction.)
That said, there are three themes running through the 19 MEs that are worth exploring, because they showcase the U.S. government’s evolution in thinking about the insider threat challenge. The themes are:
1. Expansion of User Activity Monitoring (UAM): The framework explicitly recognizes that monitoring unclassified systems — in addition to classified systems — is helpful for detecting anomalies and understanding context. This expansion marks a significant change from the 2012 Minimum Standards, which specified monitoring behaviors only on classified networks.
The NITTF recommends, at a minimum, that each department or agency “assess for inclusion in UAM” all classified and unclassified systems — meaning that a department or agency may decide not to implement UAM on certain unclassified systems because of, for example, budgetary constraints. This assessment aligns with strategic initiatives at the Department of Defense and other departments that focus on risk-based approaches to insider threat and security.
Nonetheless, the NITTF considers that a program is fully mature when it: “Establishes a user activity monitoring (UAM) capability on all USG end-points/devices and government-owned IT resources connected to USG computer networks accessible by cleared personnel.” (ME 11) Thus, the NITTF may not consider prioritization as mature as full coverage, even though prioritization is the smarter approach to take when resources are constrained.
2. Dynamic, Changing Nature of the Threat: The NITTF makes it clear that tomorrow’s threats will be different from today’s. InTPs should always be adjusting their programs to: adapt to changing policies, organizational structures, architectures, etc. (ME 3); maintain situational awareness of new threats and vulnerabilities (ME 4); and continuously train personnel on updates in the behavioral sciences and in analytical methodologies (ME 6).
This theme implies that the programs themselves need to be flexible, holistic and dynamic. Flexible programs require well-documented, tested and repeatable processes (discussed in MEs 8-10) as well as standardized approaches, such as risk scoring, to consistently identify and prioritize threats (ME 16).
However, continuously updating programs is difficult, and the framework does not specify how best to implement and manage such changes. ME 2 discusses the importance of employing metrics to gauge progress (and therefore to adjust when, for example, the metrics become outdated), but leaves it to the individual programs to devise their own metrics and measurement approaches.
3. Adoption of Advanced Analytics Tools: MEs 14 and 15 recommend that InTPs adopt data aggregation and analytics tools and employ behavioral science methodologies and expertise in their InTP activities. The NITTF thus explicitly recognizes that insider threat is ‘human-centric’ and, by extension, that behavioral science is a critical competency. ME 16 recommends using tools to generate risk scores, establish baselines and identify anomalous behaviors. ME 18 recommends using case management to track and respond to concerning behaviors.
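To make the ME 16 concepts concrete: the framework does not prescribe any particular algorithm, but a minimal sketch of baselining and risk scoring — here using a simple z-score against a user’s historical activity, an illustrative assumption rather than anything the NITTF specifies — looks like this:

```python
import statistics

def risk_score(recent_activity: list, baseline: list) -> float:
    """Score how far recent user activity deviates from a historical
    baseline, using a simple z-score. Higher scores = more anomalous.
    This is an illustrative sketch, not a NITTF-specified method."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return 0.0  # no variation in baseline; cannot score deviation
    return abs(statistics.mean(recent_activity) - mean) / stdev

# Hypothetical example: baseline of daily file-access counts for one
# user versus a sudden spike in access volume.
baseline = [12, 15, 11, 14, 13, 12, 16]
recent = [45, 50, 48]
score = risk_score(recent, baseline)  # large score flags an anomaly
```

In practice an InTP tool would aggregate many such indicators (and non-technical ones, per the behavioral-science guidance in MEs 14-15) into a composite score, but the baseline-then-deviation pattern is the core idea ME 16 gestures at.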
This section of the framework is a step up from the corresponding section in the Minimum Standards, which focused mostly on data integration and centralized ‘hub’ responses rather than on methodologies and analytics approaches.
Nothing in the framework document should be new to long-time insider threat practitioners, although it does provide a solid summary of commonly held beliefs about the importance of data access and integration, stakeholder involvement, risk management processes, and training and awareness.
The challenging questions — such as how to measure the effectiveness of insider threat programs, how to implement them in a systematic way, how to move from one level of maturity to the next, and how to do so with help and guidance from the NITTF and other agencies — are not addressed. Six years after the Minimum Standards were released, it is time for the NITTF to provide specific answers to these questions.
# # #
Tom Read is Vice President of User Behavior Analytics at Haystax Technology, a Fishtech Group company.
NOTE: For more information on Haystax’s ‘whole-person’ approach to behavioral analytics, download our in-depth report, To Catch an IP Thief.