It’s zero hour for zero trust.

While the concept has been around for years, the clock is now ticking for the federal government to implement it. Championed as a way to securely deliver mission-critical data at the speed of battle, zero trust is now a mandate: a Biden administration memorandum requires federal agencies to achieve specific zero trust security goals by the end of FY 2024, and the Department of Defense is working toward implementing its own zero trust cybersecurity framework by FY 2027.

The good news? In the face of escalating cyber threats, data shows 72% of government agencies are already deploying zero trust security initiatives. Yet one roadblock remains that could carry dire national security implications: the lack of data tagging standardization.

Current approach to data tagging invites risk

Data lives in many forms: structured and unstructured, across differing file types and classification levels. Today, each agency takes its own approach to data discovery, building a pipeline to classify data and assign tags, the metadata attached to data for organization and access control. Many agencies still rely on manual tagging, which is cumbersome, while others are moving toward AI and ML software that allows for adaptive data tagging.
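To make the idea concrete, here is a minimal sketch in Python of one step in such a pipeline: a document is classified and a metadata tag is attached. The field names and the rule-based classifier are illustrative assumptions, not any agency's actual schema; production pipelines would substitute trained AI/ML models for the toy rule.

```python
# Minimal sketch of one tagging-pipeline step: classify a document and
# attach a metadata tag. Field names are hypothetical, not a real schema.
from dataclasses import dataclass, field


@dataclass
class DataTag:
    """Metadata assigned to a data object for organization and access control."""
    sensitivity: str                              # e.g., "UNCLASSIFIED", "SECRET"
    origin: str                                   # producing agency or system
    categories: list[str] = field(default_factory=list)


def tag_document(text: str, origin: str) -> DataTag:
    """Toy rule-based classifier; real pipelines use trained AI/ML models."""
    sensitivity = "SECRET" if "//SECRET" in text else "UNCLASSIFIED"
    return DataTag(sensitivity=sensitivity, origin=origin)


tag = tag_document("CLASSIFICATION: //SECRET ...", origin="AGENCY-A")
print(tag)  # DataTag(sensitivity='SECRET', origin='AGENCY-A', categories=[])
```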

While members of the intelligence community have made some movement toward a standard enterprise data header tagging method, the complexity of collecting differing data types, paired with siloed processes, continues to produce inefficient and insecure data sharing. Across agencies, sensitivity tags appear in different fields and formats, making them difficult to classify and hard to enforce policy against when data crosses agency lines. The absence of a consistent approach to tagging and classifying data, especially sensitive data, is a significant obstacle to zero trust models.
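The interoperability cost is easy to see in a hypothetical example: two invented agency record formats express the same sensitivity marking in different fields and encodings, so every cross-agency exchange needs its own normalization shim. Neither format below reflects a real agency's schema.

```python
# Hypothetical examples of the same sensitivity marking expressed in two
# agency-specific record formats; neither reflects a real agency's schema.
record_a = {"classification": "S", "rel_to": ["USA", "GBR"]}          # Agency A
record_b = {"meta": {"SensitivityLevel": "SECRET//REL TO USA, GBR"}}  # Agency B


def sensitivity(record: dict) -> str:
    """Per-pair normalization shim that every cross-agency exchange
    currently has to reinvent; a shared standard would make it unnecessary."""
    if "classification" in record:                # Agency A's layout
        return {"U": "UNCLASSIFIED", "S": "SECRET"}[record["classification"]]
    if "meta" in record:                          # Agency B's layout
        return record["meta"]["SensitivityLevel"].split("//")[0]
    raise ValueError("unknown tagging format; policy cannot be enforced")


assert sensitivity(record_a) == sensitivity(record_b) == "SECRET"
```

With dozens of agencies, this pairwise translation work multiplies quickly; a single shared format would replace it outright.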

For example, this lack of standardization makes it challenging for the DOD to manage data rights in mission partner interactions with other Five Eyes nations. Establishing set marking methods for sensitivity tags at a minimum, so agencies know where to look and how to proceed, would reduce risk and advance data-centric decision making.

An all-government approach to data tagging

Data is the foundation of U.S. intelligence. Amid ever-increasing numbers of communication channels, devices, and open-source intelligence, the data deluge presents common opportunities and risks across the federal government. Chief among those risks: data is a valuable target for nation-state threat actors seeking to steal it or disrupt access to it.

In the face of evolving cyber threats, the legacy, siloed approach to data tagging is a liability. If the public sector tagged sensitive information more consistently, automated encryption mechanisms could be deployed to reduce risk. The outcome would be a reliable, risk-based encryption approach that targets the most sensitive data in the enterprise, not all of it.
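As a rough sketch of what that could look like, the Python below gates encryption on a consistently placed sensitivity tag, using the open-source `cryptography` package's Fernet primitive as a stand-in for an enterprise encryption service. The sensitivity rankings, threshold, and record fields are assumptions for illustration.

```python
# Sketch of tag-driven, risk-based encryption: encrypt only records whose
# (consistently placed) sensitivity tag meets a threshold.
from cryptography.fernet import Fernet

RANK = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}
key = Fernet.generate_key()   # in practice, issued by a managed key service
fernet = Fernet(key)


def store(record: dict, threshold: str = "SECRET") -> dict:
    """Encrypt the payload only when the tag meets the risk threshold."""
    record["encrypted"] = RANK[record["sensitivity"]] >= RANK[threshold]
    if record["encrypted"]:
        record["payload"] = fernet.encrypt(record["payload"])
    return record


print(store({"sensitivity": "SECRET", "payload": b"mission data"})["encrypted"])
```

Because the policy keys off a single, predictable tag, the same code works on any agency's data; today, the lookup logic would have to change per source.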

Defense agencies must work together to develop a unified data tagging standard that ensures access for those who need the data while protecting it from those who don't. A data-centric security approach is critical to accelerating mission outcomes, and a whole-of-government approach to data tagging formats and metadata standardization must be seen as an essential next step in the federal government's zero trust journey.

Implementing standardization to eliminate roadblocks

Suggestions for eliminating this roadblock and embracing a zero trust mindset include:

Learn from pilot programs. The Office of the Director of National Intelligence (ODNI), the Cybersecurity and Infrastructure Security Agency (CISA), and the DOD are already pursuing improvements to data tagging, such as establishing clear marking requirements that make it easier to train AI/ML algorithms. Other agencies can benefit from those already investing in this work and apply those lessons.

Implement working sessions. To ensure a unified approach, CISA and the DOD chief information officer should help broker a conversation across all federal agencies, DOD components, and the intelligence community by working with each agency's and component's chief data officer. For zero trust to be effective, we must initiate all-government working sessions on this topic.

Prioritize what should be standardized. It's not about boiling the ocean; agencies will continue to have mission-specific data, so a unified approach to headers and sensitivity tagging is the place to start. The focus should be format standardization, with room to customize tags for unique mission and agency requirements; a minimal sketch of such a header appears after this list.

Leverage tech for good. AI/ML tools can help eliminate human error by catching misclassifications or suggesting a change to a sensitivity level or tag based on what the model has analyzed within the document. But these tools are only as powerful as the data tags they can decipher, so the all-government approach must also standardize how these tools read and act on data tags. Once that is established, this technology will accelerate progress toward the nation's zero trust implementation goals.
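As a minimal sketch of the "standard header, custom extensions" idea from the third suggestion above, the Python below validates a hypothetical header: a few required, consistently named fields that every agency emits, plus a free-form extensions block for mission-specific tags. Every field name and vocabulary value here is illustrative, not a proposed federal standard.

```python
# Sketch of a standardized header: required, consistently named fields plus
# a free-form extensions block for agency-specific tags. All names are
# illustrative assumptions, not a proposed federal standard.
REQUIRED_FIELDS = {"sensitivity", "origin", "created", "handling"}
SHARED_VOCABULARY = {"UNCLASSIFIED", "CONFIDENTIAL", "SECRET", "TOP SECRET"}


def validate_header(header: dict) -> list[str]:
    """Return problems that would block automated, cross-agency enforcement."""
    problems = [f"missing required field: {f}"
                for f in REQUIRED_FIELDS - header.keys()]
    if header.get("sensitivity") not in SHARED_VOCABULARY:
        problems.append("sensitivity not drawn from the shared vocabulary")
    return problems


header = {
    "sensitivity": "SECRET",
    "origin": "AGENCY-A",
    "created": "2024-01-15T12:00:00Z",
    "handling": ["REL TO USA, GBR"],
    "extensions": {"mission": "EX-23", "sensor": "platform-7"},  # agency-specific
}
assert validate_header(header) == []
```

Standardizing only the required fields keeps the format small enough to agree on while leaving agencies free to extend it, and it gives AI/ML tools one predictable place to read and write tags.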

Existing network, data, and communication standards such as TCP/IP, XML, 802.11, and ODNI’s Trusted Data Format demonstrate there is a precedent for setting unified standards. By establishing such standards for data tagging, the federal government can take a significant step toward achieving its zero trust goals.

Now is the time to act.

Ryan Zacha is a Principal Solutions Architect and Michael Lundberg is a Vice President at Booz Allen Hamilton focusing on defensive cyber solutions and zero trust architecture.
