Top 10 Challenges for Federal Agencies Integrating AI at Scale Amid New Mandates

Federal agencies face unprecedented pressure to integrate AI at scale while meeting federal mandates. This post outlines the top 10 challenges—rooted in governance, data visibility, procurement, and risk—and explains why these mandates are so difficult to meet across public sector environments.

AI adoption in the public sector has moved from experimentation to obligation. With the release of Executive Order 14110, followed by OMB Memoranda M‑24‑10 and M‑25‑22, federal agencies are under pressure to build, govern, and scale AI systems responsibly—and quickly.

The deadlines are aggressive. By December 1, 2024, agencies were supposed to have conducted risk assessments and applied safeguards for AI systems that affect rights or safety. By October 1, 2025, all new or renewed federal contracts involving AI must meet procurement standards aligned with responsible AI principles. Meanwhile, Chief AI Officers must be appointed, inventories completed, governance boards established, and auditability built in from the ground up.

But unlike in the private sector, most federal agencies don’t have a clean slate or agile budget structure to work from. They must navigate outdated systems, fragmented teams, and procurement red tape—all while staying mission-focused and risk-averse.

This post outlines the 10 biggest challenges standing in the way of scalable, compliant AI integration—and why these hurdles are uniquely difficult for public sector organizations.

1. Data Visibility and Inventory

Mandate Link: OMB M‑24‑10 (AI Use Case Inventories and Safeguards)

Deadline: December 1, 2024

One of the most immediate federal mandates requires agencies to complete comprehensive inventories of all AI use cases—particularly those that impact individual rights, civil liberties, or safety—and either implement safeguards or pause use.

Why this is a challenge:
Most agencies lack a unified view of their data landscape. Data is scattered across aging infrastructure, shadow systems, contractor-held databases, and platforms that predate modern governance. Metadata is often missing or outdated, making datasets hard to classify and nearly impossible to audit.

Even defining what constitutes an “AI system” or “AI use case” creates friction. In some agencies, tools built on simple automation or machine learning models are not labeled as AI, so they are excluded from inventories despite falling within the mandate’s scope.

Building a real-time, searchable, and standardized inventory is not just an IT project—it’s an organization-wide effort that requires cooperation across data owners, security teams, and legal advisors. Without full visibility, agencies can’t complete risk assessments, let alone comply with EO 14110 or M‑24‑10.
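
To make this concrete, here is a minimal sketch of what a standardized inventory record might look like. The field names and the decision logic are illustrative assumptions, not the official OMB inventory schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseCaseRecord:
    """Illustrative inventory entry; fields are hypothetical,
    not the prescribed OMB inventory format."""
    use_case_id: str                # agency-assigned identifier
    name: str                       # short human-readable title
    owning_office: str              # accountable business owner
    data_sources: list[str] = field(default_factory=list)  # systems feeding the model
    rights_impacting: bool = False  # touches rights, liberties, or benefits?
    safety_impacting: bool = False  # could affect physical safety?
    safeguards: list[str] = field(default_factory=list)    # applied mitigations
    last_reviewed: date | None = None

    def requires_action(self) -> bool:
        # Rights- or safety-impacting use cases with no documented
        # safeguards need attention (or a pause) under the mandate's logic.
        return (self.rights_impacting or self.safety_impacting) and not self.safeguards
```

Even a lightweight schema like this forces the cross-team conversation the mandate implies: someone has to own each field, and every blank is a visibility gap.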

2. Governance and Chief AI Officer Roles

Mandate Link: Executive Order 14110, implemented through OMB M‑24‑10

Deadline: Within 60 days of M‑24‑10’s issuance (Spring 2024)

EO 14110 and its implementing OMB guidance require each federal agency to designate a Chief AI Officer (CAIO) and establish governance mechanisms to oversee AI implementation and compliance.

Why this is a challenge:
In many agencies, the CAIO role is tacked onto an existing position—usually a CTO, CDO, or senior program lead—without formal authority, funding, or operational mandate. That individual is then expected to govern cross-functional teams, create risk protocols, interface with vendors, and respond to OMB compliance queries—all while maintaining their original role.

There’s also a deep cultural inertia. Public sector organizations often operate in silos, where governance is more policy-oriented than operational. Building real AI governance means creating mechanisms for cross-agency data sharing, risk flagging, documentation standards, and escalation paths—none of which currently exist at scale in most agencies.

Without defined responsibilities and institutional support, AI governance remains performative—not preventative.

3. Use-Case Risk Profiling and Documentation

Mandate Link: OMB M‑24‑10

Deadline: December 1, 2024

Agencies must evaluate every AI use case to determine whether it affects privacy, rights, or safety—and if it does, apply safeguards before continuing use.

Why this is a challenge:
Risk profiling is inherently subjective without standardized tools or frameworks. Agencies often don’t know how to evaluate risk—what constitutes “impact on rights”? Does AI-based résumé screening count? What about tools that prioritize citizen services or route calls?

Even when agencies recognize that a use case poses risk, they struggle with documentation. Some rely on outdated spreadsheets or manual review forms. Others don’t have a centralized risk register or mechanism for tracking changes as systems evolve.

Without clear scoring models or tooling, agencies either under-assess (risking noncompliance) or over-assess (paralyzing legitimate innovation). This creates inconsistent standards and further erodes confidence across legal, privacy, and mission teams.
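
Where clear scoring models are missing, even a simple rubric beats ad hoc judgment. Here is a hedged sketch of what one could look like; the factors, weights, and thresholds are hypothetical examples an agency would need to define for itself, not an official OMB or NIST scoring model:

```python
# Illustrative rubric; factors, weights, and tiers are assumptions.
RISK_FACTORS = {
    "affects_benefits_eligibility": 3,   # e.g., screening applications for services
    "processes_pii": 2,
    "automated_decision_no_human_review": 3,
    "public_facing_output": 1,
}

def score_use_case(answers: dict[str, bool]) -> tuple[int, str]:
    """Sum the weights of the factors that apply, then bucket the total."""
    total = sum(w for f, w in RISK_FACTORS.items() if answers.get(f, False))
    if total >= 5:
        tier = "rights/safety-impacting: safeguards required before use"
    elif total >= 2:
        tier = "elevated: document mitigations and review regularly"
    else:
        tier = "low: record in inventory and monitor"
    return total, tier

# Example: AI-based resume screening with no human review
print(score_use_case({
    "affects_benefits_eligibility": True,
    "processes_pii": True,
    "automated_decision_no_human_review": True,
}))  # -> (8, 'rights/safety-impacting: safeguards required before use')
```

The value of a rubric like this is less the arithmetic than the consistency: two reviewers asking the same questions reach comparable answers, and the answers live in a system of record rather than a spreadsheet.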

4. Vendor and Procurement Compliance

Mandate Link: OMB M‑25‑22 (AI Procurement Guidelines)

Deadline: October 1, 2025

By October 2025, agencies must ensure that all new or renewed contracts involving AI systems comply with federal guidelines around transparency, auditability, and risk mitigation.

Why this is a challenge:
Most federal procurement officers are not trained to evaluate AI-specific risks or compliance clauses. Many current procurement templates are outdated and lack language on model explainability, dataset lineage, or system monitoring.

Furthermore, vendors themselves are struggling to meet these requirements. Some use third-party models they can’t fully explain. Others lack documentation, bias testing, or formal AI governance processes. This puts procurement officers in a difficult position: award to a low-bid vendor who lacks compliance maturity, or delay procurement entirely.

Agencies that move forward without fully reworking their procurement policies risk exposing themselves to legal, ethical, and operational liabilities.

5. Acquisition and Contract Amendments

Mandate Link: Implied by EO 14110 & OMB Memos

Deadline: Ongoing

Agencies are expected to review and update existing AI-related contracts to reflect current federal requirements—particularly for systems already in use.

Why this is a challenge:
Amending contracts mid-cycle is difficult in any setting, but especially so in the federal space, where even small changes require legal review, re-approval, and vendor coordination. Many contracts were drafted without foresight into AI-specific risk, meaning retrofitting clauses related to transparency or auditing may require entirely new terms and conditions.

This becomes even harder when working with legacy or sole-source vendors who may not have the capabilities—or incentives—to meet these new standards.

6. Data Privacy and Security Frameworks

Mandate Link: EO 14110 + NIST AI Risk Management Framework (AI RMF) alignment

Deadline: Ongoing (implied by operational and audit requirements)

Agencies must ensure that AI systems do not expose sensitive data, leak PII, or make decisions based on ungoverned datasets.

Why this is a challenge:
Many federal agencies are still building their foundational privacy programs. They rely on legacy classification systems and outdated DLP tools that don’t account for the volume or variety of data AI systems ingest and process.

AI models often operate as black boxes, and may unintentionally memorize or infer sensitive details from their training data. Without structured privacy impact assessments (PIAs) or data minimization frameworks, agencies run the risk of violating compliance standards, breaking public trust, or undermining mission outcomes.
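
One piece of data minimization can be automated: screening text for obvious PII before it ever reaches a model. The sketch below uses simple regex patterns purely for illustration; a production system should rely on a vetted DLP or PII-detection service, not hand-rolled patterns:

```python
import re

# Illustrative patterns only; real detection is far more sophisticated.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact_before_ingestion(text: str) -> tuple[str, list[str]]:
    """Replace detected PII with placeholders and report what was found,
    so sensitive values never enter model training or prompts."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text, found

clean, findings = redact_before_ingestion("Contact jane@agency.gov, SSN 123-45-6789.")
# findings == ['ssn', 'email']; clean contains placeholders, not raw values
```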

7. Legacy Infrastructure Integration

Mandate Link: N/A (Operational dependency for all above mandates)

Deadline: Ongoing

Most AI solutions—especially those using modern LLMs or cloud-native infrastructure—require performant, flexible, API-friendly environments.

Why this is a challenge:
A significant portion of federal IT runs on legacy systems that were never designed for real-time data ingestion, model inference, or secure interconnectivity. This makes integrating AI tools labor-intensive, expensive, and often brittle.

Modernizing infrastructure is no small lift. Budget cycles are long. Vendor lock-in is common. And every change comes with cybersecurity review and governance overhead.

8. Talent and Training Shortfalls

Mandate Link: EO 14110 (AI governance and workforce training)

Deadline: Implied by audit and operational readiness expectations

Agencies are expected to build internal capacity to manage AI—from governance to engineering to compliance.

Why this is a challenge:
Government agencies are in a talent war—and often on the losing end. Top AI engineers, data scientists, and risk analysts gravitate toward better-paying, faster-moving private sector roles. Internally, most non-technical staff haven’t received basic training in AI concepts, bias detection, or model monitoring.

The result? Agencies lack the internal expertise to assess systems, push back on vendors, or fulfill governance requirements—slowing progress and increasing risk.

9. Monitoring and Audit Readiness

Mandate Link: EO 14110 + OMB Memos

Deadline: Ongoing

AI systems must be continuously monitored and fully auditable to meet compliance standards.

Why this is a challenge:
Many agencies rely on manual reviews or limited, vendor-managed dashboards that offer little transparency. Logging is inconsistent. Systems don’t track inputs, outputs, or inferences in a way that aligns with NIST or OMB guidelines.

Without structured monitoring and alerting, agencies cannot demonstrate due diligence—or identify when models drift, behave unexpectedly, or generate harmful results.
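
As a starting point, structured per-inference logging is cheap to add and makes audits tractable. The record schema below is an illustrative assumption, loosely in the spirit of the traceability NIST and OMB guidance call for, not a prescribed format:

```python
import json, hashlib, uuid
from datetime import datetime, timezone

def log_inference(model_id: str, model_version: str,
                  prompt: str, output: str, logfile: str = "ai_audit.jsonl"):
    """Append one structured record per inference to an append-only log.
    Hashing the prompt and output avoids storing raw (possibly sensitive)
    text while still allowing later verification against retained data."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,  # needed to attribute drift to a release
        "input_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "output_chars": len(output),
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")
```

Once records like these exist, drift detection and audit response become queries over a log rather than a forensic reconstruction.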

10. Change Management and Internal Alignment

Mandate Link: Implicit across all EO/OMB directives

Deadline: Ongoing

AI touches every part of the organization—IT, HR, legal, procurement, mission delivery—and must be governed accordingly.

Why this is a challenge:
Agencies lack the playbooks, processes, and change management infrastructure to roll out AI responsibly. Competing priorities, risk aversion, and siloed decision-making slow progress. Worse, many staff view AI as either “too technical” or “someone else’s responsibility,” creating a vacuum of ownership that undermines accountability.

Conclusion: Challenges Are Structural—Not Just Technical

AI integration in government isn’t just about building the right tools—it’s about aligning people, processes, and infrastructure under new and evolving mandates. These 10 challenges aren’t minor gaps; they’re systemic obstacles that require thoughtful planning, inter-agency coordination, and long-term investment.

In Part 2 of this series, we’ll outline 2–3 practical tactics for each challenge, including strategies for inventorying data, amending contracts, managing risk, and building audit-ready AI systems—all grounded in compliance and scalability.