Your team is using AI. That's not a guess. Someone is pasting contract language into ChatGPT to get a quick summary. Someone else is letting an AI assistant draft a proposal that pulls context from files saved on their desktop. And somewhere in that workflow, Controlled Unclassified Information just left the building, quietly, invisibly, without a single security alert firing. (Greypike) This isn't a hypothetical scenario whispered about in compliance circles. It's the documented daily reality of the Defense Industrial Base in 2025, and the contractors it's happening to don't know they have a problem until they're sitting across from a C3PAO assessor who asks them to walk through their data handling procedures. AI tools are genuinely productive. Nobody's arguing otherwise. But "productive" and "compliant" are not the same thing, and in the defense contracting world, confusing them can cost you your contracts, your certifications, your security clearances, and under the False Claims Act, potentially a whole lot more than that.

What ITAR and DFARS Require

Here's what ITAR and DFARS require, stated plainly. The International Traffic in Arms Regulations, enforced by the State Department's Directorate of Defense Trade Controls, govern the manufacture, sale, distribution, and export of any item on the United States Munitions List. Civil penalties run up to $1,271,078 per violation or twice the value of the transaction, whichever is greater, and criminal penalties can reach $1 million per violation with up to twenty years imprisonment. (PreVeil) DFARS, the Defense Federal Acquisition Regulation Supplement, adds the cybersecurity layer. The CMMC 2.0 program, which became effective December 16, 2024, established three tiers of cybersecurity assessment and requires Defense Industrial Base contractors to do more than simply self-attest to compliance with long-established security requirements. (Goodwin) Phase 1 of contractual enforcement began November 10, 2025. Phase 2, requiring third-party CMMC Level 2 certification for most contractors handling CUI, kicks in November 10, 2026. (Pivot Point Security) That clock is ticking. And every day your team uses an unapproved AI tool to touch defense-related work is a day you're accumulating potential exposure.

Where AI Creates ITAR and CMMC Violations

The intersection of AI and these frameworks is where most companies are getting themselves into trouble right now, and it happens in the most ordinary, banal ways imaginable. An employee working on a bid response is in a hurry, opens ChatGPT, pastes in a section of a contract document, asks it to clean up the language, and hits enter. That document contains Controlled Unclassified Information. In less than five seconds, CUI has been transmitted to a commercial cloud environment that is almost certainly not authorized under the company's CMMC boundary. (Openapproach) ITAR compliance in a cloud-first, AI-driven world is a moving target: shared files, cloud storage access, collaboration tools, and AI prompts can all trigger unauthorized exports of controlled technical data. (Concentric AI) Cloud AI providers with international data centers or foreign national employees may create ITAR compliance risks directly, because ITAR doesn't just regulate what your people do with data. It regulates who can touch it at all. (Iternal Technologies) The AI platform you're using almost certainly has employees somewhere outside the United States with access to the infrastructure that processes your prompts. That's a deemed export problem: under U.S. export control rules, releasing controlled technical data to a foreign person is treated as an export to that person's home country, no border crossing required. That's an ITAR problem. And most small and mid-sized defense contractors haven't thought about it once.
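For teams that want a technical backstop while policy catches up, a boundary check at the point of egress is one place to start. The sketch below is a minimal illustration, not a DLP product: the marking patterns and the endpoint allowlist are hypothetical placeholders, and a real deployment would sit behind a proper proxy with incident logging.

```python
import re

# Hypothetical allowlist of endpoints inside the authorized CMMC boundary.
AUTHORIZED_ENDPOINTS = {"https://llm.internal.example.com"}  # placeholder

# Rough patterns for common CUI/export banner markings. Illustrative only;
# a real deployment would use a proper DLP engine, not regex alone.
CUI_MARKING_PATTERNS = [
    re.compile(r"\bCUI\b"),
    re.compile(r"\bCONTROLLED\s+UNCLASSIFIED\s+INFORMATION\b", re.IGNORECASE),
    re.compile(r"\bITAR\b"),
    re.compile(r"\bEXPORT\s+CONTROLLED\b", re.IGNORECASE),
]

def looks_like_cui(text: str) -> bool:
    """Return True if the text carries a recognizable CUI/export marking."""
    return any(p.search(text) for p in CUI_MARKING_PATTERNS)

def check_prompt_egress(prompt: str, endpoint: str) -> bool:
    """Block marked content from leaving the authorized boundary.

    Returns True if the prompt may be sent to the endpoint.
    """
    if endpoint in AUTHORIZED_ENDPOINTS:
        return True  # inside the boundary; marked content is allowed here
    if looks_like_cui(prompt):
        # In practice: log an incident, alert the ISSO, and stop the send.
        return False
    return True

# Example: a marked paragraph headed for a commercial cloud endpoint is refused.
assert check_prompt_egress(
    "CUI//SP-EXPT: wing spar tolerances...", "https://api.openai.com"
) is False
```

A filter like this only catches content that actually carries its markings. Unmarked CUI, which is common, is exactly why the policy and training work described below matters more than any regex.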

The Enforcement Environment

The enforcement environment will not be forgiving of that oversight. In October 2024, DOJ and SEC announced settlements with Raytheon Company totaling more than $950 million, resolving charges that included AECA and ITAR violations, one of the largest joint settlements in recent years. (Venable LLP) That's Raytheon, a prime with armies of compliance officers and decades of institutional experience navigating these frameworks. The government doesn't grade on a curve for smaller companies, either. On May 1, 2025, Raytheon and related companies agreed to pay an additional $8.4 million to resolve False Claims Act allegations that they falsely certified compliance with cybersecurity requirements in contracts and subcontracts with DoD, specifically because the company failed to implement required controls on an internal development system used to perform unclassified work. (ConsensusDocs) In December 2025, the DOJ announced its first settlement targeting the defense supply chain, when a precision machining subcontractor agreed to pay approximately $421,000 to resolve allegations that it failed to provide adequate cybersecurity protections for technical drawings supplied to prime contractors. The case originated as a qui tam action filed by a former quality control manager. (Holland & Knight) That last one is the one that should keep you up at night. A precision machining shop. A former employee with inside knowledge. A whistleblower statute that pays out handsomely. This is the environment you're operating in.

What an Honest AI Self-Assessment Looks Like

So what does an honest AI self-assessment look like? It doesn't start with AI. It starts with your data. Contractors must first identify every AI tool in their environment, including commercial AI assistants used by employees on work devices, and categorize them by whether they are deployed on-premise, in a private cloud, or in a commercial cloud. They must then determine whether the tools can access, process, or store CUI. If the answer is yes, they have to look at whether the tool's backend is authorized under the FedRAMP program to process CUI. (Washington Technology) A single instance of CUI entering a non-compliant system is a spillage requiring immediate incident response, and once it happens, you cannot undo the exposure. (VSO) From there, your System Security Plan must document every AI tool identified as an in-scope asset. Your acceptable use policy for AI must define which tools are authorized, which categories of information are forbidden from entering any AI tool at all, and what the approval process looks like for adding new tools. Then you have to train your people, because abstract policy without context doesn't change behavior. The NDAA directs the DoD to incorporate an AI/ML security framework into DFARS and CMMC to ensure that contractors developing, deploying, storing, or hosting AI for DoD comply with it. The framework will apply to "covered" AI/ML, defined as AI acquired by DoD and all associated components, including source code, model weights, and the methods, algorithms, data, and software used to develop it. (Governmentcontractslegalforum) That definition is broad. It's intentionally broad. If your company is building any AI capability for a DoD customer, you are already inside this regulatory perimeter whether your compliance program reflects that or not.
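To make that triage concrete, here is a minimal sketch of the inventory logic in Python. The three questions (can the tool touch CUI, where does it run, is the backend FedRAMP-authorized) come straight from the steps above; the field names, the assessment rules, and the example entries are illustrative assumptions, not a compliance engine.

```python
from dataclasses import dataclass
from enum import Enum

class Deployment(Enum):
    ON_PREM = "on-premise"
    PRIVATE_CLOUD = "private cloud"
    COMMERCIAL_CLOUD = "commercial cloud"

@dataclass
class AITool:
    name: str
    deployment: Deployment
    touches_cui: bool            # can it access, process, or store CUI?
    fedramp_authorized: bool     # is the cloud backend authorized for CUI?
    in_ssp: bool                 # documented in the System Security Plan?

def assess(tool: AITool) -> list[str]:
    """Apply the self-assessment questions from the section above."""
    findings = []
    if (tool.touches_cui
            and tool.deployment is Deployment.COMMERCIAL_CLOUD
            and not tool.fedramp_authorized):
        findings.append("CUI reachable through a backend not authorized for CUI")
    if tool.touches_cui and not tool.in_ssp:
        findings.append("in-scope asset missing from the System Security Plan")
    return findings

# Hypothetical inventory entries, for illustration only.
inventory = [
    AITool("ChatGPT (personal accounts)", Deployment.COMMERCIAL_CLOUD,
           touches_cui=True, fedramp_authorized=False, in_ssp=False),
    AITool("On-prem drafting model", Deployment.ON_PREM,
           touches_cui=True, fedramp_authorized=False,  # N/A on-prem
           in_ssp=True),
]

for tool in inventory:
    for finding in assess(tool):
        print(f"{tool.name}: {finding}")
```

Run against a real inventory, a list like this becomes the raw material for your SSP asset documentation and your acceptable use policy.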

The Only Architecture That Clears These Frameworks

The only AI architecture that clears these frameworks cleanly is one you control, physically, on infrastructure you own or operate within a properly bounded environment. Public AI platforms do not provide the data handling, logging, or contractual protections required under DFARS and CMMC when handling CUI or export-controlled information. (Hdtech) On-premises, locally deployed AI, running on sovereign infrastructure within your facility, eliminates the cloud transmission problem entirely. It eliminates the deemed export problem. It eliminates the training data ingestion problem. It eliminates the FedRAMP authorization chase. A contractor that signs an annual CMMC affirmation without verifying the accuracy of its compliance status, or that ignores known gaps, may be accused of acting with reckless disregard sufficient to establish False Claims Act liability. (Holland & Knight) You can't affirmatively certify compliance with a straight face while your engineers are prompting ChatGPT with design specifications. And increasingly, you can't hide that practice from a motivated whistleblower, either.
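In practice, the pattern is simple: prompts go to a model served entirely inside your boundary, and every exchange is logged locally for the assessor. The sketch below assumes a local OpenAI-compatible inference server (servers such as vLLM expose this style of endpoint, but verify against your own stack); the address, model name, and log path are placeholders, not a prescribed configuration.

```python
import json
import time
import urllib.request

# Assumed local, OpenAI-compatible inference endpoint running inside the
# facility. Nothing here resolves to the public internet; the port, model
# name, and log path are hypothetical placeholders.
LOCAL_ENDPOINT = "http://127.0.0.1:8000/v1/chat/completions"
AUDIT_LOG = "/var/log/ai/prompt_audit.jsonl"

def ask_local_model(prompt: str, user: str) -> str:
    """Send a prompt to the in-boundary model and audit the exchange."""
    body = json.dumps({
        "model": "local-model",  # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    req = urllib.request.Request(
        LOCAL_ENDPOINT, data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        answer = json.load(resp)["choices"][0]["message"]["content"]

    # Append-only local audit trail: who asked what, and when. This is
    # the record you walk an assessor through.
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps({
            "ts": time.time(), "user": user, "prompt": prompt,
        }) + "\n")
    return answer
```

The specifics will differ by deployment; the shape is what matters: one hop, inside the wire, with an audit record of every exchange.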

Island Mountain builds AI infrastructure designed for exactly this environment: locally deployed, air-gapped where required, auditable end-to-end, with no data leaving your boundary. It's not a workaround. It's the architecture these frameworks are demanding. The window before Phase 2 enforcement arrives in November 2026 is narrower than most people realize. The standard is no longer good faith effort. It is provable accuracy. (Mayer Brown) If you can't walk an assessor through your AI data flows, document your controls, and demonstrate that CUI never touched an unauthorized system, you don't pass. And failing that audit doesn't just cost you a certification. It costs you the contract, and potentially a whole lot more.

Summary: CMMC Phase 2 enforcement begins November 2026. Every AI tool that touches CUI must be documented, authorized, and within your security boundary. Cloud AI platforms don't meet that standard. On-premises local AI does.