Musk Confirms xAI Used OpenAI Data in Grok Training During Court Testimony

Sources: Lex Blog

Elon Musk confirmed in court that xAI used OpenAI model outputs to help train its Grok chatbot.

Why it matters: In-house counsel and AI compliance leaders face mounting risk in protecting AI model outputs as intellectual property. Musk’s sworn admission highlights how hard it is to enforce IP boundaries and manage contractual obligations amid evolving AI collaborations and company restructurings.

  • On April 30, during cross-examination in Musk v. Altman, Musk testified that xAI "partly" used OpenAI outputs to train Grok.
  • Musk admitted he had not read the details of OpenAI’s 2025 addition of a for-profit entity, weakening his claims of breached obligations. (The Star)
  • The lawsuit seeks $134 billion in damages, alleging that OpenAI and Sam Altman abandoned their transparency and public-benefit promises. (Axios)
  • Professor James Grimmelmann of Cornell Law emphasizes that Musk’s testimony illustrates real-world limits on 'contractual enclosure'—the ability of agreements to constrain how AI knowledge is used or spread. (Lexblog)

During testimony on April 30 in Musk v. Altman before Judge Yvonne Gonzalez Rogers (U.S. District Court, Northern District of California), Elon Musk confirmed that xAI "partly" used outputs from OpenAI models to train its Grok chatbot.

  • Direct admission: Musk acknowledged under oath that OpenAI’s outputs helped train Grok, illustrating how easily data can flow between organizations—even those with supposedly strict IP agreements.
  • Contract oversight: In the same hearing, Musk conceded he had not read "the fine print" of OpenAI’s 2025 addition of a for-profit entity, undercutting his credibility on claims about the company’s nonprofit commitments. (The Star)
  • Legal stakes and claims: The ongoing lawsuit, filed in 2024, seeks $134 billion in damages and removal of CEO Sam Altman, arguing that OpenAI’s restructuring and secretiveness violated its founding mission. (Axios)
  • Expert commentary: Professor James Grimmelmann of Cornell Law writes that even sophisticated contracts often fail to truly enclose or wall off AI-derived knowledge, especially as staff or data move between affiliated organizations. He describes the Grok case as a "stress test of contractual enclosure." (Lexblog)

GCs and legal operations teams should scrutinize protections for model outputs, data governance protocols, and the viability of enforcing AI-related NDAs and IP clauses—especially when vendors or partners undergo structural shifts. As Professor Grimmelmann argues, even strong language may not suffice to prevent knowledge and know-how from crossing organizational or contractual lines.

By the numbers:

  • $134B — Amount Musk seeks in damages from OpenAI and Altman (Axios)
  • 2025 — Year OpenAI added its for-profit entity, a key issue in Musk’s claims (The Star)

Yes, but: Even robust contracts may not practically prevent AI know-how from transferring with personnel or via outputs, per Professor Grimmelmann.

What's next: Judge Yvonne Gonzalez Rogers is expected to rule on preliminary motions in the coming weeks, shaping how Musk v. Altman proceeds.