AI Washing: When AI Marketing Gets Ahead of the Product

News

March 2026

By: Shawn Collins

“Artificial intelligence” has become a common selling point. Companies want investors and consumers to know they are using it, building with it, or incorporating it into their products. Sometimes those claims are fair. Other times, a product’s branding overstates or misrepresents its capabilities. That gap is creating legal risk.

Regulators and plaintiffs’ lawyers have increasingly focused on what has come to be known as “AI washing”—the practice of overstating, oversimplifying, or vaguely describing a product’s artificial intelligence capabilities in a way that may mislead consumers or investors. Recent lawsuits involving Apple’s promotion of certain “Apple Intelligence” features, along with SEC actions involving exaggerated AI-related claims, suggest that the issue is no longer theoretical.

The challenge is that, at least for now, “artificial intelligence” is not a legally precise term. There is no universal statutory definition that cleanly separates AI from adjacent concepts like machine learning, automation, data analytics, or natural language processing. That makes the issue harder to regulate—but not harder to litigate.

A Familiar Problem

This is not the first time courts and regulators have had to deal with an attractive but slippery marketing term.

“Natural” is a good example. Companies used the term for years without any clear, uniform definition, but that did not stop plaintiffs from arguing that consumers were being misled. Instead, the focus shifted to the reasonable consumer's perspective: what message did the label convey, and was that message likely to mislead?

AI claims appear to be headed down a similar path.

Even without AI-specific statutes, the basic framework for deceptive advertising is already in place. The question is not whether artificial intelligence has been perfectly defined in the abstract. The question is what consumers are likely to understand when they see the term used in advertising, packaging, investor communications, or product descriptions. As one attorney quoted in Bloomberg's reporting put it, this is still a traditional deception analysis focused on the net impression the claim creates.

That matters because consumers may attach broad expectations to the term “AI.” A company may intend to refer only to a narrow feature—for example, limited natural-language functionality or a chatbot built on top of a third-party model. But if the overall presentation suggests something more sophisticated, autonomous, or proprietary than what is actually being offered, the legal exposure increases.

The Risk of Vague AI Claims

In many cases, the problem is not that a company uses the term “AI” at all. The problem is that the company uses it without explaining what it means in the specific context of its product. That is where legal risk tends to emerge.

If a product includes limited AI-assisted functionality, the marketing should say exactly that. If a company is building on a third-party large language model, it should consider whether consumers are left with the impression that the system is proprietary. And if the product can handle only a limited set of functions, the marketing should not suggest something much broader.

This last point deserves particular attention, because many current AI products are built on top of foundation models developed by third parties, such as OpenAI or Anthropic. There is nothing inherently problematic about that. But if the marketing obscures that reliance, or suggests the company developed proprietary capabilities it does not actually have, the risk of a misleading net impression increases.

Put differently, companies should not rely on the term “AI” alone to do all the explanatory work. The broader and less specific the claim, the easier it is for consumers to take away something the company did not actually mean to promise. That is often what drives these cases.

A Recent Litigation Example

The Apple lawsuits are an early and highly visible example. According to reporting from Bloomberg, consumers alleged that Apple marketed advanced AI capabilities tied to the iPhone 16 line even though the company knew certain flagship features would not be available on the timeline suggested by the marketing. Shareholders followed with their own claims, alleging that misleading statements about AI capabilities harmed the company financially as well.

Bloomberg also points to earlier SEC settlements involving misleading claims about AI use, as well as charges against the former CEO of Nate Inc., whose company allegedly promoted an AI-driven shopping app that, in reality, relied heavily on human labor behind the scenes. The Nate Inc. case is particularly instructive: consumers were led to believe the product was powered by sophisticated artificial intelligence, when in fact much of the work was being done by people. That is AI washing in its most straightforward form—a direct misrepresentation of what the technology does.

These matters arise in different contexts—consumer protection, securities enforcement, and private litigation—but they share a common point: businesses face risk when they create an inflated impression of what their technology can do.

Why It Matters

AI is in a hype cycle, and that creates commercial pressure. Companies want to show they are moving with the market, and “AI” is now a powerful way to do that. The problem starts when the branding gets ahead of the product. A company that has integrated a third-party model into its workflow may find it easier to market the result as its own AI capability than to explain the underlying architecture. That is precisely why companies should slow down and consider their use of the term carefully.

That does not mean companies should avoid talking about AI. It means they should say only what they can support. The more definite the claim, the more important it is to have solid substantiation behind it.

Practical Steps to Reduce AI-Washing Risk

For businesses marketing AI-related products or features, a few practical steps can help reduce risk:

  • Define what “AI” means in your product. Do not assume consumers will understand the term the way your engineers do. If the product uses a limited form of machine learning, natural-language processing, or a third-party model, say so. If the core intelligence comes from a foundation model developed by another company, that provenance should be clear—not hidden behind branding that implies proprietary capability.
  • Align marketing with technical reality. Legal, marketing, and product teams should not operate in silos. Everyone involved should understand what the tool actually does, what it does not do, and how those limitations should be communicated.
  • Disclose limitations where appropriate. If the feature is narrow, beta-stage, delayed, or dependent on future rollout, that should be made clear.
  • Document substantiation. If challenged, companies should be able to show why their AI claims were accurate when made.

Conclusion

AI washing is likely to become a larger area of enforcement and litigation, not because AI is uniquely hard to regulate, but because the law already has tools to address vague or misleading claims.

For now, the absence of a universal legal definition of artificial intelligence does not eliminate risk. It shifts the focus to something more familiar: what consumers are being led to believe.

Businesses can still market legitimate AI capabilities. But the safer course is to explain what the technology actually does—including where the underlying intelligence comes from—define its limits, and avoid letting a powerful buzzword create expectations the product cannot meet.