Japan's Creator Coalition Pushes Back on AI Training
CODA, a coalition representing 36 Japanese entertainment companies, has asked OpenAI to stop using its members' works to train Sora 2. The request comes amid a growing wave of copyright concerns surrounding AI tools in the industry. Notable members include Studio Ghibli, Bandai Namco, Square Enix, Aniplex, and Kadokawa. The move puts public pressure on OpenAI and the wider AI field.
In a letter dated October 28, 2025, CODA argues that Sora 2's outputs often closely resemble works owned by its member studios, suggesting that their content was used to train the system. The group also contends that OpenAI's opt-out approach does not shield it from liability: under Japan's copyright system, CODA notes, using creative works generally requires prior permission, making an after-the-fact opt-out insufficient.
CODA's Demands to OpenAI
CODA lays out two clear requests.
– Stop using member companies' content for machine-learning training without their explicit prior permission.
– Respond sincerely and in full to questions about copyright issues tied to Sora 2's outputs.
These demands aim to force a more accountable approach to how AI systems learn from existing art and stories. CODA argues that consent from rights holders must come before any training occurs.
Japan Signals a Tighter AI Path
The Japanese government has weighed in as well. Ministers have described manga, anime, and game IP as irreplaceable treasures and suggested that regulation of generative AI could soon tighten. The stance underscores the pressure on AI firms to respect creators' rights and signals a possible shift in how AI systems are built and trained in Japan.
The case underscores a widening clash between creators' rights and the rapid growth of AI tools. Because Japan exports so many influential anime, manga, and video games, new rules there could affect how models are trained worldwide, not just domestically. This is more than a matter of bad press; it could lead to licensing requirements or new policies in AI development.
What This Could Mean for AI Makers
If publishers push hard for licenses, big AI models may need to secure rights before using protected art. That would set a new baseline for training data. It could slow some workflows but also foster clearer rules for handling creative works. The shift would shape how companies train future AI and what tools they can use.
This moment also raises questions about consent, responsibility, and who pays when a model produces something closely resembling a known work. As more rights-holder groups mount similar efforts, a pattern could emerge, pointing toward a more transparent system for licensing and data use in AI.
A Bigger Picture for Creators and Code
This fight shows how quickly culture and code mix in today’s tech world. Creators want control over how their art is used. Users want useful, smart tools that learn from existing content. Governments watch closely to set fair rules.
The outcome could influence how AI learns from shows, games, and books. It may push companies to seek licenses or to publish clear notices about licensed material. Either path would change the way AI builds and grows.
As the debate continues, the focus remains on consent and the fair use of creative works. The dispute could produce a licensing standard that many firms adopt, clarifying who can train AI and under what terms. The road ahead will likely bring new guidelines for data use and model training. This is not just a Japan story; it could shape AI norms worldwide.
Looking Ahead: How AI and Creators Coexist
The dialogue between CODA and OpenAI highlights a broader shift. It asks the tech world to balance fast progress with respect for creators. The next steps will show whether a licensing system becomes the default when training data includes copyrighted works. The industry may need to adjust its routes for training data, or risk stricter rules across borders.
With regulators watching, firms might lean toward clearer contracts and better notices. The goal would be safer, more transparent AI that respects the art many people love. In time, the market could welcome a calmer, steadier path for training data and model development.