Japan spent many years only talking about rules for artificial intelligence, but Japan AI regulation news today 2025 shows that those talks have now turned into real laws.
When people hear about new laws for AI, they often believe that the rules will be very strict. In Japan, it is not like that. The country chose a lighter system that is still serious. It tries to protect people, keep markets fair, and at the same time let AI grow and improve quickly.
Japan’s 2025 AI Law

Japan’s main law is called the Act on the Promotion of Research and Development and the Utilization of Artificial Intelligence-Related Technologies, and it is now at the centre of Japan AI policy news today 2025. Many people simply refer to it as the AI Promotion Act or the Japan AI Act.
The Diet passed the law on 28 May 2025, and it came fully into force on 1 September 2025. At the time of passage, many AI regulation Japan news updates asked how strict it would be in practice.
It is a promotion-first law. This means its main purpose is to help and support safe and useful AI, not to punish companies right away.
In simple terms, the law:
- Treats AI as very important for Japan’s economy, for society and for national security.
- Sets basic rules to make AI trustworthy, people-centric and responsible.
- Gives the government tools to collect information, assess AI risks and keep AI policy organised over time.
No Big Fines, More Use of Existing Rules
There are no big fines in this act. It is not like the EU AI Act. Instead, Japan mainly uses laws it already has. These include rules on privacy, consumer protection, product safety, competition and copyright. On top of this, the government also issues AI guidelines through different ministries and agencies.
These guidelines are not strict laws, but they give an idea of what the government expects from companies that use AI. Japan AI regulation news today 2025 explains that most real pressure still comes from old privacy, consumer and competition laws.
Who Runs AI Strategy in Japan
To manage this whole system, Japan created the AI Strategic Headquarters on 1 September 2025, and it now appears in most Japan AI policy news stories today as the main control tower.
The Prime Minister is in charge of it, and all cabinet ministers are part of it. This group acts as the main control centre for AI policy in Japan. It prepares and updates the AI Basic Plan, and it also works to keep Japan in line with the OECD AI Principles and the G7 Hiroshima AI Process.
Japan in the Global AI Law Trend
Around the world, governments are making more and more AI laws. A 2025 report says AI mentions in legislative proceedings increased from 1,557 in 2023 to 1,889 in 2024, a rise of 21.3%.
In this wave of new rules around the world, Japan is regarded as a country that has a light touch approach to regulation. This means that Japan tries to encourage innovation, and at the same time tries to have some clear protections in place for people and society.
What Happened In September, October And November 2025
September 2025: Law Starts And Control Tower Opens
On 1 September 2025, the AI Promotion Act entered into full force and the AI Strategic Headquarters officially began its work. In Japan AI policy news today September 2025, the big questions in the first meetings were simple but important: how to write the first AI Basic Plan, how to use AI safely inside government, and how to deal with high-risk areas such as deepfakes, misinformation and critical infrastructure.
This was the point at which Japan moved from simply talking about ideas in papers to having a real legal system and clear institutions to manage AI.
October 2025: Focus On Deepfakes And Real Harm
In October 2025 the story of the day was not new laws. Most of Japan AI regulation news October 2025 looked at how the new framework was used to deal with real harm, especially problems linked to synthetic media and deepfakes.
The Cabinet Office started a study on sexual deepfake pornography. The study measured the scale of the problem and checked which existing criminal, privacy and obscenity laws can be used to respond. Police data showed at least 79 deepfake pornography cases between January and September 2025, after more than 100 cases in 2024.
Policy discussions in this period were focused on taking better action against non-consensual deepfakes, creating possible labelling rules for fake images and videos, and providing better support to victims whose photos or videos were misused by AI. Deepfakes became an early test case for generative AI regulation Japan today.
At the same time, Japan was still reacting to the DeepSeek shock. DeepSeek, a powerful low-cost model from China, brought new opportunities for AI progress and new worries about national security. The government again made clear that it prefers a risk-based approach: it wants to look at how AI tools are used in practice, not only at how large they are or where they are built. That message shaped much of Japan AI regulation news today 2025.
November 2025: Building The AI Basic Plan
Through late autumn, the AI Strategic Headquarters worked on the first AI Basic Plan, and Japan AI regulation news in November 2025 followed each step of the process. The Cabinet approved the plan on 23 December 2025, but most of the writing and discussion happened in November.
The plan explains how Japan wants to become an “AI-friendly” nation. It aims to spread trustworthy AI in government and in businesses and to keep policy flexible and updated often, instead of using one rulebook that never changes.
By the end of the year, Japan AI regulation news was no longer just about a bill in the Diet. It was about a law already in force, an active central strategy body and a national plan that was beginning to take clear shape.
How Japan Treats Generative AI And Deepfakes
No Separate Generative AI Law (For Now)
Japan does not currently have a separate law for generative AI, so generative AI regulation Japan today still rests on the same general rules as other AI: the AI Promotion Act, sector-specific laws, and written guidance from ministries and agencies.
Under this setup, the government looks at cases where AI causes serious harm or violates people’s rights. It then publishes guidance on how to develop and use AI safely, and it can ask companies to cooperate when there are investigations or reviews after an incident.
Deepfakes As an Early Test Case
Deepfakes became one of the first big test areas for these rules, and Japan AI regulation news today often uses deepfake cases to explain high-risk AI. The study on sexual deepfake pornography and several criminal cases showed that Japan can already use existing criminal, privacy and obscenity laws against abusive AI-generated images.
There is still no special “deepfake law”. Instead, prosecutors make use of the laws that already exist. Policymakers view deepfakes as a high-risk issue under the AI Promotion Act, which gives the government a mandate to monitor such issues and respond to them.
Government AI Policy And Soft-Law Guidelines
The AI Promotion Act is written at a high level. It gives a broad framework, not step-by-step, detailed rules. More detailed expectations are written in policy papers and guidelines.
These documents are very important in practice, because regulators often look at them when they decide what “responsible AI” should look like in Japan.
AI Basic Plan
The AI Basic Plan, approved in December 2025, sets medium-term goals for AI policy in Japan. It is built around four main pillars.
- The first pillar is to adopt AI in government services and in the entire economy, so that AI becomes a standard part of everyday work and public services.
- The second pillar is to develop AI inside Japan by investing in research, good data and strong computing power. The idea is not to rely solely on foreign technology.
- The third pillar is to make AI more trustworthy by focusing on safety, security and ethics. This is so that people can feel safe when they are using AI systems.
- The fourth pillar is to work together with AI. This includes training people in new skills, updating labour policy and improving education, so workers and students can use AI properly.
The plan also looks outward to the world. It tries to keep Japan in line with the OECD AI Principles and the G7 Hiroshima AI Process. The idea is that Japanese companies can work more easily in other countries, and people in Japan can use global AI services with similar levels of protection.
Key Guidelines For Businesses
For companies, the rules they feel day to day usually come from several non-binding guidelines, not from the law itself.
The AI Guidelines for Business from METI and MIC give a practical checklist for risk-based corporate AI governance. They cover topics like safety, transparency and how to manage AI inside an organisation.
The Governance Guidelines for Implementation of AI Principles explain how to turn high-level ethical ideas into real internal processes, written documents and clear oversight. They help companies move from “nice words” to actual practice.
The Guide to Evaluation Perspective on AI Safety from the Japan AI Safety Institute focuses on how to test AI systems, how to check their robustness, and how to react when something goes wrong.
In December 2025, the AI Strategic Headquarters also approved a wide Guideline for Ensuring the Appropriateness of Research & Development and Utilization of AI-Related Technology. This guideline connects Japan’s approach with global standards and is meant to support “trustworthy AI” in every sector, not just in one industry.
Soft Law in Practice
These documents are examples of “soft law”. This means they are not legally binding. There are no direct fines if a company ignores them.
However, in real life they are still very powerful. They are now a central part of AI compliance in Japan. When a problem happens, courts, regulators and business partners are likely to ask if a company followed these guidelines. If a company has not followed them, it may have a harder time defending its behaviour, even if there is no specific fine written in the law.
Sector Rules: Healthcare, Elections And Media
Japan still depends a lot on sector-specific laws. This means that in areas like healthcare, elections and media, AI is mostly covered by the rules that already exist for that sector. There is not one single law that covers all AI uses in every area.
Healthcare And Medical AI
In healthcare, AI is treated mainly as medical technology and as a way of using sensitive data, and Japan AI regulation healthcare news today focuses on consent, data security and safety checks. It is not treated as a completely separate legal category.
The Ministry of Health, Labour and Welfare has Guidelines on Utilizing Medical Digital Data in AI Research and Development. These guidelines focus on a few key points: patients must give consent, data must be handled securely, and there must be an ethical review when patient data is used in AI systems.
If an AI system works like a diagnostic tool or helps doctors make treatment decisions, it usually falls under medical device rules. This means it may need formal approval before use, checks and monitoring after it is on the market, and clear human oversight. In this way, it is treated like other high-risk medical devices.
Elections, Campaigns And Democracy
There is still no special AI election law in Japan. However, political teams are already using generative AI to write speeches, target messages to voters and test how people might react to different policies. Experts warn that rules on political deepfakes will become increasingly important as such AI tools become more powerful and cheaper to use.
Regulators are now looking at whether current election laws are strong enough to deal with AI-generated ads and messages, synthetic media about candidates and data-driven targeting that could unfairly influence public debate.
These issues touch several areas at the same time, including election law, platform rules and broader AI oversight. Because of this, any changes are likely to appear first as new guidance and court cases, rather than as a single big “AI election act”.
Copyright, Media And Training Data
Japan also plays a significant role in global discussions on AI copyright. The Copyright Act already allows broad text and data mining for AI training, as long as the use is not aimed at enjoying the expressive content itself, such as watching a film or reading a novel.
In October 2025, the Content Overseas Distribution Association (CODA), which represents major anime and entertainment companies, sent a formal letter to OpenAI. In this letter, CODA asked OpenAI to stop using member content to train its Sora 2 model without permission. CODA also raised concerns about AI outputs that look very similar to Japanese works.
This dispute is likely to shape future rules on training data, transparency and revenue-sharing. The Cabinet has already started work on a Principle Code for the protection of intellectual property and transparency for generative AI. This code is expected to give clearer rules and expectations for both AI providers and creators.
How Japan Compares With EU, US And China

In recent global AI regulation news 2025, Japan is placed next to the EU, the US and China as one of the main models.
European Union
In the European Union, the EU AI Act is a long and detailed law that uses different risk levels. It fully bans some kinds of AI use and sets very strict rules for high-risk AI systems. The law will start to apply step by step over the next few years, with general-purpose AI rules coming in earlier and most high-risk rules coming in later.
United States
In the United States, Executive Order 14179 in January 2025 cancelled the earlier Biden AI order. This change pushed US federal policy more toward deregulation and the idea of “AI leadership”. Because of this, many people now see a clear gap between the stricter EU AI rules and the more flexible US approach.
China
In China, detailed rules for generative AI are already in place. The Interim Measures for the Management of Generative Artificial Intelligence Services require strong content controls, security reviews and clear labels on AI-generated content. In China, control over information and state security are main goals of AI regulation.
Where Japan is Between Them
Japan sits somewhere in the middle of these models. It has a national AI law, an AI Strategic Headquarters and many AI standards documents. However, it mostly uses soft law, sector rules and international standards instead of a long list of banned uses or a strict, formal list of high-risk categories. Many experts call this an innovation-friendly approach. It is built around voluntary guidelines and global cooperation, while still trying to keep people safe.
What This Means For Companies Using AI In Japan
Why this matters for businesses
For businesses, the main point is clear. AI is welcome in Japan, but the old idea of “move fast and break things” is now risky.
A 2025 survey says that around 78 percent of companies in the world already use AI in at least one important part of their work. In this kind of world, if a company has weak AI governance, regulators, investors and customers will notice it very quickly.
Practical steps for organisations
- Find where you use AI: Organisations in Japan should first make a simple map of where they use artificial intelligence. They should note which tools they use, what data these tools use, and what decisions they help to make.
- Update internal policies: They should update their internal rules and policies so they fit with the AI Guidelines for Business and the new appropriateness guideline approved by the AI Strategic Headquarters.
- Give extra care to high-risk areas: They should treat areas like healthcare, finance, HR, infrastructure, media and children’s services as higher-risk fields. These areas need stronger controls, better record keeping and clear human oversight.
- Follow official updates: It is important to watch announcements from the AI Strategic Headquarters and from key regulators. Many new rules in Japan will appear first as guidance or notices, not as big new main laws.
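The first two steps above, mapping where AI is used and flagging higher-risk areas, can be sketched as a minimal internal register. The field names, risk categories and `needs_extra_controls` rule below are illustrative assumptions for this sketch, not terms taken from the Act or from any official guideline:

```python
from dataclasses import dataclass

# Sectors the article lists as effectively higher-risk in Japan (illustrative set).
HIGH_RISK_AREAS = {"healthcare", "finance", "hr", "infrastructure", "media", "children"}

@dataclass
class AIUseRecord:
    """One entry in a simple AI-use inventory (hypothetical schema)."""
    tool: str               # which AI tool or model is used
    area: str               # business area, e.g. "hr" or "marketing"
    data_used: list[str]    # what data the tool processes
    decisions: str          # what decisions it helps to make
    human_oversight: bool   # is a person reviewing the outputs?

    def needs_extra_controls(self) -> bool:
        # Higher-risk areas, or any use without human oversight,
        # call for stronger controls and better record keeping.
        return self.area in HIGH_RISK_AREAS or not self.human_oversight

# Example inventory for a fictional company.
register = [
    AIUseRecord("resume screener", "hr", ["CVs"], "shortlisting candidates", True),
    AIUseRecord("chat assistant", "marketing", ["public web data"], "drafting copy", True),
]
flagged = [r.tool for r in register if r.needs_extra_controls()]
print(flagged)  # the HR tool is flagged as higher-risk
```

Even a simple register like this gives an organisation the starting map that the guidelines expect, and makes it easier to show regulators or partners which uses get extra oversight.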
If companies follow these steps, they will build a practical AI compliance framework in Japan and will be in a better position to handle future changes and rules in Japan and in other countries.
Future Direction: AI Law After 2025
Looking beyond late 2025, the AI Promotion Act is only the beginning of AI law in Japan, not the final or complete set of rules.
The AI Basic Plan is made to change over time. The government can update it as technology changes and new risks appear. This means the AI policy does not stay fixed in its 2025 version.
The government is also working on a Principle Code for IP and generative AI, and on new AI security guidelines. The privacy authority, the Personal Information Protection Commission, is reviewing how AI can use sensitive personal data under the APPI.
Regulators in different sectors, especially in finance and digital health, are studying more detailed rules for testing AI systems, keeping proper documentation, and deciding how to act when AI tools fail or cause harm. Because of this, work on AI rules in Japan will likely stay active through 2026 and beyond, even if there is no second large AI law right away.
Final Thoughts
If you look past the daily headlines, the story is quite clear. Japan wants to stay competitive in AI, without copying every single part of the EU approach. At the same time, it wants strong rules for powerful systems that affect health, money, elections and culture, so people are not left unprotected.
The AI Promotion Act, the AI Basic Plan and the growing set of AI guidelines give Japan a way to adjust its rules step by step. This lets the government tighten safeguards when needed, while still leaving room for new ideas and innovation in AI.
FAQs
Does Japan have one strict AI law like the EU AI Act?
No. The AI Promotion Act is a basic law, not a very strict and detailed law like the EU AI Act. It sets general rules, creates the AI Strategic Headquarters, and supports different guidelines and studies. Most detailed rules still come from other existing laws and official guidance.
What counts as “high-risk” AI in Japan right now?
Japan has not made an official list of high-risk AI systems. In practice, officials mostly watch AI used in healthcare, finance, HR, public safety, children’s services and elections, because these areas are very sensitive and already strongly regulated.
How is Japan handling generative AI and training data?
Japan allows wide text and data mining for AI training under the Copyright Act. At the same time, creators and companies are pushing back. The CODA letter to OpenAI about Sora 2 and the draft Principle Code on IP and transparency show that copyright and training data are now major topics in Japan.
What is happening with AI in healthcare?
AI in healthcare mostly follows normal health and medical device rules. AI tools for diagnosis or treatment are treated like medical devices and may need approval and checks after they are used. Guidance on medical digital data in AI research also asks for strong privacy, careful data use and ethical review.
Is Japan likely to move to heavier AI regulation later?
For now, Japan prefers a light-touch model and mainly uses guidelines and rules inside each sector. But the AI Promotion Act and the AI Strategic Headquarters give the government room to make stricter rules in the future. Stronger laws could come if current measures do not control serious risks like deepfakes, problems in critical systems, or cross-border AI services.

