The scope of AI’s policy impact across industries is widening at a rapid pace, and regulators are beginning to take notice. As new capabilities and use cases for AI continue to shape the commercial landscape, everyone from business leaders to MBA online students must be prepared to navigate the evolving regulatory environment. AI-related rulings will touch several policies, laws, and precedents, most likely including updates to export controls for advanced semiconductors and associated tooling, national defence, and, perhaps most visibly, intellectual property rights.
Executive Order 14179 – Trump’s Strategy
The latest major policy shift in AI is Donald Trump’s recent Executive Order 14179 from 23 January 2025, titled “Removing Barriers to American Leadership in Artificial Intelligence.” In this order, Trump outlines his goal of reducing the impact of federal government policy on the ability of American companies to develop larger, more advanced AI models, with the aim of maintaining the US’s lead in the field. The order also instructs the Assistant to the President for Science and Technology (the “APST”) to coordinate with other relevant presidential assistants and offices to develop and submit an action plan to the president to achieve this goal.
The order also sets out the process for revoking Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” issued by Joe Biden in 2023. EO 14110 was an initiative largely focused on AI safety, but it included a wide range of specific provisions and requests for further research and preparation.
The order’s primary emphasis was on standards for developing, testing, and deploying AI systems. These standards included precautions for ensuring the safety and security of AI systems, protections against workplace uses of AI that would threaten stable employment, safeguards to ensure that AI systems do not contribute to discrimination or inequity, and restrictions on the use of AI or AI-related digital infrastructure by foreign entities.
EO 14110 also contained provisions designed to promote AI innovation, including funding government AI research, expanding the US energy supply to meet the needs of AI datacenters, and promoting the use of AI in biomedical research. Lastly, the order sought to inform and implement further AI policy by creating a White House AI Council comprising the heads of various Executive branch departments, primarily those related to international relations, defence, intelligence, law enforcement, and the economy. EO 14179 dictates that any actions taken under the auspices of EO 14110 be examined by the APST and other presidential advisors, assessed for consistency with EO 14179, and revoked or rescinded accordingly.
Apart from this rollback, the order as yet lacks specific regulatory or legal implications. But its title, the general tenor of its contents, and a subsequent OSTP call for information have been interpreted by the technology industry as indicative of the president’s desire to deregulate industries related to the development of artificial intelligence to encourage AI innovation and adoption. Accordingly, Google and OpenAI, widely seen as the two global leaders in the AI industry, have recently put forth their own proposed policy frameworks outlining potential updates to existing regulations and policies. Both proposals cover everything from the minutiae of copyright law to sweeping proposals for government adoption of and investment in AI.
Google and OpenAI Provide Guidance
Google’s framework proposes three main pillars for AI advancement and adoption: encouraging investment, government AI adoption, and promoting pro-AI policy abroad. The majority of the proposal – roughly half of the document – is devoted to elaborating how the government might act to encourage investment in American AI systems.
Google places special emphasis on the needs of AI companies in the realms of data and energy, calling for policies that expand and streamline access to both. Specifically, the search and software giant calls for maintaining energy transmission and permitting rules, and for keeping in place existing protections regarding the use of publicly available copyrighted data for training AI systems. They also specify the need for stronger federal definitions of the different categories of publicly available and anonymous data, in order to establish clear, enforceable legal distinctions regarding what each type of data can and cannot be used for.
Google also requests improved public access to the USPTO’s patent records and a stronger patent review process, given their potential implications for the development of AI and related technologies. They emphasise the risk of foreign patents restricting innovation by American companies, and the need for a strong, transparent, and accessible review process to ensure that patent trolling by foreign entities does not stifle American progress. Google’s proposals go on to address AI’s impact on the American workforce, the democratization of AI research, the use of AI by the US government, and the need for the US government to champion market-driven technical standards, clear and unfragmented regulatory requirements, and disclosure requirements that do not unfairly burden companies.
OpenAI’s proposals are broadly similar, with slightly different implications. The company calls for “a regulatory strategy that ensures the freedom to innovate.” For OpenAI, this includes regulations that keep American AI companies free from international regulation, especially from China, and that promote the export and global adoption of American AI systems. Beyond Google’s call for maintaining the current intellectual property regime that has allowed the development of AI, OpenAI emphasises the need for expanded copyright protections for AI companies, pushing for clear policy granting AI developers the freedom to train on all publicly available data. They further advocate for “AI sandboxes” – places where AI companies can operate with explicit liability protection against future state-based regulation – and for the development of specialised infrastructure and processes to facilitate cooperation between government and AI labs of all sizes.
Publishers Defend Their Rights
But American technology giants like Google and OpenAI are not the only businesses with a stake in the progress of AI. Publishers of print and video content, along with their affiliated organisations, have filed numerous lawsuits and lodged other forms of protest seeking regulation or rulings that require compensation when their intellectual property is used to train AI models or to produce content inspired or influenced by that property. West Publishing, an affiliate of Thomson Reuters, sued the AI developer Ross Intelligence, resulting in a ruling that provides some precedent for publishers’ rights over their data being used to train AI – although in that particular case, the primary objection concerned the means through which Ross obtained intellectual property owned by West, and the data in question was not public.
Other attempts at restriction have come in the form of direct action by private organisations to legally limit the use of AI. The Writers Guild of America has disallowed the use of AI-generated scripts unless a writer obtains the studio’s explicit permission to use AI in the creation of a script, and its agreement ensures that writers are credited and compensated even when a studio instructs them to use AI. The agreement also establishes that AI-generated material is not considered “literary material,” denying it some of the specific protections the WGA normally secures for scripts. Penguin Random House has explicitly forbidden the training of AI models on its material in the new copyright notices published in all new printings of the works it manages.
In the UK, executives of leading publishers are appealing directly to the government to intervene on their behalf to protect their intellectual property from AI training, which they deem copyright infringement. Some industry players have anticipated these strategies by publishers and preempted them by paying directly for highly valuable training data. Microsoft is one such example, approaching the academic publisher Taylor & Francis to pay for the right to train AI models on its works and to reproduce content inspired by them. This strategy is unlikely to prove feasible for the vast volumes of data required to train generally intelligent systems like Google’s Gemini or OpenAI’s ChatGPT. The commercial future of these systems – and the impact that future regulation may have on developers’ ability to build new models, or even offer existing ones for public use – therefore remains uncertain until governments issue clear rulings on the status of existing AI models, the legality of using different classes of public data for training, and what rights to compensation, if any, publishers of public data possess.