Joe Biden's AI Legacy
Not much to undo anyway
With the Biden Administration soon coming to an end, I wanted to look back at the progress made over the last couple of years in the tech space, specifically the meaningful regulation of Artificial Intelligence.
To recap the timeline, the Biden Administration took three major steps:
(1) the Blueprint for an AI Bill of Rights, which created a rubric for the effective governance of Artificial Intelligence.
(2) the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which set administrative policy informed by the Blueprint into legal practice.
(3) the National Security Memorandum on Artificial Intelligence, which fulfilled a directive set forth by the Executive Order to craft formalized strategy and policy for Artificial Intelligence in the national security space.
Executive Order
I’ve written previously about how the Executive Order was, and remains to this day, the most significant policy initiative in the AI regulation space. It encouraged the development and use of AI applications across the executive branch while establishing guardrails to protect against some of the negative outcomes of algorithmic approaches.
In my opinion, the most consequential and long-lasting product of the Executive Order is the Office of Management and Budget (OMB) guidance on the management of AI systems in federal agencies. The guidance extends the EO’s directives on civil rights and critical safety by defining the types of applications that fall within the scope of those areas. This gives the Department of Justice something concrete to work with when investigating potential civil rights violations arising from the use of algorithmic systems.
National Security Memorandum
The Biden Administration has not taken a strong stance on the use of lethal autonomous weapons. In 2023, the Biden State Department issued a Political Declaration stating that the military use of AI should comply with international humanitarian law. To be clear, this was not a binding action in any way so much as a “hey guys, we should do this” message to other nations.
The Executive Order later reiterated the Political Declaration while setting no administrative restrictions on the development or deployment of lethal autonomous weapons. I thought at the time that this was a missed opportunity, and it was missed again with the National Security Memorandum, which focused on what we should be doing rather than what we shouldn’t be doing, while briefly referencing the non-binding Political Declaration once more.
It is disappointing that the Biden Administration made no effort to back up its loose declaration that lethal autonomous weapons are an area of concern. Sixty countries have signed the Political Declaration, committing to a shared desire to move on this issue, but there was no leadership to create any law, international or otherwise, to curb lethal autonomous weapons.
What comes next?
The Executive Order was a good start, and it created a rubric for future administrations sympathetic to this issue to work with. Unfortunately, that is not what we are about to get. President-Elect Trump has already indicated that he will revoke the Executive Order, and likely every other memorandum and guideline the Biden Administration has created around this issue, consistent with his broader deregulatory stance in most areas. This is a bad sign for any meaningful progress toward AI governance for at least the next four years, but doubly unfortunate is that there isn’t much to undo anyway. The Biden Administration made progress, but fell embarrassingly short on this issue, and we enter 2025 with no meaningful law controlling the development of AI.
