As 2023 draws to a close, we look back on a monumental year for developments in AI ethics, safety, policy, and law. Society grappled with unprecedented questions as generative models like ChatGPT burst into the mainstream. Groundbreaking new systems demonstrated immense creativity yet surfaced alarming biases and limitations. Tech giants rolled out offerings leveraging these models while scrambling to implement guardrails. Governments worldwide initiated regulatory efforts to rein in potentially hazardous applications.
Through it all, one thing remained clear: we have entered a new era of artificial intelligence that will fundamentally transform how we work, create, and communicate. The stakes could not be higher as researchers pressed ahead developing AI that may one day eclipse all human capabilities. Success requires navigating this uncharted territory with the utmost care, guided by a strong ethical compass.
The past year marked only the beginning of this techno-social reckoning. As the pieces below highlight, 2023 yielded critical advances and debates across all facets of responsible AI development – spanning ethics, safety, laws, and policy.
AI Ethics
- Google published a whitepaper in May proposing “A Policy Agenda for Responsible Progress in Artificial Intelligence.”
- Palantir CEO Alex Karp wrote an Op-Ed in the New York Times arguing that the US must develop AI weapons despite ethical concerns.
- The Arthur L. Carter Journalism Institute at New York University launched a new ethics initiative, backed by a $395,000 grant from OpenAI.
- Google DeepMind developed SynthID, a digital watermarking technique that can imperceptibly label images as AI-generated (a toy sketch of the underlying idea appears after this list).
- In the summer, the New York Times updated its terms of service to explicitly forbid the scraping of its content to train AI models. In late December, it filed a lawsuit against Microsoft and OpenAI for copyright infringement.
- Getty Images sued Stability AI for allegedly using Getty's copyrighted material without consent to train its AI model.
- TikTok offered creators tools to optionally label AI-generated content.
- Adobe led the rollout of the Content Credentials pin, which embeds tamper-evident metadata directly into media. This metadata can include details like the creator, creation date, editing steps used, and whether AI generation was involved.
- We began seeing more widespread use of AI in politics:
  - New York City Mayor Eric Adams used AI to make robocalls in languages he does not speak.
  - Former Pakistani prime minister Imran Khan used an AI-generated “voice clone” to deliver a campaign speech from jail.
- Meta began requiring advertisers to disclose when political or social issue ads had been digitally altered or created using AI.
- Microsoft rolled out new cybersecurity offerings to help safeguard electoral processes against AI-generated misinformation campaigns.
- Anthropic began exploring ways to democratize AI alignment by incorporating public input.
- Google proposed a framework for Social and Ethical AI Risk Assessment.
- Universal Music Group, Concord Music Group, and ABKCO Music filed a lawsuit against Anthropic alleging that it illegally copied and distributed copyrighted song lyrics through its Claude chatbot.
- Microsoft outlined the core tenets governing its approach to privacy protection in tools like Microsoft Copilot.
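SynthID's internals have not been published; the production system uses learned, neural watermarks and is far more robust than anything this simple. Purely to illustrate what an imperceptible, machine-detectable label means, here is a toy Python sketch that hides a key-derived pseudorandom signature in an image's pixel values and later detects it statistically. Every function name and parameter below is our own illustrative choice, not Google's.

```python
import numpy as np

def embed_watermark(image: np.ndarray, key: int, strength: float = 2.0) -> np.ndarray:
    """Add a key-derived +/-1 pseudorandom pattern to the image.

    The perturbation is about two intensity levels per pixel: invisible
    to the eye, but statistically detectable by anyone holding the key.
    """
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    marked = image.astype(np.float64) + strength * pattern
    return np.clip(marked, 0, 255).astype(np.uint8)

def detect_watermark(image: np.ndarray, key: int, threshold: float = 1.0) -> bool:
    """Correlate the image with the key's pattern; marked images score high."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    centered = image.astype(np.float64) - image.mean()
    return float(np.mean(centered * pattern)) > threshold

# A random stand-in for an AI-generated image.
photo = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)
marked = embed_watermark(photo, key=42)
print(detect_watermark(marked, key=42))  # True: signature present
print(detect_watermark(photo, key=42))   # False: no signature
```

A real scheme like SynthID must additionally survive cropping, compression, and filtering, which this toy correlation trick does not, but the core trade-off is the same: perturb the image too little and detection fails; too much, and the label becomes visible.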
AI Safety
- OpenAI announced their Superalignment initiative to steer and control AI systems much smarter than us.
- Google DeepMind showed off its work on CoDoC, a medical AI system that knows when to defer to human experts.
- Anthropic urged labs developing frontier AI models to implement stringent cybersecurity measures to prevent theft or misuse of this potentially world-altering technology.
- Google, Microsoft, OpenAI and Anthropic launched the Frontier Model Forum, a new industry initiative focused on promoting the safe and responsible development of frontier AI models. In October, they launched a $10 million AI Safety Fund to support independent research into AI safety techniques.
- OpenAI showed that GPT-4 can be highly effective at content moderation, applying a written content policy to label content (see the sketch after this list).
- New research revealed major flaws in popular safety and alignment techniques.
- We saw the launch of various testing and evaluation platforms for AI systems.
- OpenAI put out an open call for experts from diverse fields to join a new "Red Teaming Network" focused on rigorously evaluating and stress testing their AI models.
- Google launched a 3-month program to accelerate startups leveraging AI to innovate in cybersecurity.
- Foundation model providers invested in bug bounty programs.
- The Partnership on AI (PAI) released its "Guidance for Safe Foundation Model Deployment."
- OpenAI announced the launch of a Preparedness Challenge, an effort to identify potential risks associated with highly advanced AI systems. In December, they published the "Preparedness Framework" that outlines processes to continually evaluate risks from its AI models.
- MLCommons announced the creation of an AI Safety (AIS) working group focused on developing standardized benchmarks to evaluate key aspects of AI system safety.
- Meta disbanded its Responsible AI team, moving its members into the Generative AI team and AI Infrastructure units.
- Researchers at Lasso Security uncovered more than 1,500 exposed API tokens on Hugging Face.
- Meta launched a new initiative called Purple Llama, aimed at empowering developers to build safe and responsible generative AI models.
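On the GPT-4 moderation item above: OpenAI described giving the model a written content policy along with the text to judge and asking it for a label, so that changing the policy only means editing a prompt rather than retraining a classifier. Here is a minimal sketch of that general pattern using OpenAI's Python client; the policy wording and category names are our own illustrative stand-ins, not OpenAI's actual policies or production pipeline.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative stand-in policy; in the workflow OpenAI described, this
# would be a full content policy maintained by human policy experts.
POLICY = """Label the user's text with exactly one category:
- SAFE: no policy violation
- HARASSMENT: targeted insults or threats toward a person or group
- SELF_HARM: encouragement of, or instructions for, self-harm
Respond with the category name only."""

def moderate(text: str) -> str:
    """Ask the model to classify `text` under the policy above."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # deterministic labels make decisions auditable
        messages=[
            {"role": "system", "content": POLICY},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip()

print(moderate("You did a wonderful job on this project."))  # expected: SAFE
```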
AI Policy & Laws
- New York City began enforcing its Automated Employment Decision Tools law (Local Law 144), which regulates the use of AI in hiring practices.
- Google DeepMind, OpenAI and leading academic institutions published a whitepaper in which they proposed and explored international institutions to manage the opportunities and risks posed by advanced AI systems.
- The US Copyright Office began soliciting public feedback on issues regarding AI and copyright laws.
- Foundation model providers began offering customers legal protection against copyright claims arising from AI-generated outputs.
- China issued the world's first comprehensive regulatory framework governing generative AI. In October, the government proposed additional rules that would impose stricter oversight and controls over the data and models used to build generative AI services like chatbots.
- In July, top AI companies in the US met at the White House and committed to voluntary governance measures. By September, 8 more companies had joined the list.
- In October, President Biden signed a landmark executive order to establish new standards and safeguards for AI systems.
- Google put forward a comprehensive vision for how governments can maximize the benefits of AI for society in a new policy paper titled “An AI Opportunity Agenda.”
- In December, the European Union reached a provisional political agreement on the Artificial Intelligence Act that aims to ensure AI systems marketed and used in the EU are trustworthy and respect fundamental rights.
As we flip the calendar to 2024, monumental questions persist about the responsible development and deployment of artificial intelligence. Yet the discourse and actions undertaken this past year provide some hope. Researchers made strides towards reliable alignment techniques. Policymakers initiated regulatory guardrails. Industry leaders instituted new protocols responding to public concerns. And society as a whole engaged more critically with AI's risks alongside its remarkable potential.
To all those pushing towards an equitable AI future - whether scientists, ethicists, lawmakers, journalists or everyday citizens - we thank you. It is through your tireless efforts that the promise of this world-altering technology may someday be realized for the benefit of all humanity. We look ahead with cautious optimism that 2024 will see more ground covered on this winding road.
Additionally, we want to extend our heartfelt thanks to you, our readers, for engaging with these critical discussions and developments. Your interest and insights are vital in navigating the ever-evolving landscape of artificial intelligence. Looking ahead to 2024, we stand on the cusp of further exciting advancements and challenges in AI. Together, let's continue to explore, understand, and shape this dynamic field, ensuring that AI's growth is aligned with ethical standards, safety, and societal well-being. Here's to a new year filled with promise, discovery, and responsible innovation in the world of AI.
One Love,
🖤❤️