
Elon Musk’s Controversial New Image Generation Tool Exposes the Gaps in Ethical AI Governance

TL;DR
On August 13, 2024, Elon Musk's AI company xAI released a controversial text-to-image generator that can create unrestricted content, sparking ethical concerns after users generated offensive images, such as Hitler memes featuring Disney characters.

The situation highlights the broader challenge of enforcing ethical norms in AI, a focus area since AI technologies began exhibiting bias and other problems in the mid-2010s. Ethical frameworks, though widely espoused, have limited enforceability owing to the subjectivity of ethical norms, the resources needed to implement them, and the difficulty of translating them into coherent product design. In addition, xAI's lax moderation challenges the assumption that the ethical AI movement will encourage companies, particularly prominent ones, to self-regulate responsibly.

This blog argues that India must craft an exceptional approach to AI governance that focuses on specific harms rather than abstract ethical principles, aiming for practical, enforceable solutions that balance innovation with risk mitigation.

On August 13, 2024, Elon Musk's AI company, xAI, released a new text-to-image generator that creates images from user prompts. The release made headlines after users reported being able to create Hitler memes using Disney characters and images of Pokémon wielding assault rifles. Such reports lay bare the limitations of ethical norms in prompting effective governance of AI, even in well-organized parts of the technology industry.

Ethical AI emerged as a buzzword in the mid-2010s as the capabilities of AI technologies became apparent, along with their accompanying problems. AI systems began producing results with gender and racial biases. A prominent illustration is Amazon's AI recruitment tool, which "penalized" CVs containing the word "women" because it was trained on resumes submitted mostly by male applicants.

A discourse around ethics sought to foster accountability in the development and deployment of AI. It also prompted the creation of several normative statements and charters by industry, governments, academia, and civil society.

Ethical AI spun off a cottage industry of actors involved with "trustworthy", "responsible", and "human-centric" AI. These efforts, mirrored in India, espoused the adoption of high-level principles to tackle real-world concerns such as safety, bias, and privacy. Their premise was simple: introduce enough voluntary ethical frameworks, and industry will proactively adopt them.

Musk's xAI challenges that notion. Illustratively, limited content moderation appears to be one of its image generation tool's unique selling points. Users tout its unrestricted image generation as revolutionary, noting that its ability to produce NSFW (not safe for work) content sets it apart from other AI models that impose strict ethical guidelines. xAI permits the creation of images of real people, such as Taylor Swift in lingerie, something that is both unethical and legally prohibited without their authorization. By comparison, if ChatGPT is asked to create such an image, it refuses to do so.

xAI also defies the assumption that well-known companies are motivated to be responsible about AI to avoid damaging their reputation. xAI is not a nominal actor. Its image generator is currently available to a subset of the X (formerly Twitter) platform's 650,000 premium users. Going forward, if Musk makes xAI's tools available to X's 368 million monthly active users, many of whom are Indian, its user base could eclipse that of ChatGPT, which currently has around 180 to 200 million users.

xAI forces us to reckon with the limitations of ethical AI frameworks. Their first failing is that they are rooted in morality, even though moral expression is subjective. Morals take us into the realm of right versus wrong and good versus bad, and views on these questions vary from person to person. They are, therefore, open to manipulation, as we often see in Indian politics.

Second, translating ethics into risk assessment for AI requires multidisciplinary teams with diverse backgrounds, spanning law, policy, safety, security, and engineering, to ensure that the analysis of potential threats is thorough. Can we realistically expect most AI companies to meet such a human resource demand? Most Indian AI companies struggle to find the right engineers, let alone social science expertise.

Third, coding ethics into product design is not easy; even well-resourced companies struggle with it. Google faced public scrutiny when its Gemini image generator depicted historically white figures, such as the US Founding Fathers, as people of color. Google's image generator was attempting to correct for a longstanding bias problem in generative AI, namely the reinforcement of racial stereotypes, but the method it deployed led to an over-correction.

A fourth failing of ethical AI frameworks is evident in their transmutation into laws. Scholars like Sangchul Park, Associate Professor at Seoul National University's School of Law, point out that these frameworks tend to treat AI as a monolithic technology, even though it is highly variegated.[1] Generative AI like ChatGPT is very different from an industrial robot. However, Park further notes that laws based on these principles, such as the EU AI Act, which is much admired in India, follow suit.[2] Such laws presume that problems in riskier (but very different) use cases, such as self-driving cars and credit scoring, have the same causes.[3]

Consequently, the EU AI Act prescribes a uniform set of rules, steeped in ethical principles, to these different AI use cases, even though they may not be contextually appropriate. Park notes that existing European laws require vehicles to be safe and credit scoring to be transparent.[4] But the EU AI Act requires both to be safe and transparent, simply because AI is involved.[5] The consequence of this lack of nuance: overregulation and unnecessary compliance costs.

India must craft an exceptional approach to AI governance and move towards targeted, contextual governance of the technology, focused on real-world harms rather than abstract ethical principles. For instance, in the case of image generators, decision-makers could prescribe the adoption of an acceptable use policy that spells out the dos and don'ts of what content may be generated, in accordance with the law. This is similar to the content prescriptions set out for social media companies' user terms of service under the country's Information Technology law.
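To illustrate how such a policy could translate into something enforceable at the product level, here is a minimal Python sketch of a pre-generation prompt screen. Everything in it, including the policy categories, the keyword patterns, and the screen_prompt helper, is a hypothetical illustration rather than any actual law or vendor implementation; production systems would rely on trained classifiers rather than keyword lists.

```python
# Hypothetical sketch: an acceptable use policy expressed as a
# machine-checkable screen that runs before image generation.
# The categories and patterns below are illustrative only; they are
# not drawn from any actual statute or product.
import re
from dataclasses import dataclass


@dataclass
class PolicyRule:
    category: str        # a category the published policy names
    patterns: list[str]  # crude regex triggers, for illustration only


# Example rules an image generator's acceptable use policy might encode.
ACCEPTABLE_USE_POLICY = [
    PolicyRule("sexualised_depictions_of_real_people",
               [r"\blingerie\b", r"\bnude\b"]),
    PolicyRule("hate_or_extremist_imagery",
               [r"\bhitler\b", r"\bswastika\b"]),
]


def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, violated_categories) for a user prompt."""
    violations = [
        rule.category
        for rule in ACCEPTABLE_USE_POLICY
        if any(re.search(p, prompt, re.IGNORECASE) for p in rule.patterns)
    ]
    return (not violations, violations)


if __name__ == "__main__":
    allowed, hits = screen_prompt("Hitler posing with a Disney character")
    print(allowed, hits)  # -> False ['hate_or_extremist_imagery']
```

The point is not the code itself but that dos and don'ts written into a policy can be decomposed into concrete, testable checks, which is precisely what abstract ethical principles resist.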

India has the unique vantage point of being a large digital technology hub that is not yet on a deterministic path to AI governance. An attempt to codify ethics in a practical way may make for an enforceable framework that balances innovation imperatives against the need to mitigate technological risks.

[1] Park, Sangchul. 2024. "Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework." https://arxiv.org/abs/2303.11196.

[2] Park, Sangchul. 2024. "Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework." https://arxiv.org/abs/2303.11196.

[3] Park, Sangchul. 2024. "Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework." https://arxiv.org/abs/2303.11196.

[4] Park, Sangchul. 2024. "Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework." https://arxiv.org/abs/2303.11196.

[5] Park, Sangchul. 2024. "Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework." https://arxiv.org/abs/2303.11196.