Unsettling Trends: AI Ethics Takes a Backseat Once Again

Staying on top of the fast-moving AI industry is a tall order, so here is a roundup of recent stories in the world of machine learning. This week, the news cycle finally quieted down ahead of the holiday season, but one AP headline about AI image generators stood out: LAION, a data set used to train many popular open source and commercial AI image generators, was found to contain thousands of images of suspected child sexual abuse. The Stanford Internet Observatory, working with anti-abuse charities, identified the offending material, and LAION has since pulled the data set and pledged to remove the flagged content before republishing it. The incident underscores how little thought is going into generative AI products as competitive pressure mounts.

The lower barrier to entry for training generative AI models raises ethical challenges of its own, and many releases have shipped with little apparent consideration of their ethical implications; think OpenAI's ChatGPT and DALL-E, Google's Bard, and Microsoft's Copilot. In the race for market dominance, harmful releases keep making it out the door. The passage of the EU's AI Act offers some hope, but there is a long road ahead.

Other AI stories of note from the past few days:

- predictions for AI in 2024
- Microsoft Copilot gaining music-creation abilities
- Rite Aid being banned from using facial recognition tech
- the EU offering compute resources to AI startups
- OpenAI giving its board new powers
- a Q&A with UC Berkeley's Ken Goldberg
- CIOs taking it slow with generative AI
- news publishers suing Google over AI
- OpenAI's deal with Axel Springer
- Google's expansion of its Gemini models

Recent research includes life2vec, a Danish study that treats sequences of life events much like language in order to predict life expectancy and other outcomes (a rough sketch of that framing appears below), and Coscientist, an LLM-based assistant that can plan and run experiments for researchers. Google's AI researchers also closed out the year with work in frontier domains: FunSearch for mathematical discoveries, StyleDrop for replicating visual styles through generative imagery, and VideoPoet for generative video.
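For the curious, here is a minimal sketch of the life2vec-style framing, in which life events are tokenized and fed to a transformer the way words are in a language model. The event vocabulary, model dimensions, and prediction head below are entirely hypothetical stand-ins, not the study's actual data or architecture; the sketch assumes PyTorch is installed.

```python
import torch
import torch.nn as nn

# Hypothetical vocabulary of life-event tokens (illustrative stand-ins,
# not the study's Danish registry data).
EVENTS = ["<pad>", "birth", "school", "job_start", "job_change",
          "moved_city", "married", "diagnosis"]
VOCAB = {event: idx for idx, event in enumerate(EVENTS)}


class LifeSequenceModel(nn.Module):
    """A small transformer encoder over life-event tokens, mean-pooled
    into a single binary logit (e.g., a mortality-risk proxy)."""

    def __init__(self, vocab_size: int, d_model: int = 32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model, padding_idx=0)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        pad = tokens == 0  # True where the sequence is padding
        hidden = self.encoder(self.embed(tokens), src_key_padding_mask=pad)
        # Mean-pool only the real (non-padding) positions.
        hidden = hidden.masked_fill(pad.unsqueeze(-1), 0.0)
        pooled = hidden.sum(dim=1) / (~pad).sum(dim=1, keepdim=True).clamp(min=1)
        return self.head(pooled).squeeze(-1)  # one raw logit per person


# One fabricated "life," padded to a fixed length, just to show the shapes.
life = torch.tensor([[VOCAB["birth"], VOCAB["school"], VOCAB["job_start"],
                      VOCAB["married"], 0, 0]])
model = LifeSequenceModel(len(VOCAB))
print(model(life))  # untrained logit; a real pipeline trains on labeled outcomes
```

The real study trains on millions of such sequences drawn from national registry data; the point here is only the core idea of modeling a life the way a language model models a sentence.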