The OpenAI chaos came to an end late Tuesday after Sam Altman returned to the company as CEO, capping a rollercoaster few days. But, according to Reuters, before Altman was fired, some within OpenAI had raised concerns with the board about an internal project that they believed could present a danger to humanity.
Two Reuters sources say OpenAI researchers sent a letter to the non-profit board citing the risk of Q* (pronounced Q star), a project said to be part of the company’s push to develop artificial general intelligence (AGI), or, in layperson’s terms, a superintelligence.
The details of Q*’s capabilities are opaque, but the tool can reportedly solve certain mathematical problems, which some experts believe is a marker of AGI.
OpenAI has acknowledged the existence of Q*, and that a letter was sent to the board before Altman was kicked out, without commenting on the nature of the missive or his dismissal.
**Why it matters:** As highlighted in a petition by OpenAI staffers, the then-board had made clear the destruction of the company “would be consistent with the mission.” It seems, then, that the ousting of Altman was an attempt to stop the advancement of Q*. But now that Altman is back, and a new OpenAI board is in place, there will surely be more questions about Q* and what it means for all of us.
The Washington Post **reports** that Adobe and Shutterstock have allowed photo-realistic AI-generated images to be mixed in alongside real photos on their stock image platforms.
The Washington Post highlights AI images depicting scenes of war, along with fake photos that show protest marches. As the report notes, some celebrities have shared such images on social media, oftentimes believing them to be genuine depictions of war or discontent.
Adobe Stock and Shutterstock allow users to upload AI-generated works. By contrast, Getty Images does not allow users to post AI content to its platform.
**Why it matters:** With wars in Europe and the Middle East, photos of events on the ground help shape the narrative. Adding AI-generated images to the mix gives people an easy way to push their own version of events.
Google has added a new feature to Bard, allowing users to ask questions about the content of a video.
As Google explains, a user might want to know how many eggs are used in a video recipe, and Bard can extract the answer from the video.
Google added YouTube content to Bard earlier this year, initially allowing users to search for video content via the chatbot.
**Why it matters:** YouTube is a giant repository of information. Giving Bard access to that data could be useful not just for users but also for the development of the chatbot itself.