Headlines This Week

  • The Pentagon wants to beef up its swarms of autonomous drones with some AI. File that under Things That Will Keep You Up At Night.
  • OpenAI has admitted that AI text detectors generally do not work. Cool.
  • As fears of AI-fueled political manipulation grow, Google has announced that any political ads hosted on its platforms must disclose whether they were made with AI. I’m not sure that will really fix the problem, but it’s a nice gesture just the same.
  • UNESCO, the UN’s specialized agency that focuses on arts, culture, and education, has urged governments to regulate generative AI in schools. I suspect they’ve noticed just how terrible ChatGPT has been for colleges in the U.S., where students are using it to cheat like there’s no tomorrow.
  • Last but not least: two U.S. senators have introduced bipartisan legislation to regulate AI. One of them is Josh Hawley, who doesn’t have the best tech policy track record in the world.


The Top Story: Elon’s Ultimate AI Corporation


Illustration: thongyhod (Shutterstock)

For the past decade, Elon Musk has invested heavily in a slew of progressively stranger businesses, many of which have a dystopian hue. From his brain-computer interface startup Neuralink, to his pet Tesla project “Optimus,” a bipedal robot, to ChatGPT creator OpenAI (which Musk co-founded), Musk has helped spawn a pantheon of weird, sci-fi-tinged businesses that actively flirt with the fringes of technological innovation. Now, Musk’s new biographer, Walter Isaacson, has suggested that many of these businesses are part of a broader scheme by Musk to usher in a bold new era of artificial intelligence. In an article published in Time, Isaacson argues that most of Musk’s various startup investments and business ventures have been part of a broader strategy to spur the creation of “artificial general intelligence,” or AGI.

Not familiar with AGI? The concept is decidedly vague. It basically refers to the advent of that scary AI future we’ve all had dreams (or nightmares) about—the “singularity,” where artificial intelligence is not just a rote mechanism of human-led algorithmic manipulation (“stochastic parrots,” as recent large language models have been called), but a self-teaching, organic intelligence that mirrors—or even surpasses—the kind humans naturally possess.

During an interview with Isaacson, Musk apparently told the writer that he thought his many disparate business ventures—like Neuralink, Tesla’s Optimus, and a neural-network-training supercomputer called Dojo—could be tied together “to pursue the goal of artificial general intelligence.”

Pivotal to this supposed master plan may be Musk’s recent launch of yet another startup, xAI. Isaacson seems to think that Musk plans to fold many of his other businesses (including xAI and X, a.k.a. the website formerly known as Twitter, which Musk purchased last year for $44 billion) into one big business. The result could be a major artificial intelligence corporation designed to push technological boundaries beyond their current constraints.

However, many critics maintain that AGI is still quite a long way off. While Musk may have his sights set on being the techno-messiah who brings about the robot revolution that—according to countless science fiction films—will ultimately doom mankind, the jury still seems to be out on whether that’s actually possible in the near or even far term. Isaacson’s book on Musk, on the other hand, is available in the short term. The biography is due out next Tuesday.

The Interview: Michael Brooks, on the Challenges That Lie Ahead for the Robotaxi Industry


Photo: Center for Auto Safety

The interview this week is a recent conversation with Michael Brooks, the executive director of the Center for Auto Safety. Michael’s organization has repeatedly criticized the self-driving car industry and voiced concern about the potential road hazards it poses. When GM’s Cruise and Google’s Waymo recently got the go-ahead to expand commercial operations for their robotaxis in San Francisco (a big step in the evolution of the self-driving car industry), we thought it would be a good opportunity to talk to Michael about the challenges posed by automated road travel. This interview has been edited for clarity and brevity.

What do you think of the recent developments with Cruise and Waymo in San Francisco?

There’s been a lot going on lately…I think San Francisco has really woken up to the fact that there’s a problem here. I think they’re starting to ask the question: ‘Why do we really need this? Why do we need additional vehicles on our roads clogging up traffic?’ But, you know, at the same time Cruise is expanding across America. They’re in Raleigh, they’re in Austin. There are a lot of other cities in other states where they are going to have a presence.

Have you been monitoring how the self-driving car industry has been attempting to shape the regulatory environment around their vehicles?

Something that the auto industry has tried to do around the country is control policy at the state level. What that does is remove the ability of the fire chief or police chief in San Francisco to say, ‘Hey, these cars need to be off my roads today. There are safety issues.’ That’s at the heart of [what was expressed at] the DMV and Public Utilities hearings in SF…people who actually live in these cities and have to experience the negative effects of these cars don’t have a voice or any control over whether they’re deployed on their streets. This is a regulatory setup that autonomous vehicle companies love. There are sometimes significant political differences between cities and states, and the car manufacturers know that it’s going to be very hard for cities to fight back [against the states] when they’re in a position like this. So they like the idea of the state regulatory environment—for now. Ultimately, they want a federal scheme that preempts the states from doing anything as well. I think the power that the states think they have right now may be fleeting.

There’s been a lot of talk about the potential for self-driving vehicles to reduce road mortality rates. Do you think that, hypothetically, there are some public health benefits here?

Hypothetically, there are. However, the vehicles will need to be tested at speeds higher than 30 miles per hour if they want to be deployed more widely (30 mph is the speed at which Cruise was recently approved for commercial operations in San Francisco; Waymo, meanwhile, was approved for travel at up to 65 mph). We see so much death and destruction at higher speeds—and that’s where a lot of the real human judgment and errors come into play. Autonomous vehicles are going to have to address that if they want to be something that humans can use across the country. Right now, the best case scenario for this technology is very short trips on closed courses where nothing’s going to scare them and they know they’ll have wifi signals and they’re not going to run into concrete. Things happen so fast in car crashes at higher speeds; without testing the cars in those environments and demonstrating there’s some sort of safety benefit to them, it’s hard to know what’s going to happen with these products in the future.

Catch up on all of Gizmodo’s AI news here, or see all the latest news here. For daily updates, subscribe to the free Gizmodo newsletter.
