Blake Lemoine, an engineer who’s spent the last seven years with Google, has been fired, reports Alex Kantrowitz of the Big Technology newsletter. The news was allegedly broken by Lemoine himself during a taping of the podcast of the same name, though the episode is not yet public. Google confirmed the firing to Engadget.

Lemoine, who most recently was part of Google’s Responsible AI project, went to the Washington Post last month with claims that one of the company’s AI projects had gained sentience. The AI in question, LaMDA — short for Language Model for Dialogue Applications — was publicly unveiled by Google last year as a means for computers to better mimic open-ended conversation. Lemoine seems not only to have believed LaMDA attained sentience, but to have openly questioned whether it possessed a soul. And in case there’s any doubt that his views are being expressed without hyperbole, he went on to tell Wired, “I legitimately believe that LaMDA is a person.”

After making these statements to the press, seemingly without authorization from his employer, Lemoine was put on paid administrative leave. Google, in statements to the Washington Post both then and since, has steadfastly asserted that its AI is in no way sentient.

Several members of the AI research community spoke up against Lemoine’s claims as well. Margaret Mitchell, who was fired from Google after calling out the lack of diversity within the organization, wrote on Twitter that systems like LaMDA don’t develop intent; instead, they are “modeling how people express communicative intent in the form of text strings.” Less tactfully, Gary Marcus referred to Lemoine’s assertions as “nonsense on stilts.”

Reached for comment, Google shared the following statement with Engadget: 

As we share in our AI Principles, we take the development of AI very seriously and remain committed to responsible innovation. LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development. If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months. These discussions were part of the open culture that helps us innovate responsibly. So, it’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake well.
