
ChatGPT’s flirty “her” voice is doubly troubling


    NEW YORK, NEW YORK – SEPTEMBER 28: Scarlett Johansson attends the Clooney Foundation for Justice’s 2023 Albie Awards at the New York Public Library on September 28, 2023 in New York City. (Photo by Taylor Hill/WireImage)

If a guy names Her as his favorite sci-fi movie, then unveils an AI chatbot with a voice that sounds uncannily like the voice from Her, then tweets the single word “her” moments after the release … what do you conclude?

It is reasonable to conclude that the AI’s voice was heavily inspired by Her.

Sam Altman, CEO of OpenAI, has done all of the above, and his company recently released a new version of ChatGPT that spoke to users in a flirty female voice — a voice that clearly resembles Scarlett Johansson, the actress who voiced the AI girlfriend in Her, the 2013 Spike Jonze movie.

Now, Johansson has come forward to object, releasing a statement saying that the chatbot’s voice “sounded so similar to mine that my closest friends and news outlets couldn’t tell the difference.”

    Altman’s response? He claims that the voice “does not belong to Scarlett Johansson and was never intended to resemble her.”

    That, at first blush, is an absurd claim.

While the voice may not have been literally trained on or copied from Johansson — OpenAI says it hired another actress — there’s plenty of evidence that it may have been intended to resemble her. Beyond Altman’s love of Her and his “her” tweet, there are new revelations from Johansson: Altman, she says, reached out to her agent on two separate occasions to ask her to voice the chatbot.

When the first request came last September, Johansson said no. A second request came two days before the new chatbot’s demo, asking her to reconsider. “Before we could connect, the system was out there,” Johansson said, adding that she hired a lawyer to demand an explanation from Altman.

OpenAI released a blog post saying it went through a months-long process last year to find voice actors — including the voice behind “Sky,” which many people find similar to Johansson’s — before introducing voice capabilities for ChatGPT last September. According to Altman, “We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson.” September, remember, is the month Johansson says Altman first asked to license her voice.

If OpenAI did indeed cast the actor behind Sky before any outreach to Johansson, it still doesn’t necessarily follow that Sky’s voice was never meant to resemble Johansson’s. Nor does it necessarily follow that the AI model behind Sky was fed only the hired actor’s voice, with no input from Johansson’s. I raised these questions with the company. OpenAI did not respond to a request for comment in time for publication.

OpenAI took down Sky’s voice “out of respect for Ms. Johansson,” as Altman put it, adding, “We are sorry to Ms. Johansson that we couldn’t communicate better.”

But if OpenAI hasn’t done anything wrong, why would it take down the voice? And when Altman insists in the same breath that the voice had nothing to do with Johansson, how much “respect” does the apology really convey?

    “He felt that my voice would comfort people”

From Apple’s Siri to Amazon’s Alexa to Microsoft’s Cortana, there’s a reason tech companies have given their digital assistants friendly female voices over the years: from a business perspective, it’s a smart choice, one likely to improve the company’s bottom line.

That’s because research shows that when people need help, they prefer to hear it delivered in a female voice, which they perceive as non-threatening. (They prefer a male voice when it’s making authoritative statements.) And companies design assistants to be unfailingly enthusiastic and polite because that kind of behavior maximizes a user’s desire to stay engaged with the device.

But these design choices are worrisome on an ethical level. Researchers say they reinforce sexist stereotypes of women as servile beings who exist only to do someone else’s bidding — to help them, comfort them, and stroke their egos.

According to Johansson, providing a sense of comfort was precisely Altman’s goal when he tried to license her voice nine months ago.

    “He told me he felt that with my voice, I could bridge the gap between tech companies and creatives, and help consumers get comfortable with the seismic shifts related to humans and AI,” Johansson wrote. “He said he felt my voice would comfort people.”

It’s not just that Johansson’s breathy, flirty voice is soothing in itself. Johansson voiced Samantha, the AI girlfriend in the romance Her, a story about how an AI can connect with, comfort, and uplift a lonely human being. Notably, Samantha was far more advanced than anything modern AI companies have released — so advanced, in fact, that it evolved beyond its human user — so linking the new ChatGPT to the film probably flatters it, too.

Then there is a second level of concern, which is about a woman’s consent. Despite Johansson’s clear “no” to Altman’s request last year, OpenAI used a voice similar to Johansson’s and then, when she complained, told the world that the actress was wrong about the resemblance to her own voice.

    I wasn’t sure what to call this, so I asked ChatGPT about this type of situation more generally. Here’s how the chatbot responded:

This is part of a pattern at OpenAI. Can the company be trusted?

The Johansson controversy is one in a string of recent events that have caused people to lose confidence in OpenAI — and especially in its CEO, Altman.

Last year, artists and writers sued OpenAI for allegedly stealing their copyrighted material to train its AI models. Meanwhile, experts have raised concerns about deepfakes, which are becoming more worrisome by the day as the world draws closer to major elections.

Then, last November, OpenAI’s board tried to fire Altman because, they said at the time, he “was not consistently candid in his communications.” Former colleagues then came forward to describe him as a manipulator who talks out of both sides of his mouth — someone who claims he wants to prioritize safe AI deployment but contradicts that in his own behavior. Since then, employees have increasingly come to the same conclusion, to the point that some are leaving the company.

“I gradually lost confidence in OpenAI leadership,” former employee Daniel Kokotajlo told me, explaining why he left his job last month.

    “It’s a process of trust breaking one by one, like dominoes falling one by one,” another person with inside knowledge of the company told me last week on condition of anonymity.

Some employees have avoided speaking out publicly because they signed offboarding agreements with non-disparagement provisions upon leaving. After Vox reported on these agreements, Altman said the company was in the process of changing them. But the public might ask: why would OpenAI have such restrictive provisions in the first place if it hadn’t done something it wanted to keep out of public view?

    And at a time when many of OpenAI’s security-conscious employees are jumping ship because they don’t trust the company’s leaders, why should the public trust them?

Indeed, according to a new poll from the Artificial Intelligence Policy Institute, nearly 6 in 10 Americans said the souped-up ChatGPT release made them more worried about the rise of AI, while only 24 percent said it made them excited. What’s more, 52 percent of Americans now hold an unfavorable view of OpenAI.

    At this point, the burden of proof is on OpenAI to convince the public that it is trustworthy.
