Written by Sarmad Ali » Updated on: November 5th, 2024
The better AI gets, the more security talent agencies need to protect Hollywood stars from fake images, videos, and celebrity deepfakes. Generative AI and “deepfakes”, synthetic videos and photos that misrepresent real people, have fueled an explosion of unauthorized clips claiming to show famous figures saying or doing things they never did, from fake nudes and hyper-realistic images to videos that make it appear a celebrity is promoting a product they never used. It is a growing problem.
AI tools have emerged to combat the threat of celebrity deepfakes, and the entertainment industry's response is telling. Let's look at one example.
In a strategic move, talent agency WME has partnered with Loti, a Seattle-based software company pioneering programs that detect and eliminate unauthorized online content, such as celebrity AI deepfakes, involving its clients. Operating with just 25 staff members, Loti acts promptly, reaching out to online platforms directly and instructing them to remove any infringing photos or videos uploaded without authorization.
AI in Hollywood
In Hollywood, AI is considered a double-edged sword: while it can simplify workflows and inspire creativity, it is also seen as a threat to jobs and to ownership of intellectual property. And the issue of celebrity deepfakes is more concerning than ever.
Last summer, the Writers Guild of America and the Screen Actors Guild (SAG-AFTRA) went on strike partly to win stronger protections against AI, which the industry was adopting widely. More recently, the nonprofit Artist Rights Alliance issued an open letter to tech companies, signed by 200 musicians, urging them to value artists' work. With deepfakes becoming more common, agencies are now turning to AI themselves to fight back against the online impostors who use the same technology.
Managing Deepfakes
Managing celebrity deepfakes alone is like playing the hardest level of whack-a-mole single-handedly: keeping pace with these falsified videos is no walk in the park.
Loti was founded by Luke Arrigoni, who previously headed an AI company and later served as a data scientist at Creative Artists Agency, a rival agency. Loti partnered with WME around four or five months ago. WME's clients contribute photos and short audio clips that serve as reference material for spotting fake content. Loti's software tracks down unauthorized images online, notifies the clients, and sends removal requests.
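At a high level, systems of this kind compare numerical "embeddings" of a client's verified reference photos against images found while scanning the web, then flag close matches as candidates for removal requests. The minimal Python sketch below illustrates only that matching idea; it is not Loti's actual implementation, and the embeddings, the similarity threshold, and the flag_matches helper are hypothetical stand-ins.

```python
# Illustrative sketch of embedding-based match detection.
# NOT Loti's actual system; thresholds and helpers are hypothetical.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_matches(reference_embeddings, scraped_embeddings, threshold=0.9):
    """Return indices of scraped images whose embedding closely matches any
    client reference embedding, i.e. candidates for a removal request."""
    flagged = []
    for i, candidate in enumerate(scraped_embeddings):
        if any(cosine_similarity(ref, candidate) >= threshold
               for ref in reference_embeddings):
            flagged.append(i)
    return flagged

# Toy vectors standing in for face embeddings produced by some model.
refs = [np.array([0.9, 0.1, 0.0]), np.array([0.8, 0.2, 0.1])]
found = [np.array([0.88, 0.12, 0.02]), np.array([0.0, 1.0, 0.0])]
print(flag_matches(refs, found))  # -> [0]: only the first image matches a client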
Many believe this problem is insurmountable, but Loti is working to prove them wrong. Arrigoni did not disclose financial details or the number of WME clients using Loti's technology. Before Loti's involvement, WME staff addressed deepfakes more haphazardly, relying on spotting fake content online themselves or on occasional tips from clients' followers.
Struggles of Large Tech Companies
As far back as 2022, major corporations such as Meta and Google were already struggling to keep up with large volumes of rule-violating advertisements and accounts.
Today, more people in Hollywood are concerned that new AI models trained on publicly available data could infringe on copyrighted material. These technologies may further blur the line between reality and fiction, making it harder to discern what is genuine and what is not. That is why celebrity deepfakes keep making news.
Experts have raised a warning about letting counterfeit content linger online: a scenario that could derail a client's opportunities for work and endorsements. These fakes are crafted with such realism that they easily deceive most people.
The Loti partnership is the latest AI initiative WME has undertaken. Earlier this year, in January, the agency partnered with Vermillion, a firm whose technology helps stop intellectual property theft by detecting when AI uses a client's face or voice.
Conclusion
In this article, we discussed Hollywood's growing concern over celebrity deepfakes. Misused AI models have blurred the line between what is real and what is fake. To counter this, talent agencies and technology companies are partnering to detect unauthorized content and have it removed from the internet.