Studio Ghibli AI Art Trend Raises Serious Privacy Concerns: Experts Warn
CIOTechOutlook Team | Monday, 07 April 2025, 11:48 IST
As social media users embrace new artificial intelligence tools that render their personal photos in the style of Studio Ghibli, some experts warn that a more troubling situation looms: casual engagement could lead to unforeseen losses of personal privacy or misuse of personal data.
Cybersecurity specialists have been cautioning the public that while these tools may feel harmless, their terms of service are often ambiguous about what happens to users' photographs after they are uploaded.
The trend began when OpenAI released its GPT-4o model upgrade, which allowed users to render their personal photos in the style of Studio Ghibli, the popular Japanese animation studio. Several platforms claim they do not retain the photos, or that they delete them immediately after use; however, none explicitly defines what "delete" means - is the image destroyed entirely, are all traces of the photo removed, or only portions?
Photographs contain much more than facial data. Many carry hidden metadata - location coordinates, timestamps, even details about the camera used - which can inadvertently disclose sensitive personal information. AI photo editing tools use neural style transfer (NST), said Vishal Salvi, CEO of Quick Heal Technologies.
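The metadata Salvi refers to typically lives in a JPEG's EXIF segment. As a rough illustration of how easily its presence can be detected (and, by extension, stripped before uploading), the sketch below scans a JPEG's byte stream for the APP1/EXIF marker. `has_exif` is a hypothetical helper written for this article, not part of any tool mentioned above.

```python
import struct

def has_exif(jpeg_bytes: bytes) -> bool:
    """Return True if the JPEG contains an APP1/EXIF metadata segment."""
    if jpeg_bytes[:2] != b"\xff\xd8":  # must start with the SOI marker
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:      # not a valid segment marker
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xD9:             # EOI: end of image
            break
        # Segment length is big-endian and includes its own two bytes.
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 8] == b"Exif":
            return True                # APP1 segment carrying EXIF data
        i += 2 + length                # skip to the next segment
    return False
```

Re-encoding an image's pixels without copying these segments is all it takes to drop the location and timestamp data, which is why privacy guides often recommend stripping metadata before posting photos anywhere.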
These algorithms separate the content of uploaded photographs from their artistic style in order to superimpose the user's image onto reference artwork. While the process seems harmless, it carries security risks, such as model inversion attacks in which adversaries could reconstruct the original pictures from the Ghibli-style images, he said.
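The "separation of content from style" described here is, in the classic neural style transfer formulation (Gatys et al.), achieved by comparing a network's feature activations directly for content, and their Gram matrices for style. The following is a minimal sketch of the style representation only, using NumPy and assuming a feature map has already been extracted by a convolutional network; the function name is illustrative, not taken from any specific library.

```python
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """Style representation: correlations between feature channels.

    features: (channels, height * width) activations from one CNN layer.
    The Gram matrix discards spatial layout (roughly, the 'content')
    and keeps only which channels co-activate (the 'style').
    """
    channels, n = features.shape
    return (features @ features.T) / n
```

Content is compared on the raw activations themselves; style on the Gram matrices of several layers. Because this mapping discards information rather than encrypting it, researchers have shown that partial reconstruction of inputs is sometimes possible - the model inversion risk Salvi describes.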
"Even if companies claim they don't store your photos, fragments of your data might still end up in their systems. Uploaded images can definitely be repurposed for unintended uses, like training AI models for surveillance or advertising," Salvi cautioned.
The way these tools are designed makes it easy to overlook what you're really agreeing to, McAfee's Pratim Mukherjee said.
Eye-catching results, viral filters, and fast interactions create an experience that feels light - but one that often comes with hidden privacy risks.
"When access to something as personal as a camera roll is granted without a second thought, it's not always accidental. These platforms are often built to encourage quick engagement while quietly collecting data in the background.
"That's where the concern lies. Creativity becomes the hook, but what's being normalised is a pattern of data sharing that users don't fully understand. And when that data fuels monetisation, the line between fun and exploitation gets blurry," said Mukherjee, Senior Director of Engineering, McAfee.