Retail's generative AI adoption comes at a high security cost

The retail industry is among the leaders in generative AI adoption, but a new report highlights the security costs that accompany it.

According to cybersecurity firm Netskope, the retail sector has all but universally adopted the technology, with 95% of organisations now using generative AI applications. That’s a huge jump from 73% just a year ago, showing just how fast retailers are scrambling to avoid being left behind.

However, this AI gold rush comes with a dark side. As organisations weave these tools into the fabric of their operations, they are creating a massive new attack surface and fresh opportunities for sensitive data leaks.

The report’s findings show a sector in transition, moving from chaotic early adoption to a more controlled, corporate-led approach. There’s been a shift away from staff using their personal AI accounts, which has more than halved from 74% to 36% since the beginning of the year. In its place, usage of company-approved GenAI tools has more than doubled, climbing from 21% to 52% in the same timeframe. It’s a sign that businesses are waking up to the dangers of “shadow AI” and trying to get a handle on the situation.

In the battle for the retail desktop, ChatGPT remains king, used by 81% of organisations. Yet its dominance is not absolute. Google Gemini has made inroads with 60% adoption, and Microsoft's Copilot tools (Copilot and Microsoft 365 Copilot) are hot on their heels at 56% and 51% respectively. ChatGPT's popularity has recently seen its first-ever dip, while Microsoft 365 Copilot's usage has surged, likely thanks to its deep integration with the productivity tools many employees use every day.

Beneath the surface of this generative AI adoption by the retail industry lies a growing security nightmare. The very thing that makes these tools useful – their ability to process information – is also their biggest weakness. Retailers are seeing alarming amounts of sensitive data being fed into them.

The most common type of data exposed is the company’s own source code, making up 47% of all data policy violations in GenAI apps. Close behind is regulated data, like confidential customer and business information, at 39%.
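To make the risk concrete, the sketch below shows the kind of check a data loss prevention (DLP) layer performs before a prompt leaves the network. The categories and regular expressions are simplified illustrations, not Netskope's actual detection logic; production engines rely on far richer techniques such as document fingerprinting and ML classifiers.

```python
import re

# Illustrative-only patterns; real DLP engines combine ML classifiers,
# document fingerprinting, and exact-match dictionaries.
POLICY_PATTERNS = {
    "source_code": re.compile(r"\bdef |\bclass |\bimport |#include|\bpublic static"),
    "regulated_data": re.compile(r"\b\d{3}-\d{2}-\d{4}\b|\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def classify_prompt(text: str) -> list[str]:
    """Return the policy categories a prompt would violate, if any."""
    return [name for name, pat in POLICY_PATTERNS.items() if pat.search(text)]

print(classify_prompt("import os\nAPI_KEY = 'secret'"))    # ['source_code']
print(classify_prompt("Card 4111 1111 1111 1111 failed"))  # ['regulated_data']
```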

In response, a growing number of retailers are simply banning apps they deem too risky. The app most frequently finding itself on the blocklist is ZeroGPT, with 47% of organisations banning it over concerns it stores user content and has even been caught redirecting data to third-party sites.

This newfound caution is pushing the retail industry towards more serious, enterprise-grade generative AI platforms from major cloud providers. These platforms offer far greater control, allowing companies to host models privately and build their own custom tools.

OpenAI via Azure and Amazon Bedrock are tied for the lead, each used by 16% of retail companies. But these platforms are no silver bullet; a simple misconfiguration could inadvertently connect a powerful AI model directly to a company's crown jewels, creating the potential for a catastrophic breach.
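To illustrate the shape of that integration, here is a minimal sketch of invoking a model through Amazon Bedrock with the official boto3 SDK. The model ID and Anthropic-style request schema are assumptions for illustration (payload formats vary by provider), and the call runs with whatever permissions the resolved AWS credentials carry, which is exactly why scoping those credentials tightly matters.

```python
import json
import boto3  # official AWS SDK for Python

# Sketch only: the model ID and payload below are assumed examples; check
# which models your account has enabled and each provider's request schema.
# Run this under a tightly scoped IAM role, since the client inherits
# whatever access the ambient credentials grant.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Summarise our returns policy."}],
    }),
)
print(json.loads(response["body"].read())["content"][0]["text"])
```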

The threat isn’t just from employees using AI in their browsers. The report finds that 63% of organisations are now connecting directly to OpenAI’s API, embedding AI deep into their backend systems and automated workflows.
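A hedged sketch of what that backend embedding typically looks like, using OpenAI's official Python SDK with an illustrative redaction step in front of the call. The model name and the regex are placeholder assumptions; the point is that the application, not the employee, decides what data crosses the wire.

```python
import re
from openai import OpenAI  # official OpenAI Python SDK (v1.x)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def redact(text: str) -> str:
    """Toy scrubbing step: mask card-like numbers before text leaves the
    backend. A production pipeline would call a real DLP service here."""
    return re.sub(r"\b(?:\d{4}[ -]?){3}\d{4}\b", "[REDACTED]", text)

ticket = "Customer 4111 1111 1111 1111 reports a failed refund."
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[{"role": "user", "content": redact(ticket)}],
)
print(response.choices[0].message.content)
```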

This AI-specific risk is part of a wider, troubling pattern of poor cloud security hygiene. Attackers are increasingly using trusted names to deliver malware, knowing that an employee is more likely to click a link from a familiar service. Microsoft OneDrive is the most common culprit, with 11% of retailers hit by malware from the platform every month, while the developer hub GitHub is used in 9.7% of attacks.

The long-standing problem of employees using personal apps at work continues to pour fuel on the fire. Social media sites like Facebook and LinkedIn are used in nearly every retail environment (96% and 94% respectively), alongside personal cloud storage accounts. It’s on these unapproved personal services that the worst data breaches happen. When employees upload files to personal apps, 76% of the resulting policy violations involve regulated data.

For security leaders in retail, the era of casual generative AI experimentation is over. Netskope's findings are a warning that organisations must act decisively. It's time to gain full visibility of all web traffic, block high-risk applications, and enforce strict data protection policies that control what information can be sent where.
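The decision logic behind those controls can be as simple as this toy egress policy: block known-bad apps outright, route sanctioned ones through DLP checks, and coach users on everything else. The domains and category names are invented for the example.

```python
# Toy egress policy: invented domains and categories for illustration only.
BLOCKED_APPS = {"zerogpt.com"}       # apps banned outright
APPROVED_APPS = {"chatgpt.com"}      # sanctioned apps, subject to DLP

def egress_decision(domain: str) -> str:
    if domain in BLOCKED_APPS:
        return "block"
    if domain in APPROVED_APPS:
        return "allow-with-dlp"      # forward through data-protection checks
    return "coach"                   # unknown app: warn the user and log it

for domain in ("zerogpt.com", "chatgpt.com", "new-ai-tool.example"):
    print(f"{domain}: {egress_decision(domain)}")
```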

Without adequate governance, the next innovation could easily become the next headline-making breach.

