Artificial Intelligence is steadily transforming workplaces across the U.S., with ChatGPT emerging as a game-changer, a recent Reuters/Ipsos poll revealed.
Even as tech giants Microsoft and Google exercise caution, workers are increasingly turning to ChatGPT for everyday tasks.
Often dubbed an “AI conversation artist,” ChatGPT uses generative AI to hold multifaceted conversations with users, prompting a surge of interest worldwide.
However, the rapid adoption of generative AI has sparked concerns among security firms and corporations about leaks of intellectual property and strategic information.
Witnessing the dawn of a transformative era, individuals are utilizing ChatGPT for diverse tasks, from drafting emails to summarizing documents and conducting preliminary research.
Astonishingly, 28% of respondents in the AI-focused poll acknowledged regular ChatGPT usage at their workplaces, outstripping the 22% whose employers explicitly permitted such external tools.
The poll, which surveyed 2,625 U.S. adults, had a credibility interval, a measure of precision, of about 2 percentage points.
Notably, 10% reported prohibitions against external AI tools, while a quarter remained uncertain about their organization’s stance on this evolving technology.
Since its November launch, ChatGPT has soared to unprecedented popularity, igniting both excitement and apprehension.
Privacy concerns, particularly in Europe, have cast a shadow over OpenAI, ChatGPT’s developer, since human reviewers may read the chats the tool generates.
Moreover, because the AI absorbs the data it is trained on, it risks reproducing proprietary information in later responses.
Ben King, VP of Customer Trust at corporate security firm Okta, underscores the urgency of addressing data use and security gaps: “Users often misunderstand generative AI services’ data usage. Corporations need to assess risks comprehensively, especially since many AIs lack formal agreements due to their free nature.”
OpenAI declined to comment on individual employees’ ChatGPT usage, instead highlighting a recent blog post reassuring corporate partners about the boundaries of its data usage.
Meanwhile, Google’s Bard collects data on users’ text and behavior and offers options to delete that content. Microsoft and Alphabet-owned Google declined to offer additional comment.
Notably, a Tinder employee described colleagues discreetly using ChatGPT at the dating app, an anecdote reflecting a broader trend.
Companies such as Coca-Cola and Tate & Lyle are experimenting with AI to improve operational efficiency while keeping security a priority.
Amid this transformative landscape, vigilance prevails. Paul Lewis, Chief Information Security Officer at Nominet, underscores the need for caution, saying: “While the benefits are undeniable, vulnerabilities can’t be ignored. Malicious prompts could exploit AI chatbots, unveiling sensitive information. We must tread carefully on this transformative journey.”