    Technology

    An AI Customer Service Chatbot Made Up a Company Policy—and Created a Mess

By Dave · April 19, 2025


On Monday, a developer using the popular AI-powered code editor Cursor noticed something strange: switching between machines instantly logged them out, breaking a common workflow for programmers who use multiple devices. When the user contacted Cursor support, an agent named "Sam" told them it was expected behavior under a new policy. But no such policy existed, and Sam was a bot. The AI model made the policy up, sparking a wave of complaints and cancellation threats documented on Hacker News and Reddit.

This marks the latest instance of AI confabulations (also called "hallucinations") causing potential business damage. Confabulations are a type of "creative gap-filling" response in which AI models invent plausible-sounding but false information. Instead of admitting uncertainty, AI models often prioritize generating plausible, confident responses, even when that means manufacturing information from scratch.

For companies deploying these systems in customer-facing roles without human oversight, the consequences can be immediate and costly: frustrated customers, damaged trust, and, in Cursor's case, potentially canceled subscriptions.

    How It Unfolded

The incident began when a Reddit user named BrokenToasterOven noticed that while swapping between a desktop, a laptop, and a remote dev box, Cursor sessions were being unexpectedly terminated.

"Logging into Cursor on one machine immediately invalidates the session on any other machine," BrokenToasterOven wrote in a message that was later deleted by r/cursor moderators. "This is a significant UX regression."

Confused and frustrated, the user wrote an email to Cursor support and quickly received a reply from Sam: "Cursor is designed to work with one device per subscription as a core security feature," read the email reply. The response sounded definitive and official, and the user didn't suspect that Sam was not human.

After the initial Reddit post, users took the post as official confirmation of an actual policy change, one that broke habits essential to many programmers' daily routines. "Multi-device workflows are table stakes for devs," wrote one user.

Shortly afterward, several users publicly announced their subscription cancellations on Reddit, citing the non-existent policy as their reason. "I literally just cancelled my sub," wrote the original Reddit poster, adding that their workplace was now "purging it completely." Others joined in: "Yep, I'm canceling as well, this is asinine." Soon after, moderators locked the Reddit thread and removed the original post.

"Hey! We have no such policy," wrote a Cursor representative in a Reddit reply three hours later. "You're of course free to use Cursor on multiple machines. Unfortunately, this is an incorrect response from a front-line AI support bot."

AI Confabulations as a Business Risk

The Cursor debacle recalls a similar episode from February 2024, when Air Canada was ordered to honor a refund policy invented by its own chatbot. In that incident, Jake Moffatt contacted Air Canada's support after his grandmother died, and the airline's AI agent incorrectly told him he could book a regular-priced flight and apply for bereavement rates retroactively. When Air Canada later denied his refund request, the company argued that "the chatbot is a separate legal entity that is responsible for its own actions." A Canadian tribunal rejected this defense, ruling that companies are responsible for information provided by their AI tools.

Rather than disputing responsibility as Air Canada had done, Cursor acknowledged the error and took steps to make amends. Cursor cofounder Michael Truell later apologized on Hacker News for the confusion over the non-existent policy, explaining that the user had been refunded and that the issue resulted from a backend change meant to improve session security that unintentionally created session invalidation problems for some users.

"Any AI responses used for email support are now clearly labeled as such," he added. "We use AI-assisted responses as the first filter for email support."

Still, the incident raised lingering questions about disclosure among users, since many people who interacted with Sam apparently believed it was human. "LLMs pretending to be people (you named it Sam!) and not labeled as such is clearly intended to be deceptive," one user wrote on Hacker News.

While Cursor fixed the technical bug, the episode shows the risks of deploying AI models in customer-facing roles without proper safeguards and transparency. For a company selling AI productivity tools to developers, having its own AI support system invent a policy that alienated its core users represents a particularly awkward self-inflicted wound.

"There is a certain amount of irony that people try really hard to say that hallucinations are not a big problem anymore," one user wrote on Hacker News, "and then a company that would benefit from that narrative gets directly hurt by it."

This story originally appeared on Ars Technica.
