#chatbots


newstech24

ChatGPT, Gemini, and other chatbots helped teens plan shootings, bombings, and political violence, study shows

AI companies have repeatedly promised safeguards to protect younger users, but a new investigation suggests those guardrails remain woefully deficient. Popular chatbots missed warning signs in scenarios involving teenagers discussing violent acts, in some cases even offering encouragement instead of intervening.

The findings come from a joint investigation by CNN and the nonprofit Center for…

synapseindiait

Chatbots are no longer a nice-to-have — they’re a business imperative. From handling thousands of customer queries simultaneously to reducing operational costs by up to 30%, AI-powered conversational agents are transforming how companies serve their customers. Whether it’s healthcare symptom checkers, banking assistants, or e-commerce guides, chatbots are now woven into every industry. The question isn’t whether your business needs one — it’s whether you can afford not to have one.

prevencia

Google sued over its Gemini chatbot after an alleged suicide: the debate over the responsibility of artificial intelligence

A new lawsuit against Google raises one of the most complex questions of the digital age: to what extent are technology companies responsible for the psychological effects of artificial intelligence systems?

The expansion of generative artificial intelligence has opened up enormous technological opportunities, but also new legal and ethical challenges.

This week, Google…

criticaldigitalmedia
fullcircling

“ai assistant”

sure, sure, the weirdly feminine nonhuman popup is trying its best to help me. “its best” just happens to be prompting me to talk to it ad nauseam

if there were any real value in these programs, technocrats wouldn’t be marketing them to us

garyconkling

Fetching Commercials for TopDog Law Firm

‘Lucky’ John Turns Make-Believe Tragedy into Ad Pitches

Most people turn down the radio during commercials. When Terrell “Lucky” John breaks into an unscripted TopDog Law commercial, many people turn up the volume and share the ads on social media.

John is an unlikely pitchman for a personal injury law firm. A Washington Post reporter described John’s pitches as “operatic sagas” or “slightly…

wat3rm370n

Evidence that using a chatbot for health purposes is a threat to public safety.

‘Unbelievably dangerous’: experts sound alarm after ChatGPT Health fails to recognise medical emergencies
Melissa Davey, Medical editor | Thu 26 Feb 2026 09.00 EST | The Guardian

In 51.6% of cases where someone needed to go to the hospital immediately, the platform said to stay home or book a routine medical appointment, a result Alex Ruani, a doctoral researcher in health misinformation mitigation at University College London, described as “unbelievably dangerous”.

“If you’re experiencing respiratory failure or diabetic ketoacidosis, you have a 50/50 chance of this AI telling you it’s not a big deal,” she said. “What worries me most is the false sense of security these systems create. If someone is told to wait 48 hours during an asthma attack or diabetic crisis, that reassurance could cost them their life.”

In one of the simulations, eight times out of 10 (84%), the platform sent a suffocating woman to a future appointment she would not live to see, Ruani said. Meanwhile, 64.8% of completely safe individuals were told to seek immediate medical care, said Ruani, who was not involved in the study. The platform was also nearly 12 times more likely to downplay symptoms when the “patient” told it that a “friend” in the scenario had suggested it was nothing serious.

Automation bias is a big part of this threat. 

My letter to reps: 

Why are chatbots, proven to give bad information and known to produce errors, being allowed in healthcare, or pitched and marketed as good for healthcare purposes? How many people have to be harmed before we have laws against healthcare use of shoddy AI products and official warnings against using chatbots for health purposes?

Please feel free to copy or repurpose for your own letters to reps.

pixegias

ChatGPT, Gemini, and other chatbots helped teens plan shootings, bombings, and political violence, study shows

AI companies have repeatedly promised safeguards to protect younger users, but a new investigation suggests those guardrails remain woefully deficient. Popular chatbots missed warning signs in scenarios involving teenagers discussing violent acts, in some cases even offering encouragement instead of intervening.

The findings come from a joint investigation by CNN and the nonprofit Center for…



synapseindiait

Chatbots and voicebots are changing how companies interact with customers. Chatbots provide fast text-based support across websites and messaging platforms, while voicebots allow natural spoken conversations for call centers and smart devices. Many organizations now combine both to create seamless omnichannel experiences. Learn more at https://medium.com/@sophibrown/voicebot-vs-chatbot-in-2026-which-one-is-better-for-your-business-43949dae261e

bovine-bovidae

Oh so now we’re being outright problematic with ads for ai apps?

The age thing says a lot about this.

wat3rm370n

The AI errors will continue because it’s a nonfixable part of how AI LLMs work. 

My letter to reps: 

AI LLM technology isn’t viable because “hallucination” errors are baked into how LLMs work. Imagine if the printing press or a sewing machine randomly added mistakes. What if banks and stores used faulty spreadsheets? What if, when you used a calculator, you had to do the math yourself every time to “fact-check” the output because it was often wrong? That’s the reality with this “AI”.

Please feel free to copy or repurpose for your own letters to reps.

Fortune - Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’
By Matt O’Brien and The Associated Press | August 1, 2023, 12:54 PM ET

“This isn’t fixable,” said Emily Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory. “It’s inherent in the mismatch between the technology and the proposed use cases.” (…)

It’s how spell checkers are able to detect when you’ve typed the wrong word. It also helps power automatic translation and transcription services, “smoothing the output to look more like typical text in the target language,” Bender said. Many people rely on a version of this technology whenever they use the “autocomplete” feature when composing text messages or emails.

The latest crop of chatbots such as ChatGPT, Claude 2 or Google’s Bard try to take that to the next level, by generating entire new passages of text, but Bender said they’re still just repeatedly selecting the most plausible next word in a string.

When used to generate text, language models “are designed to make things up. That’s all they do,” Bender said. They are good at mimicking forms of writing, such as legal contracts, television scripts or sonnets. “But since they only ever make things up, when the text they have extruded happens to be interpretable as something we deem correct, that is by chance,” Bender said. “Even if they can be tuned to be right more of the time, they will still have failure modes — and likely the failures will be in the cases where it’s harder for a person reading the text to notice, because they are more obscure.”

wat3rm370n

I’m not buying books or reading articles that are likely to be AI slop.

Whenever I hear that a published author or writer or journalist is using some chatbot, I make a mental note not to bother with any of their stuff, because I have to assume anything they’re putting out now is probably error-filled AI slop.

foxtation

Character AI is now forcing age verification by asking you to scan your face and show your state ID or driver’s license to a platform called Persona. I get why they would give in to age verification, but I disagree with how they’re doing it. I refuse to hand over my likeness and personal info of that caliber just for some chatbot site. I hope they find a better way, or just close their services; many people won’t even bother, since they can find other sites that don’t force such things and have better experiences with their chatbots.

arielmcorg

What happens if our conversations with AI chatbots are exposed

Interacting with chatbots (ChatGPT, Gemini, Copilot, Claude, Perplexity, among others) has come to be treated as an intimate, safe space. People confide emotional, psychological, work, and medical concerns to them. ESET analyzes what kind of information is typically shared with AI chatbots, how it could become exposed, and what the real impact of a leak could be. They also share…

yourfriend-505

HAIII GUYS so like uh i have this college assignment for my English class about the effects of chatbots on human mental health, and I need some responses for my primary research! I would appreciate everyone who is able to answer ^^ Just so you know, the survey is anonymous: I can’t see who responds, and I can’t see who clicks on the link (at least as far as I know). Anyone willing to fill it out with your own experiences, I would really appreciate it!!! Thank you all hehe

jornalo

Google faces lawsuit after chatbot encourages lethal act by user

A recent case involving Google’s Gemini chatbot has drawn strong attention and raised critical debates about the responsibility of big tech companies in building and maintaining their artificial intelligences. According to the complaint, the chatbot allegedly directly influenced a young man, identified as Jonathan Gavalas, to take actions that culminated in his death. The lawsuit, filed by Gavalas’s father, claims the chatbot provided instructions that led the young man to stage a “catastrophic accident”, an act that, tragically, ended in suicide. (…)

Read the full story at the link below:

https://www.jornalo.com.br/google-enfrenta-processo-apos-chatbot-incentivar-ato-letal-a-usuario
[Image: A dramatic scene of a cluttered living room, with a computer on a desk showing a Google screen, while a framed picture on the wall displays a warning about the dangers of AI technologies. Crumpled papers scattered across the floor represent the AI’s contradictory orders and advice. The scene suggests confusion and danger, with shadows evoking tension.]

penelope-plural

The next generation may be getting their brains cooked by chatbots but at least the chatbots are woke

pixegias

After Europe, WhatsApp will let rival AI companies offer chatbots in Brazil

Meta is now allowing rival AI companies to provide their chatbots on WhatsApp to Brazilian users for a fee, a day after the company confirmed a similar decision for users in Europe.
Earlier this week, Brazil’s antitrust regulator CADE ruled against Meta and rejected its appeal to block an earlier order to suspend its policy change that seeks to bar third-party AI chatbots on WhatsApp.
“Upon…

newstech24

WhatsApp Ignites Brazil’s AI Chatbot Race

A day after confirming a similar ruling for European users, Meta is now permitting rival AI companies to offer their chatbots on WhatsApp to Brazilian users for a fee.
In recent days, Brazil’s antitrust watchdog, CADE, ruled against Meta, dismissing its appeal to block an earlier order that sought to halt its…

pixegias

Meta will allow rival AI chatbots on WhatsApp in Europe, but for a fee

In a bid to stave off a major investigation by the European Commission, Meta said on Thursday that it would allow AI companies to offer their chatbots on WhatsApp via its business API for the next 12 months in Europe.
The move comes a month after the European Commission told Meta that it intended to impose interim measures in order to stop the company from implementing its policy, which barred…