#deepfake

Text
cityofcrysis

Is anyone getting Margot Robbie nude deepfakes?? What the fuck??

Text
greggyour

YouTube expands AI deepfake detection to politicians, government officials, and journalists

YouTube is rolling out a new pilot that lets politicians, government officials, and journalists flag AI deepfakes that misuse their likeness, building on the system it already offers to creators. It works a bit like a “face Content ID”: the tool looks for AI-generated versions of a person, surfaces possible matches to them, and then YouTube reviews any takedown requests under its existing rules, which still leave room for satire and political commentary. YouTube says outright abuse has been relatively rare so far, but it’s clearly gearing up for bigger risks, from election-related fakes to voice clones and even AI versions of famous characters down the line.

Why it matters: This doesn’t just help protect reputation; it also means communicators need a plan for monitoring AI impersonations and reacting quickly when a convincing fake suddenly starts to spread.

Protect your identity and your brand
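The "face Content ID" idea above — compare uploads against a protected person's reference face and surface likely matches for review — can be sketched in a few lines. Everything here is invented for illustration (the 128-d embeddings, the `flag_possible_matches` name, the 0.85 threshold); it assumes face embeddings have already been extracted and reflects nothing about YouTube's actual system:

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def flag_possible_matches(reference, candidates, threshold=0.85):
    """Return indices of candidate embeddings similar enough to the
    protected person's reference embedding to warrant human review."""
    return [i for i, emb in enumerate(candidates)
            if cosine(reference, emb) >= threshold]

rng = np.random.default_rng(1)
ref = rng.normal(size=128)                    # the protected person's face
near = ref + rng.normal(scale=0.1, size=128)  # a likely match (same face)
far = rng.normal(size=128)                    # an unrelated face

print(flag_possible_matches(ref, [near, far]))  # flags only the near match
```

Flagged items would then go to the person and to policy review, as the post describes — the similarity check only surfaces candidates, it does not decide takedowns.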

Text
dalilamadjid

59: Image rights in the deepfake era

Deepfakes are a phenomenon of considerable and growing scale. Thanks to artificial intelligence, it is now possible to create images, videos, and even voices that imitate a person with striking realism.

But behind these technological feats lies a worrying legal reality: this content often relies on an unauthorized representation…

Text
pixegias

YouTube expands AI deepfake detection to politicians, government officials, and journalists

YouTube is expanding its likeness detection technology, which identifies AI-generated deepfakes, to a pilot group of government officials, political candidates, and journalists, the company announced Tuesday. Members of the pilot group will gain access to a tool that detects unauthorized AI-generated content and lets them request its removal if they believe it violates YouTube policy.
The…

Text
newstech24

YouTube’s AI Deepfake Guardian for Public Voices

YouTube is broadening its likeness-recognition system, which pinpoints AI-generated synthetic media, to an initial cohort of public officials, political candidates, and journalists, the company disclosed on Tuesday. Participants in the pilot will get access to a tool that identifies unauthorized AI-created material and lets them request its removal if they…

Text
pixegias

Meta’s deepfake moderation isn’t good enough, says Oversight Board

Meta’s methods for identifying deepfakes are “not robust or comprehensive enough” to handle how quickly misinformation spreads during armed conflicts like the Iran war. That’s according to the Meta Oversight Board — a semi-independent body that guides the company’s content moderation practices — which is now calling on Meta to overhaul how it surfaces and labels AI-generated content across…

Text
newstech24

Oversight Board Slams Meta: Deepfake Defenses Are Failing

The techniques Meta uses to detect deepfakes are deemed "not robust or comprehensive enough" to cope with how rapidly false information spreads during armed conflicts, such as the one involving Iran. That assessment comes from the Meta Oversight Board, a partially independent body that guides the company's content moderation practices, which is now urging Meta to…

Text
onetechavenue

iProov Scales to Over 1 Million Daily Transactions as Deepfakes Redefine the Enterprise Attack Surface

MANILA, PHILIPPINES – iProov, the world’s leading provider of science-based biometric identity verification solutions, today announced that it surpassed one million daily transactions in 2025, marking a definitive shift in the global security landscape. As generative AI fuels large-scale impersonation imagery and remote work reshapes enterprise security, identity has become the perimeter, and…

Text
ztremx

Did you know there was a global alliance of anti-scammers?

“You Dumb-Ass!”

Text
paginadepsihologie

Online scams no longer look like the spam of the 2000s.

They look like a real news story. Like a call from your bank. Like a message from your boss.

A clear guide to how AI-driven fraud works, the psychological signals to watch for, and what you can do right away to avoid losing money.

Read the article and send it to your family as well; prevention really matters here.

🔗 check it out!

Text
yovngnava

Deepfakeeeeeeeeeee

The word combines "deep learning" and "fake." Basically, deepfakes are video, image, or audio files manipulated with AI so that they look real.

The magic happens thanks to an architecture called GANs (Generative Adversarial Networks). Imagine two algorithms competing: one creates a fake image, and the other tries to detect whether it is a forgery. They keep at it until the "forger" is so good that the "detector" can no longer tell the difference.
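The forger-vs-detector loop can be sketched as a toy 1-D GAN. Every detail here is invented for illustration: the "real" data is just a Gaussian with mean 4, the generator is a linear map on noise, and the discriminator is a logistic regression — so this shows only the adversarial training dynamic, not anything close to a real deepfake model:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator ("forger"): noise z -> fake sample, g(z) = a*z + b
a, b = 1.0, 0.0
# Discriminator ("detector"): P(sample is real), d(x) = sigmoid(w*x + c)
w, c = 0.1, 0.0

lr = 0.05
for step in range(2000):
    z = rng.normal(size=32)
    fake = a * z + b
    real = rng.normal(loc=4.0, scale=1.0, size=32)

    # Detector update: push d(real) -> 1 and d(fake) -> 0
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * (np.mean((dr - 1.0) * real) + np.mean(df * fake))
    c -= lr * (np.mean(dr - 1.0) + np.mean(df))

    # Forger update: fool the detector, push d(fake) -> 1
    df = sigmoid(w * (a * z + b) + c)
    a -= lr * np.mean((df - 1.0) * w * z)
    b -= lr * np.mean((df - 1.0) * w)

# In successful runs the forger's output drifts toward the real mean (~4),
# i.e. the detector can no longer separate fake from real by value alone.
fake_mean = float(np.mean(a * rng.normal(size=1000) + b))
print(round(fake_mean, 1))
```

Real deepfake GANs play exactly this game, just with deep convolutional networks over images instead of two scalar functions over numbers.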

How can I make a deepfake?

Consumer apps: applications like Reface or FacePlay let you swap faces into preset clips in a simple, playful way.

Specialized software: open-source tools like DeepFaceLab or FaceSwap are the standard for professional results. They require a powerful graphics card (GPU) and a lot of "training" time with photos of the target person.

Diffusion models: newer tools (such as Stable Diffusion with specific extensions) can generate realistic video from text descriptions.

What fake videos can be used for

The potential is enormous, and not all of it is bad. The uses span several sectors:

Entertainment: de-aging actors (as in Star Wars) or dubbing films so that lip movements match the new language.

Education: "reviving" historical figures to teach a class or narrate their own biography.

Research: creating digital avatars for people who have lost their voice to illness.

Satire and art: political parodies or surreal artistic expression.

Negative examples and risks
Unfortunately, easy access has led to very dark uses. Disinformation is the main risk here:

Non-consensual pornography: the most widespread harmful use. The faces of celebrities or ordinary people are inserted into adult videos without their permission.

Financial fraud (CEO fraud): there are reported cases of scammers using AI to clone an executive's voice on a call and order urgent money transfers.

Political disinformation: videos of candidates saying things they never said, created to sway elections or cause social panic.

Extortion: fabricating evidence of a crime or a compromising situation to blackmail someone.

Text
falconsai

Modern espionage relies on “digital honeytraps” using synthetic NSFW content. Falcons.ai acts as your front-line counter-intel, identifying and blocking illicit deepfakes on secure networks before they become blackmail leverage. Secure the person, secure the mission.

#FALCONSAI #CounterIntel #Deepfake

Text
windowsblogita

Deepfakes and voice cloning: today they can steal your voice. And use it against you.

Scams are changing.

They no longer hunt only for passwords. They aren't limited to suspicious emails. They don't rely solely on fake links.

Today they can replicate your voice.

A few seconds of audio are enough, taken from a WhatsApp voice note, a social media video, an interview, or an online meeting. With voice cloning and audio deepfake tools, an attacker can build credible phone calls: "it's your…

Text
arte1misia

Musk
