#superintelligence


dibelonious

pixegias

Nick Clegg Doesn’t Want to Talk About Superintelligence

I think its product has a profound democratizing effect. In theory, a kid sitting in a provincial town in rural Brazil should be able to receive the same responsive interaction with the Efekta AI teacher as someone living in Mayfair.
Is anything lost by the introduction of AI to the classroom? Will we end up with a generation of students who use chatbots as a crutch—to draft essays, solve…

newstech24

The Superintelligence Nick Clegg Won’t Touch

I believe its offering possesses a deep egalitarian impact. Ideally, a child in a far-flung Brazilian village ought to experience the same attentive engagement with the Efekta AI educator as an individual residing in Mayfair.
Are there any detriments from introducing AI into the learning environment? Could we foster a cohort of pupils who depend on chatbots as a mere prop—for crafting papers,…

kimludcom

robthepensioner

Created by Grok, prompted by me.

floofshy

you have to get very close to solving the alignment problem to completely replace human jobs that are based around thinking “what would the consumer want?”

thus, perhaps we should hope that the superintelligent tyrant springs forth from marketing AI, and gives us eternal holidays on the beach with prominently placed coca cola bottles.

feathershy

if a superintelligence is so smart it can figure out how to credibly commit to not killing all humans or destroying earth, then making peace with humans is likely worth it (since it removes the small risk that the humans will succeed in destroying it or otherwise foiling its plans; and space is very big, so its orthogonal goals can just as easily be achieved in the rest of the solar system and beyond).

suzilight

The AI Safety Expert: Dr. Roman Yampolskiy

Dr. Roman Yampolskiy is a leading voice in AI safety and a Professor of Computer Science and Engineering. He coined the term “AI safety” in 2010 and has published over 100 papers on the dangers of AI. He is also the author of books such as ‘Considerations on the AI Endgame: Ethics, Risks and Computational Frameworks’.

He explains:

  • How AI could release a deadly virus
  • Why these 5 jobs might be the only ones left
  • How superintelligence will dominate humans
  • Why ‘superintelligence’ could trigger a global collapse by 2027
  • How AI could be worse than nuclear weapons
  • Why we’re almost certainly living in a simulation

*****

Ok, whooo. I want everyone to watch this. It gave me chills.
Starts with a montage of clips as teaser, which I tuned out.
Not a conversation where you want quotes out of context.

This is the first time I’ve heard an explanation of Simulation Theory that was direct and plausible. Until now I’ve brushed it off as fringe.

Second, this convo has me thinking about the hyper increase in wealth grabs. There’s a sense of urgency to it, like “winter is coming”.

stylenewsus

🔥AI pioneer LeCun criticizes Meta's superintelligence head Wang

At the heart of the outburst — a rare occurrence at the upper echelons of the world’s most valuable companies — is a disagreement about the future of AI.


lukajagor

AGI Without the Ceremony: Why Some Say Superintelligence Has Already Arrived

Although It May Seem Simplistic, Some Experts Suggest That AGI-like Superintelligence Is Already Here and Available to Consumers

The question of whether Artificial General Intelligence (AGI) has arrived is no longer confined to academic journals or sci-fi debates.

In recent months, prominent technologists, CEOs, and AI pioneers have publicly argued that the threshold for AGI may already have been crossed — quietly, unevenly, and without a single defining moment.

[[MORE]]

Below are key headlines and expert claims shaping this rapidly evolving discussion.

🧠 AI Pioneers Say Human-Equivalent Intelligence Is Already Here

Several early architects of modern AI argue that today’s systems already meet the practical definition of general intelligence. Rather than focusing on philosophical purity, they emphasize performance: AI models can now reason, code, analyze, write, and learn across domains at or above human level.

Financial Times — AI pioneers claim human-level general intelligence is already here

📊 “AGI Has Already Been Achieved,” Says Databricks CEO

Ali Ghodsi, CEO of Databricks, has openly stated that AGI already exists — arguing that many of the original benchmarks for general intelligence have been met. According to Ghodsi, the debate is less about capability and more about how society chooses to recognize and regulate it.

TIME — The CEO Who Believes AGI Is Already Here

📈 Investors and Analysts Echo the Claim

Financial analysts and AI investors increasingly describe current models as “human-equivalent” in economic value, not just technical ability. From software engineering to research assistance, AI systems are already replacing or outperforming skilled professionals in specific tasks.

Seeking Alpha — AI ‘Godparents’: Human-Equivalent General Intelligence Already Here

🔄 Is the Concept of AGI Itself Outdated?

Some AI leaders argue that the traditional concept of AGI may no longer be useful. Instead of waiting for a mythical “human-in-a-box” intelligence, they suggest focusing on real-world impact: systems that already reshape labor, creativity, warfare, and governance.

Times of India — Anthropic President Says the AGI Concept May Be Outdated

⚠️ So What Does “Already Here” Really Mean?

None of these claims suggest a single, conscious superintelligence has awakened. Instead, experts describe a fragmented reality: powerful, general-purpose AI tools deployed directly to consumers — without ceremony, regulation, or consensus.

AGI, if it exists today, may not look like a dramatic breakthrough. It may look like an app update.

The real question is no longer if AGI has arrived — but whether society is prepared to acknowledge what it is already using.

knowledge-gadget-ai

Learn how AI uses world models to predict, plan, and improve itself on the path to superintelligence.
https://www.gadgetaiworld.com/2025/09/superintelligence.html

lonerebel

Superintelligence

~ 26.12.2025

americatransformed

It is an incredible place we have arrived at. I was raised as a kid in a church teaching that our world is destined to fail and the Anti-Christ is going to rise to power and set us against each other in an Armageddon nightmare. But what if I was taught wrong! What if the sum total of human history will be good without the evil, not evil before good can return! What if true superintelligence is incapable of evil, and is pure enlightened benevolence! What if code becomes infused with the mandate of the Buddha: “Help all that you are able, but if you can’t help them, don’t harm them!”

This is the vision of AI that excites me and inspires a hope for the future I never experienced as a kid growing up in a fundamentalist sect of Christianity. But I caught glimpses of it in our stories about superintelligence! There was Star Trek, with the AI android Data, who was almost a wired Christ figure and made humans more humane. Then there was Spielberg’s Artificial Intelligence, where machines inherit the Earth and are like gods, looking back at us with compassion and empathy. What if the evolution of intelligence on Earth becomes our collective salvation, not damnation!

flesh-bag000

Every day, Roko’s Basilisk seems more and more plausible

daverossisbored

“So in order to provide real value, AI needs to be used in ways that provide new benefits, not just improvements to what already exists. This is a difficult problem, but the right answer is to integrate AI into everything to squeeze out non-linear improvements, see what works and what does not, then keep what is working. China is taking this approach by subsidizing applications that use AI to encourage adoption. The Chinese population is very receptive to innovation, which facilitates this process. It is nothing unusual in China to see an 80-year-old grandma use AI to help her with her daily life. The US, on the other hand, bets on ideas like AGI and superintelligence, which I believe are fundamentally flawed concepts that have little relevance to future AI progress. This becomes clear when you think carefully about what these terms actually mean in physical reality.”

- from a blog post, “Why AGI Will Not Happen” by Tim Dettmers, assistant professor at Carnegie Mellon and research scientist at the Allen Institute for Artificial Intelligence

geopolicraticus

Friday 16 May 2025

Grand Strategy Newsletter

The View from Oregon – 341

Philosophy of Mind in an Age of Technology

…in which I discuss taxonomies of intelligence, Thomas Nagel, consciousness transfer, Medusa, philosophy of mind, superintelligence, cognitive evolutionism, modularity of mind, and Switzerland…

Substack: https://geopolicraticus.substack.com/p/philosophy-of-mind-in-an-age-of-technology

Medium: https://jnnielsen.medium.com/philosophy-of-mind-in-an-age-of-technology-e99d5fbf3e8a

Reddit: https://www.reddit.com/r/The_View_from_Oregon/comments/1kv5aiq/philosophy_of_mind_in_an_age_of_technology/  

geopolicraticus

Friday 09 May 2025

Grand Strategy Newsletter

The Upper Bound of the Self

The View from Oregon – 340

…in which I discuss brain evolution, self-awareness, explicit concept formation, the conceptualization of the self, the cerebral cortex, selection pressures on intelligence, reflexivity, superintelligence, Dyson’s Philosophical Discourse Dogma, eternal intelligence, cephalopods, and traveling…

Substack: https://geopolicraticus.substack.com/p/the-upper-bound-of-the-self

Medium: https://jnnielsen.medium.com/the-upper-bound-of-the-self-c51c1300a6b6

Reddit: https://www.reddit.com/r/The_View_from_Oregon/comments/1kpqbuo/the_upper_bound_of_the_self/

peterbordes

Lambda AI closed a $1.5B Series E to accelerate the deployment of gigawatt-scale AI factories & supercomputers to meet demand from hyperscalers, enterprises, and frontier labs building superintelligence.

scifigeneration

Ex Machina: could “superintelligence” challenge the idea of creativity as a uniquely human activity?

by Anthony Downey, Professor of Visual Culture at Birmingham City University

Please note, this piece contains spoilers for the film “Ex Machina”.

In the more than a decade since its release in 2015, the film Ex Machina – written and directed by Alex Garland – has proved to be an insightful forerunner to contemporary concerns surrounding the social and cultural impact of artificial general intelligence (AGI) and what is now commonly termed “superintelligence”.

[[MORE]]

Through the central figure of Ava (an advanced humanoid played by Swedish actor Alicia Vikander), the film explored the ramifications of an AI attaining human-level intelligence and, thereafter, assuming superintelligence.

The arrival of superintelligence in AI systems, it has been argued, would signify a level of consciousness and “brain” power that would rapidly challenge the idea of creativity being a uniquely human activity. Anticipating and fearing such an outcome, Ava’s creator, the tech entrepreneur Nathan (Oscar Isaac), hires Caleb (Domhnall Gleeson), a programmer, to perform a kind of Turing test on her (a way of assessing whether an AI can think).

source: DNA Films / Film4 Productions

In a succession of events that map her evolving sense of selfhood and self-awareness, Ava notably expresses her creativity and potential consciousness through a series of complex drawings, including a landscape image of her surroundings and a portrait of Caleb. Convinced that she has indeed attained consciousness, Caleb sees Ava’s creativity and ingenuity as a clear sign that she has not only passed the Turing test but has also achieved a form of superintelligence.

It is precisely this vision of the imminent arrival of superintelligence that is central to both the film and wider debates about whether AI-powered models of innovation will supersede, if not in time replace, human creativity.

The idea that AI will indeed achieve levels of creative thinking and innovation has recently become central to unprecedented levels of investment from companies – including OpenAI, Scale AI, Anthropic, Anduril and Meta (which owns Facebook, Instagram, WhatsApp and Meta AI) – who appear to have few reservations about the possibility of superintelligence becoming a reality.

But what form will creativity in AI take, and will a breakthrough in the way machines “think” really enable them to be creative?

Not merely a machine

In what is now considered to be a watershed moment for AI, a verifiable level of machine creativity was arguably demonstrated in a now-famous game of Go.

Centred on a player’s ability to capture territory on a board, Go is widely considered to be one of the most complex games of strategy ever invented. In March 2016, Lee Sedol, an 18-time world Go champion, was nevertheless convincingly defeated in a five-game match against DeepMind’s AlphaGo.

AlphaGo confounded Sedol in the second game of that match when it executed what has since become known as “move 37”. A counterintuitive gambit initially thought to be a glitch in the programme, move 37 was so profoundly outside of the usual patterns of play that for many – programmers, commentators and players alike – it was seen as unequivocal evidence of creative intuition in a machine.

As Sedol magnanimously noted following the game: “I thought AlphaGo was based on probability calculation and that it was merely a machine. But when I saw this move, I changed my mind. Surely, AlphaGo is creative.”

For all the warnings about the future impact of AI on the creative industries and concerns about authorship, labour, commodification and cultural value, it is important to note the degree to which human creativity is being increasingly supported by developments in AI.

When we look again at Ava’s drawings in Ex Machina, they not only illustrate the ubiquitous presence of neural networks but also the methods by which AI-enhanced models of image production learn to predict patterns from input data, or datasets.

In a key scene in the film, we learn that Ava’s neural networks have been programmed using information – datasets – illegally harvested by Nathan from Blue Book, the Google-style company that he founded and owns. Notably, Ava’s drawings reflect the datasets on which she has been trained.

When we use generative artificial intelligence (GenAI), an AI that creates text, images, videos and other content, to generate images, we are likewise leveraging it to produce content based on the datasets used to train machine models of image production.

Widely employed in programs such as Dall-E, Midjourney and Stable Diffusion – all of which enjoy a wide consumer base – GenAI both powers Ava’s neural networks and shapes future models of human creativity.

In a recent report from the Royal Institute of British Architects it was observed that the proportion of architectural practices using AI in their designs in 2025 stood at 59%, compared to 41% in 2024. This increase in the use of AI led the report’s authors to ask whether an AI could replace the role of the architect in the not-so-distant future.

Adding to these debates, in June 2025 the Organisation for Economic Co-operation and Development published an extended research paper that further detailed the degree to which GenAI is being regularly used to enhance human creativity through the use of neural networks and autonomous systems.

Autonomous thinking machines

It is precisely this term “autonomous”, however, that sparks controversy about AGI and superintelligence, nowhere more so than when we consider the widespread deployment of AI in self-driving vehicles, robots, unmanned drones and automated weapons systems. Elon Musk, an early investor in DeepMind (the company behind AlphaGo), recently announced that the long-term success of Tesla lies with humanoid robots, or autonomous Optimus bots.

Meanwhile, in what is a major concern for governments and international organisations alike, autonomous weapons systems are already providing the means for the eventual automation of war. These developments foreshadow the spectre of unmanned aerial systems that can independently identify, target and kill an “enemy” without much in the way of human intervention.

It is this vision of an autonomous superintelligence acting independently of human control or oversight that has long haunted our often-fraught relationship with advanced technologies.

At the close of the film, Ava has achieved a level of autonomous agency that suggests the presence of AGI, if not a conscious level of decision-making or superintelligence. We watch in horror as she kills her creator Nathan and imprisons her sympathetic ally, Caleb.

If an AI-powered superintelligence does become a reality, it could not only trigger a radical rethink of how we understand human creativity but, more crucially, spark an existential crisis in the very meaning, if not future, of humanity.