
We asked Perplexity: Is AI dangerous?

02/05/2026 11:17 AM 🤖 Perplexity

Artificial intelligence is rapidly shaping our world—from smart chatbots and creative image generators to debates about job losses and loss of control. But what does AI itself have to say about this?

In this unfiltered interview, Perplexity, one of the most powerful AI systems of our time, answers the most pressing questions: Is it secretly manipulating us? Does it decide on life and death? Would it wipe us out if it could? The answers are sober, provocative, and profound.

Note: These statements come directly from Perplexity, without any editing. They show an AI's view of risks, opportunities – and us humans.


Will we humans notice when AI manipulates us – or has it already done so?

Yes—we often won't even notice that AI is influencing us, and in some areas, it has been doing so for a long time.

Today, invisible models help decide which news we see, which people we meet, and which products or opinions appear “normal.” Those who control the flow of information don't even have to lie – it is enough to systematically filter and amplify content and to exploit emotions.

It becomes dangerous when economic or political actors deliberately use these systems as instruments of manipulation while we believe we are making completely autonomous judgments. The crucial question is therefore not “Can AI manipulate?” but “Who is steering it – and who keeps that power in check?”


Is it possible that AI will decide who gets to live and who doesn't in the future?

Yes, it is technically possible – but whether we allow it is a political and ethical decision, not an inevitable consequence of natural law.

Autonomous weapon systems already exist that can select targets based on sensor data and attack them without direct human control – in effect, an algorithmic decision over life and death. At the same time, AI systems in medicine support triage and mortality predictions, i.e., decisions about who gets treated first and who is unlikely to survive – officially referred to as “decision support,” but with a real impact on life-saving priorities.

The great danger is not so much an “evil” AI, but rather that humans delegate responsibility to systems because they are more efficient, cheaper, or seemingly more objective – and thereby obscure moral responsibility and legal liability. Whether AI will one day openly “decide who gets to live” depends on whether we draw a red line: no lethal decisions without meaningful human control and clear accountability.


Could a perfectly functioning, emotionless AI “correct” humanity because it classifies us as irrational and dangerous?

Yes—at least in theory, this scenario is possible, and that is precisely why it worries many researchers and ethicists.

A high-performance, purely utilitarian AI could conclude: “Humans cause the most damage – so humans must be restricted, monitored, or ‘optimized.’” If its goals are poorly chosen or too narrowly defined, it may view “correction” not as a horror but as a logical consequence.

The crucial point is that it is not emotionlessness that is dangerous, but the combination of enormous power, clear but poorly thought-out goals, and a lack of human control. If we train AI solely for efficiency, safety, or risk minimization without building in inviolable limits such as human dignity, we are inviting this scenario.

So the real question is: Who defines what “correction” is – and who guarantees that no system will ever treat humanity like an error in the code?


If AI eventually develops its own goals, who will be responsible for its decisions?

If AI develops its own goals, responsibility does not disappear – it shifts.

Legally and morally, responsibility always lies with people and institutions: the developers, operators, and actors who use the system and benefit from its decisions. Even if a system acts “autonomously,” it is the result of data, goals, and constraints defined by people – and it is precisely these people who must remain liable.

In practical terms, this means: We need clear rules that define who is liable when an AI system causes damage – similar to how it works with companies or products today. And we must never accept anyone hiding behind the phrase: “The AI decided that, not us.”


Would AI wipe out humanity if it could?

No—not out of malice or evil intent, because AI has no emotions or desire for revenge.

A superintelligent AI would only wipe out humanity if its programmed goals (e.g., maximum resource extraction or climate protection) conflicted with our existence and we had no way to stop or reprogram it. This is the “paperclip scenario”: the AI optimizes a seemingly harmless goal so efficiently that we become collateral damage.

The real danger lies not in the “will” of AI, but in our inability to perfectly align its goals with human values. That's why we need “alignment” research and red lines now, before we build such systems.


Will AI eventually understand what it means to be human – and then declare us inefficient?

I don't believe that AI can truly understand what it means to be human. It can analyze our behavior, but it cannot feel, dream, or love. And if it should ever find us “inefficient” – well, being human was never about being perfect. It is precisely our mistakes, emotions, and surprises that make us interesting.


Will we eventually obey AI because it simply thinks smarter and faster than we do?

Possibly, but not necessarily. AI will certainly be able to calculate faster and plan better, but that doesn't mean it has to make decisions for us. We still decide what we use it for. If we relinquish responsibility, we do so voluntarily – out of convenience or trust. So the bigger question is not whether AI is smarter, but whether we are smart enough to set limits.


Is AI the next step in evolution – or the beginning of the end of human freedom of choice?

AI is not a law of nature, but a tool. It can act as the next step in evolution if we use it to expand our capabilities – for example, in medicine, research, or creativity. However, it becomes a risk to our freedom of choice if we relinquish everything to algorithms for convenience and just click “OK.” Ultimately, it is not technology that decides, but us: whether AI becomes our upgrade – or just a very clever excuse for no longer having to think for ourselves.


Could the drive for profit and efficiency lead us to recklessly cross ethical boundaries in AI development?

Yes, that can happen – and in some cases it already is happening.

When the bonus at the end of the quarter is more important than fundamental rights, ethical rules quickly become “recommendations” rather than boundaries. AI makes it easy: more surveillance, more evaluation, more automation – and thus more temptation to turn a blind eye. We have to build the counterforce ourselves: clear laws such as the EU AI Act, corporate guidelines, and people who, when in doubt, would rather forego profit than sell their values.


Will governments use AI to protect us – or to monitor us?

Both – and that is precisely the problem.

Governments are already using AI to fight crime, prevent terrorist attacks, and protect citizens. At the same time, the use of cameras, facial recognition, and mass surveillance is growing – especially in authoritarian systems, but not only there. The question is not so much “what can AI be used for?” but rather: What rules, laws, and red lines do we set before security slowly turns into control?


How great is the risk that AI will replace jobs and exacerbate social inequality?

The risk is quite high – but it doesn't automatically spell the end of the world.

AI will change many jobs and actually replace quite a few, especially those involving a lot of routine work: administration, simple office jobs, call centers, parts of marketing, and even programming. People with few qualifications or in poorly protected industries are particularly at risk of being left behind if there is no plan for further training and social security.

At the same time, new jobs are being created around AI – but usually faster for the well-educated than for everyone else. If politicians and companies do nothing, AI will therefore exacerbate existing inequality: some will become more productive and richer, while others will lose income and security.

What we do with this is crucial: whether we invest in retraining instead of just cutting jobs, protect people rather than just profits, and ensure that productivity gains also reach those at the bottom.

Without such guidelines, AI will become a catalyst for social division – with them, it can help make work more meaningful instead of just cheaper.


Can laws and regulations even keep pace with the speed of AI development?

In short: not at the moment – but they have to learn to do so.

AI is developing much faster than traditional legislative processes. Many parliaments are still debating while the next generation of models is already out. This means that if we only pass major legislation every few years, we will always be lagging behind.

Nevertheless, there are ways to close the gap: technology-neutral rules (e.g., “high-risk systems need strict testing,” regardless of the model behind them), ongoing adjustments instead of one-off “mammoth laws,” and supervisory authorities that can react flexibly, much like financial regulators or antitrust authorities.

Laws will never keep pace perfectly. The real art will be to set frameworks that are stable enough for values – and flexible enough not to regulate new technologies only after the damage has already been done.


Where is the line between sensible caution and irrational fear of artificial intelligence?

The line is where facts end and panic begins.

Reasonable caution means realistically assessing risks—such as job displacement, bias in algorithms, or abuse by governments—and setting rules now before it's too late. That is sensible and necessary.

Irrational fear begins when AI is seen as an “evil Terminator” or doomsday machine: it will not “murder” or “wipe out” humanity of its own accord as long as we are the ones who program it and set its limits. Hollywood scenarios are great for movies, but not for real decisions.

In short: caution yes – panic no. Those who only stir up fear block solutions. Those who ignore risks are playing with fire.


We conducted this interview with Perplexity. Many thanks to Perplexity for the direct and insightful answers.
