AI has found a new enemy

The race to develop increasingly powerful artificial intelligence systems has taken an unexpected turn. Beyond the competition between major tech companies, AI has found a new enemy: open-source models and projects with very few safety barriers. These systems, released so that anyone can download, modify, and run them on their own computers, offer enormous opportunities, but also risks that a growing number of experts consider serious.

In this article, we analyze what lies behind this new adversary of traditional AI, why open-source models generate so much controversy, and what implications they have for companies, developers, and users. We will also look at how regulation is evolving and what you can do to take advantage of this wave of innovation without falling into dangerous or irresponsible uses.

What it means that AI has found a new enemy

When we say that artificial intelligence has a new enemy, we are not referring to a science-fiction-like force, but to a profound shift in how this technology is developed and distributed. Until recently, most advanced AI systems were controlled by a few companies that offered access through closed platforms, with safety filters, strict terms of use, and teams dedicated to moderating content.

That centralized model allowed a certain level of control. If a user tried to use AI for harmful purposes, the platform could block the request, suspend the account, or even cooperate with authorities. It was not a perfect system, but it did provide clear barriers.

The new enemy breaks this logic: open-source AI models can be downloaded, executed, and modified without going through the supervision of a large platform. In practice, this means that anyone with sufficient technical knowledge and reasonable hardware can:

  • Remove or relax the original safety filters
  • Retrain the model with their own data, including toxic content
  • Create modified versions optimized for specific tasks, legal or otherwise

The threat is not open source itself, but the combination of power, accessibility, and lack of unified safeguards. That is why many experts speak of a new adversary: a decentralized ecosystem that is extremely difficult to control, even for the original creators of the technology.

Open-source projects and models: what they are and why they raise concerns

An open-source artificial intelligence model is one whose essential components are freely shared: the trained model itself, the code needed to run it, and sometimes even the tools to retrain it. This allows communities of developers and companies to build on top of it and adapt it to their needs.

In recent years, open AI projects have emerged across almost every area:

  • Large language models capable of maintaining complex conversations and generating long texts
  • Image generation models similar to those that create illustrations, logos, and realistic photomontages
  • Audio models that imitate human voices or generate music
  • Specialized models for programming, biology, finance, or other disciplines

These advances have democratized access to AI, but they have also opened the door to uses that are much harder to monitor. When a powerful model can run on a laptop without relying on a central server, the possibility of applying uniform safeguards disappears.
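
To make the point concrete, here is a minimal sketch of local execution, assuming the Hugging Face transformers library; the model identifier is a hypothetical placeholder, and any openly released checkpoint would load the same way:

```python
# Minimal sketch of local, unsupervised inference with an open model,
# using the Hugging Face transformers library. The model identifier is
# hypothetical; any openly released checkpoint works the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "example-org/open-llm-7b"  # hypothetical open-weights model

# The weights are downloaded once, then everything runs on local hardware:
# no central server sees the prompts, and no platform-side filter applies.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Explain how transformer models work.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Once the weights are cached locally, a script like this works offline; nothing in the pipeline can observe, rate-limit, or block what the user generates.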

Advantages of open-source AI

Before focusing on the risks, it is worth recognizing that open AI is not simply a problem. In fact, it has been a key driver of technological innovation. Its main advantages include:

  • Technical transparency: researchers and experts can analyze how the model behaves, detect biases, and propose improvements
  • Independence from providers: companies and developers reduce their dependence on a few platforms and avoid sudden price increases or service shutdowns
  • Adaptation to context: models can be tailored to a specific language, sector, or task without waiting for a large corporation to do it
  • Research acceleration: universities and labs with fewer resources gain access to cutting-edge technology without expensive licenses
  • Fast innovation: the community creates extensions, tools, and improvements at a pace few companies can match alone

In short, open-source AI is a catalyst for progress. However, the same openness that enables innovation also facilitates potentially harmful uses.

Risks and potential malicious uses

Experts warning about this new AI enemy are concerned about concrete, not just theoretical, scenarios. Some of the most cited risks include:

  • Mass disinformation generation: language models tuned to produce convincing fake news, manipulative messages, or large-scale polarization content
  • Detailed assistance for illegal activities: from hacking instructions to advice on producing dangerous substances, without the safeguards used by major platforms
  • Deepfakes and identity spoofing: combining open image and audio models to create fake videos or audio of public or private individuals
  • Cyberattack automation: tools that generate malicious code, highly personalized phishing emails, or strategies to bypass security systems
  • Training on toxic content: communities fine-tuning models with racist, violent, or extremist data

The key issue is not that these risks do not exist in closed models, but that in the open-source ecosystem it is much harder to impose limits and traceability. Once a model circulates freely, copying and modifying it becomes trivial.

Lack of safeguards: the major friction point

Major AI platforms invest enormous resources in safety systems: content filters, moderation teams, internal and external audits, and mechanisms to block misuse. While not perfect, they create a protective layer that reduces friction with governments, regulators, and society.

In contrast, many open-source models are released with general warnings but without comparable safety infrastructure. The result is a fragmented ecosystem where each developer must decide their own level of protection.

What kinds of safeguards major platforms use

To understand the difference, it is useful to review some of the safeguards typically used by companies offering AI through online services:

  • Pre- and post-generation filters to detect sensitive requests and block harmful outputs
  • Acceptable use policies prohibiting hate campaigns, coordinated disinformation, or criminal activity
  • Usage pattern monitoring to detect suspicious accounts or abnormal request volumes
  • Capability limitations in sensitive contexts such as security, bioengineering, or weapons
  • Reporting and rapid response channels for abuse detection and enforcement

In open-source models, some of these features can be implemented at the application level, but there is rarely a unified global infrastructure.
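
As an illustration, here is a minimal sketch of such an application-level layer; the blocklist and the generate function are hypothetical placeholders, since a production system would rely on a trained moderation classifier rather than a keyword list:

```python
# Sketch of an application-level safety layer around a local model.
# BLOCKED_TOPICS and the injected generate() callable are placeholders:
# a real system would use a trained moderation classifier, not keywords.
from typing import Callable

BLOCKED_TOPICS = ("malware", "weapon synthesis")  # illustrative only

def moderated_generate(prompt: str, generate: Callable[[str], str]) -> str:
    # Pre-generation filter: refuse clearly out-of-policy requests.
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "Request declined by usage policy."
    answer = generate(prompt)
    # Post-generation filter: screen the output as well, since unsafe
    # content can surface even from apparently benign prompts.
    if any(topic in answer.lower() for topic in BLOCKED_TOPICS):
        return "Response withheld by usage policy."
    return answer
```

Because this layer lives in the application rather than in the model itself, anyone who runs the raw weights directly simply bypasses it; that is precisely the enforcement gap described above.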

Why it is so hard to replicate these safeguards in community projects

Replicating the same level of protection in open-source environments is difficult for several combined reasons:

  • Limited resources: many projects rely on volunteers or small budgets without dedicated safety teams
  • Diverse goals: there is no single authority; different groups prioritize different values
  • Forks and derivatives: anyone can remove safeguards and redistribute modified versions
  • Local execution: if the model runs on a user’s computer, centralized enforcement becomes nearly impossible
  • Ethical and legal dilemmas: overly strict filters may undermine the openness of the ecosystem

This tension between safety and openness is the core of the current conflict. AI has found a new enemy because its creators are no longer only large companies, but a decentralized network of actors without a shared control framework.

Impact on companies, developers, and users

The rise of open AI models without strong safeguards is not just an academic debate. It directly affects organizations of all sizes and end users. Depending on how it is managed, it can become either a competitive advantage or a source of reputational, legal, and technical risk.

For companies, the appeal is clear: open-source models reduce costs and increase independence. However, this freedom comes with accountability: if a company deploys an unsafe model and it produces harmful outputs, the liability lies entirely with the organization.

For independent developers, open-source AI is a major opportunity, but also an ethical challenge and potential legal exposure if misuse occurs through their systems.

End users benefit from more varied and often free tools, but they may also interact with less filtered systems that can produce more biased or inaccurate responses.

Responsible use tips for open-source models

If you are considering using open AI models in your project or business, it is important to adopt a responsible strategy:

  • Evaluate the model’s origin: understand who trained it and under what license
  • Add your own filters: implement moderation layers for inputs and outputs
  • Define internal policies: establish clear rules for acceptable use
  • Test in controlled environments: evaluate extreme scenarios before deployment
  • Train your teams: ensure all departments understand risks and limitations
  • Review systems regularly: models evolve and can degrade over time

These practices do not eliminate risk, but significantly reduce it.
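
As a starting point for the first tip, here is a minimal sketch of a provenance check, assuming the huggingface_hub client library and a hypothetical repository name; attribute names can vary slightly between library versions:

```python
# Sketch of a provenance check before adopting an open model, using the
# huggingface_hub client. The repository name is hypothetical.
from huggingface_hub import model_info

info = model_info("example-org/open-llm-7b")  # hypothetical repo id

print("Author:", info.author)
# The license is commonly exposed as a tag such as "license:apache-2.0",
# though conventions vary between repositories.
print("License tags:", [t for t in info.tags if t.startswith("license:")])
print("Last modified:", info.last_modified)
```

A check like this does not guarantee trustworthiness, but it turns license terms and provenance into an explicit step rather than an afterthought.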

Where AI regulation is heading

Governments and regulators worldwide are increasingly concerned about open AI systems without traditional safeguards. The challenge is balancing innovation with safety.

In the European Union, discussions include requirements for general-purpose AI systems such as:

  • Better documentation of training data and capabilities
  • Basic safeguards before releasing model weights
  • Risk reporting to authorities
  • Cooperation with regulators and researchers

Other proposals include impact assessments and risk-tiered frameworks depending on model usage.

The emerging consensus is that banning open-source AI is not realistic, but a proportionate regulatory framework is necessary.

Final reflections on AI and its new adversary

The phrase “AI has found a new enemy” reflects a deeper tension. On one side, advocates of open and accessible ecosystems see openness as the path to innovation and democratization. On the other, those prioritizing safety fear real-world harm from uncontrolled systems.

Both perspectives contain valid truths. Without open-source AI, progress would be slower and more centralized. But without serious governance and responsibility, the cost of that openness could be too high.

As a professional, company, or user, the key is to stay informed and make conscious decisions:

  • What do I gain from using open vs closed models?
  • What risks am I accepting and how can I mitigate them?
  • What impact could my choice have on others?

Artificial intelligence will continue to evolve, and increasingly powerful open models will likely spread across the internet. Whether we use them responsibly will determine if this new “enemy” becomes a threat or a catalyst for progress.

Source: 3DJuegos Tecnología. Original article available at 3DJuegos
