12 February 2026

Launch of the First Open Large Language Model for Latin America & the Caribbean: Latam-GPT

On 10 February, CAF (Development Bank of Latin America and the Caribbean), together with the Centro Nacional de Inteligencia Artificial (CENIA), the Government of Chile, AWS, and the Data Observatory, presented Latam-GPT, the first regional GPT model in the world.

Latam-GPT was presented as the first open large language model (LLM) developed from and for Latin America and the Caribbean. The initiative is being introduced as a landmark regional step in the evolution of artificial intelligence: one that seeks to move the region from primarily consuming foundation models to also building and shaping them, with governance and priorities grounded in Latin American and Caribbean contexts.

The project is coordinated by CENIA and described as a collaborative effort involving more than 60 institutions and close to 200 specialists across the region. Organizers frame the launch as a milestone not only for the model itself but also for a new form of cooperation among the state, academia, international organizations, and the technology industry.

Key Facts

  • Presented by CAF (Development Bank of Latin America and the Caribbean), CENIA, the Government of Chile, AWS, and the Data Observatory.
  • Purpose: a strategic response to structural gaps in AI development in Latin America and the Caribbean, strengthening technological and cultural sovereignty.
  • Stakeholders involved in Latam-GPT’s development underscored that the technology is a regional AI public good and enabling infrastructure for governments, universities, startups, and companies.

Model Characteristics

  • Open model (described as open, transparent, and ethical)
  • 70 billion parameters
  • Built on Llama 3.1
  • Trained in the AWS cloud
  • Collaboration scale: 60+ institutions and ~200 specialists contributing across the region
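
If the weights are indeed released openly (publication on a public model hub is common practice for Llama-derived models), loading the model could look like the minimal sketch below. The repository id and prompt are illustrative assumptions, not confirmed details from the launch.

```python
# Minimal loading sketch, assuming the weights are published on the
# Hugging Face Hub. The repo id below is hypothetical; check the official
# release channels for the real identifier.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "CENIA/latam-gpt"  # hypothetical repo id, not confirmed at launch

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="auto",   # shard across available GPUs/CPU
    torch_dtype="auto",  # use the dtype stored in the checkpoint
)

prompt = "Explica en una oración qué es Latam-GPT."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```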

Digital Sovereignty: Why It Matters

In the current global AI landscape, foundation models increasingly shape productivity, competitiveness, and economic growth. Most frontier LLMs, however, are trained on data distributions that underrepresent the languages, cultures, and lived realities of the Global South. This can translate into performance gaps (weaker results on region-specific language and knowledge), higher bias risk, and reduced policy control when deploying AI at national or regional scale.

Latam-GPT is framed as a direct answer: a foundation model meant to better reflect regional linguistic and cultural contexts, while also creating a platform that regional stakeholders can extend, adapt, and audit. The “open” framing signals an ambition to democratize access to high-impact AI capabilities and reduce dependency on closed proprietary systems—especially for public-interest deployments where transparency and accountability can be crucial.

What This Means

1) Digital Sovereignty and Representation

If Latam-GPT is meaningfully open, widely adopted, and trained on regionally representative data, it may become a core component of “AI sovereignty” strategies across Latin America and the Caribbean. Models that better capture local language patterns, public-sector terminology, and regional knowledge can improve accuracy in high-volume use cases like citizen services, education support, and document workflows. Over time, the model could reduce reliance on external providers for sensitive deployments that require stronger local oversight or deeper cultural understanding.

2) Regional public-good infrastructure

Latam-GPT is positioned as a technological public good—a foundation layer that enables others to innovate. For governments, universities, and small firms, the cost of building from scratch is typically prohibitive. A shared open foundation model can reduce entry barriers and accelerate experimentation. If access is open (e.g., permissive licensing, clear documentation, and easy deployment options), Latam-GPT could boost an ecosystem of regional products: government chat assistants, domain-specific copilots for health and education, and tools for local languages and cultural content.

3) Innovation and productivity effects

A model in the 70-billion-parameter range signals ambition for strong general-purpose performance. That can translate into real productivity gains, especially in text-heavy workflows common across public administration and services: drafting and summarization, translation, knowledge retrieval, classification, and customer support. For the private sector, this could lower the cost of developing AI-assisted workflows tailored to regional contexts (legal, finance, telecom, retail). The “open” approach could also foster competitive differentiation through fine-tuning for specific industries rather than building everything on closed APIs.
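
As a concrete illustration of that fine-tuning path, the hedged sketch below attaches LoRA adapters (via the peft library) to an open base checkpoint, so only small low-rank matrices are trained. The repository id and the toy citizen-service corpus are placeholders, not details from the announcement.

```python
# Sketch of industry-specific fine-tuning with LoRA adapters, assuming an
# openly downloadable base checkpoint. Repo id and corpus are hypothetical.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "CENIA/latam-gpt"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token  # Llama-style tokenizers ship no pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")

# Train small low-rank adapter matrices; the frozen base weights stay
# untouched, which keeps fine-tuning affordable relative to full training.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
))

# Toy domain corpus standing in for, e.g., citizen-service transcripts.
corpus = Dataset.from_dict({"text": [
    "Ciudadano: ¿Cómo renuevo mi cédula de identidad? Asistente: ...",
]})
tokenized = corpus.map(lambda ex: tokenizer(ex["text"], truncation=True),
                       remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="latam-gpt-gov-lora",
                           per_device_train_batch_size=1, num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```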

4) Governance, safety, and the “open” trade-off

Open models increase transparency and reusability, but they also require stronger guardrails. Stakeholders will want to see safety testing and red-teaming, bias evaluations, documentation of training data provenance, and policies for responsible use. Because the model is expected to support public-sector deployments, trust will hinge on measurable commitments to safety and accountability and on clear governance mechanisms (who maintains it, how updates happen, and how risks are managed).

5) Dependency and infrastructure questions

While the model is framed as sovereignty-building, it still relies on global cloud infrastructure for compute. That is not necessarily negative, since the cloud accelerates delivery and scale, but it raises strategic questions about cost, resilience, portability, and long-term independence. Because foundation models require sustained investment, further questions can be expected around resources and future plans for training, monitoring, and updates.

6) Regional coalition as a signal

The scale of collaboration – 60+ institutions and ~200 specialists – suggests strong convening power and potential for shared regional standards. If this coalition persists beyond the launch, it could become an engine for shared datasets, evaluation benchmarks in Spanish and Portuguese (and potentially broader linguistic diversity), and joint governance frameworks. That would make Latam-GPT more than a one-off release: it could become an enduring, institution-like platform.
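
As one concrete form such shared infrastructure could take, here is a minimal sketch of a regional evaluation harness; the two toy items and the pluggable generate() callable are placeholders for curated Spanish and Portuguese test sets.

```python
# Minimal sketch of a shared regional evaluation harness. The toy items
# below are illustrative; a real benchmark would ship curated Spanish and
# Portuguese sets (and ideally broader linguistic coverage).
from typing import Callable

BENCH = [
    {"lang": "es", "prompt": "¿Cuál es la capital de Chile?", "answer": "Santiago"},
    {"lang": "pt", "prompt": "Qual é a capital do Brasil?", "answer": "Brasília"},
]

def score(generate: Callable[[str], str]) -> float:
    """Fraction of items whose reference answer appears in the model output."""
    hits = sum(item["answer"].lower() in generate(item["prompt"]).lower()
               for item in BENCH)
    return hits / len(BENCH)

# Usage: accuracy = score(my_model_endpoint)  # any callable str -> str
```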

Next Steps

  • Access: license terms, whether model weights are downloadable, and how commercial use is permitted
  • Access model: self-hosting vs. hosted endpoints, plus compute requirements and cost guidance (see the memory sketch after this list)
  • Documentation and evaluation: model card, data statements, safety testing, bias analyses, and benchmarks relevant to regional use cases
  • Governance and sustainability: who owns maintenance, how updates are decided, and funding/compute strategy for ongoing improvement
  • Early adoption: pilots with governments and universities, partnerships with startups, and sector-specific versions (education, public services, justice, health)
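
On the compute side of the “Access model” point, simple arithmetic frames the self-hosting question. The sketch below counts weight memory only (activations and the KV cache add more) and assumes the 70-billion-parameter figure cited above.

```python
# Back-of-the-envelope memory math for self-hosting a 70B-parameter model.
# Pure arithmetic; figures cover the weights only.
PARAMS = 70e9  # matches the 70-billion-parameter figure cited above

for precision, bytes_per_param in [("fp16/bf16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    gib = PARAMS * bytes_per_param / 2**30
    print(f"{precision:>9}: ~{gib:,.0f} GiB of weights")

# fp16/bf16: ~130 GiB -> needs multiple data-center GPUs (e.g., 2x 80 GB)
# int8:       ~65 GiB -> still beyond any single consumer GPU
# int4:       ~33 GiB -> fits one 40-48 GB accelerator, with quality trade-offs
```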

Our View

Latam-GPT is being positioned as a major step toward regionally grounded AI infrastructure – a foundation model designed to serve Latin America and the Caribbean with stronger representation, greater transparency, and broader access. It also signals that the region can sustain a collaborative, open approach to foundation models, one that supports both innovation and public-interest governance.

