Table of Contents

Overview

What happens when frontier models from companies such as Anthropic, OpenAI, etc. are meaningfully used to produce the majority of the worlds software and open-source models do not catchup in capability or access?

We know that they’re already aligned with Palantir, US Gov, Intelligence agencies, etc. and ChatGPT logs have now been used in prosecutions. But how long before they start subtly injecting code based on the context of the request (especially with continuous learning advances). I imagine future attacks will make Stuxnet look like iloveyou and this is all a call back to the Ken Thompson hack. Only now the compiler has moved a level of abstraction above and our “trusting trust” has now moved outside of our local environment.

Before this era, coding was restricted to the code present on your computer, perhaps your hosting environments (AWS, GCP, Azure, etc.), and wherever you host your code (GitHub, GitLab, etc.), but now the very act of producing the code and reasoning over the core problems to produce the solutions has now been outsourced, that has not happened before. The production of the code has now been codified in natural language rather than translated into code with the programmer having full awareness.

Zooming out, the new workflows of 2026 and onwards now include taking meeting transcriptions, slack and other stakeholder messages and having the LLM-based agents automatically take in the existing project context (issues, documentation, past LLM work, previous commits, etc.) along with this fresh directional context and then producing the plans from those and directly executing them.

It’s not unreasonable to imagine that companies are already directly taking stakeholder feedback and executing similar processes (at least for staging or other full pre-release stages, however it wont be long before people are pushing directly to production, at least for now this involves human checks but that won’t last forever, especially outside of regulated industries).

One could make the argument that this won’t happen from western companies due to ethics of the engineers / staff at these companies but it hasn’t been that long since Twitter was forced to suppress anti-COVID sentiments (along with other social media companies) along with many other suspicious behaviour from western governments in general (particularly the US and the UK, I’m unsure on others).

I have no doubt that China will execute similar strategies with their frontier LLMs such as Kimi K2.5, Minimax, Qwen-series, Deepseek and others as the CCP have more direct control over all companies in the country.

Traditionally the counter-balance to these forces has been open-source (Linux vs Microsoft, Android vs iOS, Mozilla vs Internet Explorer, Traditional Banks vs Blockchain, etc.) but the issue with LLMs and especially agentic LLMs are the considerable cost required to train the models, its just not feasible as of 2026 for groups with sub billion dollar budgets to meaningfully compete on training frontier models anymore.

So if machine intelligence is concentrating on groups who can command small nation state level budgets, these groups are increasingly likely to be compromised over time by nefarious forces and people are relying on the handouts of particular providers (OpenAI GPT-OSS, various chinese models, Mistral, etc.) to continue to counter-balance these forces, then what other choices do we have? Traditional open-source projects scaled with human labour and experience, reached a critical mass initially driven by an ardent minority group, and then eventually permeated into the public. Is that possible again or is the fundamental nature of advanced machine intelligence qualitatively different?