This May, I traveled to Beijing, gaining a rare, behind-the-scenes view of China’s fast-rising artificial intelligence (AI) ecosystem. The trip effectively capped my first year as an MBA candidate at Wharton and enriched my perspective on a domain that is redefining how we work, and even how we decide what counts as “human-made.”

The public release of OpenAI’s ChatGPT in the fall of 2022 shook up the tech world, prompting an AI arms race among the world’s top firms, led by American companies like Meta, Google, Microsoft, xAI, and Amazon. While many countries and researchers have advanced AI over time, it seemed like the commercialization of this research would be led by American firms, allowing them to extend their dominance over the rest of the market.
Then out of nowhere, in January 2025, Chinese firm DeepSeek released its DeepSeek-R1 model, which rivaled OpenAI’s leading models in performance at a fraction of the development cost, upending established business models and intensifying the competition. Since this release, Chinese firms have found a second wind in their pursuit of global leadership in this space.
The race for supremacy in artificial intelligence has increasingly narrowed to two contenders: the United States and China. Both nations boast world-class tech talent and ambitious AI companies, yet their paths diverge sharply.
I was impressed by the scale and coordination of China’s AI push, but I remain skeptical that China can emerge as a trusted global AI leader. China’s isolationist tech posture, opaque regulatory environment, and fraught international reputation pose major barriers. At the same time, the United States, traditionally the global AI and technology leader, risks undermining its own position through antagonistic policies that choke off international collaboration and talent. For instance, President Trump’s proposal back in May to cap foreign students at Harvard at 15% (roughly half the current share) sent a chilling signal to the world’s best and brightest that America’s welcome may be waning.
Despite its rapid AI advancements, I think China is unlikely to become a trusted global AI leader, but the U.S. could nonetheless lose its own leadership edge if it turns inward. The decisive front in the U.S./China AI race is about much more than model performance; it’s about which nation the rest of the world trusts enough to collaborate with.
China’s State-Guided AI Rise: Scale, Speed, and Innovation
There is no question that China has made breathtaking strides in AI in a short time. Under a state-guided strategy, China is pouring enormous resources into AI and related technologies. Beijing has committed an estimated $1.4 trillion to boost technological capabilities, part of a drive to surpass the West in critical tech. During my time in China, I saw how this top-down support empowers companies to scale quickly.
MiniMax, founded in 2021, has been dubbed one of China’s “AI Tiger” startups, attracting huge investments from tech giants like Alibaba and Tencent that valued it at $2.5 billion by 2024. Such backing illustrates how state-aligned investors and big firms coordinate to groom domestic AI champions. The results are impressive. In fact, the company’s first app, a virtual-character chatbot called Glow, was so popular that when regulatory issues forced a reboot, MiniMax relaunched it in two versions: “Talkie” for international markets and “Xing Ye” (星野) for China. This strategy paid off. By June 2024, Talkie was the 5th most-downloaded free entertainment app in the U.S., with more than half of its 11 million monthly users outside China. Such success underscores China’s capacity for innovation at scale under state guidance. Even purely domestic applications can reach enormous scale given China’s 1.4 billion internet users, providing Chinese AI firms a rich playground to train and iterate their models.

Infervision is a pioneering medical AI company that leverages deep learning for radiology diagnostics. Impressively, the company has navigated regulatory hurdles abroad: its diagnostic products have obtained approvals from the U.S. FDA, Europe’s CE marking, Japan’s PMDA, the UK’s UKCA, and China’s NMPA, securing access to all five major global markets. Earning these certifications is a testament to both technical rigor and a savvy understanding of international standards. It struck me that Infervision managed what few Chinese tech firms have: achieving global reach in a high-stakes domain, in part by meeting the transparency and safety bars set by foreign regulators. This kind of success story highlights the technical strengths of Chinese AI: world-class expertise, strong R&D, and an ability to deliver AI solutions at scale under real-world conditions.
China’s combination of abundant data, skilled engineers, and coordinated investment has turned it into an AI powerhouse. It’s clear that Chinese academia and industry are brimming with AI talent and eager to innovate. There is a palpable national ambition in China’s AI community: a sense of mission to make China the global AI leader. This ambition is backed by concrete policies like China’s national AI development plan and lavish funding for AI research centers, startups, and cloud computing infrastructure. Even Chinese tech giants (Alibaba, Baidu, Huawei, etc.) contribute by open-sourcing some AI frameworks and tools domestically to spur adoption. By many measures, China is rapidly closing the gap with the U.S. in cutting-edge AI. Some Chinese startups, like Zhipu AI, are even preparing IPOs to fuel further expansion. From autonomous driving to AI healthcare, China can match or even surpass Western firms on technical metrics in certain cases. So why, then, do I doubt that China will become the world’s trusted AI leader anytime soon? The answer lies in the fundamental trust gap and structural limitations of China’s approach.
Innovation vs. Isolation: Why China Lacks Global Trust
While China’s AI ecosystem thrives within its borders, it remains largely cut off from the broader open AI community due to the stance of the Chinese Communist Party (CCP). Decades of an isolationist tech posture, epitomized by the Great Firewall that blocks Western internet services, have created a parallel internet and AI universe in China. This insular environment means Chinese AI models and products are often tailored for domestic use and subject to heavy government oversight. In my discussions with Chinese AI founders, I sensed a resigned acceptance that any AI product in China must comply with the state’s censorship and data control demands. For example, MiniMax’s team candidly noted that the original Glow chatbot ran afoul of new regulatory requirements (hence the “filing issues” that led to its shutdown). The replacement Chinese app had to implement strict filters to align with official content rules, whereas the international version could be more free-form. This split development approach, one censored version for China and one for abroad, underscores a key point: the Chinese government’s tight control over AI content and data inherently limits global appeal.
An AI model that has “CCP-approved” guardrails on what it can say or reveal will struggle to win trust in open societies. Likewise, foreign users or companies are naturally wary that a Chinese AI system might covertly send data back to Beijing or be subject to Party influence. These trust issues are not merely hypothetical; they’ve played out repeatedly on the global stage. A stark example is the case of SenseTime, one of China’s most advanced AI firms (specializing in facial recognition). SenseTime has been sanctioned and blacklisted by the U.S. government due to allegations that its technology was used in repressing the Uyghur Muslim minority in Xinjiang. Whether or not the company intended such uses, the perception that Chinese AI is entwined with authoritarian surveillance is hard to shake. Similarly, China’s export of invasive surveillance and facial recognition systems to other authoritarian regimes has been widely reported, fueling what analysts call a growing wave of “digital authoritarianism” worldwide. This has deepened the values chasm between China’s tech approach and that of liberal democracies.

In the West, AI leadership is often discussed in terms of trust, transparency, and ethics. By contrast, China’s government has shown a willingness to leverage AI for censorship, social credit systems, and propaganda: applications that raise red flags abroad. For example, at Baidu (China’s analog to Google) we were shown an enterprise retrieval-augmented generation (RAG) solution, and a highly touted feature was the ability to narrow the scope of queries on a given topic. If the administrators decided, the bot’s perceived knowledge of that topic could simply be reduced. I remember looking around and exchanging bewildered glances with other members of the Western-leaning audience. This experience concretized the idea that the race in tech is about which fundamental values will shape the future, and China under the CCP projects an image of using technology for social control and geopolitical leverage. That image makes international partners hesitant to fully embrace Chinese-led AI platforms or standards.
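To make that feature concrete, here is a minimal sketch of how an administrator-controlled topic restriction could sit in front of a RAG pipeline. This is purely illustrative and written under my own assumptions: the `retrieve` and `generate` functions are hypothetical stand-ins, and this is not Baidu’s implementation, which was not shown in detail.

```python
# Illustrative sketch only: a hypothetical, admin-configured topic filter
# placed in front of a retrieval-augmented generation (RAG) pipeline.
# Not Baidu's implementation; names and logic here are assumptions.

from typing import Callable, Iterable, List

# Chosen by administrators, invisible to end users.
RESTRICTED_TOPICS = {"topic-x", "topic-y"}


def is_restricted(query: str, topics: Iterable[str]) -> bool:
    """Naive keyword match; a production system would likely use a classifier."""
    q = query.lower()
    return any(topic in q for topic in topics)


def answer(query: str,
           retrieve: Callable[[str], List[str]],
           generate: Callable[[str, List[str]], str]) -> str:
    """Run the RAG pipeline unless the query touches a restricted topic.

    `retrieve` and `generate` are stand-ins for whatever retriever and
    language model a given deployment uses.
    """
    if is_restricted(query, RESTRICTED_TOPICS):
        # No retrieval, no grounded answer: the bot simply appears to know less.
        return "Sorry, I don't have information on that."
    docs = retrieve(query)
    return generate(query, docs)
```

The unsettling part, at least in this sketch, is that from the user’s side the restriction is indistinguishable from ignorance: the system doesn’t refuse, it just seems not to know.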
Another limiting factor is China’s regulatory opacity and unpredictability. Chinese regulators have recently rolled out a slew of AI regulations, from requiring licenses for generative AI services to creating an algorithm registry where companies must file details of their algorithms. On paper, some of these rules aim to ensure safety and transparency. In practice, however, they reinforce state oversight and can be implemented abruptly, with little outside input.
Entrepreneurs in China’s AI sector told me of sudden rule changes that left them scrambling to comply (the fate of MiniMax’s Glow being a prime example). The opacity of how decisions are made behind closed doors by party authorities breeds uncertainty. For instance, a startup might invest in a new AI-driven platform only to find regulators issuing new content restrictions or security review requirements that were not clearly telegraphed in advance. This contrasts with more open regulatory processes in the U.S. and EU, where companies at least have opportunities and forums to comment on draft rules. Moreover, Chinese companies must conduct government-mandated security assessments for AI models deemed influential, and they face strict data laws that prevent freely sharing data overseas. While these measures give Beijing “strong policy levers” to manage AI risks at home, they also send a signal to the world that Chinese AI operates under a very different governance model: one that prioritizes state control over openness. This is another strike against China becoming a trusted AI leader globally since trust often hinges on perceptions of openness and consistency in governance.
China’s relative isolation from the open-source AI community further hampers its global leadership prospects. Cutting-edge AI research thrives on international collaboration and peer review. Yet, Chinese researchers today find it harder to collaborate openly due to geopolitical tensions and China’s own tightening of information controls. Some Chinese AI labs are world-class, but they often publish less in top international journals compared to their U.S. peers, or they focus on Chinese-language venues. The flow of ideas is impeded in both directions: Western tech conferences see fewer Chinese participants now (due to visa issues or political frictions), and Chinese tech forums are often inaccessible outside the firewall. Over time, this could slow China’s ability to set global research agendas.
Even where China tries to take a leadership role, for example in standardizing AI ethics or safety, skepticism abounds. In a meeting with experts at Concordia AI, we discussed China’s push for AI governance through United Nations channels, reminding me of President Xi Jinping’s 2024 speech calling for a “truly multilateral approach” to AI governance and more representation for the Global South. China has since positioned itself in international forums as a champion of AI for development and proposed high-level principles like “AI for humanity” and national sovereignty in cyberspace. In theory, these are laudable goals that could attract developing countries to China’s vision. In practice, even experts sympathetic to engagement express doubts, noting a disconnect between China’s polished rhetoric and its domestic record of tight control and censorship. Simply put, many governments and stakeholders do not entirely trust China’s intentions with AI. There is a fear that a “China-led” AI world might entail surveillance infrastructure in their cities, dependence on Chinese tech with hidden backdoors, or governance norms that don’t align with democratic values. This trust deficit is China’s biggest stumbling block on the path to true global AI leadership.
None of this is to say that Chinese AI companies cannot compete globally. They certainly can and will, especially in neutral or consumer domains. We’ve already seen TikTok (owned by China’s ByteDance) conquer the global social media market with its AI-driven content feed, only to face bans and forced sale attempts in the U.S. over national security worries. In AI proper, companies like Infervision have made inroads abroad by aligning with international norms. And Chinese tech giants are investing in overseas AI research centers to gain credibility. But these are the exceptions that prove the rule: China’s AI advances remain largely homegrown and hemmed in by geopolitical walls. Technical strength alone isn’t enough to lead globally; international leadership in AI also requires trust and integration. China will continue to be a formidable AI player, especially within its sphere of influence, but it is unlikely to be seen as a trusted leader by the broader world unless it addresses the fundamental issues of transparency, governance, and reciprocity in collaboration.
America’s Edge and the Perils of Isolationism
If China’s challenge is building trust, America’s challenge is not to squander it. The United States today holds a clear lead in many aspects of AI: top universities, a culture of open innovation, the most-cited research, and companies like OpenAI, Google, Perplexity, Anthropic, and Microsoft driving the frontier. Moreover, the U.S. has long benefited from attracting global talent, including thousands of brilliant Chinese researchers, to study, work, and sometimes settle in America. In my own career in the U.S. tech industry, I’ve collaborated with immigrant AI scientists whose expertise was cultivated in American and foreign institutions. This open talent pipeline has arguably been the U.S.’s secret weapon in AI leadership. However, recent trends threaten to constrict this pipeline.
Washington’s growing antagonism toward Beijing has spilled into academic and tech spheres, with increased scrutiny on Chinese scholars and engineers. Policies intended to protect national security, such as export controls on chips (to hobble China’s AI hardware supply) or investigations of tech collaboration, are one thing. But when the U.S. starts broadly limiting educational and work opportunities for foreign talent, it risks shooting itself in the foot. A vivid example was the Trump administration’s move to cap or restrict foreign students at elite universities like Harvard.
While framed as protecting American interests, such measures send a message that talented students from countries like China (and elsewhere) are no longer welcome or trusted in the U.S. This is dangerous for several reasons. First, the numbers show that China is a powerhouse in AI talent production. In 2022, 47% of the world’s top AI researchers were Chinese by undergraduate origin, compared to just 18% from the U.S. America has been second to China in producing PhDs and top-tier AI experts, but historically the U.S. retained many of those Chinese experts. Now, that trend is wavering. Between 2019 and 2022, the share of elite Chinese AI researchers working in China more than doubled (from 11% to 28%). China is improving its own research environment, and if the U.S. turns hostile to foreign scientists, even more will choose to build their careers back home (or in other welcoming countries). America’s brain gain could turn into a brain drain.
The Brookings Institution warns that U.S. policies which “alienate Chinese scientists” and “restrict the flow of talent” directly threaten America’s AI leadership. I witnessed this sentiment firsthand: a Chinese AI professor I met in Beijing (who had studied at MIT) said many of his colleagues in the U.S. were now considering returning to China or Canada, feeling that the environment in America had become suspicious of any Chinese involvement.

Beyond talent, a fortress mentality could undermine America’s moral leadership in AI. The U.S. has traditionally championed open collaboration and the global exchange of ideas. If Washington were to completely cut off AI cooperation with China by banning joint research, excluding Chinese experts from conferences, etc., it might inadvertently push other countries to see the U.S. as isolationist or protectionist. Already, some U.S. actions have drawn criticism from academia and industry as overreaching. For example, proposals to monitor or vet Chinese students en masse are viewed by universities as discriminatory and damaging to scientific progress. A balanced approach is needed: targeted security measures (for instance, screening research that has military applications or protecting sensitive corporate IP) can be implemented without closing the door on all Chinese contributions. We should remember that many of the advances powering U.S. AI today, from early speech recognition breakthroughs to cutting-edge neural network research, have involved Chinese and other international experts working in U.S. institutions. If we sever those ties broadly, we don’t just hurt China. We hurt ourselves and our capacity to innovate.

There is also a geopolitical angle. If the U.S. retreats from international AI forums or refuses to engage with initiatives involving China, it leaves a vacuum that China is eager to fill. For instance, China is actively engaging with the Global South on AI governance, positioning itself as a leader for developing nations’ tech progress through infrastructure and other partnerships. If American policymakers disengage out of mistrust, they may find that global standards and norms start to shift in ways unfavorable to U.S. values.
Conversely, continued U.S. leadership in setting ethical AI guidelines, and working with allies to present a unified front, can counterbalance China’s influence. We saw a glimpse of this when China was invited to the table at the UK’s 2023 AI Safety Summit, a controversial but arguably necessary move to include all major AI powers in discussions on AI risk. The point is, completely isolating China is neither feasible nor wise if we want to shape global AI development in a way that reflects diverse global perspectives and fosters meaningful international collaboration. The U.S. should instead leverage its strengths via its alliances, its attractive research environment, and its companies to out-compete and out-collaborate China, rather than simply erecting barriers.
Conclusion
It’s impossible not to be awed walking through China’s AI labs in person. There’s an intensity to China’s AI scene that feels like a livewire. Speed and scale aren’t just a strategy; they’re the air they breathe. And the results are real: global users, FDA-cleared products, venture-backed IPO contenders. Through a purely technical lens, China isn’t lagging; it’s accelerating.
But awe can coexist with unease.
What stuck with me more than the demos or dashboards was the unspoken line every founder seemed to tiptoe around: innovate, but don’t provoke. Launch, but pre-clear the message. In a country where the guardrails are invisible until you hit them, ambition becomes a performance: impressive, but tightly choreographed. It reminded me that in AI, as in diplomacy, trust is more than a feature. It’s a platform.
This is the core tension: China is building fast, but building trust slowly. And America, meanwhile, risks the opposite: holding the global trust advantage while retreating from the very openness that created it.
So here’s the synthesis: Leadership in AI won’t be decided purely by benchmark scores or chip counts. It will hinge on relational leverage: who the rest of the world feels safe building with, who sets the norms, who leaves the door open.
If China’s limit is isolation by design, America’s risk is isolation by choice. A fortress mentality that cuts off talent, disengages from forums, and turns every visa into a suspicion doesn’t just weaken our AI edge. It shrinks our imagination of what leadership even looks like.
We need a different playbook:
One where investment in domestic R&D goes hand-in-hand with investment in global credibility.
One that treats openness not as a liability, but as a lever.
One that realizes that model performance is table stakes. Trusted systems, interoperable frameworks, and collaborative networks are what scale.
The world isn’t choosing between OpenAI and DeepSeek. It’s choosing between visions of the future: one rigid, one open; one centralized, one networked. And America still has the tools to lead. Not by mirroring China’s posture, but by doubling down on the values that got us here: freedom of inquiry, diversity of thought, and a bias toward building in the open.
If we can keep those alive, we don’t just win the AI race. We define it.