The People’s Liberation Army (PLA) is rapidly integrating generative artificial intelligence (AI) into its military intelligence operations, leveraging both domestically developed and foreign-sourced large language models (LLMs) to transform data collection, analysis, and decision-making.
Recent insights from PLA media, academic research, and patent filings reveal a strategic focus on deploying specialized AI models, including adaptations of well-known LLMs from Meta (Llama) and OpenAI, as well as from leading Chinese providers such as DeepSeek, Tsinghua University, Zhipu AI, and Alibaba Cloud.
At the core of this technological push is the development of tools capable of processing open-source intelligence (OSINT), satellite imagery, and multi-source intelligence streams including human intelligence (HUMINT), signals intelligence (SIGINT), geospatial intelligence (GEOINT), and technical intelligence (TECHINT).
According to a PLA-affiliated patent submitted in late 2024, the military is training LLMs on these diverse datasets to enable comprehensive battlefield assessments, enhance situational awareness, and accelerate intelligence product generation.
DeepSeek’s models, for instance, were rapidly adopted by the PLA in early 2025, reportedly forming the backbone of new OSINT-focused intelligence solutions.
Technical Innovations and Challenges
The PLA’s AI-driven intelligence tools are engineered to synthesize vast quantities of data, generate actionable insights, and provide real-time recommendations to commanders.
These systems are designed to improve the speed, accuracy, and scalability of intelligence operations while reducing human workload and operational costs.
However, PLA researchers acknowledge persistent technical hurdles, most notably hallucinations (incorrect or fabricated outputs from AI models) and algorithmic bias, and have warned that overreliance on AI could compromise the quality of intelligence.
To address these issues, the PLA is developing iterative intelligence workflows that combine human expertise with AI-generated analysis.
For example, the Academy of Military Science (AMS) has recommended integrating generative AI into intelligence tasks gradually, continuously assessing model effectiveness, and ensuring traceability of AI-generated content.
The AMS also underscores the importance of training LLMs on high-quality, domain-specific intelligence corpora, though this remains a challenge due to security concerns and the sensitive nature of military data.
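The workflow the AMS recommends, in which AI-generated drafts are reviewed iteratively by human analysts and every output remains traceable to its sources, can be illustrated with a minimal sketch. All class and function names below are hypothetical assumptions for illustration; they do not describe any documented PLA (or other real) system.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Draft:
    """An AI-generated intelligence draft with traceability metadata."""
    text: str
    source_model: str                     # which model produced the draft
    source_documents: list                # inputs the model saw (traceability)
    approved: bool = False
    reviewer_notes: list = field(default_factory=list)

def review_cycle(draft: Draft,
                 reviewer: Callable[[Draft], tuple],
                 max_rounds: int = 3) -> Draft:
    """Run up to max_rounds of human review; stop once the draft is approved.

    The reviewer callable returns (approved, note); notes accumulate so the
    review history of each draft is preserved alongside its provenance.
    """
    for _ in range(max_rounds):
        ok, note = reviewer(draft)
        draft.reviewer_notes.append(note)
        if ok:
            draft.approved = True
            break
    return draft
```

A usage sketch: a reviewer who rejects the first pass and approves the second leaves the draft approved with two notes in its history, so auditors can reconstruct both what the model saw and what the humans decided.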
In parallel, the PLA is actively studying the US military’s application of generative AI for intelligence purposes.
This includes monitoring organizational changes, policy developments, and experimental deployments by the US Defense Innovation Unit (DIU), which has tested AI solutions for OSINT collection and battlefield visualization since mid-2023.
Risks and Strategic Implications
The PLA’s aggressive adoption of generative AI carries both operational advantages and potential risks.
On one hand, AI technologies promise to revolutionize intelligence collection, enabling rapid data mining, scenario generation, and threat detection.
On the other, PLA analysts have raised concerns about “false information pollution,” such as deepfakes and AI-generated disinformation, which adversaries could exploit to mislead intelligence personnel.
Additionally, reliance on AI models developed with ideological biases, whether from Western or Chinese sources, could compromise analytical objectivity.
Looking ahead, the PLA is expected to continue refining its AI systems, prioritizing model reliability, data security, and integration with human-led intelligence workflows.
Nevertheless, as the PLA and China’s defense industry navigate the complexities of generative AI, the ultimate effectiveness and resilience of AI-powered intelligence operations remain an open question.