Chinese researchers develop AI model for military with Meta's help

Chinese researchers detail how they used an early version of Meta's Llama as a base for what they call "ChatBIT"

By Reuters
Flags of China and U.S. are displayed on a printed circuit board with semiconductor chips, in this illustration picture taken February 17, 2023. — Reuters
  • China's top PLA-linked Academy of Military Science involved.
  • Meta says PLA 'unauthorised' to use Llama model.
  • Pentagon says it is monitoring competitors' AI capabilities.

Top Chinese research institutions linked to the People's Liberation Army (PLA) have used Meta's publicly available Llama model to develop an AI tool for potential military applications, according to academic papers and analysts.

In a June paper reviewed by Reuters, six Chinese researchers from three institutions, including two under the PLA's leading research body, the Academy of Military Science (AMS), detailed how they had used an early version of Meta's Llama as a base for what they call "ChatBIT".

The researchers used the Llama 2 13B large language model (LLM) that Meta released in February 2023, incorporating their own parameters to construct a military-focused AI tool designed to gather and process intelligence, and to offer accurate and reliable information for operational decision-making.

ChatBIT was fine-tuned and "optimised for dialogue and question-answering tasks in the military field", the paper said. It was found to outperform some other AI models that were roughly 90% as capable as OpenAI's powerful ChatGPT-4. 

The researchers didn't elaborate on how they defined performance or specify whether the AI model had been put into service.

"It's the first time there has been substantial evidence that PLA military experts in China have been systematically researching and trying to leverage the power of open-source LLMs, especially those of Meta, for military purposes," said Sunny Cheung, associate fellow at the Jamestown Foundation who specialises in China's emerging and dual-use technologies including AI.

Meta has embraced the open release of many of its AI models, including Llama. It imposes restrictions on their use, including a requirement that services with more than 700 million users seek a license from the company.

Its terms also prohibit use of the models for "military, warfare, nuclear industries or applications, espionage" and other activities subject to US defence export controls, as well as for the development of weapons and content intended to "incite and promote violence".

However, because Meta's models are public, the company has limited ways of enforcing those provisions.

In response to Reuters questions, Meta cited its acceptable use policy and said it took measures to prevent misuse.

"Any use of our models by the People’s Liberation Army is unauthorized and contrary to our acceptable use policy," Molly Montgomery, Meta's director of public policy, told Reuters in a phone interview.

The Chinese researchers include Geng Guotong and Li Weiwei with the AMS's Military Science Information Research Center and the National Innovation Institute of Defense Technology, as well as researchers from the Beijing Institute of Technology and Minzu University.

"In the future, through technological refinement, ChatBIT will not only be applied to intelligence analysis, but also ... strategic planning, simulation training and command decision-making will be explored," the paper said.

China's Defence Ministry didn't reply to a request for comment, nor did any of the institutions or researchers.

Reuters could not confirm ChatBIT's capabilities and computing power, though the researchers noted that its model incorporated only 100,000 military dialogue records, a relatively small number compared with other LLMs.

"That's a drop in the ocean compared to most of these models (that) are trained with trillions of tokens so … it really makes me question what do they actually achieve here in terms of different capabilities," said Joelle Pineau, a vice president of AI Research at Meta and a professor of computer science at McGill University in Canada.

The research comes amid a heated debate in US national security and technology circles about whether firms such as Meta should make their models publicly available.

US President Joe Biden in October 2023 signed an executive order seeking to manage AI developments, noting that although there can be "substantial benefits to innovation," there were also "substantial security risks, such as the removal of safeguards within the model".

This week, Washington said it was finalising rules to curb US investment in AI and other technology sectors in China that could threaten national security.

Pentagon spokesman John Supple said the Department of Defence recognised that open-source models had both benefits and drawbacks, and that "we will continue to closely monitor and assess competitors' capabilities".