The global response to the COVID-19 pandemic has demonstrated the significant societal benefits of recent scientific and technological advances, such as the use of synthetic biology in vaccine development and of gene-editing tools such as CRISPR in the development of rapid diagnostic tests. But with these advances comes the challenge of ensuring they are used responsibly and for peaceful purposes.
The dual-use nature of many new biological techniques poses significant challenges for governance, particularly as new and complex synergies emerge between biology and other disciplines, such as chemistry, artificial intelligence (AI) and cyber technologies.
At the same time, emerging technologies are lowering the barriers to entry into the life sciences, making potentially harmful biological agents more accessible to a diverse range of actors. These developments have diversified biothreats beyond what was originally envisaged by the Biological Weapons Convention (BWC), introducing new and more interconnected pathways for the hostile use of biological agents.
States parties to the BWC are meeting in Geneva this week, an opportunity to take stock of the treaty’s implementation and to prepare for next year’s review conference. There is an urgent need to update the treaty regime by reframing discussions to properly address the complexity of modern-day biothreats.
Evolving regulatory challenges
The COVID-19 pandemic has sparked new discussions surrounding the security of labs that handle the most dangerous pathogens, so-called biosafety level 4 labs. There is often a lack of transparency surrounding the kinds of activities undertaken in these labs, and little oversight or control of research that could pose undue risk. The Global Health Security Index found that less than five per cent of countries provide oversight of dual-use research.
Unlike its chemical weapons counterpart, the BWC does not have a verification mechanism. The intrinsically dual-use nature of biological research means that it cannot be verified in the same way as nuclear or chemical activities. The treaty regime therefore relies on confidence-building measures (CBMs) as its primary means of assessing compliance.
This reliance is increasingly challenged by rapid advances in science and technology, which outpace states’ ability to properly assess the potential risks of certain research. The same techniques that can be used for unquestionably peaceful purposes, such as the promotion of public health, could also be misused in the hands of a rogue state, terrorist group, or malicious actor.
Artificial intelligence and automated technologies are making the life sciences more accessible by simplifying complex processes and reducing the level of tacit knowledge previously required. Automated technologies enable research to be conducted remotely, transforming arduous manual processes into lines of code.
The emergence of fully automated ‘cloud labs’ could provide a cost-effective means of accessing experimental biology. However, as the industry grows, company-governed customer screening will become more difficult, and more formal regulation may be needed to ensure activities are undertaken only for peaceful purposes.
There is also the question of how to regulate ‘intangibles’ – the informational ingredients for the use of technologies, often embedded in tacit knowledge. Because they bypass material export controls, intangible technology transfers cannot be as easily regulated through traditional list-based approaches.
Knowledge can be transferred instantly online, and publicly available research data can be accessed within a few clicks. Careful balance is required when weighing the potential risk of misuse against the right to participate in the fullest possible exchange of technological information.
Such digitalization of the life sciences has diversified biological threats beyond their ‘traditional’ understanding. The convergence of biology with cyber and AI technologies introduces new pathways for the hostile use of biological agents: a sophisticated cyber-attack has the potential to steal, exploit or manipulate data, or to tamper with security systems connected to the Internet of Things (IoT) in high-risk labs containing dangerous pathogens. Increased collaboration across these converging disciplines is needed to understand new vulnerabilities emerging from the use of cyber and other technologies.