This chapter summarizes key implications of AI in China and provides recommendations for stakeholders to help protect workers against the misuse of this technology.
The empirical data examined for this paper underscores a key trend of the AI roll-out in China so far: the expansive integration of AI into critical employment processes (recruitment, management, evaluation and personnel changes) has amplified the power imbalance between employers and employees, putting workers at greater risk of exploitation. Enhanced efficiency achieved through AI has not translated into better job quality for workers.
Access to data is a critical factor in the relationship between employers and their workers. Through their control of data and algorithms, employers process and utilize far more information than their employees, which puts companies in a stronger position in negotiations and in decisions such as the setting of workloads. Firms are reshaping workplace rules and norms through opaque algorithms that remain incomprehensible to workers.
The implementation of AI tools can be reductive. The quantification of work processes that AI requires reduces complex production activities and worker input to simple data points. This diminishes workers’ autonomy, as they must follow increasingly prescriptive instructions and meet ever-higher targets while their behaviour is monitored both online and offline. Notably, supported by greater computing power, AI makes ever more detailed task division possible, which could further de-skill workers as each becomes responsible for a smaller segment of production.
With AI driving the rise of flexible work, workers find themselves struggling to negotiate better terms with employers. This is particularly evident in the gig economy, in which workers are classified as ‘independent contractors’. This allows firms to sidestep many of the obligations and responsibilities they would typically have for full-time employees. Gig economy workers are often geographically dispersed, have a high turnover rate and have limited interaction with others. These isolated working patterns make it challenging to mobilize collective action for effective bargaining.
Although AI often empowers workers by liberating them from tedious, repetitive tasks, it does not necessarily alleviate their workload. Instead, firms often recalibrate and impose new tasks to optimize human labour. Moreover, as AI takes over simpler, standardized tasks, the work remaining for humans becomes more demanding. Adapting to such an environment requires relentless retraining and upskilling, adding to the already heavy burden on workers – in particular those with lower AI literacy (e.g. marginalized populations and the elderly). The efficiency gains delivered by AI have therefore not translated into better job quality for workers.
This paper recognizes employment as a high-stakes and high-risk area of AI application, given the significant and foundational shifts these new tools are bringing to workplaces. Such AI applications should undergo close examination by multiple stakeholders, particularly workers. A collaborative approach engaging businesses, workers, unions, industry bodies, investors and policymakers is essential to ensure that these powerful tools are developed and deployed with a human-centred approach.
Findings from this research provide fresh insights for global policymaking, highlighting aspects that are often missed in Western-focused discussions. The following recommendations are targeted at government bodies and policymakers responsible for developing regulation, legislation and enforcement of rules relating to the design and deployment of AI in workplaces in China. However, these recommendations have global relevance, particularly for workers and stakeholders concerned about the speed at which this technology is developing. The AI development environment in China offers many insights into the real-world implementation of AI.
Policy recommendations
As AI enters workplaces, it is crucial that policymakers, civil society and companies address the growing power imbalance between employers and employees. The following section provides two categories of recommendations: policies that empower workers, and measures that check the power expansion of employers.
Worker empowerment
Integrate worker protections into AI and data regulations. To protect the rights of employees, policymakers should consider the potential impact of AI on the workforce during the consultation and formation of new AI legislation. Including such protections will give workers a legal basis for defending their rights. For example, the EU Artificial Intelligence Act classifies employment as a ‘high-risk’ area in which related AI systems must be registered with the authorities and assessed internally or by a third party before they can be placed on the market.
Members of civil society and the media can help inform policymakers with insights and first-hand evidence from workplaces. For example, it was the work of journalists and scholars in China that first raised awareness of the plight of delivery drivers – which led to the new algorithmic recommendation regulation explicitly requiring platform operators to protect workers’ rights to obtain fair pay, rest and holiday leave.
Enhance worker representation in AI deployment. This research paper has found that the interests of Chinese workers are not reflected in AI tools partly because employees are excluded from crucial decision-making on AI adoption. Therefore, it is important for worker organizations and companies to establish an effective communication mechanism through which companies can consult workers on new AI tools and workers can share their experiences and flag concerns to management. The mechanism should also facilitate the formulation of negotiated agreements between workers and management on the optimal use of AI.
Improve transparency and ensure workers’ rights to data. The lack of access to necessary information often hinders the ability of workers to challenge employer misconduct. Policymakers should mandate that companies seek explicit consent from employees and candidates for collecting their personal data – rather than burying such terms in employment contracts. Workers should be given the legal right to access their personal data during and after employment, as well as to request personalized and easy-to-understand explanations of how their data are processed through specific AI systems at workplaces.
Provide adequate training for upskilling. Many of the workers interviewed for this paper are stuck in positions with deteriorating conditions because they lack the new skills needed to secure alternative roles, which has diminished their bargaining leverage with employers. Education policymakers should explore ways to provide accessible training and reskilling for workers, whether through social schemes or subsidized courses delivered by private providers. Initiatives like the UK’s apprenticeship scheme, in which the costs of skills training are shared between government and employers, can serve as a model for upskilling in the AI era.
Checking the power of employers
Curb developer market monopolies. Market regulators should be more vigilant about anti-competitive behaviour by the developers and operators behind AI work solutions (who may also operate non-work platforms that collect data, such as ByteDance, which owns the work tool Feishu and the short-video platform Douyin). Stronger anti-monopoly enforcement and measures to foster competition can prevent the unchecked expansion of leading developers under the guise of AI innovation.
Set clear limits on data collection in employment contexts. Private enterprises in China have significant discretion in collecting and processing personal data during key employment stages, due to the inherent power imbalance between workers and employers. Data that are not directly related to work, such as biometric information, social media posts and financial records, have been collected for AI training and insight generation. This unchecked freedom has led to the aggressive and ethically questionable use of AI. Therefore, policymakers should set clear boundaries on what kind of data can or cannot be collected and processed by private enterprises, specifically in the employment context, so that employers can be held accountable for their practices. National personal data protection regulations can be used as a base for developing context-specific rules.
Control the pace of AI deployment in workplaces. When left solely to market forces, AI implementation in China has led to deteriorating job quality for workers and to unemployment resulting from deskilling. Therefore, policy intervention is needed to control the pace of the roll-out of AI work products, giving workers ample time to adapt to the changing work environment. Policymakers should introduce comprehensive social and economic impact assessment requirements before novel AI work products are approved for launch. To ensure that firms abide by these standards, robust audits and reviews should be carried out on a regular basis throughout the deployment of AI tools.