While AI adoption may hold great promise, there are several underlying issues that can have detrimental impacts. This chapter identifies potential avenues for addressing these issues.
This paper has demonstrated that the enhanced efficiency delivered by wider AI implementation does not necessarily translate into improved job quality and well-being for workers. While deep-rooted socio-economic factors unique to China contribute to these findings, there are also problems with how AI tools are designed and deployed that have worsened conditions for workers in China.
Top-down decision-making
AI solutions tend to be developed and deployed in a top-down manner. Key decisions are predominantly made by business leaders and engineers, often excluding insights from the very workers these systems will impact. Entrenched in a bubble of technologists and shareholder interests, AI developers often fail to grasp the broader societal consequences of their innovations. Instead, their decision-making is driven primarily by commercial gains and by enhancing user experiences, which depend on improvements in the precision, speed and reliability of algorithms.
Yet, the underlying AI mechanism is far from a neutral, technical optimization. What engineers see as minor tweaks to algorithms can have profound implications for workers. The case of Chinese food delivery drivers breaking traffic rules in response to the reduction of AI-set delivery timings demonstrates how efficiency-driven outcomes can neglect the contexts in which such optimization occurs. As companies implement more advanced AI solutions, they unintentionally establish new workplace norms and standards developed by algorithms designed to enhance productivity.
Whereas engineers traditionally focused on the intricacies of technology, they now find themselves thrust into positions where their decisions carry broad societal ramifications. However, the predominantly technical education and training prevalent in China rarely equips these professionals with the depth of knowledge required for such wide-reaching decision-making; doing so adequately requires input from ethicists, policymakers, social scientists and philosophers.
Chinese engineering curriculums lean heavily towards technical knowledge, sidelining humanities subjects essential for fostering responsible and ethical technological development. As a result, when engineers enter the workforce, they frequently find themselves in isolated environments with limited understanding of the daily challenges faced by customer-facing workers. Immersed in fulfilling product improvement requests under tight deadlines and intense schedules, engineers find it challenging to contemplate the broader impacts of their work. Moreover, even those who wish to integrate social considerations into their designs often feel lost due to the absence of clear and actionable guidelines for responsible AI development.
As a result, social impacts are rarely considered when technical decisions are made. This gap can lead to unethical and irresponsible designs and applications at the expense of workers. The danger is especially acute when AI products are deployed at scale and in high-stakes scenarios, such as automating redundancies.
AI deployment outpaces worker adaptation
Under fierce competition, Chinese AI firms often rush to launch new products as soon as they identify market opportunities. With little external scrutiny and few regulatory checks, the pace at which new AI solutions are deployed in workplaces often surpasses workers’ capacity to adapt. Workers are compelled to adjust rapidly to changing work environments, developing new skills and new ways of working at short notice. Those unable to keep pace face job insecurity.
In China, employers typically do not provide sufficient training or time for workers to adjust to AI-driven workplace transitions, and government entities do not offer relevant training either. This leaves employees in the precarious position of having to navigate new AI systems on their own. As a result, those who cannot adapt often find themselves stuck in low-skilled, low-value roles and struggle to improve their socio-economic status through career progression. If this trend goes unaddressed, it could worsen an already difficult job market in China, where young people are struggling to find work. According to the latest available figures, the jobless rate for 16–24-year-olds reached a record high of over 20 per cent in June 2023, before the Chinese government stopped publishing the figure.
In addition, employers face challenges in fully leveraging AI capabilities because their workforce lacks the skills needed to use these tools effectively. Companies attempt to close this skills gap through recruitment. However, as one HR professional pointed out in a research interview, individuals with up-to-date and relevant skills are rare and highly sought after. If the pace of AI adoption is not carefully balanced with stakeholders’ capacity to adapt, and if sufficient support is not provided, the mismatch between available talent and the skills required could widen, undermining rather than furthering business interests.
Lack of regulatory accountability
At present, the implementation of AI tools is largely at the discretion of firms whose primary objectives are productivity gains and cost savings. Despite broad legal implications spanning labour rights, personal data protection, market practices and AI governance, a comprehensive regulatory framework remains elusive.
While China has strengthened the legal protection of personal data in recent years, the new laws focus mainly on the rights of consumers and on containing the power of large platform companies. Existing labour laws have not been updated to reflect the changes that digital technologies have brought to workplaces, nor have they clarified the rights of gig economy workers. With strong state support for AI development, Chinese firms are largely free to test the boundaries of the technology and push workers to their limits with few legal repercussions. Although they may occasionally suffer reputational damage when aggressive practices trigger media attention and a societal backlash, the impact on Chinese companies tends to be short-lived and has a limited deterrent effect on others.
However, some regulatory progress has been made with the introduction in 2022 of the ‘Provisions on the Management of Algorithmic Recommendations in Internet Information Services’, following widespread public outcry after a media investigation detailed the plight of food delivery drivers whose schedules and incomes are dictated by aggressive algorithms. The regulation states that platforms providing work dispatch services ‘shall protect workers’ lawful rights and interests such as obtaining labour remuneration, rest and vacation, etc.’