Recruitment
- Tools: job posting; candidate matching; CV sorting; meeting scheduling; AI video interview; background investigation
- Scenarios: mass recruitment for seasonal workers; campus recruitment; entry-level position recruitment
- Objectives: speed up process; save costs; expand talent pool

Management
- Tools: algorithmic task management; collaborative work platforms; messaging apps
- Scenarios: platform work; office work
- Objectives: optimize resources; improve efficiency; enhance visibility and traceability

Evaluation
- Tools: online activity monitoring; people analytics; wearable devices; surveillance cameras
- Scenarios: promotion/demotion; pay rise/pay cut; employee training
- Objectives: discipline workers; improve productivity; retain talent

Personnel changes and disputes
- Tools: business analytics; HR management tools
- Scenarios: dismissal; internal transfer; labour arbitration; court hearing
- Objectives: optimize resources; cost saving

Source: Compiled by the author.
Recruitment
AI-enabled tools are proliferating across all stages of recruitment in China, from posting a job ad, matching candidates and screening CVs, to arranging and conducting interviews and onboarding. AI offers a tool for efficiently recruiting and organizing the workforce, which is crucial to the competitiveness of firms when their business model relies on slim profit margins.
In Shanghai, 91.58 per cent of companies that took part in a 2023 survey on the use of AI in human resources stated that they used AI-enabled software or services to varying degrees during recruitment processes. Nationwide, large employers (those with more than 1,000 employees) that deal with high volumes of job applicants are the primary users, with one-third of such companies using AI tools for recruitment.
While many firms are aware of the potential pitfalls and ethical concerns associated with the use of AI in hiring processes, the clear benefits and the fast-paced competition for recruits outweigh these concerns.
The survey research found that AI tools are widely used in campus recruitment of fresh graduates and in mass hiring of seasonal workers in the manufacturing and service sectors. However, more senior and knowledge-intensive roles, such as those of lawyers or business consultants, continue to be filled mainly through traditional recruitment channels, such as headhunting and internal referrals.
Campus recruitment
According to interviews with HR professionals and job seekers in China, the job-hunting journey for a typical Chinese graduate often starts with browsing job ads online that are recommended by AI algorithms based on the seeker’s profile, preferences and search history. Once a suitable role is identified, the graduate submits a CV and cover letter through the company’s online application portal, or through third-party platforms that are often linked to the candidate’s social media accounts such as WeChat. The application is then directly assessed by an AI system that evaluates the applicant’s qualifications based on specific criteria set by the company. If shortlisted, the person may be informed and contacted by an AI agent with a synthetic human-like voice to arrange an AI video interview based on their availability.
[Figure: the AI-assisted graduate job application journey – job ads recommended by AI; applications assessed and selected by AI; AI generates messages for next steps; AI analyses and scores interview performance; human interviewers refer to AI-generated reports; AI chatbot available 24/7 for questions. Source: Compiled by the author.]
During the video interview, the AI system (which sometimes appears as a human-like avatar) captures and analyses candidates’ answers and other features, including the slightest changes in facial expression and body gesture. A detailed report is then generated to help determine whether the candidate progresses to the next round. Human interviewers are typically involved from the second round of interviews and make the final decision on successful candidates. During onboarding, some companies employ AI to tailor training materials and programmes to candidates’ profiles.
Mass recruitment
AI has proven useful in the mass recruitment of candidates for low-skilled jobs, such as call centre operators, assembly line workers and delivery riders. Unlike competitive professional roles, where one candidate is picked from many applicants, firms mass hiring for low-skilled roles filter out the small number of candidates who fail to meet basic qualifications and hire the rest. In China, job seekers for such roles are often asked to upload images and videos of themselves via smartphone applications. AI then analyses these submissions and generates scores and recommendations based on the specific requirements of the role and employer. For example, a candidate’s ability to speak standard Mandarin is tested for customer service roles through an AI speech recognition system, while physical dexterity is evaluated for factory jobs by analysing candidates’ movements in video footage. For client-facing roles, AI is used to score a candidate’s appearance, for instance checking for facial scars or tattoos.
Background check
Beyond initial candidate selection, AI is also used to perform background checks on candidates. For example, ZMBeiDiao, a leading background check provider, uses facial recognition to verify the identity of job seekers against police databases and flag potential risks by examining information from their public and private records. Highly sensitive personal information, such as credit and health records, is acquired and examined, according to sales materials from ZMBeiDiao and other background check providers.
Talent pool management
In addition, firms also use AI to build their own talent pool by leveraging data collected from past applicants and identifying potential candidates on third-party platforms. To facilitate communication with potential employers and increase their chances of securing a job, it is common for job seekers in China to allow potential employers to look at content on their personal WeChat and other social media accounts during job applications. WeChat is a one-stop platform providing services that cover almost every aspect of modern life in China, including messaging, payments, access to public services and short video streaming. The authority to view a candidate’s social media account could give firms access to job seekers’ contacts, posts and other related personal information for profiling. AI solution providers including Gllue and Talentlines promote their ability to identify potential candidates and track their career movements by scraping information from other aggregated job sites, such as Maimai, China’s equivalent to LinkedIn. An HR professional noted in an interview with Chinese media that they will get notifications when the targeted people update their profile on those platforms – a sign that they are potentially looking for new opportunities or career changes.
Data security risks
The extensive personal data collected during the hiring process makes data security a growing concern. Many firms lack robust IT infrastructure to safeguard the wealth of personal data they collect and store, even if a candidate is unsuccessful. For instance, in 2019, a database of 33 million profiles of Chinese job seekers containing sensitive personal details such as an individual’s username, age, home address, email address, phone number, marital status, job history and salary history was leaked. Furthermore, an investigation conducted by state broadcaster CCTV in 2021 revealed that three major job sites – Zhaopin, Liepin and 51Job – had suffered data leaks, with some of the exposed information subsequently being sold to scammers.
Perpetuating bias
AI promises to foster fairness in recruitment by reducing human bias. However, in China, where biases – particularly regarding gender and age – are deeply ingrained in the job market, AI often perpetuates these existing prejudices instead of mitigating them. For example, many job listings in China will specify preferred age ranges and gender, and women frequently face questions about their marital status.
All recruitment systems that use AI reviewed in this research assess a candidate’s suitability by using a variety of factors based on the preferences of employers – some of which are discriminatory. In some cases, personal information of candidates that is not necessarily relevant to the job can be evaluated. For example, some of the AI recruitment software reviewed for this paper claim that they can tell a job applicant’s mental health status and tendency towards violence by making them answer questions and play games. Because these evaluations are algorithmic, they can be less transparent than human-led assessments, making it difficult for candidates to challenge potentially biased decisions.
While Western societies have prevalent concerns about AI models inheriting social bias from flawed datasets, in China discrimination in many cases is intentionally introduced by tailoring AI models to match employers’ preferences, which can amplify harms. Scholars interviewed for this research also share concerns that rigid AI-driven selection criteria could lead firms to overlook exceptional candidates whose qualities cannot be easily quantified and captured by algorithms, including interpersonal and communication skills. However, these potential pitfalls do not seem to overly concern Chinese firms. A team leader at a Chinese tech company articulated the utilitarian attitude of Chinese firms succinctly. The primary focus of the firm, they said, is not on capturing unique talents but ensuring that their recruitment methods can find people who are capable of performing the job effectively. ‘We only need to make sure the one we select can do the job,’ they said.
Intensified competition and anxiety
While firms report greater efficiency in processing large volumes of applications and accessing a broader talent pool, workers have mixed feelings about their increased interaction with AI when job seeking. Several interviewees mentioned that AI reduces the amount of manual work during the application process but some fear that the convenience afforded by AI could intensify overall competition in the job market, as workers must compete with more candidates and meet higher standards.
Moreover, the ambiguity surrounding AI selection criteria adds to job seekers’ stress. Many candidates invest additional time and effort to bolster their chances of clearing AI evaluations. On Zhihu, China’s answer to the knowledge-sharing platform Quora, users share advice on how to make a favourable impression on AI systems, covering appearance, eye contact, smiling, tone of voice, microphone and webcam quality, and physical background. In the hope of impressing an AI system, one recent female graduate invested in an expensive skin treatment to reduce ‘imperfections’ and rigorously practised maintaining a smile.
With many job seekers scrambling to navigate the intricacies of AI-driven recruitment, a burgeoning area dedicated to AI interview coaching has taken off in China. It offers an array of services: from tailoring CVs to be more AI-compatible to human coaching to achieve high scores in AI interviews. Job seekers can also pay for access to past AI interview questions and sample answers, as well as subscribe to mock AI interview systems to ‘fine tune’ their responses to get higher scores.
Rather than being critical of AI-driven recruitment, Chinese job seekers are increasingly adapting to this new landscape, as evidenced by the rise of AI interview coaching services. This shift may unintentionally widen social inequality: individuals with the resources to access private coaching and training gain a competitive edge in the job market, while such services remain out of reach for socio-economically disadvantaged groups.
Privacy invasion
While the invasion of privacy during background checks is not a new phenomenon in China, AI’s unparalleled capacity to gather, analyse and process immense amounts of personal data has made these investigations more comprehensive and cost-effective than ever before, prompting employers to standardize such practices during the hiring process. One HR professional at a tech company said the thoroughness of background checks enabled by current AI tools has significantly weakened workers’ positions in job negotiations. While not all collected information is directly relevant to a given role, it can still be a deciding factor for employers when faced with a large pool of candidates. Moreover, these data can be leveraged against employees during their tenure, especially if conflicts arise with their employers.
Management
Securing a job is just the start of an employee’s engagement with AI at work. From gig economy workers to white-collar professionals, AI is increasingly used by firms in China to manage the day-to-day activities of employees. This section examines two of the most widely used AI applications – algorithmic management and collaborative work systems – and their impacts. In both cases, employers have greater access to, and control over, the tools. The mechanisms behind AI systems are often not transparent to workers, exacerbating pre-existing information asymmetries between employers and employees and amplifying existing power imbalances. While AI holds the potential to enhance the efficiency of work management, it often prompts companies to intensify surveillance of their employees in order to feed data-dependent algorithms.
Algorithmic management
AI is central to the rise of Western gig economy platforms like Uber and DoorDash, in which firms use algorithms to allocate tasks to drivers, determine wages and evaluate performance with little human interference. In China, AI is used by platforms such as Didi for ride-hailing and Meituan for food delivery, which automatically assign tasks based on a worker’s service history, their loyalty to the platform, customer review scores and their real-time location data. AI also helps to standardize and optimize services by providing detailed work instructions and real-time feedback on execution.
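The dispatch logic described above can be sketched, in heavily simplified form, as a weighted scoring function over worker attributes. This is an illustrative sketch only: the feature names, weights and normalization are hypothetical and are not taken from any actual Didi or Meituan system.

```python
# Hypothetical sketch of algorithmic task dispatch: score each available
# worker on service history, platform loyalty, customer ratings and
# real-time proximity, then assign the order to the highest scorer.
from dataclasses import dataclass

@dataclass
class Worker:
    service_history: float   # normalized 0-1, e.g. completed orders and tenure
    loyalty: float           # normalized 0-1, e.g. share of hours on this platform
    review_score: float      # normalized 0-1, average customer rating
    distance_km: float       # real-time distance to the pickup point

def dispatch_score(w: Worker, max_km: float = 10.0) -> float:
    """Combine features into one ranking score (higher = assigned first).
    The weights here are illustrative assumptions, not known platform values."""
    proximity = max(0.0, 1.0 - w.distance_km / max_km)
    return (0.3 * w.service_history + 0.2 * w.loyalty
            + 0.3 * w.review_score + 0.2 * proximity)

def assign_order(workers: list[Worker]) -> Worker:
    # The platform assigns the order to the highest-scoring worker; the
    # worker has no visibility into how the score was computed.
    return max(workers, key=dispatch_score)
```

Even this toy version shows the opacity the report describes: a worker who declines orders or receives poor reviews sees their inputs to the score fall, but the weighting itself is known only to the platform.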
Collaborative work
Much like the pivotal roles that platforms such as Google Workspace and Microsoft Teams play in contemporary office work, Chinese companies are similarly leveraging homegrown collaborative work systems to set targets, track work progress, monitor employee activities and allocate tasks based on AI big data analysis.
Alibaba’s DingTalk, ByteDance’s Feishu and Tencent’s WeCom – with a combined total of over 300 million monthly active users as of February 2024 – offer a broad range of services, from employee check-ins, circulation of internal announcements and instant messaging, to project tracking, document sharing, video conferencing and administrative approvals. The companies also operate some of China’s largest social media and e-commerce platforms. For example, Alibaba is the owner of shopping site Taobao, ByteDance is the parent company of short-video platforms Douyin and TikTok, and Tencent owns messaging app WeChat. The firms utilize data to train algorithms and generate insights for decision-making. Those in management-level roles are typically given greater access to data within these systems.
Reduced autonomy
Greater workflow efficiency through AI comes at the cost of worker autonomy. Both gig economy workers and white-collar professionals find themselves working longer hours at higher intensity in a relentless pursuit of ever-higher goals set by self-learning algorithms.
Research has found that food delivery workers in China have lost control over their work hours because algorithms dictate when and how orders are dispatched. To make a sufficient living, delivery workers, most of whom use motorbikes or mopeds, must be on standby and prepared to work at any time. The platforms also keep shortening the allocated delivery times, as the algorithms take into account the data generated by the fastest riders, who often achieve their speeds only by breaking traffic rules. As a result, all delivery workers are forced to meet the new target times to keep receiving orders, leading to a dramatic spike in traffic accidents as riders take illegal shortcuts or ride on the wrong side of the road to save time.
Several drivers for ride-hailing apps interviewed for this research paper also mentioned that they cannot realistically refuse orders assigned by the system, as it would negatively impact their performance scores, which would then affect the future orders they get and their remuneration. Bad reviews, diversion from scheduled routes, driving mistakes and inactivity can indeed all result in lower scores and income. Some drivers have to switch to different platforms when their scores are insufficient to secure good orders.
Enforced discipline
AI serves as a mechanism to enforce employee discipline. Irregularities detected by AI can result in workers being barred from accessing their accounts, essentially preventing them from securing jobs through a platform. Ride-hailing app drivers in China suffer account suspensions due to reasons such as incomplete personal information, reckless driving or a mismatch in driver identities.
In many cases, however, decisions made by AI can be incorrect and controversial. In one instance, a female Didi driver was suspended because the system mistakenly assumed that the car was driven by a man due to her low-pitched voice. In another incident, a driver lost access to his user account when the audio monitoring system installed in the vehicle detected the phrase ‘tiao lou’ – referring to committing suicide by jumping from a building – during a conversation with a passenger, prompting the AI to flag it as a safety risk without considering the context.
Such AI intervention is possible thanks to the mandatory surveillance devices installed in vehicles, which are responsible for collecting data that are subsequently processed and utilized by AI systems. While major ride-sharing apps maintain that these practices enhance passenger safety, many drivers confess that they feel a strain at work due to this constant monitoring. ‘It’s better not to talk to passengers’, one driver said. While gig economy platforms promote greater freedom and flexibility to attract freelancers, in reality, workers face limited autonomy under the oversight of AI.
Increased anxiety and stress
For office employees who typically work behind computer screens, AI-enabled work collaboration systems have become a major source of stress. An interviewee employed at one of China’s big tech companies said they face heightened pressure to maintain their online presence and respond to messages quickly, which they attributed to the increased visibility and trackability of work activities through AI tools. For example, platforms like DingTalk and Feishu identify members who have not read group messages and can automatically call them if they do not respond within a set time frame. Workers also receive automatic calls to keep them on their toes when there are updates related to their projects. Furthermore, AI can identify employees’ ‘idle’ hours and make assignment recommendations to their line managers: if workers finish a project early or a meeting is cancelled, they often end up being assigned new tasks to fill the gap.
In China, more advanced collaborative work systems have evolved into an all-encompassing platform that goes beyond the boundaries of traditional work. For instance, a tech worker in Shenzhen recounted being questioned by their supervisor about a taxi ride booked via a work app after a late shift. The journey was flagged by AI because the fare that day was inconsistent with the employee’s average expenses and the destination was different from previous days. Through the often-compulsory work app installed on staff mobile devices, firms also track the real-time location of field workers, such as sales representatives and technicians, so managers can check if the workers arrive at designated places on time.
Some firms have gone beyond monitoring physical and online activities of employees to using AI to gauge and manage workers’ emotional states. A Chinese subsidiary of the Japanese camera maker Canon deploys a workspace management system that only allows smiling employees to enter the office and book conference rooms. Using so-called ‘smile recognition’ technology, Canon said the system was intended to bring more cheerfulness to the office in the post-pandemic era. ‘Mostly, people are just too shy to smile, but once they get used to smiles in the office, they just keep smiling without the system which creates a positive and lively atmosphere,’ a spokesperson told Nikkei Asia.
Platform operators, sitting in positions of authority, can alter rules and implement changes without needing to provide any justification. Workers who rely on platforms for their income face a bleak choice: either passively accept the terms or depart entirely, with no avenue for negotiation. Furthermore, with AI monitoring and the automation of detailed instructions and decisions, workers are forced to adhere strictly to the rules, leaving them little room for error.
Evaluation
Beyond recruitment and management, AI analytics tools are widely used by Chinese firms to evaluate individual performance and inform HR decisions related to pay and career advancement. From how often workers use company devices, to how fast they reply to messages, AI tracks a wide range of behaviour data and provides insights about employees’ performance and value to the organization.
AI recommendations and predictions
Besides impacting the productivity rate, loyalty and morale of employees (as outlined in the previous section), AI models can also predict the growth potential of workers and their value to a firm. Some analytical tools provide management with detailed recommendations on the scale of pay rises, promotions and personnel changes. While human managers still have the final say on these decisions, many of the managers interviewed mentioned that they have grown reliant on these AI insights.
The rationale of firms is that comprehensive workplace surveillance and people analytics tools deter procrastination and incentivize hard work by providing more objective and quantifiable evaluation methods, thereby enhancing overall productivity. By consulting AI-generated insights, HR and management can expedite the evaluation process. They can take prompt action to retain talent when issues are flagged by the system. Firms can also leverage AI to identify the deficiencies and skill gaps of workers and provide training for improvement.
Extensive workplace surveillance
AI analytical tools require significant employee data, which is gathered through extensive workplace surveillance – both software and hardware solutions. Some Chinese firms install online activity monitoring software on workers’ laptops to track their real-time screen activities, including chat content, browsing history and modifications to documents. Some AI systems can automatically flag ‘suspicious’ actions such as visiting recruitment sites or video streaming platforms. Basic ‘productivity’ reports will summarize employees’ time spent on websites and applications.
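The flagging and 'productivity' reporting described above can be sketched as a simple pass over an activity log. This is a toy illustration under stated assumptions: the site categories, flag list and log format are hypothetical, not drawn from any named monitoring product.

```python
# Hypothetical sketch of activity-monitoring reports: tally time spent per
# site category and flag visits to categories the employer deems 'suspicious'.
from collections import defaultdict

# Assumed flag list; real products would use employer-configured rules.
FLAGGED_CATEGORIES = {"recruitment_site", "video_streaming"}

def build_report(events):
    """events: iterable of (category, seconds) pairs from an activity log.
    Returns (total seconds per category, sorted list of flagged categories)."""
    time_spent = defaultdict(int)
    flags = set()
    for category, seconds in events:
        time_spent[category] += seconds
        if category in FLAGGED_CATEGORIES:
            flags.add(category)
    return dict(time_spent), sorted(flags)
```

The crudeness of the sketch mirrors the report's concern: any visit to a flagged category is marked 'suspicious' regardless of context, such as a salesperson researching a client on a video platform.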
More sophisticated tools integrate data collected from work collaboration systems like those described in the previous section. They analyse employees’ system activities, messages and texts for insights, such as how productive the employee is and how satisfied they are at work. The AI-driven evaluation can extend beyond the professional realm, analysing employees’ social media posts to gauge their emotional well-being and loyalty to firms. However, many Chinese office workers refuse to connect personal devices to the office Wi-Fi, as this enables employers to monitor private chats and browsing histories. When workers are away from their desks, their behaviour can also be tracked thanks to AI-enabled surveillance cameras and wearable devices such as fitness bands and watches.
Increased workload and competition
In research interviews, many workers reported increased workloads and mental stress due to AI tools. Beyond trying to impress their human supervisors, workers grapple with the additional challenge of satisfying the metrics set by algorithms. To be recognized by AI as a ‘good worker’, employees often end up working for longer hours at a higher intensity to achieve high performance scores and secure promotion. This has subsequently intensified internal competition among colleagues and eroded job satisfaction.
Many workers find themselves preoccupied with meeting short-term targets to stay competitive, often at the expense of doing more creative and fulfilling work. Moreover, unlike the traditional human evaluation where workers gain insights from their direct interaction with their bosses, some workers interviewed mentioned that the criteria emphasized in AI-driven evaluations remain opaque to them. The lack of clarity has led some workers to feel the need to excel in every quantifiable aspect, resulting in heightened workloads and burnout.
Counterproductive results
The data-driven evaluation process has prompted employees to tactically deploy their work time. For example, one worker interviewed expressed a reluctance to undertake challenging tasks that cannot yield quantifiable results in the short term. Another worker reported that they spend more time writing work diaries and updating the system on their achievements than doing exploratory tasks. This trend could hinder firms’ long-term growth, as research shows that a lack of incentives for employees to express creative ideas could lead to stagnation of an organization, due to its decreased ability to innovate and adapt to new situations.
Furthermore, workers question the accuracy and fairness of AI’s performance evaluations, as AI struggles to factor in exceptional individual circumstances (such as illness) and to recognize the value of exploratory tasks that, although harder to codify, could offer significant long-term value to the firm. For instance, a salesperson who invests time in watching a video to better understand a client’s background might be incorrectly labelled as unproductive by AI. This could damage the worker’s career when, in fact, a deeper understanding of the client may have led to a successful working relationship.
AI promises more objective and efficient evaluation of workers based on the notion that human actions can be boiled down to quantifiable metrics for algorithmic analysis. But the reality is far more nuanced, and poorly designed AI evaluation can easily lead to unfair and inaccurate results. On a technical level, the challenge lies in AI’s capacity to fully grasp and interpret the subtleties of individual behaviours and the myriad of contexts in which they occur. On an ethical level, who should control AI and how best to assign value and weight to specific behaviours is subject to debate. Without properly addressing these questions, AI evaluation could easily lead to unintended negative consequences for both workers and firms. Furthermore, through quantification, worker behaviours are distilled into mere data points, whose value is determined by firms. This process further tips the power balance in the favour of employers.
Personnel changes and disputes
The growing power that employers wield over workers, amplified by their exclusive access to and control of data and analytics tools, becomes especially evident in scenarios like dismissals and labour disputes. Unequal access to information further diminishes workers’ bargaining power particularly in the case of labour disputes. This disparity emboldens firms to adopt even more aggressive and reckless data collection practices, knowing they possess a significant advantage over workers in legal disputes.
Information asymmetry
While there has not been a prominent case of a Chinese firm using AI to fire workers, AI could be utilized to back layoff decisions and provide ‘evidence’ of worker wrongdoings in labour disputes. Due to AI’s capacity to amass extensive personal data throughout a worker’s employment, employers’ unique access to and control over these data gives firms a significant upper hand during labour conflicts.
As companies increasingly integrate their daily operations into an all-encompassing platform powered by AI, many workers find it challenging to substantiate their work performance in labour disputes, because they lose access to the relevant systems upon dismissal. Conversely, thanks to their continuous monitoring and diligent record-keeping, employers can access a vast reservoir of data on both current and former employees, enabling them to produce ‘evidence’ that bolsters their position. Chinese workers often have little choice but to consent to far-reaching personal data collection agreements, as a standard onboarding procedure for new positions, leaving them vulnerable in legal cases.
Many Chinese workers interviewed for this paper expressed hesitation in pursuing legal action against their employers, deterred by the vast amount of data controlled by firms and the challenge of gathering evidence independently. This is on top of the already stark disparity in power between employers and workers in terms of time, personal connections and resources.
In 2019, an engineer who worked for a Chinese tech firm for eight years sued his employer after he was fired on the grounds of ‘breaching company rules’. During the court hearing, the tech firm used surveillance camera footage to demonstrate that the employee did not spend sufficient time at his desk between 10:00 and 18:00. The engineer argued that the footage did not record his work beyond these hours and his time spent in meetings and at other work locations. But he found it difficult to prove his work performance because all the data were stored in the internal system, which he was denied access to after his dismissal. The group chat messages he presented, which contained work content outside regular hours, were rejected by the court as evidence.