Published on: January 31, 2023
New technologies, including those described as artificial intelligence, are increasingly becoming an integral aspect of the employee experience and valuable tools for employers and their employees in the human resources context. Employers are keenly aware that proper design, implementation, and company oversight of such technologies are necessary to avoid negative human resource outcomes, including and especially unconscious bias. As the Equal Employment Opportunity Commission considers guidelines and regulations regarding the use of AI, HR Policy Association strongly urges the Commission to undertake an open and transparent process involving public comment from all stakeholders, to the end that any resulting guidance or regulation clarifies rather than adds to the complexity of a rapidly growing regulatory regime.
HR Policy Association represents the chief human resource officers of more than 400 of the largest corporations in the United States and globally. Collectively, HR Policy Association member companies employ more than ten million workers in the United States – nearly 9% of the private sector workforce. Our members are providing innovative solutions to mitigate the potential risks inherent in the use of AI in the work environment, as was noted by the “Blueprint for an AI Bill of Rights,” recently published by the White House Office of Science and Technology Policy.
As large employers consider the development, deployment, and use of AI in the work environment, the U.S. economy is experiencing a historically tight labor market. This means that, for employers, it is critical that new technologies are linked with a company’s talent strategy. In addition to increasing efficiency and productivity through the use of AI, chief human resource officers are considering how to leverage technology to, among other things:
- Elevate employee voice, enhance management responsiveness, and encourage employee engagement.
- Drive a positive corporate culture, particularly in hybrid working environments.
- Pursue talent retention through investing in employee career growth.
- Enhance the employee and candidate experience, recognizing that HR technologies are often a first or major interaction with an employer, while ensuring that the human element of HR is not lost.
- Close the skills gap by closing the opportunity gap: expanding the talent pool and getting the right talent into the right roles.
It is not the intention of these comments to defend or critique any particular company, technology, or use case for any particular technology. Rather, these comments will discuss some of these emerging opportunities, the ways employers are mitigating the potential harms associated with the use of AI in the work environment, and finally suggest policy approaches that maximize AI’s potential in the work environment while most effectively helping employers minimize risk.
Training Tomorrow’s Workforce
New technology is accelerating changes in the way work is done, intensifying the need for new skills in the workforce. A study by Deloitte and the Manufacturing Institute estimated that of the 4 million manufacturing positions to become available by 2030, 2.1 million will go unfilled due to the skills gap. In addition, the pandemic has accelerated skill obsolescence by more than 70%, according to a recent survey. According to Gartner, by 2021, one in three skills on a typical 2017 job posting in IT, sales, or finance had become obsolete. The costs to the U.S. economy will be significant. Due to sector skill shortages, by 2030 the United States is expected to lose as much as $162.25 billion in revenue in the tech sector alone, and $1.748 trillion overall.
This skills gap is largely attributable to the loss of old jobs, the creation of new jobs, and the transformation of current functions due to automation. According to Deloitte, “When parts of jobs are automated by machines, the work that remains for humans is generally more interpretive and service-oriented, involving problem-solving, data interpretation, communications and listening, customer service and empathy, and teamwork and collaboration. However, these higher-level skills are not fixed tasks like traditional jobs, so they are forcing organizations to create more flexible and evolving, less rigidly defined positions and roles.”
Several companies are leveraging AI-powered technologies to identify learning opportunities and facilitate flexible, personalized upskilling, which can strengthen talent pipelines and improve retention rates. Machine learning can derive recommended role pathways and learning sequences from employee information, and help facilitate those steps. AI training can additionally be integrated seamlessly into an employee’s workflow, providing information and access to expertise in the context of a job to improve flexibility and ensure workers are positioned to succeed amid ongoing changes in the way work is done.
Deploying AI-powered technologies to assist in worker training can facilitate not only skills refreshment but better interactions with management. For example, IBM has introduced an AI system that “helps each employee navigate job opportunities, learning, and career paths, and partnered this with a robust career conversations campaign where now 80 percent of IBM employees report they are having meaningful career conversations with their managers.”
AI solutions are also assisting workers searching for a job to access skills training. Several job-matching platforms have integrated training modules that allow job-seekers to learn new skills and earn credentials that increase their hiring potential. This helps ensure job-seekers are not “left behind” by the continued accelerated rate at which skills are evolving.
Increasing Workplace Access
AI can help facilitate the involvement of traditionally marginalized workers. In support of talent strategy goals, facilitating workplace access can mean improving work-life balance, strengthening workplace culture, automating flexible scheduling, or assisting workers with disabilities, among other developing use cases.
Disability access: Roughly one in four U.S. adults live with a disability. Many of these individuals are unemployed or underemployed due to work access issues. A recent report by Accenture in partnership with the American Association of People with Disabilities and Disability:IN suggests that if companies embraced disability inclusion, they would unlock a talent pool of 10.7 million people. Inclusive design of AI systems, enabling and drawing on the full range of human diversity, promises to increase work environment access for disabled workers, particularly in a hybrid work environment. Certain solutions have already increased accessibility for disabled individuals, including image and facial-expression recognition for those with a visual impairment, and lip-reading recognition for those with a hearing impairment.
Work culture: While also clearly beneficial to employees, a healthy corporate culture is correlated with higher profitability and returns to shareholders. Certain software aims to assist management in understanding trouble spots in workplace culture, such as poor work-life balance. Insights from AI technologies can give employers actionable information on work environment stressors that would otherwise not be available, improving the employee value proposition by addressing concerns in real time. Such use of AI may serve to facilitate interaction between managers and employees, rather than undermine it, and if used properly and non-invasively, can build trust in the workplace.
Flexibility: Other software solutions optimize scheduling to provide flexibility to workers, matching labor demand with worker qualifications, preferences, and availability. Flexible scheduling is particularly important to marginalized communities and women in the workforce. These solutions elevate worker preferences, providing employees greater agency in the work environment, while also simplifying what can be a complicated task for employers.
Creating New Labor Market Efficiencies
Companies have an obligation to deploy new technologies, like AI, in a responsible manner and ensure they augment – not replace – human decision-making. If not deployed properly and given the appropriate oversight, AI could simply screen out qualified job candidates for non-job-related characteristics, resulting in missed talent for employers, or worse, potentially replicating long-standing patterns of bias. Conversely, AI technologies may increase efficiencies in the labor market, connecting companies with talented workers who have nontraditional education, career paths, or backgrounds. While the skills gap is a considerable challenge for employers and job seekers, there are significant numbers of talented individuals who are often overlooked by recruiters.
Several platforms have been introduced that facilitate job opportunities for such candidates. A recent panel by the OECD discussed current applications of technology to expand the talent pool, including:
- Programmatic job advertising;
- Improving the inclusivity of job descriptions;
- Analyzing resumes for structured data, skills, and experience; and
- Chatbots that screen and schedule, addressing the common “black hole” applicant experience and speeding up the connection of talent with jobs.
AI can surface opportunities from marketplace data that may previously have been missed, using insights on skills and potential to drive recommendations to both employers and prospective employees while providing real-time data on needed skills to help position workers for changes in the workplace. Importantly, such technology can also provide insight into what other jobs may be a good fit for a worker, facilitating career development. These approaches may apply both to internal and external talent pools.
In addition to connecting job seekers with career opportunities, AI can be utilized to facilitate job readiness. For example, one organization helped guide candidates who might otherwise have been overlooked toward new opportunities by giving them access to ways to demonstrate proficiency in real-world job skills while providing mock interviews and feedback. Companies are given the opportunity to provide feedback to these job applicants and improve the hiring potential of candidates they may have passed on.
Employer Efforts to Mitigate Risk
The significant risks of bias, denying workers autonomy and dignity, and applying set-it-and-forget-it uses of technology that deteriorate rather than improve working conditions should be taken seriously. For large employers, these risks directly implicate their talent strategies, necessitating an ongoing focus on fairness, privacy, and safety. For example, even companies with a record of successes in terms of diversity and inclusion within their workplaces must wage a continuing battle against unconscious bias, which can be a barrier among hiring managers during sourcing and talent acquisition processes and can negatively impact diversity efforts.
In order to build trust and support worker attraction and retention, large employers are committed to the prevention of bias in the workplace. Reputational damage alone may undermine a company’s efforts to assemble a competitive workforce, and may cost employers as much as 10% more per hire. Other potential negative outcomes may be produced by the misapplication of AI in the work context, which could undermine efforts to establish an inclusive corporate culture. Notwithstanding regulatory concerns, in practice the impact of poorly used AI affects both employers as well as current and potential employees. With a loss of trust, companies would face significant challenges deploying even responsible uses of AI to increase efficiency, enhance the worker experience, and support their DE&I efforts.
Examples of employer-driven efforts to promote ethical and responsible use of AI: Business leaders and NGOs recognize the importance of building trust regarding the use of AI, and more importantly of avoiding deploying artificial intelligence in ways that discriminate or otherwise undermine corporate business objectives. There are many current examples of employer-driven efforts to ensure AI is used ethically and responsibly, several of which HR Policy Association has led or participated in. Below are examples of just some of these initiatives.
- HR Policy Association AI principles for company adoption: In 2020, HR Policy Association recommended to our members a set of principles on the use of employee data and AI as a framework and starting point for companies to leverage in their own work environments. These principles include:
- Transparency: The intended uses of data should be clearly understood, explained, and shared, including the impact on decision-making and the processes for raising and resolving any issues. In some cases, this may include an explanation of the algorithms involved in machine learning assisted analysis and how those algorithms are developed and “trained” to analyze employee data.
- Integrity: The principle of integrity is interpreted in a variety of different ways by companies according to their culture but is rooted in the concept of “positive intent.” In addition to committing to the use of data in a highly responsible way, companies may also specify that the purpose of all AI is to augment and elevate humans rather than replace or diminish them, and that data usage should be sensitive to cultural norms and customs and aligned with company values.
- Bias: Although AI has been touted as the solution to unintended bias in many people-related processes, such as hiring, performance management and promotion, the risk of unintentional bias occurring within AI or the datasets used to train them is concerning. Principles around data and ethics should commit to continuous monitoring and correction for unintended bias in machine learning.
- Accountability: Individuals should be accountable for the proper functioning of AI systems and for unintended consequences arising out of their use. Companies should ensure that everyone involved in the lifecycle of an AI system is trained in AI ethics and that ethics is part of the product development and operation of an AI system. This may include the coders and developers responsible for creating the software, the data scientists responsible for training it, or the management of the company.
- World Economic Forum “Human-Centred Artificial Intelligence for Human Resources Toolkit”: In cooperation with a task force of AI and HR experts including HR Policy Association, the World Economic Forum developed a framework that aims to equip HR professionals with a basic understanding of how AI works in the context of HR, guide companies on the responsible and ethical use of AI, and help companies use AI-based HR tools effectively. The toolkit includes two useful checklists: one for assessing new AI tools before making the critical decision to implement them in a company and one for strategic planning regarding how to responsibly use AI in general.
- The Data & Trust Alliance is a not-for-profit consortium bringing together leading businesses and institutions to learn, develop, and adopt responsible data and AI practices. Participating HR Policy Association members include American Express, CVS Health, General Motors, Humana, IBM, Johnson & Johnson, MasterCard, the Nielsen Company, Pfizer, Under Armour, and UPS. The Alliance has released its Algorithmic Bias Safeguards for Workforce – criteria and education for HR teams to evaluate vendors on their ability to detect, mitigate, and monitor algorithmic bias in workforce decisions.
In addition to collaborative efforts, many employers have developed principles and best practices to build safeguards against potential harms in using AI and build trust both within and external to their company. It is important to note that many HR Policy companies do not use or produce biometric technologies, but nevertheless are leaders in developing robust AI oversight policies and practices. The following are just a small sample of such efforts.
- Accenture’s AI ethics and governance framework takes an interdisciplinary approach that supports agile innovation and ensures governance of AI systems. Accenture emphasizes the need for organizations to put into practice well-defined AI principles, minimizing unintended bias, ensuring transparency, creating opportunities for employees, and protecting the privacy and security of data.
- Microsoft’s AI principles – Fairness, Inclusiveness, Reliability & Safety, Transparency, Privacy & Security, and Accountability – are put into practice throughout the organization largely through the work of its Office of Responsible AI (ORA); the AI, Ethics, and Effects in Engineering and Research (Aether) Committee; and Responsible AI Strategy in Engineering (RAISE). The Aether Committee advises Microsoft’s leadership on the challenges and opportunities presented by AI innovations. ORA sets AI rules and governance processes, working closely with teams across the company to enable the effort. RAISE, meanwhile, enables the implementation of Microsoft responsible AI rules across engineering groups. 
- IBM’s AI Ethics features a robust, multidisciplinary, multidimensional approach to trustworthy AI, with three principles and five foundational pillars for ethical AI. IBM’s AI Ethics Board, a central, cross-disciplinary body to support a culture of ethical, responsible, and trustworthy AI throughout IBM, supports a centralized governance, review, and decision-making process for IBM ethics policies, practices, communications, research, products, and services.
Regulatory activity in this area, both enacted and under consideration, is quickly becoming a patchwork of varying standards and requirements. For example, the Federal Trade Commission’s Advance Notice of Proposed Rulemaking includes a focus on algorithmic discrimination and worker monitoring, among other things. At the state and federal level, legislation is being considered to provide worker protections against discrimination through the use of AI, with several noteworthy measures already having passed at the state level. The Equal Employment Opportunity Commission’s laudable initiative to ensure that artificial intelligence and algorithmic decision-making tools do not, in Chair Charlotte Burrows’s words, “become a high-tech pathway to discrimination,” has the opportunity to provide clarity to employers, rather than add to an already crowded regulatory landscape.
New guidelines or standards should align with existing government policies and commonly adopted employer best practices. Any government guidelines on the use of AI in the employment context should be aligned with regulatory expectations across the federal government, particularly including the EEOC’s “Artificial Intelligence and Algorithmic Fairness” initiative, part of which will involve the “issuance of technical assistance to provide guidance on algorithmic fairness and the use of AI in employment decisions.”
Further, any government guidelines should be compatible with existing processes, procedures, and policies that employers have established to comply with the patchwork of state, federal, and international laws affecting the use of innovative technologies in the employment context. Employers have invested significant resources to develop compliance processes, procedures, and policies, and employers should be able to leverage these governance structures when aligning with the federal guidelines.
Already confusion exists within the U.S. government as to the synchronicity of White House efforts, particularly the Office of Science and Technology Policy’s recently released “Blueprint for an AI Bill of Rights,” and statutorily-authorized efforts, including the National Institute of Standards and Technology’s “AI Risk Management Framework.” This confusion has prompted a letter to White House OSTP Director Arati Prabhakar from Rep. Frank Lucas (R-OK), Chairman of the House Committee on Science, Space, and Technology, and Rep. James Comer (R-KY), Chairman of the Committee on Oversight and Accountability, raising concerns about conflicting guidance. Such conflicts include items as basic as defining AI and establishing principles for trustworthiness in AI systems. Until the U.S. federal government is able to clarify its stance on such issues, employers are left to read the tea leaves for themselves.
In addition to the federal EEO laws enforced by the Commission, the use of technology in the employment context is regulated by many frameworks. In the United States alone, federal and state laws relating to anti-discrimination, labor laws, data privacy, and AI-specific laws affect the use of technology in the employment context.
An increasing number of state and local laws are directly regulating the use of artificial intelligence in the employment context. The Artificial Intelligence Video Interview Act (AIVIA) in Illinois, for example, requires transparency, consent, and certain government reporting from employers who require candidates to record an interview and use artificial intelligence to analyze the submitted videos. In December of 2021, the New York City Council enacted a law requiring companies to obtain independent audits of certain automated employment decision tools used in the context of hiring and promotion.
Myriad AI-specific requirements across states and cities make compliance difficult to manage across intersecting domains. The Commission should strive for consistency with federal, state, and municipal requirements in order to foster compliance.
AI, including that which uses biometric information, is not a monolithic concept, and therefore a “one-size-fits-all” approach to oversight may inadvertently expose workers to risk. AI use cases among HR Policy members vary considerably, depending on a wide variety of factors. The risk profiles of different uses of artificial intelligence vary considerably both in scope and in kind (e.g., safety, privacy, autonomy, or fairness). For example, using facial recognition technology during interviews presents a different degree of risk than an AI-powered predictive text tool, and raises different types of risks than GPS tracking features on a company-owned vehicle.
A “one-size-fits-all” model of oversight may inadvertently expose workers to risk, even while providing protections in the cases at which the oversight was aimed. Companies build these considerations into their technology oversight processes, seeking to apply their principles on AI in a nimble manner as innovation accelerates. Any AI policy promoting ethics and trust that lacks this flexibility will prove both insufficient and unviable.
Artificial intelligence technologies pose significant opportunities for American workers, while containing inherent potential risks. To ensure that the risks are minimized while the rewards are maximized, and that the policy landscape is clarified for stakeholders, the Commission should undertake an open and transparent process involving public comment from all stakeholders as it considers guidelines and regulations. HR Policy Association appreciates the opportunity to provide our point of view and looks forward to continuing to lend any assistance we can to the important work of the Commission.
 “Blueprint for an AI Bill of Rights.” The White House, October 2022.
 “2.1 Million Manufacturing Jobs Could Go Unfilled by 2030.” The National Association of Manufacturers, May 4, 2021.
 Groysberg, Boris, and Connolly Baden, Katherine. “Pandemic’s Impact on Executive Skills.” Harvard Business School, September 29, 2021.
 Baker, Mary. “Stop Training Employees in Skills They’ll Never Use.” Gartner, September 4, 2020.
 “The Global Talent Crunch.” Korn Ferry, Spring 2018.
 “From Jobs to Super Jobs.” Deloitte, 2019.
 While the end result of these changes remains to be seen, there are some positive signs. A survey by Salesforce of 773 automation users in the U.S. found “89% are more satisfied with their job and 84% are more satisfied with their company as a result of using automation in the workplace.” “New Salesforce Research Links Lower Stress Levels and Business Automation.” Salesforce, December 2, 2021.
 Moore, Tanya and Bokelberg, Eric. “How IBM Incorporates Artificial Intelligence into Strategic Workforce Planning.” Society for Human Resource Management, Fall 2019.
 “Disability Inclusion.” Accenture.
 Edmans, Alex. “Does the Stock Market Fully Value Intangibles? Employee Satisfaction and Equity Prices.” Journal of Financial Economics 101, no. 3 (September 2011): 621-640.
 Chamberlain, Andrew; Sull, Charles; and Sull, Donald. “Measuring Culture in Leading Companies.” MIT Sloan Management Review, June 24, 2019.
 Albinus, Phillip. “2022 Top HR Product: Workday Scheduling and Labor Optimization.” Human Resources Executive. August 22, 2022.
 “AI for Labour Market Matching.” OECD, February 23, 2022.
 Hayes Weier, Mary. “Why Companies Should Hire for Potential over Pedigree: Q&A with Byron Auguste.” Workday, April 23, 2018.
 Burgess, Wade. “A Bad Reputation Costs a Company at Least 10% More per Hire.” Harvard Business Review, March 29, 2016.
 “Human-Centred Artificial Intelligence for Human Resources.” World Economic Forum. December 2021.
 “Algorithmic Bias Safeguards for Workforce Overview.” The Data & Trust Alliance. December 2021.
 “Responsible AI Principles from Microsoft.” Microsoft.
 “EEOC Launches Initiative on Artificial Intelligence and Algorithmic Fairness.” U.S. Equal Employment Opportunity Commission, October 28, 2021.