Artificial Intelligence Implications for Human Resources

Artificial intelligence (AI) is a highly specialized tool used in corporate settings to perform specific tasks such as analyzing data, predicting patterns, or automating routine processes. Concerns about potential misuse or unintended consequences of AI, however, have prompted efforts to develop standards. The U.S. National Institute of Standards and Technology, for example, is holding workshops and discussions with the public and private sectors to develop federal standards for AI systems. In the 2023 legislative session, at least 25 states, Puerto Rico, and the District of Columbia introduced artificial intelligence bills.

Artificial Intelligence versus Generative Artificial Intelligence (Gen AI)

A February 2024 report from Forbes gives these examples of Artificial Intelligence:

  • A video streaming service uses advanced algorithms and machine learning techniques to analyze your viewing history and suggest content, making its recommendations increasingly personalized.
  • In a corporate setting, AI can provide tailored suggestions for recruiters via applicant tracking systems (ATS) that use algorithms to collect data, organize candidate information, and screen resumes for skills, experience, and education (a minimal sketch of this kind of screening follows this list).
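
To make the ATS example concrete, here is a minimal sketch of how rule-based resume screening might work. Everything in it is hypothetical: the required skills, the experience threshold, and the field names are illustrative stand-ins, not the logic of any particular ATS product.

    # Minimal sketch of rule-based resume screening, as a simple ATS might perform it.
    # Required skills, experience threshold, and field names are hypothetical.
    REQUIRED_SKILLS = {"python", "sql", "data analysis"}
    MIN_YEARS_EXPERIENCE = 3

    def screen_resume(candidate: dict) -> bool:
        """Return True if the candidate meets every minimum criterion."""
        skills = {s.lower() for s in candidate.get("skills", [])}
        has_skills = REQUIRED_SKILLS.issubset(skills)
        has_experience = candidate.get("years_experience", 0) >= MIN_YEARS_EXPERIENCE
        return has_skills and has_experience

    candidates = [
        {"name": "A. Rivera", "skills": ["Python", "SQL", "Data Analysis"], "years_experience": 5},
        {"name": "B. Chen", "skills": ["Python", "Excel"], "years_experience": 4},
    ]
    print([c["name"] for c in candidates if screen_resume(c)])  # ['A. Rivera']

Note the all-or-nothing logic: a candidate missing any one keyword is silently dropped, which foreshadows the screening limitations discussed later in this article.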

The same report from Forbes gives these examples of Generative Artificial Intelligence:

  • Gen AI creates new content or data that did not exist before. A human resources application of Gen AI would be the creation of personalized training programs and content. Gen AI can analyze an individual employee’s learning style and career development goals to generate customized learning modules and educational materials aligned with the employee’s growth path and the organization’s needs (see the sketch following this list).
  • According to a July 2023 McKinsey report, Gen AI also encompasses the natural language capabilities required for a large number of work activities. It can be used to write code, create marketing content, and provide customer service via chatbots.
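
To make the contrast with conventional AI concrete, the following is a minimal sketch of the generative pattern: a prompt describing an employee's learning style and goals is sent to a language model, which returns newly created training content. The openai Python SDK and the model name are assumptions for illustration; any Gen AI provider follows the same basic shape.

    # Minimal sketch of Gen AI generating a personalized training outline.
    # Assumes the `openai` SDK and an OPENAI_API_KEY in the environment;
    # the model name and prompt wording are illustrative choices.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    employee = {
        "role": "HR generalist",
        "learning_style": "short, hands-on exercises",
        "goal": "move into an HR analytics role",
    }

    prompt = (
        f"Create a four-week training outline for a {employee['role']} who prefers "
        f"{employee['learning_style']} and wants to {employee['goal']}."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable model works here
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)  # new content that did not exist before

Unlike the ATS screener above, which only filters existing data, this call produces new material tailored to one employee.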

AI adoption is expected to accelerate automation, with up to 29.5% of work hours in the U.S. economy automated by 2030, according to the McKinsey report. This applies not only to manual or routine tasks, but also eventually to areas requiring “creativity, expertise, and interaction with people.” A report from FlexOS looks at the top 150 Gen AI tools in use worldwide as of February 2024.

Challenges For Recruitment, Teams, and Employee Engagement

Research from the Columbia Business School indicates “integrating AI into human teams can impact performance and coordination, leading to a decline in productivity. Despite the productivity gains offered by AI, there is a notable human aversion to working with AI agents, which raises concerns about trust and job satisfaction – key competencies of employee engagement and retention.” Human resources professionals may need to create common guidelines and implement learning programs to reskill employees on the use of AI and to minimize trust issues by showing how AI tools enhance productivity.

Although a common use of AI by recruiters is to have it sort through thousands of applicants via an applicant tracking system (ATS), the process should not be completely automated. AI lacks the broad lens an experienced recruiter brings to hiring: it cannot look beyond the precise minimum job criteria (e.g., specific job titles or degrees earned) to reach candidates who do not exactly match them but whom a recruiter may recognize as having the capacity to learn, transferable skills, or other qualities appropriate for the position, as the sketch below illustrates. Moreover, if the person providing the criteria used to configure the ATS has unconscious biases, those biases will filter into the system. The result could be an ATS that unfairly excludes protected classes, potentially creating liability for the employer.
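
That narrowness is easy to see in code. Extending the hypothetical screener from earlier, a strict equality test on job title rejects a candidate whom a looser, recruiter-style match would advance; the titles and the related-titles set here are invented for illustration.

    # Illustration of how rigid ATS criteria exclude near-miss candidates.
    # The accepted title and the related-titles set are hypothetical.
    ACCEPTED_TITLE = "software engineer"

    def strict_match(title: str) -> bool:
        # Exact match only: how a narrowly configured ATS filter behaves.
        return title.lower() == ACCEPTED_TITLE

    def broad_match(title: str) -> bool:
        # Looser match a recruiter might apply: accept closely related titles.
        related = {"software engineer", "software developer", "backend developer"}
        return title.lower() in related

    candidate_title = "Software Developer"
    print(strict_match(candidate_title))  # False: rejected despite equivalent experience
    print(broad_match(candidate_title))   # True: a recruiter would advance this candidate

Whoever writes the ACCEPTED_TITLE line, or its real-world equivalent, determines which candidates are ever seen, which is exactly how unconscious bias in the criteria becomes bias in the output.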

Compliance with Existing Laws

There are currently no federal laws or regulations specifically governing the use of AI in the employment context. However, AI-assisted processes must comply with existing employment laws. The Equal Employment Opportunity Commission (EEOC) issued a technical assistance document in May 2023 explaining Title VII’s application to an employer’s use of artificial intelligence in the workplace. Among the topics addressed are:

  • Adverse Impact: If use of an algorithmic decision-making tool has an adverse impact on individuals of a particular race, color, religion, sex, or national origin, or on individuals with a particular combination of such characteristics, then use of the tool will violate Title VII unless the employer can show that such use is “job related and consistent with business necessity” pursuant to Title VII (a worked example of measuring adverse impact follows this list).
  • Selection Procedure: If an employer administers a selection procedure, it may be responsible under Title VII if the procedure discriminates on a basis prohibited by Title VII, even if the test was developed by an outside vendor.
  • Selection Tool: If an employer is in the process of developing a selection tool, and discovers that use of the tool would have an adverse impact on individuals of a particular sex, race, or other group protected by Title VII, it can take steps to reduce the impact or select a different tool to avoid engaging in a practice that violates Title VII.
  • ADA: The Americans with Disabilities Act (ADA) prohibits discrimination against individuals with disabilities in various areas, including employment. There are considerations around ensuring accessibility and non-discrimination for individuals with disabilities in the design and use of AI systems. For instance, there is potential employer liability if:
      • The employer does not provide a reasonable accommodation necessary for a job applicant or employee to be rated fairly and accurately by the AI algorithm;
      • The employer relies on an algorithmic decision-making tool that intentionally or unintentionally screens out an individual with a disability, even though that individual can do the job with a reasonable accommodation;
      • The employer adopts an algorithmic decision-making tool for use with its job applicants or employees that violates the ADA’s restriction on disability-related inquiries and medical examinations.
  • ADEA: The Age Discrimination in Employment Act (ADEA) prohibits employment discrimination against individuals 40 years of age or older. There is potential liability if an AI system inadvertently discriminates against older workers or contributes to age-related biases.
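
One way adverse impact is commonly measured, and one the EEOC’s technical assistance document discusses, is the “four-fifths rule” of thumb: if one group’s selection rate is less than 80% of the most-selected group’s rate, the tool may be having an adverse impact. The sketch below works through the arithmetic with made-up applicant counts.

    # Worked sketch of the "four-fifths rule" check for adverse impact.
    # Applicant and selection counts are invented for illustration.
    groups = {
        "Group A": {"applied": 200, "selected": 60},  # selection rate 30%
        "Group B": {"applied": 150, "selected": 27},  # selection rate 18%
    }

    rates = {name: g["selected"] / g["applied"] for name, g in groups.items()}
    highest = max(rates.values())

    for name, rate in rates.items():
        ratio = rate / highest
        status = "possible adverse impact" if ratio < 0.8 else "within the four-fifths threshold"
        print(f"{name}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {status}")
    # Group B's ratio is 0.60, well below 0.8, so the tool warrants closer review.

As the EEOC document notes, the four-fifths rule is a rule of thumb rather than a definitive legal test, so a passing ratio does not by itself establish compliance.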

Federal and Global Reactions to AI

The White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence, and published them in a document entitled Blueprint for an AI Bill of Rights. The document is a list of suggestions, not enforceable law, created in response to documented cases of “systems supposed to help with patient care [that] have proven unsafe, ineffective, or biased,” and “algorithms used in hiring and credit decisions [that] have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination.” Briefly, the five principles from the Blueprint for an AI Bill of Rights encompass:

  • You should be protected from unsafe or ineffective systems.
  • You should not face discrimination by algorithms and systems should be designed in an equitable way.
  • You should be protected from abusive data practices via built-in protections.
  • You should know that an automated system is being used.
  • You should be able to opt out.

The European Union has established the European AI Office, which will play a key role in implementing the proposed Artificial Intelligence Act (AI Act) by fostering the development and use of trustworthy AI and international cooperation. As proposed, the AI Act is a European regulation that assigns applications of AI to risk categories:

  • Unacceptable risk: AI used for social scoring (ranking people based on their personal characteristics, socio-economic status, or behavior).
  • High-risk: AI that poses significant threats to health, safety, or the fundamental rights of persons due to use in health, education, recruitment, critical infrastructure management, law enforcement or justice.
  • General-purpose: includes models like ChatGPT, which will be subject to transparency requirements.
  • Limited risk: AI subject to transparency obligations aimed at informing users that they are interacting with an artificial intelligence system (e.g., ones that generate or manipulate images).
  • Minimal risk: AI systems such as those used for video games or spam filters, likely to be subject to a suggested voluntary code of conduct.

Ethical Considerations for AI

Although AI can save time by automating many repetitive tasks, human resources is human-focused, and AI can never fully replace the meaningful connections human resources provides. AI has the potential to improve candidate sourcing and provide data-driven insights for strategic decision-making, but employers must recognize that using AI is not without risks. Implementing clear policies on data collection and usage, obtaining applicant and employee consent, and maintaining regular human oversight can help ensure information produced by AI is accurate and used appropriately. Effective AI processes and policies should include:

  • Educating employees about the ethical use of AI, including the potential for inadvertent copyright infringement claims from third parties when content is created by generative AI, and the risks of inappropriately uploading sensitive or proprietary information into AI platforms.
  • Ensuring employees possess the skills to interact with AI effectively and can recognize unethical or non-compliant AI practices (e.g., improper collection and handling of protected personal data, biased hiring decisions, invasion of privacy, loss of anonymity of employee/company/customer proprietary data, inequitable performance management or compensation decisions).
  • Ensuring transparency and accountability in AI-driven practices, data collection and decision-making processes.
  • Ensuring no discriminatory or biased practices occur.
  • Implementing measures to secure AI systems against malicious attacks and unauthorized access.
  • For AI that collects, stores, and analyzes personal data, ensuring that candidates and employees are fully informed about how and why their information is handled, stored, and safeguarded.