A few years ago, talk of artificial intelligence (AI) for human resources was mainly marketing spin for hyper-automation technology. The advent of ChatGPT from OpenAI® and other generative AI tools changed that. AI joins a relatively short list of revolutionary developments that changed the world, from the creation of the wheel to the development of the internet. Despite the technology's early stage of development, AI has already had a significant impact on employers, and organizations are actively exploring ways to use it.
IBM's 2023 CEO study found that 50% of CEOs integrate generative AI into digital products and services, another 43% use it to inform strategic decisions, and 36% use it for operational decisions. Yet only 29% of these CEOs' executive teams feel they have the in-house expertise to adopt generative AI. Further, only 30% of non-CEO senior executives surveyed said their organization was ready to adopt generative AI responsibly.1
Some HR leaders rush to embrace ChatGPT while others ignore it, fearing unintended consequences or, worse, that it will replace them. Yet there is a lot of space between these two responses, and that space is where HR needs to be. Properly understood and used responsibly, AI can perform routine transactional tasks, freeing HR time for more strategic activities. As for the fear factor: AI won't take your job, but the person who knows how to use it effectively may.
The following information may help HR leaders find comfort in the in-between space.
AI is not the same as hyper-automation
Many confuse AI with hyper-automation. Hyper-automation makes no real decisions. Instead, simple rule-based logic executes predefined steps far more quickly than people could manage them in a slower, multi-step fashion. For example, approving paid time off (PTO) requires only one human action: manager approval. Automation can handle every other step, including confirming the earned time, notifying the requester of approval and correctly logging the PTO in the time system.
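To make the distinction concrete, here is a minimal sketch of such a PTO workflow in Python. The function and system names (check_accrued_balance, notify_requester, log_to_time_system) are hypothetical placeholders, not references to any real HR platform; the point is that every step except the manager's approval is plain rule-following, with no machine judgment involved.

```python
from dataclasses import dataclass

@dataclass
class PTORequest:
    employee_id: str
    hours_requested: float
    manager_approved: bool  # the single human decision in the workflow

def check_accrued_balance(employee_id: str) -> float:
    # Placeholder: a real implementation would query the HRIS.
    return 80.0

def notify_requester(employee_id: str, message: str) -> None:
    # Placeholder: a real implementation would email or message the employee.
    print(f"[{employee_id}] {message}")

def log_to_time_system(employee_id: str, hours: float) -> None:
    # Placeholder: a real implementation would write to the time system.
    print(f"[{employee_id}] logged {hours} PTO hours")

def process_pto_request(req: PTORequest) -> bool:
    """Every step except manager approval is rule-based automation:
    no model, no learning, no judgment by the machine."""
    if not req.manager_approved:
        notify_requester(req.employee_id, "Request is pending manager approval.")
        return False
    if check_accrued_balance(req.employee_id) < req.hours_requested:
        notify_requester(req.employee_id, "Insufficient accrued PTO.")
        return False
    log_to_time_system(req.employee_id, req.hours_requested)
    notify_requester(req.employee_id, "PTO approved and logged.")
    return True
```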
Conversely, AI comprises the ability of machines to perform tasks associated with human intelligence, such as learning and problem-solving. Generative AI uses a large language model fed an immense amount of data, enabling an algorithm to determine the output of a query. The machine also learns with each query and decision, accumulating more data to respond to future queries. Machine learning shouldn't occur in a vacuum, however. A human touch is essential for the responsible and ethical use of AI, hence the ongoing need for HR involvement and guidance.
Good AI and not-so-good AI
In August 2023, MIT Technology Review reported research on 14 large language models, revealing outputs rife with bias.2 AI language models reflect the biases in their training data and those of the people who created and trained them. With correct inputs, managed expectations and human review of outputs, organizations can harness AI's power for good, improving productivity and the clarity of language and intent. The absence of any one of these three conditions can lead to not-so-good AI outcomes.
The growth of generative AI models has dramatically changed the threat landscape. Cybercriminals are using deceptive chatbot services to facilitate destructive activities. In July 2023, the data analytics platform Netenrich® uncovered a new AI tool sold on the dark web called "FraudGPT," explicitly built for malicious activities such as writing phishing emails and creating cracking tools to break security measures.3 Such threats are real, and organizations struggle to stay ahead of cybercriminals.
Not all threats are malicious
Threats aren't limited to malicious software. Consider a 2023 personal injury lawsuit in which a lawyer with 30 years of experience used ChatGPT to prepare his briefs. ChatGPT fed him six non-existent court decisions, which he cited to bolster his case. When it was discovered that no such case law existed, the embarrassed attorney admitted that he "was unaware of the possibility that its content could be false" and accepted responsibility for not confirming the chatbot's sources. The attorney and his firm were sanctioned and fined.4
In its current stage, generative AI can't be trusted without human oversight. The technology takes snippets of human-created information from the web, splices them together and presents them as fact. AI cannot discern true from false; only a human can. HR users must review outputs and independently confirm that the information is factual and, in the case of subjective information, logical for its designated purpose.
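As a minimal illustration of that review step, the sketch below wraps a generative model call in a human-approval gate. The query_model function is a hypothetical stand-in, not a specific vendor API; the structural point is that nothing the model produces is released without a trained reviewer's sign-off.

```python
def query_model(prompt: str) -> str:
    # Placeholder for a call to whatever generative AI tool the
    # organization has approved; swap in the real client here.
    return "Draft answer that may contain unverified claims."

def human_approved(draft: str) -> bool:
    # The essential human checkpoint: a trained reviewer confirms the
    # facts and sources before anything leaves HR.
    print("--- DRAFT FOR REVIEW ---")
    print(draft)
    return input("Approve for use? (y/n): ").strip().lower() == "y"

def generate_with_oversight(prompt: str) -> str | None:
    draft = query_model(prompt)
    if human_approved(draft):
        return draft
    return None  # unreviewed or rejected AI output is never released

if __name__ == "__main__":
    answer = generate_with_oversight("Summarize our PTO carryover policy.")
    print("Released." if answer else "Held for revision.")
```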
There's no question that AI will improve as it evolves, and much faster than its revolutionary predecessors. Consider that the internet is roughly 40 years old, yet IBM already offers an AI certification for tech professionals. Organizations must take a monitoring approach. HR's use of AI requires properly trained staff who understand AI's value and limitations and know how to use it ethically and responsibly. Toward that end, new tools are available to help manage AI risk. Cutting-edge technology can monitor your AI use against your business strategy as reflected in your policies and definitions, assess the use of AI in consequential decisions for your organization, and report on the success or failure of your model.
Regulation lags behind AI technology
Regulation associated with AI is emerging worldwide. Italy stepped up as the first country to ban ChatGPT while its regulators determined appropriate use and consequences. As of October 2023, 36 countries, primarily authoritarian ones, had banned ChatGPT. Other jurisdictions, including the European Union and China, are developing tailored rules for AI. In the US, the education community is leading the way in raising concerns about generative AI: several large school districts have banned ChatGPT due to plagiarism and accuracy concerns, and others have blocked access to it from their systems.
In August 2023, the US Equal Employment Opportunity Commission (EEOC) settled its first-ever lawsuit over AI discrimination in hiring, involving a tutoring company that allegedly programmed its recruitment software to reject older applicants.5 Although the case settled, the lawsuit sent a clear signal that employers using AI in the hiring process can be held liable for unintended discrimination.
States and municipalities are moving faster than the federal government to regulate the use of AI in the hiring process. Recent legislation from across the country addresses the following:
- Notifying candidates of the use of AI and how it works
- Requiring candidate consent to use AI to assess candidate-supplied information
- Dictating a candidate's right to know what data is collected and analyzed, how long data may be kept and with whom information may be shared
- Prohibiting the use of facial recognition software during a video interview without consent
- Requiring an annual independent check of the software for bias
Still, these laws lag behind the technology.
Don't fear AI; control it through responsible and ethical use
Understandably, many HR leaders feel overwhelmed with the prospect of staying on top of the rapidly developing technology and associated regulations. Unfortunately, the onus is on employers to vet AI tools and validate that there is no discrimination because AI vendor contracts typically include non-liability clauses. Given this responsibility, organizations may be tempted to ignore the benefits of generative AI in hiring and elsewhere and block its use. However, AI can enhance HR strategy, and organizations using it may gain a competitive advantage. Gallagher's advice is not to fear AI but to control it.
The following four broad "rules" can help to guide responsible and ethical use of AI in HR:
- Data entry. Never enter information classified as "confidential" or "restricted" into an unapproved AI system. (A simple pre-submission check is sketched after this list.)
- AI systems for company business. Actively monitor information uploads from the organization's network to publicly accessible AI tools. Reserve the right to take responsive action or block usage.
- Support productivity. Allow employees to use AI for productivity, provided they abide by rules 1 and 2. Practical applications include drafting sample job descriptions, removing gender bias, auto-posting job descriptions to targeted hiring sites, scheduling candidate interviews and summarizing employee policies. A human should always review the output for accuracy.
- No public sharing. Never share output from a company-approved AI system outside the company.
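To illustrate rule 1, here is a minimal sketch of a pre-submission check that blocks classified content from reaching an unapproved AI tool. The tool names, classification labels and classify_text helper are hypothetical; a real deployment would hook into the organization's data loss prevention or classification service rather than simple keyword matching.

```python
BLOCKED_CLASSIFICATIONS = {"confidential", "restricted"}
APPROVED_AI_TOOLS = {"internal-hr-assistant"}  # hypothetical company-approved system

def classify_text(text: str) -> str:
    # Placeholder: a real implementation would call the organization's
    # data loss prevention (DLP) or classification service, not keywords.
    markers = ("confidential", "restricted", "ssn", "salary")
    return "confidential" if any(m in text.lower() for m in markers) else "public"

def safe_to_submit(text: str, tool: str) -> bool:
    """Enforce rule 1: classified data never goes to an unapproved AI tool."""
    if tool in APPROVED_AI_TOOLS:
        return True  # approved systems handle classified data per policy
    return classify_text(text) not in BLOCKED_CLASSIFICATIONS

# A prompt containing classified data is blocked from a public chatbot.
print(safe_to_submit("Attached: employee SSN and salary data", tool="public-chatbot"))  # False
print(safe_to_submit("Draft a generic job description", tool="public-chatbot"))  # True
```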
As you leverage AI opportunities in a compliant manner, develop use cases to establish success scenarios, failure scenarios and variants or exceptions to guide future use.
Human oversight is HR's superpower
AI is here to stay and will almost certainly become more powerful. Organizations that refuse to consider how it can benefit productivity and outcomes risk losing their competitive edge. Responsible and ethical use of AI demands the involvement of a human, especially in HR applications. The need for a constant human touch is HR's "superpower." Used for good, AI can enable HR to do more with less, execute strategically, maximize productivity, achieve organizational goals and retain talent. So, while there is no need to fear AI, it demands vigilant oversight by trained humans.
Gallagher's Human Resources Technology Consulting practice can work with you to optimize AI and other HR technology so your organization operates more efficiently. Let us help your team face the future with confidence.
Call us at 1.800.821.8481 or request that we contact you.