"Open" generative artificial intelligence (AI) systems foster collaboration and rapid innovation, while "closed" systems emphasize control, security and competitive advantage. HR leaders must consider their specific objectives for using AI, the associated risks and ethical considerations.

Author: Rebecca Starr


This article is the fourth in a series on artificial intelligence (AI) in human resources. This discussion outlines what HR specialists need to know about the differences between open and closed AI systems and their pros and cons.

As the fourth installment, this piece follows the introductory article on the human aspect of HR and subsequent articles on the fundamental risks of generative AI and best practices, and on how experimentation with AI can unlock real value for HR. We invite you to share your experiences with AI in HR for possible inclusion in future series installments.

The imperative of protecting employee data

The rapid evolution from publicly accessible or "open" AI systems to "closed" or proprietary systems reflects a growing recognition of the risks discussed in our second article, including the need to protect sensitive employee data — personally identifiable information (PII) and protected health information (PHI). In this article, we look at the differences between open and closed systems in the context of HR functions.

Imagine this situation:

A private health condition surfaces in a public chat group after your benefits manager accesses insurance claims data through an open AI platform. The affected employee sues for breach of privacy and further claims that the disclosure resulted in the loss of a higher-paying position at another organization. According to the employee, the prospective employer extended a verbal offer, then unexpectedly withdrew it without explanation. The employee believes that organization learned of the costly medical condition and wanted to avoid the higher medical cost.

This fictional scenario isn't out of the realm of possibility when an employer organization uses an open AI system. The above example represents one of many potential nightmare situations for HR departments deploying generative AI platforms when leaders lack a thorough understanding of the differences between open and closed systems.

Balancing innovation with data security

Before we compare open and closed AI systems, it's helpful to understand the evolution of generative AI models. Openness and collaboration characterized the early generative AI models released to the public and continue to enable valuable innovation. However, concerns about misuse and unintended consequences quickly arose, and the conversation shifted to a debate around control, responsible development, and safety and ethical standards. Business leaders in every sector grappled with whether and how to use AI to gain a competitive advantage without jeopardizing operations.

This debate led to proprietary systems, or "closed AI," in which only individuals within a defined community may access the training data and algorithms. By design, closed systems protect proprietary information and ensure the control and accuracy of training datasets. These systems strictly prohibit external interaction and data modification, theoretically eliminating problems associated with AI "hallucinations" — inaccurate or nonsensical outputs.

Closed systems address some core concerns with generative AI, notably protecting employees' PII and PHI. Yet they detract from some benefits of open AI and create new issues. As a result, some organizations are adopting a hybrid approach. This evolution from open to closed to hybrid reflects the complex balance between fostering innovation and ensuring the responsible use of AI technologies.

The evolution of generative AI will continue. HR leaders sitting on the fence, hoping for a clear sign to jump into the AI arena, risk falling behind if they wait much longer. If you're among the fence sitters — or are seeking reassurance that you got off the fence in a good place — the comparison in the following section weighs open against closed systems.

Comparing open with closed AI systems

Open generative AI systems foster collaboration and rapid innovation, while closed systems emphasize control, security and competitive advantage. Our conversations with employers confirm that most organizations value all these attributes.

To focus the decision-making process, HR leaders must consider their specific objectives for using AI, the associated risks and ethical considerations. Understanding how open and closed systems compare on a point-by-point basis may inform your considerations.


  • Transparency. Open AI: open to the public, with a shared architecture that encourages community review and contributions. Closed AI: private to a defined community; protects proprietary information.
  • Innovation. Open AI: rapid innovation through shared knowledge and collective problem-solving. Closed AI: slower innovation, limited to a closed community.
  • Control. Open AI: low control; data training and sharing are transparent. Closed AI: allows for internal control; training data may be kept secret.
  • Problem identification. Open AI: greater opportunity to identify problems because of the size of the community. Closed AI: significantly less potential for data problems, but fewer users to flag them.
  • Bias. Open AI: easier to identify bias because training data sources are transparent. Closed AI: harder to identify bias; existing internal bias is transferred to the AI system.
  • User support. Open AI: typically less support because of the lack of a controlled infrastructure. Closed AI: opportunity for user support (internal or outsourced).
  • Ethical considerations. Open AI: broader scrutiny can lead to better ethical practices, but open access increases the risk of misuse. Closed AI: reduces the misuse associated with open access.
  • Regulatory compliance. Open AI: demands substantial resources to manage compliance; higher risk. Closed AI: can define rules to ensure compliance.
  • Competitiveness. Open AI: shared, collaborative approach. Closed AI: emphasizes data protection and enables competitive advantage.

Is hybrid AI the answer?

A hybrid AI system combines open-source elements with security and usage controls, enabling organizations to capture the advantages of both models while mitigating their risks. Beyond the benefits of open and closed AI outlined in the comparison above, a hybrid platform offers additional advantages:

  • Flexibility. Organizations can customize and tailor a system to their specific needs. A hybrid approach enables the innovation associated with open AI while segmenting proprietary data and sensitive PII and PHI behind a firewall with robust security controls (a routing pattern along these lines is sketched after this list).
  • Cost efficiency. Closed AI systems require an investment that may not be practical for small- to mid-size organizations. Taking advantage of free, open-source elements for some functions can significantly reduce the development costs of a proprietary, closed AI system.
  • Speed. Organizations can reduce the time required to introduce generative AI by combining adaptable open-source technologies with proprietary solutions.
  • Misuse monitoring. A hybrid approach delivers a larger community to identify misuses, such as bias and unethical applications, while maintaining control over sensitive information.
  • "Human-centricity." AI can boost efficiency and provide decision-making insights for many HR functions, but it can't replace the human touch. A hybrid model allows humans and technology to work together, with humans applying knowledge to situations that lack training data and protecting sensitive information. Adopting a hybrid approach conveys to employees that you value a human-centric HR function.

Benefits administration changes the AI conversation

This discussion intentionally doesn't address the fit of specific HR applications with open or closed systems. Earlier articles in the series explored the opportunities and risks of open systems. AI's value in HR lies in supporting productivity and enhancing communications, which open AI systems can accomplish for many functions.

Using AI for benefits administration takes the conversation to the next level. Numerous benefits administration applications and advantages exist, along with more risks — most obviously the potential to violate PII and PHI protections. Any HR function using this data must be confined to a closed system (recall the scenario described above).
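Even within a closed system, a common additional safeguard is to de-identify benefits data before it reaches any generative AI tool. The following sketch illustrates the idea with hypothetical masking rules; a real deployment would rely on a vetted de-identification library or service rather than a short list of regular expressions.

```python
import re

# Hypothetical masking rules mapping sensitive patterns to placeholder tokens.
# These formats are assumptions for illustration, not real identifiers.
MASKING_RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),       # US Social Security number format
    (re.compile(r"\b[A-Z]{2}\d{6,10}\b"), "[MEMBER_ID]"),  # assumed member-ID format
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),      # dates of service, birth dates, etc.
]

def mask_sensitive_data(text: str) -> str:
    """Replace likely PII/PHI with placeholders before text reaches any AI system."""
    for pattern, placeholder in MASKING_RULES:
        text = pattern.sub(placeholder, text)
    return text

claim_note = "Member AB1234567, SSN 123-45-6789, filed a claim on 02/14/2025."
print(mask_sensitive_data(claim_note))
# Prints: Member [MEMBER_ID], SSN [SSN], filed a claim on [DATE].
```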

Using AI for benefits administration requires employers to make philosophical choices about their level of involvement in employees' benefits selection. Do you want to be a "helicopter" or "free-range" employer? The next and final article in the series will explore that topic.

Recommended resources

Lawton, George. "Attributes of Open vs. Closed AI Explained," TechTarget, 8 Jul 2024.

Palamarchuk, Natalia. "How Much Does AI Cost? Pricing Factors and Implementation Types Explained," Flyaps, 18 Mar 2024.


Disclaimer

Consulting and insurance brokerage services to be provided by Gallagher Benefit Services, Inc. and/or its affiliate Gallagher Benefit Services (Canada) Group Inc. Gallagher Benefit Services, Inc. is a licensed insurance agency that does business in California as "Gallagher Benefit Services of California Insurance Services" and in Massachusetts as "Gallagher Benefit Insurance Services." Neither Arthur J. Gallagher & Co., nor its affiliates provide accounting, legal or tax advice.