Ethical Implications of AI-Augmented Decision-Making

by Maren Misiaszek, Cognitive Science

Artificial intelligence (AI) increasingly shapes human decision-making. While its influence was once limited, transparent, and clearly distinct from human judgment, AI systems now filter and structure information in ways that are opaque. As AI-generated premises become foundational to human decision-making, the boundary between algorithmic influence and human judgment grows increasingly diffuse. The ethical implications can be far-reaching when decisions are based on premises that may not be explainable. This paper uses hiring as an exemplar to examine how AI shapes decision-making and to what extent the resulting decisions can still be considered authentically human. By recognizing the potential for AI to distort the premises on which humans base their decisions, we can establish ethical guardrails that help ensure those decisions remain grounded in truth and human accountability. No matter how sophisticated AI becomes, humans remain responsible for their decisions and the subsequent consequences, even when those decisions are shaped by AI-generated premises that may construct a flawed reality.

Keywords: AI, decision-making, transparency, ethical AI, data filtration


Introduction 

Human decision-making, when intertwined with AI augmentation, can seem like an esoteric topic, one best suited to academic discussions in austere halls. However, the relationship between human decision-making and AI augmentation is highly relevant, carries high-stakes consequences, and is bound to touch every area of our lives over the coming years. It needs to be discussed around kitchen tables, in the halls of Congress, in regulatory oversight bodies, and in large commercial enterprises. Humans need to be aware of how AI arrives at the ‘factual’ basis it presents to them, and they need to make fully informed decisions, cognizant of the provenance of the data and the premises AI offers up.

To clarify the extent of the problems surrounding human decision-making, we first have to consider the diffusion of boundaries between “real” and “fake.” We will explore an illustrative use case in which the line between AI-generated imagery and real imagery has blurred to the point that it could fool most humans. We will then show how this blurring between fake images and reality diffuses the boundaries of human decision-making. Next, we will examine the exemplar use case of candidate recruitment and acquisition and explore how human decisions are systematically eroded in the process. Lastly, we will illuminate potential mitigations surfaced in the literature, where mitigation, rather than elimination, remains the key concept. We can no longer separate the impact of AI augmentation from many aspects of life. As AI augmentation spreads, the boundaries around authentically human decision-making will increasingly blur. This diffusion of decision-making boundaries could be beneficial or detrimental, but regulation alone cannot stem the tide. To retain humanity in human decision-making, humans must be aware of the extent to which they base their decisions on premises constructed from AI-generated data.

Illustrative Use Case: The Flawed Assumption that Humans Own their Decisions 

To demonstrate how easily humans can be led to believe that a decision is entirely theirs, we first need to understand how humans naturally respond to AI extrapolations.

Much of modern image generation has been built on Generative Adversarial Networks (GANs). A GAN pits two neural networks against each other to produce a highly realistic image. The generator network starts with partial images or even random noise and refines its output, layer by layer, until an image emerges clearly. The opposing discriminator network then assesses how real that image appears. It has been trained on thousands of real images, though there is often no clear provenance for what it was shown as training data. The discriminator rejects the generator's image as fake if it cannot map it sufficiently onto the ‘real’ images it was trained on. Adversarial training keeps these two networks competing in a yin-and-yang fashion until the image is so close to reality that neither the discriminator nor humans can tell the difference (Blockchain Council, 2024).
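To make this adversarial loop concrete, here is a minimal sketch in PyTorch, assuming tiny fully connected networks and synthetic random data in place of real images; it illustrates the competition described above, not the architecture of any production image generator.

import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64          # assumed sizes, for illustration only

# Generator: maps random noise to a candidate sample it tries to pass off as real.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim), nn.Tanh())

# Discriminator: scores how "real" a sample looks (higher logit = more real).
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, data_dim)   # stand-in for a batch of real training images
    fake = generator(torch.randn(32, latent_dim))

    # Discriminator update: learn to label real data 1 and generated data 0.
    d_loss = (bce(discriminator(real), torch.ones(32, 1)) +
              bce(discriminator(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: learn to make the discriminator label its output as real.
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

The two networks improve only by outdoing each other, which is why the eventual output can fool the discriminator, and humans, without the system ever “knowing” what it has drawn.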

Generative image models are now so effective that they can produce extrapolations people marvel at, such as the DALL-E extension of Vermeer’s Girl with a Pearl Earring below.

Figure 1. Image generation of Vermeer’s Girl with a Pearl Earring as prompted by August Kamp and as rendered by DALL-E.

This seems like innocuous fun, often leaving us impressed by generative AI’s capabilities. But generative AI’s skill at predicting a plausible setting and generating realistic image extensions, while being unable to account for the sources of its training data, remains troubling. If humans can no longer discern the original image from the generated extrapolation, then they cannot confidently base decisions on whatever else AI is extrapolating. This blurring of boundaries between AI augmentation and human decision-making can have deadly consequences.

Figure 2. Image generation based on prompt by Maren Misiaszek “photo-realistic image of soldier with keffiyeh red and white” and as rendered by Gencraft on 4/29/2024.

AI has been used for years in warfare to augment human decisions. Drone technology was used by NATO to identify hidden Serbian strategic positions during the Kosovo War in 1999, and the U.S. military increasingly used drones with lethal payloads in Afghanistan between 2010 and 2020, all guided by AI (Humble, 2024). With generative AI’s capabilities enriched by GANs, this augmentation of human decisions has been taken to an entirely new level.  

A soldier on a rooftop may spot movement in a window above his advancing team on the ground. Generative AI fills in the partial image and identifies the person in the window as an enemy target. It even aligns the crosshairs on the enemy target before prompting the soldier to select “Approve” or “Abort.”  

Departments of Defense assuage ethical concerns with the assurance that a human makes the final decision. But if all the data points and incremental conclusions are compiled by algorithms, is it truly a human decision?

Figure 3. Image generation based on prompt by Maren Misiaszek “photo-realistic image of child with black hair at window with red and white checkered curtains” and as rendered by Gencraft on 4/29/2024.

Holland Michel begs to differ: “The gunsight never pulls the trigger. The chatbot never pushes the button. But each time a machine takes on a new role that reduces the irreducible, we may be stepping a little closer to the moment when the act of killing is altogether more machine than human, when ethics becomes a formula and responsibility becomes little more than an abstraction” (2023). 

Generative AI is trained to confabulate. It predicts the next most probable word or, in this case, the next most probable layer of pixels. Yet it has no understanding of the text or image it renders. It has no morality. No consciousness. What if its image completion is predicated on biases or fears tied to a specific geographical region?

What if soldiers’ fears and assumptions shaped the data that generative AI was trained on? What if the truth is closer to the generated image above (Figure 3)?

Figure 4. Side-by-side rendering of excerpts from Figure 2 and Figure 3

Could these two partials have been misconstrued? How would it have affected the soldier’s ‘decision’ to abort or approve the kill shot if he had more information about the partials, or about the AI generator’s confidence in expanding the image? Humans believe what they see. The realism that AI extrapolation can achieve is so powerful that humans who have not been trained to consider the provenance of data, the tuning of extender models, and the possibility that the extrapolation may be entirely wrong may blindly accept AI augmentations as fact. They then base life-and-death decisions on a set of potentially flawed, amoral algorithms.

While most hiring managers do not face life-and-death decisions when evaluating generative AI’s shortlist of candidates, they still need to weigh the benefits, risks, and potential downstream impact of basing their decisions on AI’s filtration and selection of candidates. With concern mounting in European countries and the expectation of fairness spreading, human hiring managers need to be able to justify their decisions.

Exemplar Use Case: Recruitment, Ethics, and the Erosion of Human Decision-Making Boundaries

One of the best-established and fastest-growing applications of AI is the recruiting and hiring process. An Applicant Tracking System (ATS) drastically streamlines candidate screening, but the risk of inadvertently introducing bias remains a significant ethical challenge. We are well past the point where we can demand full interpretability and explainability; the complexity of black-box AI algorithms stumps even the data scientists who wield the technology. These factors should increase the demand for accountability measures and regular audits to ensure fairness and transparency to the extent possible. There is a constant tension between accepting the premises of an AI-driven ATS in recruitment and the right of all data subjects, in this case candidates, to be treated fairly. So how did we get to this point?

At the turn of the century, keyword-based algorithms extracted terms from résumés and mapped them to the keywords in a particular job description, which only worked for highly technical roles. Then, around 2015, Amazon attempted a breakthrough. The company trained its algorithms on the résumés of high-performing employees, hoping the algorithms would apply that learning to the large candidate pool that applied to Amazon every year. This worked well until it became apparent that the algorithms were favoring males (Dastin, 2018). Being trained on masculine language and men’s résumés produced an ingrained bias that proved impossible to extract. The bias was not explicitly programmed into the algorithm but emerged from the data it was trained on. Years of effort to mitigate the bias were wasted as the algorithms continued to exhibit discriminatory behavior. In the end, Amazon suffered lasting reputational damage for what was perceived as an unethical hiring approach, and the company permanently disbanded the group responsible for the project. The ingrained bias was elusive enough to avoid initial detection but persistent enough to keep rearing its head, all because of the data the system was initially trained on.
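To see how such bias can emerge from data rather than from code, consider the following toy sketch, assuming synthetic résumé features and the scikit-learn library; the “women’s keyword” flag, the weights, and the historical-hiring rule are invented for illustration and are not Amazon’s actual model.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
skill = rng.normal(size=n)                    # genuinely job-relevant signal
womens_keyword = rng.integers(0, 2, size=n)   # e.g. a résumé mentions a women's club

# Hypothetical historical labels: past hiring tracked skill, but résumés with the
# keyword were hired less often regardless of skill, reflecting a skewed pool.
hired = (skill + rng.normal(scale=0.5, size=n) - 0.8 * womens_keyword) > 0

model = LogisticRegression().fit(np.column_stack([skill, womens_keyword]), hired)
print("learned weight on skill:           %+.2f" % model.coef_[0][0])
print("learned weight on women's keyword: %+.2f" % model.coef_[0][1])  # negative

Nothing in the code mentions gender, yet the fitted model penalizes the keyword because the historical labels did; retraining on the same data reproduces the same penalty, which is the persistence the Amazon team ran into.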

One study that examined the hiring practices of a global enterprise highlights the benefits of AI constructing “ideal candidate profiles” but reminds us that AI may be limited in its ability to make decisions free of bias and discrimination. The author cautions against the enterprise’s adoption of a ‘blind faith’ approach and states that “heavy reliance on the purported benefits of AI could increase the risks in the absence of the appropriate checks and balances to ensure the integrity of the technology” (Sposato, 2021). This ‘blind faith’ approach results from over-reliance on the objectivity of AI, and that over-reliance in turn can lead hiring managers to accept a shortlist of candidates unquestioningly, or to grow comfortable with AI whittling a pool of candidates down to a top five who seem remarkably similar.

Overcoming Obstacles to Full Human Agency in Decision-Making 

When a decision rests on AI-augmented summarization, filtration, extrapolation, or segmentation, several obstacles stand in the way of truly owning it.

Human agency requires transparency about the provenance of the data that underlies the premise. Was the data itself AI-generated? From where was it extracted? Where did it originate, and is the source of record trustworthy? What data was the model trained on? How was that training data tested for bias?

The issue with transparency, so often demanded by oversight bodies, is the inherent black box. On the one hand, developers want to protect proprietary techniques and specialized algorithms. On the other hand, data scientists are not always entirely sure how the algorithms learn; the black boxes can be rather opaque to them too.

However, Wachter et al. argue that “explanations of automated decisions need not hinge on the general public understanding how algorithmic systems function. Even though such interpretability is of great importance and should be pursued, explanations can, in principle, be offered without opening the black box.” They suggest the following three points as an alternative to demanding full explainability (Wachter et al., 2018), illustrated by the brief sketch after the list:

  1. Inform and help the individual understand why a particular decision was reached 
  2. Provide grounds to contest the decision if the outcome is undesired 
  3. Understand what would need to change in order to receive a desired result in the future, based on the current decision-making model 
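A brief sketch of what such a counterfactual explanation could look like for a hiring score, assuming a hypothetical linear scoring model, invented feature names, and an arbitrary shortlist threshold (none of which come from a real ATS):

import numpy as np

FEATURES = ["years_experience", "skills_matched", "certifications"]
WEIGHTS = np.array([0.4, 0.5, 0.1])    # assumed model weights
THRESHOLD = 5.0                        # assumed shortlist cut-off

def score(x: np.ndarray) -> float:
    return float(WEIGHTS @ x)

def counterfactual(x: np.ndarray, step: float = 0.25) -> dict:
    """Find the smallest single-feature increase that flips the outcome.
    A real system would normalize features so 'smallest' is meaningful."""
    best = None
    for i, name in enumerate(FEATURES):
        delta, candidate = 0.0, x.copy()
        while score(candidate) < THRESHOLD and delta < 20:
            delta += step
            candidate[i] = x[i] + delta
        if score(candidate) >= THRESHOLD and (best is None or delta < best[1]):
            best = (name, delta)
    return {"increase": best[0], "by": round(best[1], 2)} if best else {}

applicant = np.array([3.0, 4.0, 1.0])
print("score:", score(applicant), "shortlisted:", score(applicant) >= THRESHOLD)
print("counterfactual:", counterfactual(applicant))

Such an explanation tells a rejected candidate why the score fell short and what would have to change to reach a different outcome, without disclosing the vendor’s model internals.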

Lack of transparency about data and black boxes is only the external part of the problem. Humans also need to overcome an internal proclivity for accepting AI-generated content as factual by default. This awareness is critical to avoiding the ‘blind faith’ approach.

The last obstacle to overcome is an all-or-nothing approach. The exemplar use case highlights the potential for bias and questionable ethics to enter the candidate selection process. That does not mean all hiring should once again be left to humans alone; recruitment lends itself well to a hybrid approach.

Instead of treating all roles in the recruitment process as equal and segmenting candidates on traits where ethically questionable approaches can creep in, the roles themselves are segmented when hiring data is analyzed and the right workflow is implemented for each role. Meaningful, actionable segments are then created, such as the following four categories (a brief sketch of how such routing might look follows the list):

High Complexity/Low Repeatability  
These are C-Suite roles and other pivotal leadership and operational positions. In addition to augmented sourcing, a high-touch, high-tech workflow is implemented from start to finish. 

Low Complexity/High Repeatability  
These are the typical roles organizations are filling all the time. This is an ideal type of role where self-service and automation can be implemented to keep things simple and fill roles fast. 

High Complexity/High Repeatability  
These roles require specialized skills, experience, and talent that elicit intense competition. This is where algorithms can source and cultivate a pipeline of top candidates. 

Low Complexity/Low Repeatability 
These are fundamental roles that sporadically need data-driven sourcing to keep the pipeline stocked so the hiring team can act fast when requisitions open. 
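As a sketch of how such routing might be expressed, the following illustrative Python snippet maps the two dimensions to a workflow; the role names and workflow labels are hypothetical, and a real ATS would derive complexity and repeatability from hiring data rather than hard-coded flags.

from dataclasses import dataclass

@dataclass
class Role:
    title: str
    high_complexity: bool
    high_repeatability: bool

def workflow(role: Role) -> str:
    # Route each segment to the treatment described above.
    if role.high_complexity and not role.high_repeatability:
        return "high-touch, high-tech workflow with augmented sourcing"
    if not role.high_complexity and role.high_repeatability:
        return "self-service and automation for fast, simple fills"
    if role.high_complexity and role.high_repeatability:
        return "algorithmic sourcing of a cultivated candidate pipeline"
    return "periodic data-driven sourcing to keep the pipeline stocked"

for r in [Role("Chief Financial Officer", True, False),
          Role("Customer Support Agent", False, True),
          Role("Machine Learning Engineer", True, True),
          Role("Facilities Coordinator", False, False)]:
    print(f"{r.title}: {workflow(r)}")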

This hybrid approach balances positive candidate experiences with efficiency and scalability, while remaining aware of the need to retain ownership of human decisions within an ethical framework. 

The call, therefore, is not for new models that will somehow guarantee fairness, nor for dubious assurances that AI algorithms provide a solid foundation for unbiased decisions. Humans, aware of their own shortfalls in decision-making, should keep taking advantage of the drastically improved efficiency that AI offers. At the same time, they should remain equally aware of AI’s shortfalls. Perhaps, at this stage of our journey with AI, awareness is as good as it gets.

Conclusion 

AI holds tremendous potential for improving the efficiency and quality of hiring processes. Likewise, it holds promise for most areas of our lives, augmenting human intelligence, scaling human capability, automating repetitive tasks, fostering innovation, and advancing scientific research. Along with the promise, however, come profound ethical concerns, especially around bias and discrimination, as highlighted in the exemplar use case. The brief analysis in this paper illuminates the critical need for a balanced approach that recognizes the advantages of AI while addressing its ethical challenges head-on. Hybrid models that integrate bias-resilient prototypes and ensure transparency, accountability, and adherence to ethical AI practices provide a pragmatic approach. But the tension between strict regulatory oversight and the lure of increasing productivity and decreasing operational cost by employing AI indiscriminately creates an environment that makes it harder for humans to make distinctly human choices. Awareness of AI’s influence on human choices is an important first step. Recognizing that the boundaries between AI and human decision-making are diffuse is equally important.

But perhaps most foundational is the understanding that AI is inherently flawed—because it learns from flawed humans who unwittingly reinforce those flaws. As AI continues to evolve, it may learn to mitigate some of these imperfections, but it will never be immune to the limitations of its origins. Humans must critically examine the premises on which they base their decisions, especially when those premises are shaped by AI. The compelling authority of AI-generated insights and the perception of infallibility can lead to overconfidence. AI can augment our understanding of complex situations, but it cannot replace human judgment. No matter how sophisticated AI becomes, humans alone bear the full weight of their decisions and are not exonerated from the subsequent consequences.

References 

Blockchain Council. (2024, April 6). Top 10 generative AI examples you need to know. Retrieved April 29, 2024, from https://www.blockchain-council.org/ai/generative-ai-examples/

Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. Retrieved April 2024, from https://www.reuters.com/article/idUSKCN1MK0AG

Holland Michel, A. (2023, August 16). Inside the messy ethics of making war with machines. MIT Technology Review. Retrieved April 2024, from https://www.technologyreview.com/2023/08/16/1077386/war-machines/

Humble, K. (2024, July 12). War, artificial intelligence, and the future of conflict. Georgetown Journal of International Affairs. Retrieved March 2025, from https://gjia.georgetown.edu/2024/07/12/war-artificial-intelligence-and-the-future-of-conflict/

Sposato, M. (2021). Opportunities and risks of artificial intelligence in recruitment and selection. International Journal of Organizational Analysis. Retrieved April 2024, from https://www.researchgate.net/profile/Martin-Sposato-2/publication/352464056_Opportunities_and_risks_of_artificial_intelligence_in_recruitment_and_selection/links/623998ec54e2be6c9940d9d0/Opportunities-and-risks-of-artificial-intelligence-in-recruitment-and-selection.pdf

Wachter, S., Mittelstadt, B., & Russell, C. (2018). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law & Technology, 31(2).

Wardini, J. (2023). 12 must-know statistics on how many companies use AI in hiring. Retrieved April 2024, from https://artsmart.ai/blog/how-many-companies-use-ai-in-hiring

