
AI at Work: Building a Future-Ready Workforce

By Natalie Pierce and Stephanie Goutos


Introduction

Artificial intelligence (AI) is everywhere.  It’s transforming the way we do business, infiltrating our strategic roadmaps, and redefining the benchmarks of success.  Nowhere do we feel this more than in our workplaces.  AI tools are being used to recruit, onboard, train, manage, retain, and even predict employees’ future decisions.  At the employee level, AI tools are being used to draft content, automate tasks, brainstorm ideas, analyze data, and much more.  The line is becoming increasingly blurred between independent human judgment and AI-augmented decision making.


Successfully leveraging this technology demands an environment of continuous learning, upskilling, and adaptability.  Maintaining legal compliance within the fluid framework of developing legislation will also be essential.  Recent guidance from federal agencies like the Equal Employment Opportunity Commission (EEOC) and Department of Justice (DOJ) offers a starting point concerning algorithmic decision-making tools and preventing discrimination, and there is more regulation on the horizon.


The comprehensive Executive Order issued by President Biden on October 30, 2023, sets forth rigorous standards and mandates that agencies evaluate AI risks and develop guidelines to mitigate harms while maximizing benefits for workers.


This evolving landscape poses both challenges and opportunities for business leaders.  Whether beginning the AI adoption journey or reassessing current strategies in light of rapid change, the key is to adopt a mindset of “progress, not perfection.”  The critical question is not whether you have all of the answers now, but how to continuously future-proof your workforce as legal standards evolve.  Today’s leaders should be asking:


  • How can we leverage AI to enhance our employees’ skills and productivity?


  • What steps can we take to ensure our AI practices comply with the latest legal guidance and standards?


  • How can we foster a culture of continuous learning and innovation to keep ahead of competitors?

 

This article aims to guide employers in navigating AI adoption, ensuring legal compliance, and successfully preparing their workforces for the future.


Common Use Cases of AI in the Employment Lifecycle


1.     Recruiting and Hiring – AI is commonly used to streamline recruiting by offering tools that can review large amounts of data, such as resumes, to help identify potential candidates.  Generative AI is also particularly helpful for efficiently creating content such as job descriptions, postings, and other communications.  However, AI tools can sometimes fall short of expectations due to issues such as algorithmic bias.


Sample Use Case:  A company uses an AI system for resume screening but inadvertently filters out a high percentage of female applicants because the model was trained on historical hiring data that embedded unintended correlations.


Recommendation: Employers should conduct regular audits of their AI tools to ensure that these systems are not perpetuating biases.  Training models on diverse datasets and incorporating anti-bias algorithms help to mitigate this risk.  Additionally, human oversight should never be fully replaced by AI in recruitment, to ensure fair hiring decisions.
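One common way to operationalize such an audit is the “four-fifths rule” from the EEOC’s Uniform Guidelines, which flags any group whose selection rate falls below 80% of the highest group’s rate.  The sketch below is a minimal illustration only; the column names and data are hypothetical, and a real audit should be designed with counsel.

```python
# Minimal sketch of an adverse-impact audit using the EEOC's
# "four-fifths rule." Column names and data are hypothetical.
import pandas as pd

def adverse_impact_ratios(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.Series:
    """Each group's selection rate divided by the highest group's rate."""
    rates = df.groupby(group_col)[selected_col].mean()
    return rates / rates.max()

# Example: screening outcomes from an AI resume-review tool.
outcomes = pd.DataFrame({
    "gender":   ["F", "F", "F", "F", "M", "M", "M", "M"],
    "selected": [0,   0,   1,   0,   1,   1,   0,   1],
})

ratios = adverse_impact_ratios(outcomes, "gender", "selected")
flagged = ratios[ratios < 0.8]  # below the four-fifths (80%) threshold
if not flagged.empty:
    print("Potential adverse impact; escalate for human review:", dict(flagged))
```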


2.     Employee Engagement and Productivity – AI-driven analytics can help employers gauge employee engagement and productivity, providing insights into workplace trends and performance.  While these tools are beneficial, employers must navigate the ethical implications of such monitoring to maintain trust and respect for employee privacy, as well as ensure legal compliance.


Sample Use Case: A tech company uses an AI program for performance evaluations, which assesses employees’ productivity. However, the algorithm fails to account for individuals with disabilities who may work at a different pace or require reasonable accommodations.


Recommendation: Employers must ensure that AI tools comply with applicable laws, such as the Americans with Disabilities Act, by allowing for human discretion in performance evaluations.  Providing regular AI training can help employees involved in the development and management of these tools make more ethically informed and legally compliant decisions.
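To make that human discretion concrete, the sketch below (a hypothetical illustration, not any specific vendor’s API) routes AI-generated performance scores to a human reviewer whenever a reasonable accommodation is on file or the model’s confidence is low.

```python
# Illustrative human-in-the-loop gate for AI performance scores.
# All field names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Evaluation:
    employee_id: str
    ai_score: float          # model's productivity score, 0-100
    confidence: float        # model's self-reported confidence, 0-1
    has_accommodation: bool  # reasonable accommodation on file

def needs_human_review(ev: Evaluation, min_confidence: float = 0.9) -> bool:
    # Never let the AI score stand alone for employees with accommodations,
    # and route low-confidence scores to a person as well.
    return ev.has_accommodation or ev.confidence < min_confidence

ev = Evaluation("E-1042", ai_score=61.0, confidence=0.95, has_accommodation=True)
if needs_human_review(ev):
    print(f"Flag {ev.employee_id} for manager review before finalizing.")
```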


3.     Training and Development – AI provides the opportunity for personalized employee training and development, such as tailored learning paths that adapt to individual learning styles and career goals.


Sample Use Case: A financial services firm introduces an AI platform that customizes learning modules for each employee, resulting in more effective upskilling.


Recommendation: Leverage AI to create personalized learning experiences, but also allow for human mentorship and support to address unique employee needs and career goals.


4.     Predictive Analytics – Predictive AI offers foresight into important variables such as employee turnover, leadership gaps, and future skill requirements, allowing businesses to proactively plan for and address these challenges.  It can also assist with operational needs, such as developing employee work schedules, by evaluating anticipated organizational needs and aligning them with employee availability.


Sample Use Case: A retail chain uses predictive analytics to optimize its employee schedules, but the system does not take into account local labor laws regarding minimum rest periods required between shifts.


Recommendation: Before implementing such AI tools, conduct a thorough legal review to ensure that they comply with all applicable labor laws.  Employers should also establish a protocol for manually overriding AI decisions when they conflict with legal requirements or employee rights, or when an override is otherwise warranted.
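As a simple illustration of layering such a check on top of a scheduler’s output, the sketch below flags consecutive shifts with insufficient rest.  The 11-hour minimum is a placeholder only; the actual requirement depends on the jurisdiction and must come from the legal review.

```python
# Rest-period check layered on an AI scheduler's output.
# MIN_REST is a placeholder; set it per applicable law.
from datetime import datetime, timedelta

MIN_REST = timedelta(hours=11)

def rest_violations(shifts: list[tuple[datetime, datetime]]) -> list[tuple[datetime, datetime]]:
    """Return (shift_end, next_shift_start) pairs with insufficient rest."""
    ordered = sorted(shifts)
    return [
        (end, next_start)
        for (_, end), (next_start, _) in zip(ordered, ordered[1:])
        if next_start - end < MIN_REST
    ]

schedule = [
    (datetime(2024, 3, 1, 9), datetime(2024, 3, 1, 17)),
    (datetime(2024, 3, 2, 1), datetime(2024, 3, 2, 9)),  # only 8 hours of rest
]
for end, start in rest_violations(schedule):
    print(f"Rest between {end} and {start} is below the minimum; hold for manual override.")
```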


Risks of AI in the Workplace

While AI poses a number of challenges across multiple sectors, our discussion here focuses on helping employers understand, navigate, and mitigate the risks associated with the most common challenges in the workplace.  By narrowing our focus to this area, we aim to equip companies with the strategic acumen and essential tools required to proactively prepare for the wave of legislative and regulatory changes on the near horizon.


What are some of the top challenges AI presents in today’s workplace?


1.     Bias and Discrimination: AI algorithms may inadvertently perpetuate or amplify existing biases in employment-related decisions based on protected characteristics, such as race, gender, or age.  This can lead to discriminatory outcomes, including disparate impacts on certain protected groups.  Scientific American recently reported that police facial recognition technology cannot accurately identify people of color, raising concerns about similar biases in workplace technology.  Earlier this year, the EEOC settled its first-ever AI discrimination-in-hiring lawsuit for six figures after a company’s recruitment software allegedly automatically rejected applicants over a certain age.


2.     Increased Worker Surveillance & Privacy Concerns:  The integration of AI in the workplace often creates opportunities for increased employee monitoring, such as tracking employee productivity or reviewing communications between employees.  President Biden’s Executive Order explicitly states that AI should not be deployed in ways that, among other things, encourage undue worker surveillance or cause harmful labor-force disruptions.  On a practical level, constant monitoring not only raises privacy concerns but may also negatively impact morale.  Furthermore, any data collected through this surveillance could be misused or inadequately protected, leading to additional legal ramifications.


3.     Cybersecurity Risks: AI systems, brimming with sensitive data, are prime targets for cyber threats.  A breach not only compromises employee confidentiality, but can also corrupt the decision-making processes, leading to far-reaching operational consequences. Robust cybersecurity protocols are non-negotiable to safeguard these digital assets. 


4.     Job Displacement: There’s always the fear that AI will take over human jobs, and it’s not unfounded.  Automation presents the possibility of displacing workers, particularly those in roles characterized by routine and predictability.  The recent Executive Order addresses this, directing the Labor Department to analyze and report on worker support strategies for integrating AI into the workplace.  The workplace landscape will evolve to integrate the strengths of both humans and machines.


5.     Disruption to the Workplace: AI’s integration can be disruptive, necessitating new workflows and skill sets. Without strategic management, this can lead to organizational upheaval.  Transparent communication and comprehensive training are essential to smooth the transition and align AI integration with the company’s human capital. The Biden Administration has made clear in the Executive Order that AI should not be deployed in ways that undermine rights, worsen job quality, encourage undue worker surveillance, lessen market competition, introduce new health and safety risks, or cause harmful labor-force disruptions. 


6.     Inaccurate or Misleading Outputs: AI’s outputs are only as reliable as the data and algorithms they rely on.  Faults in these inputs can result in erroneous outputs, potentially leading to flawed strategic decisions.  Rigorous testing and validation processes are crucial to ensure the integrity of AI-generated data and recommendations.  As the Executive Order puts it, meeting this goal requires “robust, reliable, repeatable, and standardized evaluations of AI systems, as well as policies, institutions, and, as appropriate, other mechanisms to test, understand, and mitigate risks from these systems before they are put to use.”
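One simple, repeatable pattern for such validation is a pre-deployment gate that scores the model against a fixed “golden” dataset and blocks release if accuracy regresses.  The sketch below is purely illustrative; the stub model, feature names, and threshold are all hypothetical.

```python
# Illustrative pre-deployment validation gate over a fixed "golden" set.
# The stub model, features, and threshold are hypothetical.
GOLDEN_SET = [
    ({"years_experience": 5, "certified": True}, "advance"),
    ({"years_experience": 0, "certified": False}, "reject"),
]
ACCURACY_FLOOR = 0.95  # placeholder release threshold

def passes_validation(model) -> bool:
    correct = sum(1 for features, expected in GOLDEN_SET
                  if model.predict(features) == expected)
    return correct / len(GOLDEN_SET) >= ACCURACY_FLOOR

class StubModel:
    """Stand-in for a real screening model, for illustration only."""
    def predict(self, features: dict) -> str:
        return "advance" if features.get("certified") else "reject"

print(passes_validation(StubModel()))  # True for this toy golden set
```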


7.     Reputational Risks: AI missteps can damage corporate reputation and stakeholder trust.  Now more than ever, consumers demand that companies protect their personal information, and they will take their business elsewhere if companies fail to do so.  A data leak in connection with the use of AI tools can have serious consequences, even if unintentional.


8.     Compliance and Regulatory Challenges: The legal landscape around AI is rapidly evolving.  The current administration is committed to equitable AI use and will not tolerate the use of AI to disadvantage those who are already too often denied equal opportunity and justice.[2]  Companies need to stay informed about current and future regulations to ensure legal compliance.


To address these challenges, a proactive stance is key.  This may include strategies such as regular AI system audits, stringent data security protocols, fostering a culture of transparency regarding AI’s role in the organization, and adhering to AI best practices and guidelines.  For an in-depth guide on crafting your organization’s AI strategy, we invite you to listen to our podcast on this topic.


Best Practices: Strategic Implementation of AI

To future-proof the workforce, companies must adopt a dual-focused approach that emphasizes both innovation and compliance, while also taking into account the organization’s unique culture and strategic goals.  Here are some practical recommendations and best practices for employers:


1.     Prioritize Cybersecurity and Safety.  Take proactive measures to minimize any negative impact from the use of AI technologies and to ensure they are being leveraged in a responsible and compliant manner.  This may include implementing systematic audits, bias detection tools, and regular monitoring and evaluation of the company’s technology use.  At a minimum, employees should be prohibited from entering private or sensitive information into public AI platforms and should be properly trained on the use of the tools.
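One practical safeguard is a pre-submission filter that redacts obvious sensitive strings before any text leaves the company for a public AI platform.  The sketch below is a minimal illustration; the regex patterns are deliberately simplistic and not production-grade detection.

```python
# Minimal pre-submission redaction filter for prompts bound for public
# AI platforms. Patterns are simplistic illustrations only.
import re

PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

prompt = "Summarize the dispute: Jane Doe, SSN 123-45-6789, jane@example.com ..."
print(redact(prompt))  # sensitive fields are replaced before leaving the company
```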


2.     Proactively Prevent Misuse or Harm.  Develop clear guidelines for using AI technologies appropriately and clearly communicate them to employees.  The policy should specify when extra scrutiny may be needed, as well as identify and prevent certain high-risk use cases that are not appropriate (e.g., classifying people based on protected characteristics).  Practically, this may mean companies need to build or supplement existing systems and infrastructure to help enforce their guidelines.  Common strategies include content filtering, limiting certain text or subjects, approval requirements, and other risk mitigation efforts.


3.     Establish Cross-Functional Teams for Continuous Collaboration.  Assemble a team of cross-functional stakeholders to carefully evaluate the risks and benefits of integrating new technologies into your organization and to continuously monitor their use.  Draw team members from varied backgrounds and multiple levels of experience so that wide-ranging and diverse perspectives are represented.  This is particularly important for effective risk mitigation.


4.     Employee Training and Education.  One of the most critical aspects of successfully future-proofing your workforce is employee education and training.  Decision-makers need to understand the capabilities and limitations of the technology in order to train their employees effectively.  Companies should develop robust training and resources to equip their teams with the knowledge and skills now needed in the workplace.  This training may include interactive workshops, digital training modules, or customized learning experiences.  At a minimum, training should cover the practical applications of AI, relevant ethical implications, and the regulatory landscape.  Employee training is also a great opportunity to communicate the company policy on AI and set clear expectations for appropriate use.


5.     Cultivate a Responsible AI Culture.  Leaders should strive to foster a culture of responsible AI use throughout the organization.  It’s important to strike a balance that encourages employees to explore and utilize AI technologies while still maintaining legal compliance.  This initiative must originate from the top, with senior leaders committed to open dialogue, critical inquiries, and constructive feedback.  The recent Executive Order on AI made clear that promoting innovation and competition is a top priority, and employers should treat it as one as well.  Thus, any approach should not only permit but encourage employees to voice questions and concerns, without subjecting them to negative scrutiny or stifling innovation.


6.     Prioritize Employee Support, Upskilling & Well-Being.  Companies should aim to support any workers displaced by AI adoption and work to minimize disruption in the workplace.  To this end, the Executive Order directs the Department of Labor to develop best practices and recommendations to strengthen or develop additional support for workers within 180 days of the date issued (by late April 2024).  These principles and best practices must include, at a minimum, specific steps for employers to take, so companies should stay abreast of further guidance in this area and be ready to respond accordingly.  In the meantime, companies can proactively create reskilling and upskilling programs to support their workers.


7.     Consider Reframing Employee Assessment Criteria.  In a recent report on the future of work, McKinsey suggested that employers may need to evaluate candidates not on their prior experience, as has traditionally been done, but on their capacity to learn, their intrinsic capabilities, and their transferable skills.


This shift in the way companies may approach hiring and training reflects a broader recognition of the increased value of adaptability and continuous learning that will likely be present in the future workplace. 


8.     Appoint an AI Leader.  Designate a responsible individual within the organization as the primary AI point of contact, so that one person has clear ownership of AI policies and practices.  This will give employees a clear route to safely escalate risks or concerns, discuss challenges, and promote continuous improvement within the workplace.

 

About the Authors

Natalie Pierce is a Gunderson Dettmer Partner and Chair of the Employment & Labor Group. Her practice focuses on the needs of emerging companies, venture capital and growth equity firms. She also focuses on the future of work, including counseling companies on incorporation of generative artificial intelligence and other enhancement technologies. Natalie hosts Gunderson’s FutureWork Playbook podcast, and was selected as a Fastcase 50 Award winner, one of the Daily Journal’s “Top Artificial Intelligence Lawyers” and “Top Labor and Employment Lawyers,” Chambers USA’s “Minority Lawyer of the Year,” American Lawyer’s “Best Mentor,” and one of the San Francisco Business Times’ “Bay Area’s Most Influential Women,” and was a member of the Governing Council of the ABA’s Center for Innovation. Natalie earned her B.A. at UC Berkeley with Honors, and her law degree from Columbia University School of Law, where she was a Harlan Fiske Stone Scholar and recipient of the Emil Schlesinger Labor Law Prize at graduation.


Stephanie Goutos is a Practice Innovation Attorney at Gunderson Dettmer, where she leads the strategic innovation and knowledge management initiatives for the firm’s Employment & Labor practice. Stephanie’s accolades include successfully defending multi-state class actions and implementing legal tech solutions that have revolutionized firm-wide processes. Her strategic foresight identifies risks and opportunities well ahead of the curve, making her an invaluable asset in dynamic, complex environments. With her background in class action defense, litigation, and employment counseling, Stephanie bridges her traditional legal expertise with an unyielding passion for forward-thinking innovation strategies.  In doing so, she offers a uniquely holistic approach to problem-solving, providing exceptional value to stakeholders. She is passionate about spearheading transformative change, achieving tangible outcomes, fostering innovation across organizations, and mentoring women to become more involved in the legal technology industry.
