When the World Wide Web arrived back in 1991, it was met with optimism and pessimism in equal measure. Although it was touted as a universally linked information system, apprehension about adopting it was high.
The same thing is happening now that generative AI has become mainstream. It’s been around for years, but we’ve never used it in our daily work and personal lives as much as we are now, and it’s obviously sparking debate. Generative AI systems like ChatGPT have made remarkable advancements, capturing the attention of both the media and the public. While these systems bring numerous benefits, such as increased efficiency and innovation, they also give rise to concerns and challenges. There are a few reasons behind the negative press surrounding generative AI, and each underlines the importance of responsible development, transparency, and a focus on human-centric AI to harness its potential. By addressing the major concerns – job displacement, data privacy, and ethics – we can foster a more balanced and informed dialogue about the future of AI.
But how?
As AI increasingly automates routine tasks, the employment landscape will inevitably change and some jobs will be displaced – but it doesn’t have to be doom and gloom. This transition demands a renewed focus on skills that cannot be easily replaced by machines, such as creativity, problem-solving, and critical thinking.
While AI promises to create new opportunities, many people may feel apprehensive about their employment and future prospects. When we asked our wider audience why talent leaders believe AI is generating negative press, the consensus was that two main factors are fuelling the fire: fear of the unknown and the media. Marcelle Foxcroft, Global Head of TA at Trustpilot, made a valid point about fear masking AI’s potential benefits: “Anything that is ungoverned and unregulated makes people nervous. If we think back to the launch of Bitcoin, the majority of the articles were around the use of Bitcoin for organised crime. Few people focused on the blockchain technology that sat behind it and how innovative the technology was.”
While negative narratives can often surround generative AI, it is important to approach this technology with a balanced perspective and not fall for ‘clickbaity’ media headlines that promote sensationalism and fear. David Annable, CEO and Founder of specialist recruitment company Franklin Fitch, believes that these kinds of headlines only exacerbate the fear of losing your job to a robot, and that this fear now feels more real than the science fiction films that first depicted the scenario.
Proactively mitigating the negative consequences of AI replacing jobs
Adopting a strategic workforce planning approach becomes essential, with a focus on proactive measures that harness AI’s potential positively. While workforce planning can seem an impossible task to TA leaders, Talent Intelligence (market data plus gossip!) can support effective planning and potentially tell you which skills you will need in future, and where to find them. Being able to plan ahead in this way helps mitigate the job losses AI may cause.
Again, our audience shared that strategies like upskilling and reskilling should be implemented to mitigate any possible negative effects AI will have on the job market. This involves implementing training programmes, workshops, and resources to foster a culture of continuous learning and adaptability.
Promoting a culture of continuous learning is vital for workforce development, and offering upskilling programmes focused on soft skills helps individuals transition into new roles while fostering adaptability and innovation. Alex Fourlis, President at Broadbean, said: “AI won’t replace your job, but someone who knows how to use AI might…
“More and more we will see companies requiring AI proficiency in their job descriptions, as AI technology becomes as standard in workplace processes as the internet. Organisations shouldn’t shy away from AI but should instead seek to embrace AI tools and technologies that can empower their workers to be more efficient and effective. This includes educating their employees on how to get the most impact out of their AI tech stack.”
There isn’t a conversation about AI that doesn’t touch on the notion that AI should be used to automate menial admin tasks, freeing up time for recruiters and TA leaders to do what they do best: foster relationships. The emphasis on human skills remains: AI can automate tasks, but it can’t replicate essential human qualities like creativity, emotional intelligence, and problem-solving. Encouraging and promoting these unique skills allows employees to complement AI systems effectively.
Putting employees at the centre of AI implementation creates a people-centric approach: understanding how AI can improve employee fulfilment and supporting them in adapting to new roles. The integration of AI should be seen as an opportunity to relieve employees from mundane tasks, allowing them to focus on more strategic and impactful aspects of their roles. Travis Kessel, Vice President Talent Acquisition at Disney, elaborated on this sentiment: “Similar to other implementations in technology, while this may automate more manual or tactical aspects of a business, it will free others up to focus on more strategic priorities or directly dealing with human interactions. Relationships drive business and the irreplaceable human element that is required for this will not go away.”
“AI as a tool can give us more time to focus on the high-value tasks that require the empathy, problem-solving and nuance that only humans can provide,” commented Lindsey Zuloaga, Chief Data Scientist at HireVue. Jason Heilman, SVP Product – Automation and AI at Bullhorn, agrees: “When deployed thoughtfully, AI-powered tools will be an enabler of success to those who leverage them – not a replacement for employees. It can empower your junior employees to work more like senior staff, and senior employees to work faster.” Rather than fearing AI, organisations should embrace it as an empowering tool for their workforce. Educating employees on effective AI utilisation enhances efficiency while retaining human involvement in decision-making.
Ethical AI – An expanded role for HR
Major players like Microsoft, Google, and Facebook are investing heavily in AI development. However, it is crucial that the benefits of the technology are distributed fairly across society and that potential negative social impacts are minimised. Policies and strategies are needed to achieve this and to foster a less frightening AI-driven future. According to the head of HR at one of the UK’s leading car manufacturers, responsibility for the safe and ethical use of AI will fall to HR teams.
They said: “AI ethics is probably something that the HR function is going to need to understand in greater depth, along with a greater understanding of what machine learning and statistical inference actually is, so it can be deployed in a considered way to add value for employees and customers.” One pressing concern is discrimination – for example, the fear that older workers may be overlooked or face barriers when adapting to more tech-centric roles, further exacerbating existing inequalities.
Additionally, access to quality education remains unequal, particularly affecting poorer communities. A shortage of skilled workers in STEM fields, worsened by the pandemic and Brexit, has further deepened this divide, leaving many workers behind in the re-skilling process. Ed Holroyd Pearce, President and Co-founder of Virtual Internships, believes that while big corporates have a role to play in the equitable distribution of AI, so do governments and regulators. “Ensuring that technology benefits a wide geographic area and broad sociodemographic backgrounds is going to be something corporates will want to showcase and prove. Equally, passing on cost savings to the customers that need it most will be in focus for some.”
The role of DE&I and ESG
The positive news is that diversity, equity and inclusion (DE&I) and environmental, social, and governance (ESG) considerations have gained prominence in corporate agendas. Could emphasising these principles in AI strategies lead to more inclusive organisations and reduce negative social impacts? Perhaps, but how?
To prepare the workers of tomorrow, AI education should be integrated into the curriculum and embraced as a positive tool. Promoting the use of AI and its benefits can elevate opportunities for everyone, regardless of their background.
As AI’s impact transcends borders, a global approach to regulation and cooperation between tech giants and government leaders is vital. Matt Comber, CEO at SourceFlow, feels we all have a moral obligation to regulate AI use. “We need to look at industry-wide charters and agreements to decide how we will use AI and ethical practices. There is, in truth, not a lot we can do from a policy point of view to stop bad actors when it comes to AI; we instead need to look at ourselves and decide on human and moral policies in terms of our use of it.”
What can organisations do to actively mitigate job replacement by AI?
Internal mobility
“Knowing the role changes that AI will bring, it is critical to create opportunities for our existing colleagues to move or reskill. We have already developed internal Academies where colleagues can learn new skills in tech or finance, among others, and are creating a new platform where internal colleagues can display their skills and could potentially be sourced by recruiters for new positions.” – Director of TA at a global consumer bank
Safe use for children – the future of the workforce
“I believe it is absolutely essential that the tech giants get together with leaders of governments around the world. The leaders need to understand how to shape new laws and prepare sectors such as education in how to adapt to the changes. Tech giants need to sort out how to protect people from misleading and untruthful information as well as protecting children, who are increasingly exposed to online content (good and bad).” – Julia Feuell, CEO at OTT – eLearning and Trade Marketing
Job security
“Individuals who lack the opportunity to reskill within their current business can explore alternative career paths by identifying industries or functions that face substantial shortages presently and are projected to experience such shortages in the future, as indicated by the skills priority list. Pursuing a transition into one of these areas, if feasible, can provide enhanced job security.” – Nicole Hart, Marketing Manager at Fuse Recruitment.