
AI training

Employees not confident in responsible AI use at work

Employees express scepticism over responsible AI implementation in workplaces.

Content Insights

62% of leaders and 52% of employees welcome the adoption of AI.
AI should be developed to allow human review and intervention.
Business leaders have been warned about the risks of falling behind with AI.

A recently released report indicates that a significant portion of global leaders and employees are sceptical about the responsible implementation of artificial intelligence (AI) in their workplaces. The survey, conducted by Workday, reveals that only 62% of business leaders and 55% of employees are confident in their organisations’ ability to deploy AI in a responsible and trustworthy manner.

However, the study also brings to light concerns within the workforce. Approximately 23% of employees are not confident that their organisations will put employee interests ahead of business interests when implementing AI, a sentiment shared by 21% of business leaders. Workday’s Chief Technology Officer, Jim Stratton, commented on the findings, noting a lack of confidence and understanding among both leaders and employees about their organisations’ intentions for AI deployment.

The lukewarm reception toward AI is further underscored by the research, which indicates that only 62% of leaders and 52% of employees welcome the adoption of AI within their organisations.

Four out of five business leaders have yet to release guidelines on responsible AI use

The report, encompassing responses from 1,375 business leaders and 4,000 employees across 15 countries, also highlights a lack of clarity regarding the role of individuals in the integration of AI in the workplace. While 70% of business leaders believe AI should be developed to allow human review and intervention, 42% of employees feel uncertain about which processes should be automated and which ones require human intervention.

Jim Stratton suggests that a comprehensive approach to AI, focusing on responsibility and governance, could help bridge the trust gap. He emphasises the need for organisations to advocate policies that strike the right balance between innovation and trust.

Interestingly, both leaders (42%) and employees (36%) agree on the importance of organisational frameworks and regulations in establishing trust in AI. However, the study reveals that four out of five business leaders have yet to release guidelines on responsible AI use, and three out of four employees feel that their organisations are not actively participating in collaborative efforts for AI regulation.

These findings align with other research suggesting that employers are lagging in embracing AI in the workplace, despite its widespread use among staff. Matt Rosenberg, Chief Revenue Officer and Head of Grammarly Business, has previously warned business leaders about the risks of falling behind, emphasising that the adoption of generative AI is not optional but a necessity in the evolving landscape.
