A survey of 13,000 employees, managers and leaders at large companies in 13 countries, probing their perceptions of the benefits of, and concerns about, artificial intelligence (AI), has highlighted significant gaps between expectations of AI's impact on work and organisations and the level of adaptation needed to ensure its beneficial use.
The survey, undertaken by global consulting firm Boston Consulting Group’s (BCG’s) technology unit BCGX, has shown strong correlations between the use of AI tools and both positive perceptions and lower levels of concern across all employee groupings.
However, marked differences in perception remained between frontline employees and leaders, with 62% of leaders optimistic about the impact of AI, compared with only 42% of frontline employees, said BCGX senior partner and AI global leader Nicolas de Bellefonds.
This had significant implications: with leaders markedly more optimistic than frontline workers, organisations must think carefully about how to drive and scale AI use across the business, said BCGX partner and AI ethics director Steve Mills.
More companies reported using AI this year, with just below 50% of those surveyed doing so, up from 22% in BCGX’s 2018 survey, he added.
In terms of the overall survey results, optimism rose 17 percentage points to 52%, compared with the 35% of respondents who expressed optimism in 2018, while the level of concern fell ten percentage points to 30%, down from 40% in the 2018 survey.
However, only 14% of employee respondents to the 2023 survey said they had received any training to prepare them to use AI and for changes in their workflows, and 86% of respondents believed they would need upskilling to address how AI would change their jobs, highlighted BCGX partner and talent and skills global leader Vinciane Beauchene.
“Further, 36% of respondents said they believe AI is likely to eliminate their jobs. This is a significant number and, when taken with the larger number of respondents who think their jobs will change as a result of AI, explains why almost every respondent (86%), regardless of optimism level, said they will need upskilling to address how AI will change their jobs,” she said.
“The survey results show that employees realise that the AI revolution is taking place and that it will have a significant and qualitative impact, and that companies are not yet ready to undertake what is needed to adapt to the revolution,” she emphasised.
Further, the survey showed that 80% of leaders use AI tools regularly, compared with 20% of frontline employees, and that 44% of leaders said they had already gone through upskilling, compared with only 14% of frontline employees.
Facilitating the use of AI required organisations to create spaces for responsible experimentation, as comfort levels with the technology played a key role, said Mills.
The more regularly employees used AI and generative AI, the more clearly they recognised its benefits, as well as its limitations and risks, he pointed out.
Companies must also invest in regular and continuous upskilling, he said.
“Given how swiftly technology evolves, organisations cannot treat upskilling as a one-off effort. They must invest in training to help employees adapt to the ways AI will change their jobs,” he emphasised.
However, the training and upskilling required would stretch most companies: organisations must become better at anticipating how jobs and their employees’ skills will evolve, and must produce upskilling content more rapidly, said Beauchene.
This organisational change must extend beyond human resources functions alone and must be tackled by managers and leaders to ensure upskilling was progressively embedded within employees’ workflows, she added.
Another recommendation from the BCGX survey is that companies prioritise building a responsible AI programme. The responsible use of AI is paramount, and employees want assurance that their organisations are approaching AI and generative AI ethically.
Further, leaders wanted to be in a position to help frame emerging AI regulations, Mills said.
The survey showed that only 29% of frontline employees believed their companies had implemented adequate measures to ensure responsible use of AI, while 68% of leaders felt the same, noted Beauchene.
This result was a call to action for companies to identify where the gap between these perceptions originated and then to close it at a company-wide level, she said.
“Further, 79% of all respondents believe that AI-specific regulations are necessary. This figure is independent of the level of optimism expressed, indicating that responsible AI is a prerequisite for most respondents,” Beauchene said.
Meanwhile, when making AI use one of their strategic imperatives, companies should focus on driving cultural change by encouraging responsible experimentation with, and use of, AI among employees, including those on the front line, said Mills.
Therefore, each company must set out rules for how its employees can use AI responsibly, and must then ensure that there are senior executives who are both responsible and accountable for the company’s AI programme.
This team must be appropriately funded and resourced to take action and to develop the AI “guard rails” within which the company will use AI.
Additionally, an agile review process could be highly beneficial, allowing teams experimenting with AI to reach out for guidance when they run into ethical or responsible AI use challenges.