Accenture staff must demonstrate they have fully bought into the consultancy’s AI vision if they want to get on.
A memo sent to senior staff this week, reported in the FT, informed them that promotion to top roles at the corporation would require "regular adoption" of AI tooling, and that the firm is tracking usage.
In a statement to The Register, Accenture said:
“Our strategy is to be the reinvention partner of choice for our clients and to be the most client-focused, AI-enabled, great place to work. That requires the adoption of the latest tools and technologies to serve our clients most effectively.”
We asked a number of rival consultancies whether they were taking a similar approach to that of Accenture. So far neither their PRs nor their chatbots have come back to us.
However, consultancies, in common with other big businesses, have poured massive amounts of resources into AI. So, if staff were not "encouraged" to use the tools, those investments might look like a bit of a waste.
According to reports, McKinsey even asks potential recruits to use its own internal tool during assessments.
Accenture’s AI lineup includes its AI Refinery Platform, developed with Nvidia – who else – and launched in 2024. At the time, chair and CEO Julie Sweet said:
“AI Refinery will create opportunities for companies to reimagine their processes and operations, discover new ways of working, and scale AI solutions across the enterprise to help drive continuous change and create value.”
In September, she reportedly warned that it would “exit” employees who did not embrace AI. Of course, there’s also the prospect of staffers reimagining other ways to use AI, which also necessitates monitoring.
It emerged at the start of this week that KPMG had fined a senior staffer who used AI to ace an internal training course on, what else, AI. In fact, more than two dozen Aussie staffers have been caught using AI in internal exams.
Andrew Yates, CEO, KPMG Australia, said in a statement sent to The Reg:
“As soon as we introduced monitoring for AI in internal testing in 2024, we found instances of people using AI outside our policy. We followed with a significant firmwide education campaign and have continued to introduce new technologies to block access to AI during testing.”
He added: “Monitoring and education around appropriate AI use are always on. Given the everyday use of these tools, some people breach our policy. We take it seriously when they do. We are also looking at ways to strengthen our approach in the current self-reporting regime.”
®