
19/02/2025

AI in the workplace: Interview with Erik Campanini

Our expert

Erik Campanini

Associate, Alixio Group

How did you become interested in the social and environmental impact of AI?


The starting point for reflection is: “What can AI bring to the company?”

This question comes from the 25 years I’ve spent consulting for and supporting companies. 25 years spent finding levers for growth. Naturally, technology has had its place in that debate, at the crossroads of business impact and tech impact. There was an acceleration with digital, and yet another acceleration with AI in business.


What signs do you see of the acceleration of AI in business?

Artificial intelligence itself is not a new subject; it has been around since the 1950s. What has accelerated is its democratization.

Artificial intelligence, until now the preserve of experts, data scientists and mathematicians, has been abruptly democratized by the arrival of generative artificial intelligence.

Algorithmic power is available to all. This has led to a number of uses in the workplace that raise – and this is what is absolutely fascinating – more organizational, social and societal questions.

What are the main AI-related challenges for businesses?

First and foremost, there’s the issue of productivity and performance. How can I make my work more effective, whatever my position in the organization? So there’s a business and economic motivation behind this dynamic.

What’s interesting, but also worrying, is that we’re approaching artificial intelligence from a technological angle, as if we were introducing just another technology into the organization.

Historically, we have introduced technical innovations to support or augment human work, as with mechanization. However, we are mistaken in our approach if we reduce AI to a simple technological issue. It is first and foremost a social, political and corporate policy issue.

The introduction and generalization of artificial intelligence poses real risks, as several studies have shown, including one carried out in the Nordic countries, which are known for their attention to social issues. These studies reveal psychosocial risks, abusive surveillance, reduced autonomy and fragmentation of functions.

A figure cited by Franca Salis-Madinier of the EESC (European Economic and Social Committee) illustrates this trend: in 2023, the CNIL received 16,000 complaints of AI-related abuse, compared with just 2,000 in 2020. This clearly shows that the issue is not limited to business performance, but also touches fundamental social values within the company.

Isn’t AI, in a way, the mirror of all our society’s technological anxieties and fantasies?

I don’t know if AI is the mirror of all fantasies, but I think we’re getting the debate wrong by polarizing opinions to the extreme.

On the one hand, there are those who see AI as a revolutionary advance, a factor of progress and innovation, even as the future colleague or boss of workers.

On the other, there are those who see it as a threat, a source of casualization and a major risk to employment.

But the debate needs to be much more nuanced. Between these two extremes, there are many aspects to consider, not least the impact on work organization and the need for strong social dialogue. If we don’t set up a structured dialogue on artificial intelligence within companies, we’ll fall into caricature.

You talk about social dialogue, but isn’t there an AI for employees and an AI for managers?

Several studies today, including one by Artefact, show that AI saves us an average of 57 minutes a day. Gartner, for its part, speaks of productivity leakage.

In principle, those 57 minutes could be reinvested productively. What I heard last Friday was wonderful: those 57 minutes saved per day could be reinvested in dialogue and qualitative exchanges with colleagues. Fine, but is that too optimistic a vision? Because in reality, Gartner’s point about productivity leakage is precisely that time is saved, but not necessarily reinvested minute for minute in something else of value to the company.

That is why social dialogue is essential, to ask the question: yes, there are productivity gains, which can be massive depending on the job – some people gain one hour, others up to three hours a day – but if we don’t ask questions about organization, skills, recognition of work, and how this development fits in over time, we are missing the point.

This is a major challenge for organizational transformation and work structuring. If we don’t think things through properly, we run the risk of exploiting these productivity gains in an unbalanced way, and this is where social dialogue plays a central role.
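To put those figures in perspective, here is a back-of-the-envelope annualization of the daily time savings cited above. The 57-minute average and the one-to-three-hour range come from the interview; the 220 working days per year is an illustrative assumption, not a figure from the studies.

```python
# Annualize a daily time saving reported in minutes.
# Assumption (not from the interview): ~220 working days per year.
WORKING_DAYS_PER_YEAR = 220

def annual_hours_saved(minutes_per_day: float) -> float:
    """Convert a daily saving in minutes into hours per year."""
    return minutes_per_day * WORKING_DAYS_PER_YEAR / 60

print(annual_hours_saved(57))   # Artefact average: 57 min/day -> 209.0 h/year
print(annual_hours_saved(60))   # lower bound cited: 1 h/day -> 220.0 h/year
print(annual_hours_saved(180))  # upper bound cited: 3 h/day -> 660.0 h/year
```

Roughly 200 to 660 hours a year per employee is the scale of the gain whose reinvestment – or leakage – the social dialogue described here would have to address.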

Are the social partners lagging behind in this debate today, letting the tech world dictate the rules?

It’s true that social dialogue is still tentative. While we’ve already seen a number of aberrations in the area of digital platforms, notably with algorithmic management, the issue of AI in traditional companies is still insufficiently addressed.

Few agreements have been signed, and there’s a real need to build skills on these subjects. It’s not just a question for employee representative bodies, but also for all managers.

There is also an ethical issue at stake, framed by regulations such as the AI Act, but the speed at which technologies are evolving is such that legislation is struggling to keep up. Hence the need for strong social dialogue, to re-establish a climate of trust. If AI is perceived as a black box, it generates mistrust and profoundly disrupts dialogue within companies. AI in itself is not responsible. It’s the way it’s used that is.

The real question is: what is the purpose of its use? Do we want to reduce the number of jobs? Improve the quality of work? Strike a balance between economic benefit and social and environmental impact? AI offers us a tremendous opportunity to rethink the organization of work, but this requires a structured and inclusive dialogue.

Is it really possible to have responsible AI in business, even though CSR issues are struggling to make headway?

Yes, responsible use of AI is possible, provided that effective social dialogue is in place.

We also need to ask the right questions about AI governance: what decisions can be delegated to the machine? What safeguards should be put in place? How can we ensure that humans retain ultimate control over decisions? Furthermore, if employees are not trained to understand how AI works, they run the risk of blindly following its recommendations, which poses a real problem.

AI offers us a real opportunity to rethink the organization of work, but this requires structured and intensive social dialogue.

What can we do in concrete terms to move forward?

There are several levers for action:

  • Strengthen social dialogue by involving all stakeholders.
  • Define benchmarks via a transparent observatory of responsible uses of AI.
  • Recognize AI skills and help companies structure their approach.
  • Promote best practices, for example by highlighting companies that manage to strike a balance between economic performance, social impact and environmental responsibility.

So, AI is not just a technological issue: it’s above all a social and political issue within companies.