De Vinci Higher Education organised a round table bringing together experts, educators and professionals to explore the challenges of generative artificial intelligence. Through a cross-disciplinary approach drawing on its three schools (EMLV, ESILV, IIM), located in Paris, Nantes and Montpellier, the cluster examined the impact of this technology: opportunity or threat for education?
30 November 2024 marked the second anniversary of ChatGPT, a tool developed by OpenAI that popularised the use of generative AI. Since then, this tool has revolutionised technological practices and sparked a profound reflection on its integration into education.
With the widespread adoption of generative AI, educational institutions are rethinking their methods to address technical, pedagogical, and societal challenges.
AI in education: adoption and dependence
Generative AI, used by 99% of students surveyed in a study conducted by the Pôle Léonard de Vinci, is transforming educational practices.
While 83% see it as a time-saver and 79% as a valuable aid in tackling complex problems, 51% of users report a dependence on it to complete certain tasks. This statistic highlights a major challenge: how can AI be integrated without compromising learners’ autonomy?
AI: teaching approaches at schools in the Léonard de Vinci cluster
Moderated by Peter Saba, lecturer and researcher at EMLV and permanent researcher within the IS team at Montpellier Recherche en Management (MRM), this round table discussion enabled the three schools within the cluster, EMLV, ESILV and IIM, to offer a cross-disciplinary perspective on the integration of AI into teaching.
- EMLV: strategic and cross-disciplinary skills
Duc Khuong Nguyen, Dean of EMLV, emphasises that companies expect graduates not only to master technical tools (Python, algorithms) but also to bring cross-disciplinary skills such as creativity and collaboration. AI is becoming a strategic lever that requires guidance to exploit its full potential.
- IIM: co-creation and augmented creativity
Lydia Nikolic, Director of IIM, highlights the use of tools such as MidJourney in student projects, which accelerate and enrich creative work. The goal, however, remains to prevent standardisation and to foster a human-machine dialogue that preserves originality.
- ESILV: methodology and responsible AI
Pascal Pinot, Director of ESILV, emphasises the responsibility of engineers in developing AI solutions. Transparency, documentation and model interpretability are the watchwords for avoiding misuse and better regulating the use of these technologies.
Organisational and societal challenges: between risks and opportunities
The integration of AI also poses challenges at the corporate and societal levels.
- Generational clash and confidentiality
Florence Jacob, Senior Lecturer at IAE Nantes, highlights intergenerational tensions surrounding the adoption of AI tools. Young people, who are tech-savvy, can sometimes bypass confidentiality rules by using unregulated solutions. This lack of regulation exposes companies to ethical and operational risks.
- Cognitive dependency and standardisation
In a context where automation is becoming the norm, academic and professional output risks losing diversity and depth. Training in the critical use of AI is crucial to avoid standardisation of ideas and preserve critical thinking.
AI: how to strike the right balance between productivity and autonomy?
Nicolas Glady concludes on an optimistic note, stating that generative AI can be a cognitive revolution if used wisely.
The challenge for institutions such as the Pôle Léonard de Vinci is to strike a balance between technological innovation and human development, through an approach that combines science, creativity and humanism. Ultimately, AI, whether generative or degenerative, reflects above all the use that is made of it.