
The Paris Conference 2024

Call for papers

 

We invite academic and industry researchers to submit a proposal in one of the following five disciplines or in related fields. Mixed-methods and transdisciplinary approaches are particularly valued.

  • Computational philosophy and social sciences

  • Ethics and moral philosophy

  • Political theory and science

  • International relations

  • Law

The focus of this second edition of The Paris Conference is the transformation of democracy in the era of AI, especially generative AI and frontier models. Proposals should address topics related to this theme; the following issues are non-exhaustive examples:

  • Long-term impacts of AI-assisted decision-making tools on society 

  • Challenge of aligning LLMs with cultural values

  • Psychological and social impacts of LLMs when mediating social interactions

  • Innovative comprehensive approaches to generative AI safety

  • AI applications to support online deliberation

  • Content integrity and moderation on social media

  • Algorithmic governmentality and the epistemic implications of AI-driven science

  • Use of AI for sustainable economic development 

  • Use of AI in defence strategies and law enforcement 

  • Renovating democracy and preserving digital sovereignty

  • Rethinking property rights in the age of generative AI

 

Proposals should be explicitly linked to one of the following four tracks.

   1.  AI for defence strategies and law enforcement: In the context of a return to high-intensity conflicts in which AI technologies are used extensively, this track welcomes papers that help society understand the implications of using AI for defence and law enforcement needs and that develop responsible ways of doing so. Typical contributions may examine the psychological and sociological aspects of surveillance systems, suggest relevant frameworks and guardrails for deploying them, investigate how to maintain meaningful control over autonomous and semi-autonomous weapon systems, or question the evolution of just war theory in the age of AI.

   2.  Generative AI and the problem of diversity: Beyond safety and superalignment concerns, generative AI models face the challenge of cultural and moral pluralism. This track welcomes papers dedicated to understanding how generative models could interact with social values: how such values can be captured, expressed, or voted on; what scale should be considered relevant (national, regional, …); and how models could be fine-tuned to adapt to target audiences. Typical contributions may elaborate on, or criticise, holistic approaches such as constitutional AI; question the conception of diversity in data sets and AI development teams beyond race and gender (e.g. social, religious, or disciplinary diversity); or discuss the implications of self-representation for populations (e.g. the autotranscendence of the social).

   3.  AI to renew democracies: Many democratic regimes, especially in Western countries, currently face a crisis of trust, calling for mechanisms to renew the social contract and re-cement communities. This track welcomes papers suggesting innovative uses of digital and AI technologies to renew citizens’ trust in their institutions and strengthen democratic regimes. Typical contributions may suggest ways to make political representatives more accountable, decision-making more transparent, or democratic participation more direct, as well as ways to moderate online misinformation, support public debate and collective decision-making processes, or promote peace between populations and nations.

   4.  The implications of (super)intelligence: Recognising artificial agents as intelligent, or superintelligent, may have a number of consequences, including the expansion of their decision scope as moral agents in the medical space; the impact that such recognition may have on other entities, such as animals; or the risks it may create. Contributions addressing these aspects are welcome in this track, and typical papers may engage with the literature on moral status, discuss the fair distribution of responsibility for AI-assisted decision tools, suggest approaches and metrics to measure intelligence, or even criticise the relevance of intelligence to addressing moral dilemmas and status.

  

Format and deadlines

We invite researchers to submit a 500-word abstract in English, followed by a short bibliography (5 references maximum), by February 22, 2024, at 11:59 p.m. CET. The abstract should be sent to pcaide2024@gmail.com and follow this template. The authors whose papers are selected will be notified by April 8, 2024. They will be invited to present their paper in person at the conference.

Proceedings

The written versions of the selected presentations will be published in the conference proceedings. For publication, a text of 5,000 words (± 10%), incorporating feedback from the discussions at the conference, must be submitted within three weeks of the end of the conference.

Questions can be addressed to pcaide2024@gmail.com.
