Generative Artificial Intelligence Policies under the Microscope
How computer science conferences are navigating the new frontier in scholarly writing.
Authors: Mahjabin Nahar, Sian Lee, Rebekah Guillen, and Dongwon Lee
Since the rise of ChatGPT, generative AI (GenAI) technologies have gained widespread popularity, affecting both academic research and everyday communication. While GenAI offers benefits in task automation, it can also be misused and abused. This analysis surveys how computer science (CS) conferences are adapting to this paradigm shift through their GenAI policies. Many CS conferences have not yet established GenAI policies, and those that have vary in leniency, disclosure requirements, and sanctions. Policies for authors are more prevalent than policies for reviewers, and some address code writing and documentation. These policies are still evolving, as demonstrated by conferences such as ICML 2023. A notable gap exists in reviewer GenAI policies, perhaps stemming from concerns about exposing sensitive submission content.
Key Insights at a Glance
Most conferences with GenAI policies were somewhat lenient toward authors (ratings of '3' or '4' on a 1-to-5 leniency scale, where 5 is most permissive). AI conferences were highly permissive (rating '5'), while interdisciplinary conferences such as UIST and VR were more restrictive ('1' or '2'). Reviewer policies were generally more restrictive across the board, particularly for ICRA, VIS, and IMC, owing to concerns over content leakage and LLM limitations.
| Area | Author Leniency (Year 1) | Reviewer Leniency (Year 1) |
|---|---|---|
| AI | 3.50 | 3.00 |
| Interdisciplinary | 2.50 | 3.00 |
| Systems | 4.00 | 3.00 |
| Overall Average | 3.50 | 3.00 |
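Note that the overall average need not equal the unweighted mean of the area averages, since areas contain different numbers of conferences. The sketch below illustrates the distinction; the per-conference ratings are hypothetical, as the table reports only area-level averages.

```python
# Hypothetical per-conference leniency ratings (1 = restrictive, 5 = permissive).
# The real per-conference ratings are not reproduced in the table above.
author_ratings = {
    "AI": [5, 3, 3],
    "Interdisciplinary": [2, 3],
    "Systems": [4],
}

def area_average(ratings):
    """Mean leniency rating within each area."""
    return {area: sum(vals) / len(vals) for area, vals in ratings.items()}

def overall_average(ratings):
    """Overall mean weighted by conference count: pool all ratings
    rather than averaging the area-level means."""
    all_vals = [v for vals in ratings.values() for v in vals]
    return sum(all_vals) / len(all_vals)
```

Pooling all ratings weights each conference equally, which is why an area with many permissive conferences can pull the overall figure above the simple mean of the three area rows.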
Overall, conferences are moving toward introducing new GenAI policies. In Year 1, only 28.1% of the 64 conferences had GenAI policies (for authors or reviewers), increasing to 50% in Year 2. Author policies rose from 28.1% to 48.4%, while reviewer policies rose from 4.7% to 17.2%. The AI field leads in adoption (92.3% of its conferences had author policies by Year 2), followed by Interdisciplinary (60%). Systems lagged but showed alignment over time. Theory conferences have no GenAI policies.
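The percentages above correspond to whole-conference counts out of the stated pool of 64; a quick arithmetic check, assuming the same 64-conference denominator in both years:

```python
# Convert the reported adoption percentages back to conference counts,
# assuming a fixed pool of 64 conferences in both years.
TOTAL = 64

def count_from_pct(pct: float, total: int = TOTAL) -> int:
    """Nearest whole number of conferences for a reported percentage."""
    return round(pct / 100 * total)

print(count_from_pct(28.1))  # author policies, Year 1 -> 18
print(count_from_pct(48.4))  # author policies, Year 2 -> 31
print(count_from_pct(4.7))   # reviewer policies, Year 1 -> 3
print(count_from_pct(17.2))  # reviewer policies, Year 2 -> 11
```

So the headline shift is roughly 18 to 31 conferences with author policies, and 3 to 11 with reviewer policies.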
[Chart: Evolution of GenAI Policy Adoption (Authors)]
ACM, IEEE, and AAAI, as major CS societies, have GenAI policies. Society-level policies generally permit authors to use GenAI with disclosure, and allow reviewers to use it to enhance reviews provided submissions are not exposed to the tool; IEEE, however, has no reviewer policy. Adoption is inconsistent: many conferences refer to society-level policies (e.g., SIGCSE, VIS, IMC) while others establish their own (e.g., CVPR, UIST), and many conferences appear unaware that society-level policies exist.
Policy Inconsistency Example
ICML 2023 initially prohibited LLM-generated text but later clarified that LLMs may be used to edit author-written content. This evolving stance highlights the uncertainty and ongoing adaptation within the academic community regarding GenAI usage.
Key Takeaway: Policies are evolving and often unclear, requiring continuous adaptation.
| Society | Author Policy | Reviewer Policy |
|---|---|---|
| ACM | Permits with disclosure | Permits if submissions unexposed |
| IEEE | Permits with disclosure | No specific policy |
| AAAI | Permits with disclosure | Permits if submissions unexposed |
Conferences without policies should establish clear guidelines for both authors and reviewers, and society-level policies could help ensure consistency across venues. Professional development, training, and balanced enforcement are crucial. GenAI is transformative, and lenient use is recommended, especially for non-native English speakers; however, GenAI should enhance rather than alter the work, and reviewers' judgments must remain independent. Transparent disclosure and full author responsibility for accuracy and ethical compliance are essential, as GenAI tools are imperfect and prone to hallucination.
[Diagram: Recommended GenAI Policy Implementation Flow]