Generative Artificial Intelligence Policy
CITA INSIGHT: Journal of Counseling and Educational Dynamic acknowledges the rapid development of generative artificial intelligence tools (such as ChatGPT, Claude, and other large language models) within the academic ecosystem. To safeguard scientific integrity, data confidentiality, and originality in the fields of guidance, counseling, and education, the editorial board has established the following guidelines for all authors:
Rejection of AI Authorship
Artificial intelligence in any form may not be listed as an author or co-author. Authorship entails moral, legal, and scientific accountability that only humans can bear. Human authors are fully responsible for all ideas, analyses, and conclusions contained in the manuscript.
Limitations on the Use of AI in Counseling Data Analysis
In guidance and counseling research, sensitivity to psychological dynamics, emotions, and socio-cultural contexts is essential. Generative AI has been shown to exhibit algorithmic biases and cannot fully grasp clinical or pedagogical nuance. Therefore, using AI to perform in-depth interpretation of qualitative data (such as counseling session transcripts or behavioral observation notes) is strongly discouraged. Clinical and pedagogical evaluations must arise from the researcher's own critical synthesis.
Privacy and Confidentiality Protection of Subjects (Data Confidentiality)
Counseling ethics place client confidentiality as the highest principle. Authors are strictly prohibited from uploading raw data containing personal information, subject identities (even when disguised), or psychological data of research subjects to public AI platforms. Entering such sensitive data into third-party AI systems constitutes a serious violation of research ethics.
Transparency and Declaration of Use
Authors are permitted to use generative AI on a limited basis to improve language readability (language polishing), edit sentence structure, or conduct preliminary literature searches. However, such use must be explicitly declared. Authors must include a statement in the Research Methods or Acknowledgements section detailing:
- The name of the AI tool used (e.g., ChatGPT version 4.0).
- The specific purpose of use (e.g., grammar editing).
- A statement that the researcher has validated all AI outputs.
Citation Verification Obligation (Hallucination Control)
Generative AI is prone to producing fabricated data and fictitious references (hallucinations). Authors bear absolute responsibility for ensuring that every citation, reference, and claim in the manuscript is real, accurate, and relevant to contemporary guidance, counseling, and educational practices.