Policy on the Use of AI

1. GENERAL PRINCIPLES

The journal “Art and Design” applies clear rules for the use of generative artificial intelligence (GenAI), based on the recommendations of WAME and Elsevier. The main aim of this policy is to guarantee scientific integrity, prevent falsification, and maintain trust in published materials.

The journal supports the transparent and responsible use of GenAI tools in scientific research and publications.

The use of such tools is permitted provided that authors:

  • adhere to the principles of academic integrity;
  • ensure transparency in their use;
  • retain full responsibility for the results of the research.

Generative language models cannot be considered authors or co-authors, since they are unable to bear legal or ethical responsibility for the content of an article, approve the final version of the text, or respond to reviewers' comments. Permissible use of AI is limited to technical-editorial purposes, such as improving grammar, style, or the structure of the manuscript.

2. DISCLOSURE OF THE USE OF AI

If generative AI was used, authors are required to disclose its use in the manuscript, and each instance must be clearly declared. If a tool was used to edit the text, this should be noted in the “Acknowledgements” section. If AI was part of the methodology or was used to process data, this should be indicated in the “Materials and Methods” section and, where relevant, in the abstract. Authors must state the name of the tool, its version, the purpose of use, and the nature of the interaction. Concealing the use of AI is considered an ethical violation.

Responsibility for the reliability, accuracy and correctness of the information in the manuscript remains entirely with the authors, regardless of whether artificial intelligence was used.

It is prohibited to use AI to create scientific results, interpretations, statistical data, tables, graphs or any visual materials, as well as to invent bibliographic sources or imitate scientific analysis. The only exceptions may be specialized studies where AI tools are part of the experimental methodology; in such cases, a detailed description of the method should be included in the "Materials and Methods" section.

To ensure transparency, the journal recommends using GAIDeT (Generative AI Delegation Taxonomy), an approach that allows authors to clearly record the tasks delegated to generative AI while retaining responsibility for the results.

The declaration should contain:

  • indication of the tool used (name, version);
  • description of the tasks delegated to AI;
  • indication of the authors' responsibility for the final result.

The declaration is placed in the manuscript before the list of references.

The journal recommends using the GAIDeT Declaration Generator for standardized declaration generation:
https://panbibliotekar.github.io/gaidet-declaration/

Example of Declaration:
The authors declare the use of generative AI in the research and writing process. In accordance with the GAIDeT taxonomy (2025), the following tasks were delegated to GAI tools under full human supervision: literature search and systematisation; data analysis; translation; analysis of ethical risks. The GAI tool used: ChatGPT-5. Full responsibility for the final version of the manuscript rests with the authors. GAI tools are not listed as authors and bear no responsibility for the final results.

3. LIMITATIONS ON THE USE OF AI

Generative AI tools:

  • cannot be listed as co-authors;
  • cannot bear responsibility for the content of a publication;
  • cannot replace the scientific interpretation of results.

The use of AI does not exempt authors from responsibility for:

  • the reliability of the data;
  • the correctness of the conclusions;
  • compliance with ethical standards.

4. USE OF AI IN PEER REVIEW

The peer review of manuscripts must be carried out exclusively by human experts.

The use of generative AI to prepare reviews is not allowed, because:

  • it may violate confidentiality;
  • it reduces the level of expert responsibility;
  • it does not provide proper scientific assessment.

Reviewers are prohibited from uploading manuscripts to any generative systems or generating reviews using AI; only technical editing is permitted, provided that the confidentiality of the manuscript is maintained.

If this policy is violated by an author or reviewer, the editorial board may suspend consideration of the manuscript, request revisions, reject the article, notify the relevant institution, or, if the violation is identified after publication, retract the article.

5. PRINCIPLES OF RESPONSIBLE USE

The journal regards generative AI as a tool that supports research, not as its subject.

The use of AI should:

  • be transparent;
  • be controlled by a human;
  • not replace the author's contribution;
  • not create risks to the reliability of scientific results.