Artificial intelligence as a factor in the transformation of contemporary cognitive practices in the digital age

Authors

DOI:

https://doi.org/10.21847/2411-3093.2026.813

Keywords:

Artificial intelligence, epistemic transparency, epistemic responsibility, epistemic practices, knowledge, cognition, epistemic agency

Abstract

This article offers a philosophical analysis of the transformation of epistemic practices in the contemporary digital age driven by the development of artificial intelligence (AI). The study explains the need for philosophical inquiry "in advance," since AI systems are improving so rapidly that philosophical reflection on these processes often lags behind them. Moreover, the question of the role of AI in cognitive activity cannot be considered apart from the philosophical problem of its potential epistemic agency. The article analyzes the main contemporary approaches to the possibility of reducing thinking to computational functions, which brings artificial intelligence systems closer to "natural intelligence."
The theoretical basis of the study draws on a wide range of conceptual developments, from the ideas of A. Turing to the phenomenological realism of T. Nagel, the biological naturalism of J. Searle, and the functionalism of D. Dennett. In addition, through the lens of epistemic structural realism, the study distinguishes the problem of "AI self-consciousness" from AI's role in the production of new knowledge through the detection of stable correlational patterns that AI systems can identify independently, without the participation of a human researcher.
The article also examines models of human–AI interaction, both purely instrumental ones and those in which AI systems are delegated a leading role in the research process. Furthermore, the study highlights risks associated with the "epistemic opacity" of complex neural networks (the black-box problem), as well as the possibility of generating "chimeric entities" that may distort research results. At the same time, the article emphasizes that identifying correlations in large data sets is not sufficient for the formation of a full-fledged scientific theory: this requires a complex path from the intuitive formulation of a hypothesis to its corroboration by empirical data and its recognition by the scientific community. The study concludes that, despite their high computational power, AI systems cannot replace the human researcher in matters of goal-setting, creative inquiry, and the acceptance of epistemic responsibility for the results of knowledge production.


References

Basille, A. W. (2021). The problem of artificial qualia (Master’s thesis). Sorbonne Université. Retrieved from https://philarchive.org/archive/BASTPO-35

Brouwer, L. E. J. (1913). Intuitionism and formalism. Bulletin of the American Mathematical Society, 20, 81–96. Retrieved from http://thatmarcusfamily.org/philosophy/Course_Websites/Readings/Brouwer%20-%20Intuitionism%20and%20Formalism.pdf

Coeckelbergh, M. (2026). AI and epistemic agency: How AI influences belief revision and its normative implications. Social Epistemology, 40(1), 59–71. https://doi.org/10.1080/02691728.2025.2466164

Davis, L. H. (1982). What it is like to be an agent. Erkenntnis, 18, 195–213. https://doi.org/10.1007/BF00227934

Dennett, D. C. (1991). Real patterns. The Journal of Philosophy, 88(1), 27–51. https://doi.org/10.2307/2027085

Fodor, J. A. (1975). The language of thought (Vol. 5). Harvard University Press. Retrieved from https://ru.scribd.com/doc/159523534/FODOR-The-Language-of-Thought-1975

French, R. M. (2000). The Chinese Room: Just Say “No!”. In Proceedings of the Annual Meeting of the Cognitive Science Society, 22 (22). Retrieved from https://escholarship.org/uc/item/9062452s

Herzog, D. J., & Herzog, N. (2024). What is it like to be an AI bat? https://doi.org/10.32388/63ELTC.2

Kadykalo, A. (2014). Problemnist vyznachennia svidomosti ta shtuchnyi intelekt [The problematic nature of defining consciousness and artificial intelligence]. Visnyk Natsionalnoho universytetu "Lvivska politekhnika". Filosofski nauky, (780), 9–16. Retrieved from https://ena.lpnu.ua/items/3783ab2e-e004-469d-8519-a1f0a5f9172b (In Ukrainian)

Ladyman, J., & Ross, D. (2007). Every thing must go: Metaphysics naturalized. Oxford University Press.

McCarthy, J. (1979). Ascribing mental qualities to machines. Retrieved from http://www-formal.stanford.edu/jmc/ascribing.html

McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5, 115–133. https://doi.org/10.1007/BF02478259

Nagel, T. (1974). What Is It Like to Be a Bat? The Philosophical Review, 83(4), 435–450. https://doi.org/10.2307/2183914

Nieder, A. (2021). Neuroethology of number sense across the animal kingdom. Journal of Experimental Biology, 224(6). https://doi.org/10.1242/jeb.218289

Popovych, M. (1997). Ratsionalnist i vymiry liudskoho buttia [Rationality and the dimensions of human existence]. Kyiv: Sfera. (In Ukrainian)

Putnam, H. (1960). Minds and machines. In S. Hook (Ed.), Dimensions of mind: A symposium (pp. 138–164). New York University Press.

Putnam, H. (1981). Reason, truth and history (Vol. 3). Cambridge University Press.

Rudenko, O., Bugrov, M., & Savolainen, I. (2025). Transcendence and artificial intelligence as a problem of metaphysics. Bulletin of Humanities, (14). https://doi.org/10.5281/zenodo.17879302 (In Ukrainian)

Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–424. https://doi.org/10.1017/S0140525X00005756

Shapovalov, V., & Morozov, A. (2025). Epistemolohichna problema ens rationis u posttrydentskii skholastytsi i suchasna filosofiia shtuchnoho intelektu [The epistemological problem of ens rationis in post-Tridentine scholasticism and contemporary philosophy of artificial intelligence]. Sententiae, 44(1), 42–61. https://doi.org/10.31649/sent44.01.042 (In Ukrainian)

Shapiro, S. (1997). Philosophy of mathematics: Structure and ontology. Oxford University Press. Retrieved from https://altexploit.wordpress.com/wp-content/uploads/2017/10/stewart-shapiro-philosophy-of-mathematics_-structure-and-ontology-oxford-university-press-usa-1997.pdf

Shynkaruk, V. I. (Ed.). (2002). Filosofskyi entsyklopedychnyi slovnyk [Philosophical encyclopedic dictionary]. Kyiv: Abrys. (In Ukrainian)

Simbolon, L., Manugeren, M., & Barus, E. (2025). Does AI know things? An epistemological perspective on artificial intelligence. Journal of English Language and Education, 10(5), 1022–1028. https://doi.org/10.31004/jele.v10i5.1592

Sullivan, E. (2019). Understanding from machine learning models. The British Journal for the Philosophy of Science, 73(1), 109–133. https://doi.org/10.1093/bjps/axz035

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460. Retrieved from https://courses.cs.umbc.edu/471/papers/turing.pdf

Wiener, N. (1954). The human use of human beings: Cybernetics and society (2nd rev. ed.). Doubleday Anchor Books. Retrieved from https://monoskop.org/images/9/90/Wiener_Norbert_The_Human_Use_of_Human_Beings_1950.pdf

Yang, S., & Ma, R. (2025). Classifying epistemic relationships in human-AI interaction: An exploratory approach. Retrieved from https://arxiv.org/pdf/2508.03673


Published

2026-03-26

How to Cite

Kreze, O. (2026). Artificial intelligence as a factor in the transformation of contemporary cognitive practices in the digital age. Skhid, 8(1), 23–28. https://doi.org/10.21847/2411-3093.2026.813

Issue

Section

Philosophy of the Information Age: Value Dimensions and Digital Transformations