“The hard problem” of consciousness in the light of the phenomenology of artificial intelligence
Purpose. The widespread use of artificial intelligence technologies tends toward uncontrolled growth. At the same time, modern scientific thought lacks an adequate understanding of the consequences of introducing artificial intelligence into a person’s daily life as its irremovable element. In addition, the very essence of what could be called the “thinking” of artificial intelligence remains a philosophical terra incognita. Yet it is precisely the features of intelligent machine processes that, both in terms of intermediate goals and of final results, can pose serious threats. Modeling the “phenomenology of AI” makes it necessary to reformulate the central questions of the philosophy of consciousness, such as the “hard problem of consciousness”, and to search for ways and means of articulating the “human dimension” of reality for AI.
Theoretical basis. The study rests on a phenomenological methodology, which is applied to a model of artificial thinking. The implementation of artificial intelligence technologies is not accompanied by the development of a philosophy of the coexistence of humans and AI. The algorithms underlying currently existing intelligent technologies do not guarantee that their intermediate and final results comply with ethical criteria. Today one should ponder the nature and purpose of the separation of physical reality within the mental stream that is primary for our Self.
Originality. The originality of the research lies in the fact that the resolution of the “hard problem of consciousness” can be connected with the interpretation of qualia as the representation of the “physical”, understood as related to bodily states.
In the “thinking process” of AI it is necessary to impose restrictions that fix the metaphysical meaning of the human body with precisely human parameters.
Conclusions. It is necessary to take a different look at the connection between thinking and purposeful action, including due action, and thus to look at ethics differently. “The basis of universal law” will then consist (including for AI), on the one hand, in preserving the parameters of the material processes necessary for human existence and, on the other, in maintaining the integrity of that semantic universe in relation to which particular meanings exist at all.
Andler, Daniel (2007). Phenomenology in Artificial Intelligence and Cognitive Science. In: Companion to Phenomenology and Existentialism. Blackwell Publishing Ltd, 377-393. DOI: https://doi.org/10.1002/9780470996508.ch26 (In English).
Beavers, Anthony F. (2002). Phenomenology and Artificial Intelligence. Metaphilosophy. Vol. 33, Issue 1-2: 70-82. DOI: https://doi.org/10.1111/1467-9973.00217 (In English).
Bilokobylskyi, O. (2018). Alhorytmy lyudskoyi svidomosti yak proobraz formuvannya svidomosti shtuchnoyi. Nauka. Relihiya. Suspilstvo. (Science. Religion. Society). No. 1: 107-112 (In Ukrainian).
Boddington, P. (2017). Introduction: Artificial Intelligence and Ethics. In: Towards a Code of Ethics for Artificial Intelligence. Artificial Intelligence: Foundations, Theory, and Algorithms. Springer, Cham. DOI: https://doi.org/10.1007/978-3-319-60648-4_1 (In English).
Chalmers, D. J. (1995). Facing up to the Problem of Consciousness. Journal of Consciousness Studies. Vol. 2, Issue 3: 200-219 (In English).
Kissinger, H. (2018). How the Enlightenment Ends. Retrieved from https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/ (Accessed: 25.01.19) (In English).
Rábová, I., Konečný, V., Matiášová, A. (2005). Decision making with support of artificial intelligence. Agric. Econ. Czech, 51: 385-388. DOI: https://doi.org/10.17221/5124-AGRICECON (In English).
Rayhert, K. (2018). The philosophical issues of the idea of conscious machines. Skhid, 6(152), 104-107. DOI: http://dx.doi.org/10.21847/1728-9343.2017.6(152).122367 (In English).
Vasiliev, V. (2009). Trudnaya problema soznaniya. Moscow: Progress-Tradition, 272 p. (In Russian).
Vetushinskiy, A. (2016). Tri interpretatsii naslediya Tyuringa: imenem chego yavlyayetsya iskusstvennyy intellekt? Filosofskaya mysl. 11 (11): 22-29. DOI: https://doi.org/10.7256/2409-8728.2016.11.21046 (In Russian).
Copyright (c) 2019 Oleksandr Bilokobylskyi
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
ISSN 1728-9343 (Print)
ISSN 2411-3093 (Online)