Current AI often "appears" smart, but Large Language Models (LLMs) are not very smart. I don't claim that products like ChatGPT are not impressive, but at best they "mimic" the output of what an intelligent (conscious) device would produce. They do not mimic the thought process.
What we see now is basically an illustration of the Chinese room argument [wiki], which was proposed as a counter-argument to the Turing test. This observation is not very new: in fact, old chat bots that were far less sophisticated, like ELIZA, could already trick humans into believing they were talking to an actual human. ELIZA did that with a small set of "rewrite rules" for English. So imagine someone says:
I am quite happy.
It can build a syntax tree [wiki] out of this sentence, swap "I" for "you", and swap the verb with the subject (so "you are" becomes "are you"). So without understanding any of it, it can generate a response:
Why are you quite happy?
This of course fails the moment you write a sentence that is grammatically sound but semantically makes no sense. In that case the rewrite rules will generate questions that are also grammatically sound but equally meaningless, whereas a normal person would notice that the input is nonsense.
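To make this concrete, here is a minimal sketch of such a rewrite rule in Python. The pattern and the pronoun table are my own simplification for illustration, not ELIZA's actual rule set:

```python
import re

# Tiny first-person to second-person swap table; ELIZA's real rule set
# was larger, but the principle is the same.
PRONOUN_SWAPS = {"i": "you", "am": "are", "my": "your", "me": "you"}

def eliza_like_response(sentence):
    """Turn a statement like 'I am quite happy.' into a question,
    purely by pattern matching, without any understanding."""
    match = re.match(r"i am (.+?)\.?$", sentence.strip(), re.IGNORECASE)
    if match:
        rest = match.group(1)
        # Swap first-person words for second-person ones in the remainder.
        swapped = " ".join(PRONOUN_SWAPS.get(w.lower(), w) for w in rest.split())
        return f"Why are you {swapped}?"
    return "Please tell me more."

print(eliza_like_response("I am quite happy."))
# -> Why are you quite happy?

# Grammatically fine but semantically nonsense; the rule fires anyway:
print(eliza_like_response("I am a prime number."))
# -> Why are you a prime number?
```

The second call shows exactly the failure mode described above: the rule has no notion of meaning, so nonsense in gives nonsense out.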
Now, LLMs are a more advanced generation of this. They are not built by humans writing rewrite rules either; they are the result of applying statistics to huge amounts of data. This gives the system some "understanding" of words: word2vec, for example, maps each word to a (possibly large) vector, which gives an indication of how close two words are to each other. The system can thus, to some extent, detect that something is nonsensical, simply because it can "calculate" how likely it is for a sentence to occur naturally, based on a statistical analysis of huge amounts of text. This is also what humans often do to check whether a phrase is grammatically sound: you search for two or more variants on Google and compare the number of results. The language models have turned that trick into a more statistically rigorous procedure.
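A toy illustration of both ideas, with hand-made vectors and a tiny "corpus" instead of a trained word2vec model:

```python
import math
from collections import Counter

# --- Idea 1: words as vectors -------------------------------------------
# In a real word2vec model these vectors are learned from huge corpora;
# here they are made up by hand just to show the distance idea.
toy_vectors = {
    "happy":   [0.90, 0.10, 0.00],
    "glad":    [0.85, 0.15, 0.05],
    "granite": [0.00, 0.10, 0.95],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine_similarity(toy_vectors["happy"], toy_vectors["glad"]))     # close to 1.0
print(cosine_similarity(toy_vectors["happy"], toy_vectors["granite"]))  # close to 0.0

# --- Idea 2: how likely is a sentence? -----------------------------------
# A crude bigram count model: the more often the word pairs of a sentence
# occur in the corpus, the more "natural" the sentence scores.
corpus = "i am quite happy . you are quite happy . i am glad .".split()
bigrams = Counter(zip(corpus, corpus[1:]))

def plausibility(sentence):
    words = sentence.lower().split()
    # +1 smoothing so unseen pairs are unlikely instead of impossible.
    return sum(math.log(bigrams[(a, b)] + 1) for a, b in zip(words, words[1:]))

print(plausibility("i am quite happy"))   # relatively high score
print(plausibility("happy quite am i"))   # same words, much lower score
```

Real models are vastly more sophisticated, but the principle is the same: distance between vectors and frequency of word sequences stand in for "meaning".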
But these models thus don't really "understand" what is being asked, nor what the answer means. They are machines that aim to produce the same result a human would, but with a different set of tools: statistical analysis that predicts, word by word, the sequence that makes up a response.
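Schematically, generating a response then boils down to a loop like the one below. Here predict_next_word is a placeholder for whatever statistical model is used (in an LLM, a neural network); it is not a real library call, and the fixed distribution it returns is purely illustrative:

```python
import random

def predict_next_word(context):
    """Placeholder for the statistical model: given the words so far,
    return a probability distribution over candidate next words.
    A real LLM computes this with a network trained on huge corpora."""
    return {"happy": 0.6, "glad": 0.3, "<end>": 0.1}

def generate_response(prompt, max_words=20):
    words = prompt.split()
    for _ in range(max_words):
        distribution = predict_next_word(words)
        # Sample the next word according to the predicted probabilities.
        next_word = random.choices(list(distribution),
                                   weights=list(distribution.values()))[0]
        if next_word == "<end>":
            break
        words.append(next_word)
    return " ".join(words)

print(generate_response("Why are you quite"))
```

Nowhere in this loop is there a step that "understands" the question; there is only a repeated prediction of what word is statistically likely to come next.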
I think what AI might have taught humans, in the philosophical sense, is that it does not take that much to mimic intelligent behavior. But to some extent we already knew that from the world of animals, where a lot of animals have no brain, or at most a small one, but can still show behavior that often looks planned and organized. Most insects, for example, have a very small brain (on the order of 200,000 neurons).
This is essentially the difference between weak AI and strong AI. Weak AI aims to mimic intelligent behavior, but not necessarily with an "intelligent" toolkit: statistical analysis alone might be sufficient. Strong AI, on the other hand, tries to come up with a toolkit that does not only mimic intelligence, but actually has it. Likely such strong AI devices, if they are eventually built, would also result in what one would call consciousness, as stated by the strong AI hypothesis [wiki]. But the current generation of AI is (almost) completely "weak AI": an algorithm uses a statistical model to produce content that may look as if it was produced by an intelligent device.
Probably the main problem is to design a "watertight" test for what consciousness exactly is. I think a classic test is whether a subject recognizes itself in a mirror. But that also has caveats: algorithms exist to detect yourself in a mirror. You change the state of the machine in such a way that the mirror will reflect it, and then you check whether the image in the mirror changed accordingly. If you repeat this an arbitrary number of times with the same outcome, the probability that the mirror is showing you grows. Multi-agent systems [wiki], for example, deal with systems of (individual) agents that, with certain algorithms, can organize themselves into a larger whole. Detecting oneself and building "relations" with the other agents is done there as well, but often with (simple) deterministic algorithms that don't require intelligence per se. Just like ants, for example, don't need a lot of high-level intelligence to organize into a colony: they have a number of hard-wired "algorithms" that let the colony run smoothly.
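As a rough sketch of that probing idea, the loop below repeatedly changes its own state and checks whether the reflection follows. The set_led and observe_mirror functions are hypothetical stand-ins for whatever actuators and sensors the machine actually has:

```python
import random

def self_recognition_probability(set_led, observe_mirror, trials=20):
    """Repeatedly toggle our own state and check whether the mirror image
    follows; the more often it matches, the more likely the mirror shows us."""
    matches = 0
    for _ in range(trials):
        state = random.choice([True, False])  # pick a random state change
        set_led(state)                        # change our own state
        if observe_mirror() == state:         # does the reflection follow?
            matches += 1
    return matches / trials                   # fraction of successful probes

# Toy demo: a "mirror" that simply echoes our state back.
_current = {"led": False}
def set_led(on):
    _current["led"] = on
def observe_mirror():
    return _current["led"]

print(self_recognition_probability(set_led, observe_mirror))  # -> 1.0
```

The point is that this is a perfectly mechanical procedure: passing such a mirror test requires no consciousness at all, which is exactly why the test is not watertight.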