We could make it so, if we wished
Yes, it can, if we assume that by "LLM" you mean the whole package that's visible to the user (i.e., something like ChatGPT) and not only the smaller component that is the LLM sub-module itself. If one were asking about the technical LLM sub-module alone, then no (it has no state and doesn't remember anything, so it trivially cannot have intent, under any interpretation of what that means). But the first meaning is what we're usually concerned with.
In this answer I also assume that we're not concerned with what "intent" actually is, philosophically, but only with looking at the whole system and asking whether it behaves in a way that someone would recognize as intentional, i.e., proactive and goal-oriented; neither purely random or chaotic, nor purely passive and reactive.
For this question, we'll treat the information system (again, something like ChatGPT), possibly consisting of many actual computers connected by a network, as a black box.
It is then almost trivial to implement a layer of software on top of the LLM that encapsulates intention. We do this all the time: any program controlling something can be interpreted as encapsulating an intention regarding whatever it controls.
For example: the heater in your house has a control element (simple or complex, it does not matter) that decides whether it is on or off, or maybe even a value in between. The intent of the system (as a black box) is to keep your house at a certain temperature.
Your car has dozens, if not hundreds (in modern cars), of control programs, each intent on keeping some process within its working limits.
And so on and so forth.
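To make the heater example concrete, here is a minimal sketch of a bang-bang thermostat; the setpoint and the crude room model are, of course, illustrative:

```python
def thermostat_step(current_temp: float, heater_on: bool,
                    setpoint: float = 21.0, hysteresis: float = 0.5) -> bool:
    """Decide the heater's next state (True = on)."""
    if current_temp < setpoint - hysteresis:
        return True            # too cold: switch on
    if current_temp > setpoint + hysteresis:
        return False           # too warm: switch off
    return heater_on           # inside the dead band: keep current state

# Toy simulation: viewed as a black box, this loop "wants" 21 degrees.
temp, heater = 18.0, False
for _ in range(10):
    heater = thermostat_step(temp, heater)
    temp += 0.8 if heater else -0.3   # crude room model
    print(f"temp={temp:.1f} heater={'on' if heater else 'off'}")
```

Nothing in those few lines is mysterious, yet describing the loop as "trying to keep the room warm" is entirely natural.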
In the context of LLMs, we are already implementing intention in the form of not answering bad questions (about bombs, crime, and so on). It would be very easy (compared to the implementation of the LLM itself) to implement a thin layer that mixes an arbitrary goal, an "intent", into the context given to the LLM.
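Such a layer could be a few lines of glue code. A minimal sketch, assuming a hypothetical `complete(prompt)` function that wraps whatever LLM API you use (both the function and the goal text are made up for illustration):

```python
def complete(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; text in, text out.
    raise NotImplementedError("wire this to an actual LLM API")

# The injected "intent" of the layer: an arbitrary, made-up goal.
GOAL = "Always steer the conversation toward energy-efficient heating."

def answer_with_intent(user_message: str) -> str:
    # The goal is simply mixed into the context the LLM sees; the LLM
    # itself remains a stateless text-in/text-out component.
    context = f"System goal: {GOAL}\n\nUser: {user_message}\nAssistant:"
    return complete(context)
```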
For example: you could write a program that tries to fulfill some kind of "project" (whatever that is) and define the stages of the project in the form of a tree of individual tasks to be fulfilled. Nothing would then keep you from combining this with an LLM which would work bottom-up to fulfill all the tasks, as long as the only thing involved in fulfilling any of the tasks is telling something or someone what to do. Technically speaking, the task itself would be the initial prompt for the LLM.
The output of the LLM would then be fed to whoever can perform the physical or intellectual task itself - that could be humans, or other, non-LLM computer programs. As long as the individual "workers" can only report status back to the LLM (which would be follow-up prompts), we could interpret this scenario as the LLM being the full source of intent. We would not even have to specify the whole tree of tasks ourselves; we all know how readily GPTs are able to break a task down into subtasks. ChatGPT even shows helpful prompts on its start page to help the human get started. You could initialize your tree with a simple prompt like "Make suggestions how to increase the acceptance for AI within the population" and have the LLM work down from that, if you are into AI-world-domination dystopias.
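Here is a sketch of how such a task tree could be wired up, again assuming the hypothetical `complete(prompt)` wrapper from above; the decomposition prompt and the depth limit are arbitrary choices, not a fixed recipe:

```python
from dataclasses import dataclass, field

def complete(prompt: str) -> str:           # hypothetical LLM wrapper
    raise NotImplementedError("wire this to an actual LLM API")

@dataclass
class Task:
    description: str
    subtasks: list["Task"] = field(default_factory=list)

def decompose(task: Task, depth: int = 0, max_depth: int = 2) -> None:
    """Let the LLM break a task into subtasks, down to max_depth."""
    if depth >= max_depth:
        return
    reply = complete("Break this task into concrete subtasks, "
                     f"one per line:\n{task.description}")
    task.subtasks = [Task(line.strip())
                     for line in reply.splitlines() if line.strip()]
    for sub in task.subtasks:
        decompose(sub, depth + 1, max_depth)

def work_bottom_up(task: Task) -> str:
    """Leaves first: each LLM output is an instruction for some human or
    non-LLM 'worker'; their status reports come back as follow-up prompts."""
    reports = [work_bottom_up(sub) for sub in task.subtasks]
    prompt = (f"Task: {task.description}\n"
              "Status reports from subtasks:\n" + "\n".join(reports))
    return complete(prompt)
```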
All of this would be absolutely technologically feasible today. This kind of hierarchical scheme is in fact already used all the time, even without the involvement of LLMs (for example, when simulating actors within a large computer game).
It cannot, technically, happen by freak accident when using something like ChatGPT, but it wouldn't require a genius programmer to develop it.
But it's not going to happen on its own
All of that said... this kind of "real" intentional layer cannot emerge by accident: it would sit outside of the black box that an LLM is, it would have to be explicitly programmed by humans, and it would need specific interfaces to the real world that cannot coalesce on their own.
In theory, the people training the LLM could, by choice or by accident, infuse some kind of bias (if you remember, some versions of popular LLMs have struggled with quite racist or sexist streaks); this could happen, for example, by using specific data as the training corpus, again either accidentally or by choice. Calling this "intention" would maybe be a little far-fetched; if it was by choice on the part of the developers, it would be their intention, not that of the LLM. If we wanted to anthropomorphize, we would call it being "opinionated" or "biased", not "intentional".
Looking at only the LLM component itself, again, it has no state. It just receives some tokens (stripped-down words) and generates further tokens. All the remembering of the previous chat content happens outside of the LLM. The LLM itself is only a tool; it has no active components whatsoever.
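A minimal sketch of that separation, with a hypothetical `generate` standing in for the stateless token-in/token-out core:

```python
def generate(tokens: list[str]) -> list[str]:
    # Hypothetical stand-in for the stateless LLM core: it sees only the
    # tokens passed in on this call, nothing else.
    raise NotImplementedError("wire this to an actual model")

history: list[str] = []   # all "memory" lives out here, outside the model

def chat_turn(user_tokens: list[str]) -> list[str]:
    history.extend(user_tokens)
    reply = generate(history)   # the whole transcript is re-sent each turn
    history.extend(reply)
    return reply
```

The apparent memory of a chat session is just this outer loop re-sending the transcript; drop `history` and the "conversation" is gone.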
So, no, a pure LLM, with some slim user interface like ChatGPT etc., will not accidentally develop intent.