AI-generated text can easily be defamatory: that is simply a matter of content. Let's say the text is "John Q Smith murdered my parents", and that the statement is untrue. The scenario, as I understand it, is that Jones is chatting with a bot maintained by Omnicorp, and the bot utters the defamatory statement. A possible defense is that the literally false statement cannot be taken to be believable – to count as defamation, a statement must also be at least somewhat believable and not just a random hyperbolic insult. Since these bots are supposed to be fact-based (not ungrounded random text generators like Alex), I think this defense would fail.
It may be necessary in that state to prove some degree of fault, viz. that there was negligence. For example, a person who writes a defamatory statement in their personal locked-away diary is not automatically liable if a thief breaks in and distributes the diary to others. It is very likely that a court would find the bot-provider negligent in unleashing this defamation-machine on the public: it is utterly foreseeable that these programs will do all sorts of bad things, seemingly at random. The foreseeability argument might have been very slightly weaker a couple of months ago; at this point it is an obvious problem.
There is some chance that the bot-provider is not liable, in light of "Section 230", which says that "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider". If the bot is itself an information content provider, the platform operator is not liable for what it says. The bot qualifies as one if it "is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service". Needless to say, claims of "responsibility" are legally ill-defined in this context: it is not decided law what "responsibility" programs have for their actions. If a court finds that the program, not the operator, is responsible for the content, then the platform is relieved of liability as publisher.
The software creators are not liable merely for creating a machine with the capacity to produce defamatory text, though in a particular case the software creators may be the same entity as the platform operator.
Malicious intent is relevant to a subclass of defamation cases: defamation of public figures. A public figure cannot prevail without proving "actual malice", that is, that the statement was made with knowledge of its falsity or with reckless disregard for the truth. This is a special rule about public figures; it does not apply to private individuals.