The chains of command
The modern chain of command for an effective military relies on officers being able to break their orders down and distribute them along the line. Indeed, sending generals onto the field to micromanage troops is very ineffective, and a good way to get your best officers killed.
It requires some level of creative thinking to break down a complex problem such as "let's capture this region" into thousands of "go there" and "shoot that" orders. Humans are capable of thinking beyond their programming, which can be a good or a bad thing in general, but is a great advantage when facing a new situation. Humans also have a greater capacity to evaluate their own actions, which again helps in new situations. And they have human emotions: they can tell when an order is illegal or unreasonable.
You might read the above as a list of human flaws, but it also means you don't need to hold their hands. Humans will take care of themselves.
While a current-technology AI would be perfectly able to execute orders, it's not at all clear it would be capable of thinking up those orders, or that it could tell what winning a war looks like. In short, even if your ships are largely automated, a human crew is still required to give them any purpose, in real time.
But that's not all.
The laws and customs of war
A human pressing the trigger can be held responsible. The human who ordered them to press the trigger can be held responsible. And the human who ordered them to the front, and so forth. With humans, we have a clear chain of responsibility.
A machine isn't accountable. You can't try an AI for war crimes. You can't court-martial an AI for disobeying orders. So what happens when something goes wrong? Who takes the blame? The programmer, the operator, the maintainer, the commander, all of the above, someone else? Who is responsible when an autonomous system fails is a question that remains unanswered, and it's unclear that it ever will be answered.
But one possible answer is that if responsibility can't be clearly assigned, then AI shouldn't make the decisions. The laws and customs of war were not designed with automated systems in mind, because the idea of war has always been man versus man. You can't incentivise a machine to follow these rules: the machine has no family, no self-consciousness, no fear of dying, of capture, of reprisal.
The laws and customs of war exist to protect everybody from the extremes of war. But, simply put, a machine can't be held in line. It has nothing to win, and nothing to lose. It has no reason not to follow its programming to any logical extreme, and you often can't predict what that extreme could be.
Single point of failure
One last point here.
An AI is one system. Your frigate, most likely, would be controlled by one system. This system would have a bunch of subsystems, but the control system would be solely in charge, and nothing would challenge it if it decided to turn against you, to commit genocide as a shortcut to victory, or to do anything else you don't want.
A human crew operates very differently. It takes more than one human to mutiny. Humans act as safeguards against each other: if one fails, others will pick up the slack or keep the failure from spreading.
You can't tell what a human is thinking any more than you can tell what an AI is thinking. But when it comes down to it, a commanding officer who fails can be stopped or replaced. When the command AI fails, nothing will stop it.