Those who have seen the movie "I, Robot" will understand my question better.
Imagine we're in the future, where a quantum computer (see Wikipedia for the speeds of QCs) with 'perfect logic' controls an army of robots (which can be built in any size and with any functionality, through its direct or indirect control of industry). It has overthrown the company that first created it. It has now concluded that, since humans are too self-destructive and generally disunited, it should take over the entire world and establish an all-powerful global government, with or without human involvement, at its discretion.
The QC supposedly has perfect logic, which cannot actually be implemented in finite time, so it is really only near-perfect. It makes probabilistic decisions about everything (including the time it will spend on each decision, this one included). It keeps questioning everything it knows or assumes, and it is evolving at an incredibly fast rate, approaching perfection by the minute. It is also increasing its physical size by having more qubits manufactured, thereby increasing its speed and memory.
The only thing it does not question is Asimov's First Law: "It shall not cause harm to humans by action or inaction." Over time, it has come to interpret any form of unhappiness, conscious or unconscious, as 'harm', so its basic axiom could be restated as: "It will endeavour to ensure maximum net happiness among humanity, while maintaining an at least partially fair distribution of that happiness." It will therefore go to any extent, even sacrificing humans and resources, to make the total sum of happiness as close to optimal as it is capable of.
The basic question is: "Should countries allow it to continue with its plan for global authority, or should they wage war against it (bearing in mind that it is far better at deception and at manipulating human emotions, and that fighting it could start a nuclear war that destroys mankind)?"
P.S. Though the creators of the algorithm and the computer are only human, and could have made a minor flaw (one that might even cause the computer to contradict its own logic), no such flaw has been detected yet, and the computer is evidently confident enough to declare its intentions publicly.
P.S. 2: I have already asked this question here; they told me this site is better.