This answer will look at the "bigger picture" context of your question. Computer science is a relatively young and somewhat open science, and it doesn't yet have great or even good answers to some basic and fundamental questions. The basic question "what is efficiently computed" is either accurately or roughly formalized in CS (depending on opinion) as the famous P vs NP problem (or the closely related P vs Exptime problem), and it's still open after more than four decades, having been introduced by Cook/Levin around 1970 and worked on intensely by the world's greatest computer scientists (many mathematicians also regard the problem as fundamental).
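To make the P vs NP gap concrete, here is a minimal sketch (the formula and variable names are hypothetical examples, not from the original answer) using SAT, the canonical NP-complete problem from the Cook/Levin work mentioned above: *checking* a proposed solution takes polynomial time, while the only known general way to *find* one is a search over exponentially many candidates.

```python
from itertools import product

# A CNF formula as a list of clauses; each clause lists signed variable
# indices (positive = the variable, negative = its negation).
# Hypothetical example: (x1 OR NOT x2) AND (x2 OR x3)
cnf = [[1, -2], [2, 3]]
n_vars = 3

def satisfies(assignment, cnf):
    """Verifying a candidate assignment is polynomial in the formula
    size -- this cheap check is what places SAT in NP."""
    return all(
        any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
        for clause in cnf
    )

def brute_force_sat(cnf, n_vars):
    """The obvious search tries all 2^n assignments -- exponential time.
    P vs NP asks whether this exponential search can always be avoided."""
    for bits in product([False, True], repeat=n_vars):
        if satisfies(bits, cnf):
            return bits
    return None

print(brute_force_sat(cnf, n_vars))  # first satisfying assignment found
```

The asymmetry in this sketch (fast `satisfies`, slow `brute_force_sat`) is exactly the "borderline" the answer refers to: nobody has proved that the exponential search is unavoidable.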

So in other words, even with a rough definition of "efficient" as P time, and one of the highest-valued scientific awards — namely a $1M prize attached to the problem for over a decade — computer science cannot even prove that some problems (close to this borderline) must or must not have efficient (P-time) algorithms. Therefore an exact definition of "efficient" more precise than P time is not necessary, or even possible, at this time. If/when the P vs NP conjecture is settled one way or the other, a stricter definition of "efficient" may, and presumably will, become possible.

Moreover, one might feel that the P-time definition of "efficient" is even a bit "sloppy", and most computer scientists would probably agree; almost all of them think the P vs NP conjecture is of the utmost importance to resolve, to the point that they might even regard this observation as trivial. In other words, so to speak, it's a work in progress / we're working on it. (In fact, mainstream computer scientists routinely, and only half-jokingly, refer to the gap and lack of definitive separations as "embarrassing".)

In fact there is even a closely related, significantly stronger conjecture than P vs NP, namely NP vs P/poly, which also cannot be resolved by computer science at this time. It conjectures that NP problems cannot be solved by any "P-sized" (polynomial-size) circuit families, i.e. not even by circuits that are unrestricted, rather than only those that can be generated by algorithms/Turing machines.

As for how hard P vs NP may be — there is some reason to think it may be at least as hard as the very old Riemann conjecture in mathematics (now over 1.5 centuries old), because both have carried the same $1M award for over a decade, and neither has been solved.

So in other words, to precisely define which algorithms are really "efficient" is actually one of the most important and hardest open problems in theoretical science and mathematics.

In fact the question of "what is efficiently computed" is even more subtle, because there is a variant of the Church-Turing thesis called the P-time CT thesis, and it is not known whether quantum computing violates it; Shor's breakthrough result of P-time quantum factoring is considered a dramatic twist in this research. In other words, the problem of what is efficiently computed plausibly descends all the way to deep physics principles, and relates to whether quantum computing can compute more efficiently than classical computation, which is itself a generally open problem in theoretical CS and advanced physics.
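The "twist" can be sketched concretely. Shor's algorithm reduces factoring to finding the multiplicative order of a number mod n; the quantum part finds that order in polynomial time, whereas the classical trial method below is exponential in the bit-length of n. This is a simplified illustration (the inputs 15 and 7 are a textbook example, not from the original answer):

```python
from math import gcd

def order(a, n):
    """Find the multiplicative order of a mod n by classical trial --
    worst-case exponential in the bit-length of n. Shor's insight is
    that a quantum computer can find this order in polynomial time."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_part(n, a):
    """The classical reduction in Shor's algorithm: an even order r with
    a^(r/2) != -1 (mod n) yields a nontrivial factor of n via gcd."""
    g = gcd(a, n)
    if g != 1:
        return g                  # lucky: a already shares a factor with n
    r = order(a, n)
    if r % 2 == 1 or pow(a, r // 2, n) == n - 1:
        return None               # this base fails; retry with another a
    return gcd(pow(a, r // 2, n) - 1, n)

print(shor_classical_part(15, 7))  # → 3, a nontrivial factor of 15
```

Everything here except `order` is efficient classically; replacing `order` with quantum period-finding is exactly where factoring drops into quantum P time, which is why it pressures the P-time CT thesis.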

So one can even add that P vs NP and the question of efficient computing may be of crucial or fundamental importance to — in addition to CS and mathematics — physics.

[1] P vs NP problem, Wikipedia

[2] Millennium Prize Problems, Wikipedia

[3] P/poly class, Wikipedia

[4] Shor's algorithm, Wikipedia

vzn