
Why does Algorithms Illuminated use Turing reductions instead of Karp for NP-hardness - Stack Overflow


The book Algorithms Illuminated Part 4 gives the following definition, which it uses to prove problems NP-hard:

A problem A reduces to another problem B if an algorithm that solves B can be easily translated into one that solves A. When discussing NP-hard problems, “easily translate” means that problem A can be solved using at most a polynomial (in the input size) number of invocations of a subroutine that solves problem B, along with a polynomial amount of additional work (outside of the subroutine calls).

This definition diverges from the definition of reductions used elsewhere, such as in CLRS or on Wikipedia: the author uses Turing reductions instead of Karp (many-one) reductions.

My question is: why? Does using Turing reductions instead keep the set of NP-hard problems exactly the same? Intuitively, I don't see why it wouldn't. Turing reductions match what intuitively should count as a valid reduction: a poly-time algorithm for B implies a poly-time algorithm for A whenever A ≤ B.
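To make the quoted definition concrete, here is a minimal sketch (not from the book) of a Turing reduction from the search version of SAT (find a satisfying assignment) to the decision version (is the formula satisfiable?). The sat_decider parameter is a hypothetical stand-in for "a subroutine that solves problem B", and a formula is assumed to be a list of clauses with literals encoded as signed integers.

    def find_assignment(clauses, num_vars, sat_decider):
        """Return a satisfying assignment as {var: bool}, or None if unsatisfiable."""
        if not sat_decider(clauses):              # one subroutine call
            return None
        assignment = {}
        for var in range(1, num_vars + 1):        # at most num_vars further calls
            if sat_decider(clauses + [[var]]):    # can var be fixed to True?
                assignment[var] = True
                clauses = clauses + [[var]]
            else:                                 # otherwise var = False must work
                assignment[var] = False
                clauses = clauses + [[-var]]
        return assignment

This makes at most num_vars + 1 calls to the decision subroutine plus a polynomial amount of extra work, which is exactly the book's notion of a reduction. A Karp reduction, by contrast, is a single transformation of the input whose yes/no answer must be returned unchanged.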

asked Mar 19 at 14:05 by MangoPizza

1 Answer

NP-completeness is defined in terms of many-one reductions, but for NP-hardness many-one reductions don't quite work: a many-one reduction can only take a decision problem to another decision problem, and we want NP-hardness to cover problems that aren't decision problems, such as optimization and search problems.
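For example, the optimization version of TSP ("what is the cost of a cheapest tour?") is not a decision problem, but it Turing-reduces to the decision version ("is there a tour of cost at most k?") by binary search on k. A minimal sketch, assuming non-negative integer edge weights; tsp_decider is a hypothetical subroutine for the decision problem:

    def min_tour_cost(weights, tsp_decider):
        """weights: dict mapping edges (i, j) to non-negative integer costs."""
        lo, hi = 0, sum(weights.values())     # any optimal tour cost lies in [lo, hi]
        if not tsp_decider(weights, hi):      # no tour exists at all
            return None
        while lo < hi:                        # O(log(total weight)) subroutine calls
            mid = (lo + hi) // 2
            if tsp_decider(weights, mid):
                hi = mid                      # a tour of cost <= mid exists
            else:
                lo = mid + 1                  # every tour costs more than mid
        return lo

The number of subroutine calls is logarithmic in the total weight, hence polynomial in the size of the binary-encoded input, so this fits the Turing-reduction definition, whereas a many-one reduction from an optimization problem to a decision problem is not even well-typed.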

This does mean losing some distinctions, such as the one between NP and co-NP. NP-complete and co-NP-complete are different notions (the classes are believed to be distinct), but if we use Turing reductions to define NP-hardness, there is no longer a distinction between NP-hard and "co-NP-hard": a problem is NP-hard under Turing reductions if and only if it is co-NP-hard under them.
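One way to see why the distinction collapses: under a Turing reduction the reducing algorithm may post-process the subroutine's answer, so the complement of any decision problem reduces to the problem itself with a single call. A Karp reduction cannot do this, since it must hand back the subroutine's yes/no answer unchanged. A minimal sketch, where decider is a placeholder for any decision subroutine:

    def complement_decider(decider):
        """Turn a decider for a problem into a decider for its complement."""
        # e.g. complement_decider(sat_decider) decides UNSAT with one call to SAT
        return lambda instance: not decider(instance)

Consequently, any problem that is NP-hard under Turing reductions is automatically co-NP-hard under them as well, and vice versa.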
