
Hardware might have fixed size n, but a general approach certainly doesn't!

I must have failed badly at making my point clear, because I've gotten that same criticism more than once.

Take as an example this paper [1], which explicitly states the delay of an n-bit Baugh-Wooley multiplier as O(log n).
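As a rough sketch of where an O(log n) delay figure typically comes from (this is my own toy model, not the paper's analysis): the n partial-product rows get compressed by a tree of 3:2 carry-save adders, so the number of adder levels grows like log n (base 3/2), followed by one final fast carry-propagate add.

    import math

    # Toy model (mine, not the paper's): count the 3:2 carry-save adder levels
    # a Wallace-style tree needs to compress n partial-product rows down to 2.
    def csa_levels(rows):
        levels = 0
        while rows > 2:
            rows = 2 * (rows // 3) + rows % 3   # each 3:2 stage turns 3 rows into 2
            levels += 1
        return levels

    for n in (8, 16, 32, 64, 128):
        # levels grow ~ log n (base 3/2); log2(n) printed for reference
        print(n, csa_levels(n), math.ceil(math.log2(n)))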

I appreciate that this might not be what people are discussing here - if so, excuse me for mixing things up. I answered a question that asked about the connection between the Turing machine algorithm and practical circuit design, and I tried as best I could to give a practical circuit design perspective.

[1] https://www.irjet.net/archives/V3/i2/IRJET-V3I259.pdf




I'm familiar with such results. TAOCP has quite a few I've fiddled with over the years. It has an O(n) multiply, for example.

Most (all?) algorithms can be made O(log n) by throwing hardware at them. The hardware is basically a lookup table: you can inspect each bit of a length-n problem in O(log n) time and output the answer.
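A minimal sketch of that lookup trick, in Python rather than gates (the names here and the assumption that the wide-OR ROM bit line costs roughly constant delay are mine): precompute every answer into a 2^n-entry table, and decode the n input bits with a balanced tree of 2-input ANDs of depth about log2 n.

    import math

    # Toy model of the "throw hardware at it" lookup trick (a sketch, not a
    # real circuit): precompute all 2**n answers, decode the n input bits with
    # a balanced tree of 2-input ANDs (depth ~ log2 n), and treat the wide-OR
    # ROM bit line that drives out the stored answer as constant delay.
    def build_rom(f, n_bits):
        return [f(x) for x in range(2 ** n_bits)]    # exponential hardware cost

    def decode_depth(n_bits):
        return max(1, math.ceil(math.log2(n_bits)))  # AND-tree depth per minterm

    def rom_read(rom, x):
        return rom[x]                                # functional stand-in for read-out

    square_rom = build_rom(lambda x: x * x, 8)       # 256 entries for an 8-bit input
    print(rom_read(square_rom, 13))                  # 169
    for bits in (8, 16, 32, 64):
        print(bits, decode_depth(bits), 2 ** bits)   # depth ~ log n, size 2**n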

Papers like this use problem structure to reduce the general exponential circuit growth to (usually) ~linear growth. But all the ones I've seen require unbounded circuit growth, as does this one, which has O(n log n) circuit size.
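Rough numbers for that growth, taking the O(n log n) gate count at face value with a constant factor of 1 (my figures, not the paper's):

    import math

    # Compare an O(n log n)-size structured multiplier against the 2**n entries
    # a general lookup table needs - both unbounded, but on very different scales.
    for n in (8, 16, 32, 64):
        structured = n * math.ceil(math.log2(n))
        lookup = 2 ** n
        print(f"n={n:3d}  ~n log n = {structured:5d}   2**n = {lookup}")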

By allowing hardware to grow at an unbounded rate, it's not really solving the same problem.


Yep, I appreciate that there is a time/space tradeoff; see this [1] from ~25 minutes ago. I agree it's not exactly the same thing. I just wanted my original comment not to be misrepresented.

[1] https://news.ycombinator.com/item?id=19495509


Sorry if I misrepresented you. Allowing unbounded resources makes all problems trivial, whether hardware or software, using the same lookup trick.

Thus there's nothing gained from hardware, so claiming there's a constraint on software that doesn't exist in hardware isn't really correct. Both have the same constraints for fixed-size machines, and both allow fast solutions for unbounded-size machines.

There are plenty of such "other machines" that solve NP-hard problems in P time - closed timelike curves, chaos machines, infinitely precise engineering, etc. - but such machines are generally not physically realizable, and most aren't even cost-effective for small n, so people use complexity to mean how a problem scales on physically realizable, cost-effective devices.

If every conversation about complexity had to go through this same digression to clarify that there are indeed other areas of thought about computation that have different conclusions, then no one could cover the original topic.

Currently the Church–Turing–Deutsch principle seems to be the best formulation of such issues; Deutsch added the physically-realizable part to remove from consideration things that are math but not physics, since such machine concepts aren't useful for doing actual computation.

All good stuff :)



