By fixing a tiny machine class (say s = 2 or 3 states and k = 2 tape colors) and exhaustively asking “what’s the fastest way any machine of this size can compute this function?”, Wolfram is effectively mapping absolute speed limits within a constrained computational universe. That’s not solving P vs NP, but it is answering a nearby, clean question: given a bounded program-size budget, what time-complexity phases actually occur, and which functions are forced into each phase?
You can also read these experiments as a kind of “finite TTL” study. Bound the algorithmic information content of the machine (number of states / rule bits), and measure how many steps (hops) you inevitably have to spend to transform input into output. Some functions admit linear-time realizations under that budget; some only quadratic; some only exponential. Even in these toy universes, there are intrinsic routing costs that no cleverness can remove without increasing the size of the machine.
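To make that concrete, here is a minimal sketch (in Python, not anything taken from the post itself) of what “fastest realization under a size budget” means for the s = 2, k = 2 class: enumerate every transition table, run each machine from a blank tape with a step cutoff, and record, for each distinct output it produces, the fewest steps any machine in the class needs to reach it. The encodings, the step bound, and the blank-tape input are all illustrative assumptions.

    # Minimal sketch, not Wolfram's actual code: exhaust the s=2, k=2 machine class
    # and record the fastest halting realization of each output reachable from a
    # blank tape. MAX_STEPS, the HALT sentinel, and the tape encoding are assumptions.
    from collections import defaultdict
    from itertools import product

    STATES = [0, 1]      # s = 2 internal states
    SYMBOLS = [0, 1]     # k = 2 tape colors
    HALT = -1            # sentinel "halt" state
    MAX_STEPS = 500      # give up past this bound (stand-in for non-halting)

    def all_rules():
        """Yield every transition table: (state, read) -> (write, move, next_state)."""
        keys = list(product(STATES, SYMBOLS))
        actions = list(product(SYMBOLS, (-1, 1), STATES + [HALT]))
        for combo in product(actions, repeat=len(keys)):
            yield dict(zip(keys, combo))

    def run(rule, tape):
        """Run one machine on a sparse tape (dict pos -> symbol); return (steps, tape) or None."""
        tape, pos, state = dict(tape), 0, 0
        for step in range(1, MAX_STEPS + 1):
            write, move, nxt = rule[(state, tape.get(pos, 0))]
            tape[pos] = write
            pos += move
            if nxt == HALT:
                return step, tape
            state = nxt
        return None  # did not halt within the budget

    fastest = defaultdict(lambda: MAX_STEPS + 1)  # output -> fewest steps seen
    for rule in all_rules():
        result = run(rule, {})
        if result is None:
            continue
        steps, tape = result
        output = tuple(sorted(p for p, s in tape.items() if s))  # positions of 1s
        fastest[output] = min(fastest[output], steps)

    print(f"{len(fastest)} distinct outputs; fastest realizations span "
          f"{min(fastest.values())}-{max(fastest.values())} steps")

Replacing the blank tape with a family of inputs of growing length is what would let you read off the linear / quadratic / exponential phases described above; the sketch only shows the inner loop of “best time over all machines of a fixed size”.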
That feels very much in the ruliological spirit: instead of arguing in the abstract about “hardness” in an infinite model, you enumerate a small slice of the Ruliad and see which notions of difficulty survive contact with explicit programs. Whether or not this ever touches the global P vs NP question, it gives a detailed picture of how computational irreducibility and speed limits show up in actually existing rule spaces, not just in asymptotic folklore.
This is so tangentially related to the P vs NP problem that the title is basically pure clickbait. Remove every sentence relating to polynomial anything and the information content of the write-up doesn't change at all.
Yeah, I've spent way too much time reading this "guy's" posts here, his Academia profile, etc. A huge waste of time. AI has managed to amplify a crank 100x. This is only going to get worse.
I've seen for myself how much tunnel vision these models develop when collaborating scientifically or mathematically. When working in unfamiliar domains I have to do extensive grounding on my own. I'm curious to see how this changes over the next two years as the industry goes after scientific collaboration.
I understand the different domains quite well. No resolution of P≟NP should involve km/s, density, or "Spectral Gap Magnitude". This is the same rubbish ChatGPT always produces when you spend a week enticing it to produce a revolutionary paper on something, and I know – without checking – that your Lean files are full of `sorry`s.
You should look. It’s almost more entertaining than the README.md
theorem MilkyWay_Is_Collapsed : DeterminePhase MilkyWay = Phase.Collapsed := by
  -- ArkScalar MW ≈ 0.41 < 0.85
  -- We use native_decide or just admit the calculation since float/real is messy in proof.
  sorry -- Calculation verified by python script