
> consider getting mathematics written down properly, i.e. in a formal system

This was already tried, and failed (Hilbert). In the aftermath of the failure we learned that mathematics cannot be completely formalized. So this points to a fundamental problem with using AI to do math.



While it's true that mathematics cannot ever be completely formalized, per Gödel's incompleteness theorems, a huge amount of it obviously can, as is demonstrated by the fact that we can write it down in a reasonably rigorous way in a combination of conventional mathematical notation and plain English.

Nor is this in any way "a fundamental problem with using AI to do math".


There are two problems with this.

First, writing something down in English is very different from formalizing it. Natural language interacts with human brains in all kinds of complicated ways that we do not fully understand, so just because we can make a compelling-sounding argument in English doesn't mean that argument doesn't have holes somewhere.

Second, the incompleteness theorems apply only to given formal systems, not to formality in general. Given a system, you can produce a Gödel sentence for that system. But any Gödel sentence for a given system has a proof in some other, more powerful system. This is trivially true because you can always add the Gödel sentence for a system as an axiom.
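The "add the Gödel sentence as an axiom" move can be sketched in a proof assistant. A minimal, purely illustrative Lean 4 fragment (the names `G` and `G_holds` are made up, and `G` merely stands in for an actual Gödel sentence of some base system):

```lean
-- Hypothetical sketch: `G` stands in for a Gödel sentence of some base
-- system; the names here are illustrative only.
axiom G : Prop          -- the statement itself, unprovable in the base system
axiom G_holds : G       -- adjoined as a new axiom of the stronger system

-- In the extended system, G is now (trivially) provable.
theorem G_provable : G := G_holds
```

The extended system, of course, has its own Gödel sentence, so the process never terminates; it just relocates the incompleteness.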

This is a very common misconception about math even among mathematicians. Math produces only conditional truths, not absolute ones. All formal reasoning has to start with a set of axioms and deduction rules. Some sets of axioms and rules turn out to be more useful and interesting than others, but none of them are True in a Platonic sense, not even the Peano axioms. Natural numbers just happen to be a good model of certain physical phenomena (and less-good models of other physical phenomena). Irrational numbers and complex numbers and quaternions etc. etc. turn out to be good models of other physical phenomena, and other mathematical constructs turn out not to be good models of anything physical but rather just exhibit interesting and useful behavior in their own right (elliptic curves come to mind). None of these things are True. At best they are Useful or Interesting. But all of it is formalizable.
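As a concrete illustration of "all of it is formalizable": the Peano-style construction of the naturals is itself just a definition one can write down in a formal system. A minimal Lean 4 sketch (the name `PNat` is illustrative):

```lean
-- Minimal sketch: a Peano-style definition of the natural numbers.
-- Everything downstream (addition, induction, ...) is derived inside the
-- system, not assumed True in any Platonic sense.
inductive PNat where
  | zero : PNat
  | succ : PNat → PNat

-- Addition defined by recursion on the second argument.
def PNat.add : PNat → PNat → PNat
  | m, .zero   => m
  | m, .succ n => .succ (PNat.add m n)
```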


The other end of the Church-Turing intuition is that if we couldn't possibly do it, then the machines can't do it either. What they're doing is the same: it's not magically less powerful, and it's not magically more powerful either.

You seem to be imagining that Gödel is just a small problem you can carve off and ignore, but it applies to everything powerful enough to do arithmetic.

Of course we can have the AI muddle along with intuition, but that's not actually an improvement, our mathematicians can already use intuition and they've got a lot more experience.


It's a fundamental problem with using AI to do "intuitive" math, but not a fundamental problem with using AI to do formal math.

As you have stated, a lot of mathematics can be formalized. For me the problem for AI with math is going to be the generative part. Validating a proof, or trying to come up with a proof for a given statement, may be within reach. Coming up with meaningful/interesting statements to prove is a completely different story.


Reading these threads makes me feel oddly familiar. This is the same discussion we have with Rust:

"You can't write everything in safe rust, therefore rust is worthless." vs "You must throw away all C code, or the whole industry will collapse"

except we have thousands of years of "unsafe" math, only a few decades of C.
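To make the analogy concrete, here is a minimal Rust sketch (the function `first_byte` is hypothetical): safe code wrapping an `unsafe` core behind a checked invariant, much as a formal proof can wrap a previously informal step.

```rust
// Sketch of the analogy: a safe interface wrapping an unsafe core,
// much as a formal argument can wrap a previously hand-waved step.
fn first_byte(s: &str) -> Option<u8> {
    if s.is_empty() {
        None
    } else {
        // SAFETY: we just checked that `s` is non-empty, so index 0
        // is in bounds for the underlying byte slice.
        Some(unsafe { *s.as_bytes().get_unchecked(0) })
    }
}
```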


> While it's true that mathematics cannot ever be completely formalized, per Gödel's incompleteness theorems

That's a misleading way of expressing that.

Math can be formalized as completely as we want, if we concede that some true statements will exist without a possible proof.


Hilbert's program assumed math to be completely decidable, i.e. that for every possible statement, a decision procedure could tell whether it was true or false. We know now that no such procedure can possibly exist. But merely writing down all known math in a formal system is completely doable. (Of course, formalization systems can be used to state any additional axioms that may be required of any given argument or development, so the original concerns about incompleteness are just not very relevant.)
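As a tiny illustration of what "writing known math down in a formal system" looks like in practice, here are two fully formal, machine-checked statements in Lean 4 (assuming only the standard library's `Nat.add_comm`):

```lean
-- Ordinary mathematics, fully formal: each proof term is machine-checked.
theorem two_add_two : 2 + 2 = 4 := rfl

-- A general fact, proved by appeal to a library lemma.
theorem add_comm' (m n : Nat) : m + n = n + m := Nat.add_comm m n
```

Incompleteness says no single system decides every statement; it says nothing against recording proofs we already have in this style.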


TFA isn't about AI; it's about taking the same arguments mathematicians are already writing down semi-formally with a hand-wavy argument that it could be made completely formal, and instead/also writing them down formally.

In cases where the Hilbert machine applies, the mathematician writing down the semi-formal argument already has to state the new axioms, justify them, and reason about how they apply, and the proposal would just take that "reason about how they apply" step and write it in a computer-verifiable way.


> So this points to a fundamental problem with using AI to do math.

No it doesn't. An LLM, like a human, has no problem stating two inconsistent propositions, which is why both AI and humans can reason about all of mathematics. Gödel's theorems only say that a single formal system cannot be simultaneously complete and consistent.


> This was already tried, and failed (Hilbert)

Who likely underestimated how hard it would be.

And who didn't have a 2024 computer on hand.


The reason Hilbert failed is that it's mathematically impossible (Gödel's incompleteness theorems).


> because it’s mathematically impossible

I guess the Lean devs can all pack and go home then. Also: how could they possibly miss Gödel's theorem, how careless of them.


Look, you’re taking my response out of context. I’m just stating that the reason Hilbert failed is not that he didn’t have a computer. Lean devs aren’t trying to do what Hilbert was.



