Baffling ABC maths proof now has impenetrable 300-page ‘summary’ (newscientist.com)
124 points by ColinWright on Sept 10, 2017 | hide | past | favorite | 92 comments


Two paragraphs that highlight the potential source of the problem in the sublinked article:

> Despite mathematics being a universal language, culture clash could be getting in the way, says Kim. “In Japan people are pretty used to long, technical discussions by the lecturer that require a lot of concentration,” he says. “In America or England we expect much more interaction, pointed questions coming from the audience, at least some level of heated debate.”

> There is a growing consensus that Mochizuki has over-engineered his work, contributing to the confusion. “Most of the large theories that he builds are not essential. He could have written things in a much more streamlined way,” says Voloch.

This is a problem in a lot of technical fields but also in writing in general. It's always easier to write 1000 words to say something than 100 to say it well.


> It's always easier to write 1000 words to say something than 100 to say it well.

As (probably) Pascal said: "If I had more time, I would have written a shorter letter".

https://quoteinvestigator.com/2012/04/28/shorter-letter/


Or writing a lot of code that does something, compared to writing less code that does something well.


Each takes time and concentration, which is always a finite resource. But it's inexcusable when what you produce stumps even the experts who were willing to dedicate a significant amount of time.


My verbosity is a shortcoming. Were I able, I'd be more precise and brevity would be the result. This is actually something I work on improving.


Not a dig at you, but I tend to use $5 words when a normal word would suffice. Even though the obscure words usually express my thoughts more precisely than common words do, they do a worse job of conveying understanding, since people often do not know the word and infer an incorrect meaning from context.


I try to communicate at a level where the intended audience can easily understand me. This is also partially out of laziness. I don't like having to repeat myself.


I'm not sure if this comment thread is a parody of itself.


That was at least part of my intent.


Me either!


"My verbosity is a shortcoming. This is something I work on improving."


That may be too brief. It doesn't quite convey the entirety of my intent. It'll do, however.


"I'm bad at speaking simply, but I'm trying to do better."


That's not too bad.

"I am working at being more concise."

Maybe, "I am working on brevity."

Related, I am working on this as I've been asked to write a book.


The Strunk and White admonition is "Omit needless words," which I always considered an excellent lead by example...


Thanks. I will take that with me as I continue to work on my book. This isn't really the thread for it, so I'll stop here.


> Related, I am working on this as I've been asked to write a book.

If this topic interests you I recommend this book on crafting sentences written by the mother of the famous "Edward Tufte" (one of the most influential designers in the history of software):

https://www.amazon.com/Artful-Sentences-Syntax-as-Style/dp/0...

I've read a number of writing books and I find the key to good writing comes down to writing good sentences. It's also a critical part of being succinct, second only to trimming the non-essential from the higher-level ideas.

This one highlights many great sentences written by famous authors and breaks down their syntax.


> I should be more concise.


The article didn't explain the abc conjecture in much detail ("The original proof is of the long-standing ABC conjecture that explores the deep nature of numbers, centred on the simple equation a + b = c."). That's unfortunate, because even though the proof is inscrutable, the conjecture itself isn't very complicated:

If three numbers a, b, and c are coprime and two numbers a and b have large prime factors, then their sum c generally does not.

To make that a little more formal: the sum c rarely exceeds the radical (product of distinct prime factors) of their product abc raised to some exponent 1+eps:

    rad(abc)^(1+eps) < c
... for some eps > 0. That eps is important! For eps = 0 there are infinitely many triples a, b, c with rad(abc) < c (and there are ways to construct those), but the conjecture states that for any given eps > 0, only finitely many triples satisfy the inequality above.

To find maximal counterexamples (which, because mathematics consistently uses intuitive terminology, are said to be "high-quality") you typically look for numbers that are smooth (only small prime factors) but not powersmooth (i.e. containing large prime powers). Typically you'll also fix a to be really small, to limit the search space somewhat. [MM14] describes other methods for finding triples.

Here's a simple example of a high-quality triple:

    a = 1
    b = 2*3^7 = 4374
    c = 5^4*7 = 4375
Because all factors are unique, rad(abc) = rad(a) rad(b) rad(c).

    c     > (rad(abc))^(1+eps)
    c     > (rad(a) * rad(b)     * rad(c))^(1+eps)
    5^4*7 > (rad(1) * rad(2*3^7) * rad(5^4*7))^(1+eps)
    5^4*7 > (1      * 2 * 3      * 5 * 7)^(1+eps)
    4375  > (210)^(1+eps)
This is a great example, because the exponent (sometimes called q) is about 1.57, which is extraordinarily high. In the first billion c's, there are only 34 with q > 1.4. This is the fifth best such triple we know of. [triples]
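To make the arithmetic above concrete, here's a minimal Python sketch (the helper names rad and quality are my own, not from any library) that computes the radical by trial division and the quality q of a triple:

```python
from math import gcd, log

def rad(n):
    """Product of the distinct prime factors of n, by trial division."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        r *= n  # whatever remains is a prime factor
    return r

def quality(a, b):
    """Quality q of the abc triple (a, b, a+b), i.e. the q with c = rad(abc)^q."""
    c = a + b
    assert gcd(a, b) == 1, "a and b must be coprime"
    return log(c) / log(rad(a * b * c))
```

For the triple above, quality(1, 4374) comes out near 1.57, while a mundane triple like quality(2, 3) is only around 0.47.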

Consider a trivial case where all three numbers are prime; a = 2, b = 3, c = 5. Primes are their own radical, so:

    5 > (2 + 3)^1+eps
This is the edge case.

[triples]: https://www.math.leidenuniv.nl/~desmit/abc/index.php?set=2

[MM14]: https://arxiv.org/pdf/1409.2974.pdf


I can no longer edit this comment, but there's a typo at the bottom:

    5 > (2 + 3)^1+eps
... should be

    5 > (2 * 3 * 5)^1+eps
Which is a good example of the "common case", i.e. not a triple of particularly high quality (this is q=0.4-something).


Thanks for the edit, I was wondering how that would make sense otherwise. More generally, thanks for clearly communicating what's really going on! Math can only be extremely beautiful if it is actually understood.


> Math can only be extremely beautiful if it is actually understood.

This is why I've found relearning math very rewarding. Even if I haven't found as many applications in my day-to-day programming life as I hoped (other than maybe when writing Haskell-esque languages or when I'm fortunate to get to use pure FP concepts).


Here's a link to the best quick and elementary (yet detailed) explanation of what ABC conjecture is all about: https://www.youtube.com/watch?v=XYisYYhKKYA.


Some bitter conflict is referenced in footnote 1 of the 300 page summary.

""" The author hears that a mathematician (I. F.), who pretends to understand inter-universal Teichm¨uller theory, suggests in a literature that the author began to study inter-universal Teichm¨uller theory “by his encouragement”. But, this differs from the fact that the author began it by his own will. The same person, in other context as well, modified the author’s email with quotation symbol “>” and fabricated an email, seemingly with ill-intention, as though the author had written it. The author would like to record these facts here for avoiding misunderstandings or misdirections, arising from these kinds of cheats, of the comtemporary and future people. """

I wonder if "I.F." is referring to the same "I.F." quoted in the original article.

[1] http://www.kurims.kyoto-u.ac.jp/~gokun/DOCUMENTS/abc_ver6.pd...

[2] https://www.maths.nottingham.ac.uk/personal/ibf/activity.htm...


I've personally modified thousands of e-mails with the ">" symbol; it's called replying. That, per se, has nothing to do with fabricating an e-mail; why mention it?

Of course > symbols are fairly likely to be used in a forged e-mail, possibly next to fake quoted text; but that then is not simply an alteration of genuine text by the addition of those symbols.


>I've personally modified thousands of e-mails with the ">" symbol; it's called replying. That, per se, has nothing to do with fabricating an e-mail; why mention it?

I read it as "I.F." making it look like something the author wrote was the author quoting "I.F.", or vice-versa.

For example,

    From: John Doe <johndoe@gmail.com>
    The meaning of life is 42.
becomes

    From: John Doe <johndoe@gmail.com>
    > The meaning of life is 42.
    Thanks for the help!


For those unwilling to click through, I.F. is Ivan Fesenko. Among other things, he wrote this article about Mochizuki's work:

http://inference-review.com/article/fukugen


Yes, I.F. is the one and the same.


The opening sentence of a linked article sums it up quite well: "If nobody understands a mathematical proof, does it count?"

The abc conjecture may well have been solved, but it only counts once a sufficient number of people understand and accept the proof as a proof.


Formalize it with Isabelle / Coq / Lean. Then it counts.
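For a sense of what that looks like in practice, here's a trivial Lean 4 illustration (nothing to do with abc): a theorem only "counts" once the kernel type-checks the proof term.

```lean
-- A theorem whose proof the kernel checks mechanically.
-- `Nat.add_comm` is a lemma from Lean's core library.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```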


Math is more about figuring out why things are true than what is true.

Mathematicians want to understand Mochizuki. They couldn't care less if hn thinks it counts.


I don't know why this got flagged.

At the point that we no longer understand the relationship between the formal proof witnesses (and really, the class of possible witnesses) and the axioms we choose, we can no longer do mathematics, because we can no longer meaningfully explore axioms -- our ability to make guided changes is destroyed by our inability to understand their effect.

It's important for the community to understand something of why a thing is true, not just that it's true, because that's what drives the development of mathematics forward. (And indeed, particularly so in the ABC conjecture, which sits at a node between the nature of multiplication and addition, which don't usually have much to do with each other.)

I actually wonder if US (and perhaps other) math education is harmful here: the focus on rote learning and on merely knowing that a thing is true (in order to mechanistically apply it) has conditioned people not to understand the hesitance over proofs that humans don't understand -- for most of those people, the proofs were never understood anyway.


That is an excellent point. The future is mechanical proofs. Which will bring tools for proof search, proof refactoring, proof minimization, proof navigation, theorem generation. We'll be able to tackle _much_ harder problems, while still being able to get the gist of it.

Sometimes I'm saddened that this future may come slower than we'd like due to imperfect funding structures. But I've grown a lot of patience over the years :)


I'm actually very pro-formalization and mechanical verification -- both for mathematics and computer science. $HOBBYPROJECT involves automated theorem proving, while I'm trying to convince $DAYJOB to adopt some formal methods.

I was just pointing out that the person got flagged for commenting that "witness and dump" isn't actually very useful for mathematics as a field, except as a signal that we should investigate a topic further. But in the case of the ABC conjecture, we already have plenty of incentive to investigate.

I think mathematics and science have a lot to learn from computer science in terms of managing large models, proofs, etc -- and that we'll get a lot of automatic tools. That will all be really great.

But there are proofs that are basically just brute-forcing a solution for which we have no higher-level understanding, and those don't really add much by way of knowledge to mathematics. At the point that those are all we can generate for "big" problems, we may be in trouble.


Another excellent point. Right now it's a "winner takes all" competition. It matters to prove a result, and much less to provide an "elegant" proof. I can only hope for a future where we measure the algorithmic entropy of a proof (roughly, its length in bits) [0], and results like "ABC theorem proof using half the bits of the best known proof" become notable.

[0] https://en.wikipedia.org/wiki/Kolmogorov_complexity


I'm interested in your $HOBBYPROJECT. I'm hoping to develop some software similar to edukera.com, ie using proof assistants as educational tools for mathematics. Would love to swap some cool links and references. Thanks!


I'm actually in the process of (poorly) implementing my own proof engine to try and gain a deeper appreciation for the notions behind type theory (and how e.g. Coq operates):

That a function is the same thing as a proof, with the theorem it proves being the type it constructs and its assumptions being the types it takes in. (Functions are proofs (which in this case, are things that take witnesses of their assumptions and produce witnesses of their theorem), values are witnesses that something is true, types are theorems, etc.)
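A minimal illustration of that correspondence, sketched in Lean (my own toy example, not from any particular textbook): modus ponens is literally function application.

```lean
-- Propositions are types; a proof of P → Q is a function.
-- Applying it to a witness of P yields a witness of Q.
theorem modus_ponens (P Q : Prop) (hp : P) (hpq : P → Q) : Q :=
  hpq hp
```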

I have an interest in topology, and want to understand HoTT, but my intuition around type theory wasn't up to par -- so I'm trying to tackle it in a constrained setting (ie, not proving theorems about mathematics as such, but a narrower problem space). Figured there was nothing to do besides get into the messy bits of it.

Can you tell me a bit more about what you're trying to do?


Hah, I'm in the same but opposite situation: I, too, am working on a Coq-like proof assistant, but I understand the type theory far more than the topology needed for HoTT.

Do you have any suggestions for simple introductions to HoTT, especially for someone without the topology background?


My background may be showing, but it may be hard to appreciate HoTT without understanding the role of homotopies in topology. My (limited) understanding of the material is that it's an attempt to introduce a homotopy structure on the type system, and then use that to talk about logical equality (mumblemumble) being the same as homotopy equivalence.

It's then using that equivalence structure between the proofs to reason about constructing proofs, as you can reason about the constructions that are possible out of classes of proofs. And that's basically where I get lost, because I don't quite know enough type theory to understand the structure they're trying to build, so I can't quite get the specific motivations. (The obvious high-level one is better formal reasoning.)

I haven't been following HoTT super closely for a year or two, getting sidetracked into the background, but last I checked there wasn't a ton of simple material on it -- it was sort of read the book, read the Coq proofs/framework, and figure it out. (Though, this easily could have changed.)


Fully agree. But person probably got flagged for commenting that what is important to hn may not be what's important to mathematicians.


> The future is mechanical proofs.

That almost sounds like "the future is mechanically composed novels" (or music). Understanding why something is true is just as important as knowing that it is. Mechanical proofs will be impossible for humans to understand, so the value of such proofs will be rather limited (in that people will still continue searching for a "real" proof).


Mechanical proofs and understanding are not mutually exclusive, that's the whole point! Just like mechanical processes [aka software] are still understandable. A stronger conjecture is that mechanical proofs will enable _better_ understanding, as tools and metrics are developed for tackling "understanding".


Have you ever read a mechanical proof or studied mathematics at the graduate level? It's incredibly difficult to prove even basic things in the current mechanical proof systems, and reading the proofs is even more painful. Maybe there will come a time when mechanical proofs are helpful, but it's not a particularly active area of research, and the amount of work to be done is monumental in order to get to the point you're proposing.


I stole a small part of http://compcert.inria.fr for a hobby project. While this is not "graduate level mathematics", it is "graduate level computer science". I could have never done it if I had to memorize half a book of definitions just to warm up.


> The future is mechanical proofs.

For some value of "future" - sure, maybe. But the vast majority of theorem solutions do not lend themselves to mechanical proving, and it takes great effort to do it at all.

I don't really agree with your thesis here. I don't see how we're going to get to a future where we're tackling much harder proofs, because the hardness is already what prevents them from being so easily mechanized and checked in an automated manner. We have no problem coming up with new theorems, either - half the job of a pure math researcher is coming up with interesting questions that are too hard to solve immediately but not quite so hard that they're inaccessible.

The only way to make the proofs "harder" is to make theorems that are even further removed from our current mathematical capabilities. Otherwise you're stepping into undecidable territory. Simply put - solving open problems is a major activity in mathematics because it develops new mathematics, not because the actual end result is useful in and of itself. If I prove to you that Pi is normal, the fundamental contribution is (hopefully) a method that is partially or fully generalizable to other domains. No one really cares if Pi itself is normal, and most already expect it to be. To mechanize that process (if it's even possible at all) requires that the mathematics for solving it already exists, which means that the problem is most likely either 1) uninteresting, 2) overlooked.


People have a lot of very valid complaints about common core, but at its core (hehe) it does seem to attempt to start to remedy your complaint about US math education at least at the high school level.

Hopefully that can propagate upward.


I don't see these approaches as necessarily opposed. It seems like in principle, formalizing a proof and verifying that it doesn't have any errors could be a preliminary step towards understanding it? (Though in practice, perhaps this is overly roundabout.)


Also, once enough people figure it out, one (or some) of them can make better explanations.


Indeed. We now actually have some math results of interest for which we have only formal proofs and no human proofs. https://arxiv.org/abs/1509.05468 is an example.


We are very far from the point where that is feasible for many new results in math.


>"If nobody understands a mathematical proof, does it count?"

Just wait until general AI really kicks off, then all of new math will be like that. It's not that humans are bad at logical thinking; our weakness is memory. That won't be the case for an artificial agent with instantaneous perfect recall of everything it has ever seen.


lol. I think the resurgence of interest in constructive mathematics via Homotopy Type Theory will lead to better proofs (since all proofs in HoTT are like programs and can be computationally verified in a straightforward way).


Even an AI will have various levels of cache. Some memories will be register-level instant, some will be thousands of miles and quite a few servers away, and others in between.


That's a good point. Let's throw "instantaneous" out; the ability to store and perfectly recall data is still a critical advantage for any thinking entity.


>instantaneous

As the memory grows to exceed RAM and then disk space, that will be less true.


Mathematics is a language. You don't really need to memorize it, you can read it. Unfortunately, most people aren't exposed to anything higher than arithmetic.


yeah, well, have fun reading e.g. the Stacks project [1], roughly 4000 pages, without remembering the necessary steps leading up to a corollary.

[1] https://stacks.math.columbia.edu/browse - abstract algebra as far as I can tell


You're sure as hell not going to memorize it.


I remember having to memorize the multiplication table. And that was just the beginning...


I'm not a mathematician, so I don't know the significance of the ABC conjecture, but for those that are laymen like me, here's a quick introduction to the ABC conjecture:

https://m.youtube.com/watch?v=RkBl7WKzzRw


Not sure why you're being downvoted. The video actually explains the ABC conjecture, which the article really doesn't.

Also, the second-highest comment in this thread right now is one explaining this conjecture.


I don't understand it either, but I've learnt to mostly ignore downvotes.


It's because of math hipsters who disapprove of such mainstream sources as Numberphile.


This one is a bit more informative: https://www.youtube.com/watch?v=XYisYYhKKYA.


I know there has been a lot of work in automated theorem proving.

I'm curious if anyone here is able to describe the state of the art in that. Is it really just a toy for basic things right now, or how realistic is it that we'd ever get to a point where the author of a paper like this could use a formal language to show that their proof is indeed valid and complete?


It is realistic in the sense that it has already been done! But it's still laborious in the extreme.

The Kepler conjecture was stated in 1611 and had remained unsolved ever since. Thomas Hales started a project to attack it in 1992. After six years of work, he announced the proof in 1998, in the form of 250 pages of argument and 3 gigabytes of computer calculation. He submitted it to Annals of Mathematics, one of the most prestigious math journals, for review. The reviewers and the author tried valiantly for five years, gave up, and the journal published it in 2003 with a warning that while the reviewers were 99% certain, the proof couldn't be completely verified.

Soon after this simultaneous rejection and acceptance, Thomas Hales announced a plan to formalize his proof to remove any uncertainty. It was enthusiastically received by the automated theorem proving community. For a while Hales "shopped" for a prover tool to use, and the leaders of basically every significant prover tried to "sell" theirs to him. He decided on HOL Light, wrote a detailed plan for the formalization, and estimated it would take 20 years. He actually carried out this plan, announced the completion in 2014, wrote a paper on the formalization, and submitted it, with 21 collaborators, in 2015. The formalization paper was published in 2017.

So there's that. The formal proof of Kepler conjecture is at the moment the most significant corpus of formalized mathematics in existence.


> After six years of work, he announced the proof in 1998

then

> plan to formalize his proof ... estimated it would take 20 years

so is it just me, or did the "work" of the normal proof take only 1/5th of the time of the automated/formalized proof?! that seems counter-productive imho...


"Automated proof" means that the verification is automated. Actually automating the process of finding proofs is still mostly an open problem.

Formal proof software will help you on small stuff, but you will still do most of the work, and you have to go much more in details, so it takes much more time.


No surprise there; it's why the software development industry is reluctant to adopt such techniques; nobody wants to write 5 times the amount of code, or take 5 times as long to get to the market.


> so is it just me, or did the "work" of the normal proof take only 1/5th of the time of the automated/formalized proof?! that seems counter-productive imho...

It took 5 times as long to make it not a hunch but a real demonstration. Whether that's counterproductive or not remains a matter of opinion.


It looks counterproductive when it's not demonstrating informal work to be false.

> make it not a hunch

That's not what automation is doing; it doesn't turn conjectures into proofs. It finds mistakes in proofs; those proofs are not "hunches" but rigorous efforts.

Something which finds faults demonstrates its value mainly whenever it finds a fault. (Or at least that is a very easy perception to slide into.)


And a fault was found! The original proof, as published, is incorrect. The error and the fix was published in "A revision of the proof of the Kepler conjecture" (2009). https://arxiv.org/abs/0902.0350

Note that we now actually know there is no more fault, because formalization is complete.


21 collaborators. Depending on how much time they each spent on it, he could have easily gone over budget.

Also, as it was all unknown stuff at the time, any estimation would have been made without prior experience.


How does that compare to the time it takes to write code vs. test code? Maybe not quite that extreme, but writing test code is time-consuming, and it is essentially a form of theorem proving.


How is test code related to theorem proving?


I am not an expert, but I recently took a course on formal systems, which included theorem proving. It was more focused on proving properties for code, since this is the focus of my professor, but I was really surprised how well it was working. I also recently tried out Liquid Haskell, which adds an SMT solver (I think...) to Haskell. In my opinion, theorem proving has come a long way, and I think the challenge now lies equally in the theorem prover and the programming language. We have to get the ergonomics right; deriving properties for imperative code is awful. We need a super elegant, easy to use and not too restrictive language that can be verified automatically. The overhead of using it must be minimal.


Idris (https://en.wikipedia.org/wiki/Idris_(programming_language)) looks promising for theorem proving. I'm not sure I would say the overhead of using it is minimal, but it's at least a very expressive language.


There is Agda, which was created for that, and even HoTT has a formalization in this language.


By the way, if anyone is interested in getting a feel of the difference between these types of systems but is more just generally interested and doesn't necessarily have anything they want/need to prove themselves this: http://www.cs.ru.nl/~freek/comparison/comparison.pdf is a good document to skim through/jump around.


It's still an order of magnitude more time-consuming to write a formal proof, as you can't gloss over any details. Proof automation helps somewhat, but the search space is so enormous that it's not going to be feasible any time soon to avoid doing the difficult parts yourself. The formal proof of the Kepler conjecture involved around 15 people and was estimated to take around 20 years of work, I think. If the summary for this proof is 300 pages, the formal proof would probably be 3K pages at least.


So, there are two main areas of development in the automation of theorem proving.

The first one is proof assistants, such as Isabelle/HOL, HOL Light, HOL4, Coq, PVS, Lean, TLApm (and others). These are typically languages to be used directly by a human, and share many common features with programming languages. In a sense, you "program" your proof. There are some large developments in such languages, as pointed out in other comments, including the Kepler conjecture (in HOL Light), the four color theorem and the Feit-Thompson theorem (in Coq), a fully verified micro-kernel (seL4, in Isabelle/HOL), etc. It's an active research area with multiple teams around the world, but the provers are still difficult to use for non-experts. Isabelle/HOL has some nice features and good documentation that make it a bit more approachable to newcomers, afaik, and Lean has promising design decisions, but clearly right now it's a lot of effort to formalize anything of consequence in proof assistants.

The second area is fully automatic provers — an endeavour started very early (50s, 60s?). Such provers accept a logic formula as input, and try to compute whether it's a theorem or not. They typically accept more restricted fragments of logic (because it's a super hard problem!). Even first-order logic, which is less expressive (and convenient) than the higher-order logic usual in proof assistants, is only semi-decidable (Church's theorem): a prover for this logic will never terminate on some inputs that are not theorems. There still are interesting provers for FOL, e.g. E (https://eprover.org). More restricted fragments that tools can effectively deal with are SAT (purely propositional logic; solvers are very good nowadays) and SMT (SAT + theories such as arithmetic), which is very useful in software verification (e.g. Liquid Haskell's type system, Why3, Boogie, F*, etc.).
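As a toy illustration of the fully automatic end of the spectrum (my own sketch, not any real prover's API): propositional validity can be decided by brute force over all truth assignments, which is what real SAT solvers do far more cleverly.

```python
from itertools import product

def is_tautology(f, n):
    """Decide validity of a propositional formula over n variables
    by enumerating all 2^n truth assignments (brute-force SAT)."""
    return all(f(vs) for vs in product([False, True], repeat=n))

def implies(a, b):
    # Material implication: a -> b is (not a) or b.
    return (not a) or b

# Peirce's law ((p -> q) -> p) -> p, a classical tautology.
peirce = lambda v: implies(implies(implies(v[0], v[1]), v[0]), v[0])
```

Here is_tautology(peirce, 2) returns True, while a bare variable like lambda v: v[0] is rejected.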

There are people trying to join both domains of research by developing so-called "hammers", that make automatic provers available in proof assistants to make small proof steps entirely automatic.


Question: if a reviewer is given a proof to verify, is it possible to simply step through each transformation and verify it, without really understanding the entirety? Or in cases such as this, are there steps that are new or obtuse enough that there isn't a way to verify them without understanding the whole thing? Or are there individual steps that require so much work that it is impractical to verify them?


In practice, a proof contains many steps that are highly non-obvious unless you have a deep understanding of the subject. This is done not just for the sake of brevity, but also clarity. In a proof, there are a few key ideas you want to show. Around that, you need glue to convince the reader that your key ideas work, and to save them from some of the tedium and hard work of verifying that stuff.

A 1/4 page proof, when made as simple as you suggested, could easily bloom to 2 pages. At this point, your reader can no longer find those few key ideas, and is instead lost in the rigor.


If you write a proof in sufficient detail (which nobody does but is manifestly possible), it can be verified with no understanding at all. Indeed, it can be verified by ~300 lines of Python: http://us.metamath.org/downloads/mmverify.py


This process is very similar to analyzing the plot of a novel to the utmost detail: it depends not only on the "facts" but very much on the writing style (compare, for instance, Joyce and H. James).


This sheds an interesting light on the referee process in that referees are given an inordinate amount of power in the scientific process.

Suppose I am a referee. Then I am a mathematician, and I generally want new and interesting work released having to do with my subject area, because I want more students interested in those topics, validating my general area of research. If I don't plan to personally build on this work (because maybe it's too confusing or doesn't shed light on what I am working on), then in a case like this where the paper is so confounding, what's to keep me from simply saying "I understand this and it's correct" even if that is not the case? It's not like if you referee and approve a paper that is later proven wrong (especially in a case like this) you are run out of the mathematical community. Do they need to provide detailed explanations as to why they think it's right?

What would stop a referee from simply going along with it and approving this paper, especially when the community is confounded by it and doesn't want to do the work itself.

Does the rigor of scientific process ultimately just come down to unrigorous consensus?


Professional mathematician here.

That can occasionally happen with much less important papers; e.g. someone is asked to referee something, and doesn't really feel like actually reading it but would still like to see it published. If they are not conscientious they might only briefly skim the proof and write a hasty referee report.

But ABC is a major claim. The community will not be nearly so quick to reach a consensus. In particular, from what I can tell there are now a handful of mathematicians other than Mochizuki vouching for the proof, but that is still not enough: overall, the community has yet to come to a consensus.


> What's to keep me from simply saying "I understand this and its correct" even if that is not the case?

Then you should be able to have intelligent discussions about the paper and answer questions that come up. If you're pretending to understand it, a quick 5-minute discussion with a mathematician would reveal that this is not the case.


The proof apparently rests on an entirely new (not yet vetted) branch of number theory developed by the proof's author, "Inter-universal Teichmüller Theory". 45-page overview linked:

http://www.kurims.kyoto-u.ac.jp/%7Emotizuki/Panoramic%20Over...


> “The language strikes me as substantially more accessible than that of the original papers.”

So it's not really a summary; it's more of a reformulation, an easier-to-read rewrite.


This reads like an academic paper created by 'machine learning' to be unintelligible yet look legitimate at-a-glance...


They should probably break it up into more manageable pieces. Maybe by starting with Frobenioids?



