I think this, like much of higher mathematics, will change soon.
Today's natural language processing technology is reaching the point where it will be possible to marry an NLP system to an automated theorem prover and have it generate formally verified proofs from ordinary prose proofs.
Once this technology can readily process the textbooks making up the PhD curriculum, I think there will be a culture shift. Quickly there will be a new standard that math results should be formally verified. The hallmark of math, after all, is that it can be proven correct!
But with an increased role for computers will come an increased appreciation for the things that only humans can provide. Motivation and explication will be more valued when the technical aspects of theorem-proving are automated.
If what you are saying does indeed happen, then the "problem" will become even worse. Formally verified proofs tend to be far less intelligible than human-generated ones, and when they stop being so, humans will become deprecated in general.
Practically speaking, though, working mathematicians do not care that much about formally verified proofs. They are more interested in insight and understanding, and not necessarily in being completely sure of every detail. Formally and automatically verified proofs are much better suited to programming, as automatic verification of a program's correctness is, after all, _the_ best regression test.
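To make that contrast concrete, here's a toy Python sketch (the `my_reverse` helper is hypothetical, purely for illustration): tests only sample the input space, while a formal proof would cover all of it.

```python
import random

# Hypothetical helper, just for illustration.
def my_reverse(xs):
    out = []
    for x in xs:
        out.insert(0, x)  # prepend, so the list comes out reversed
    return out

# A regression test pins down only the handful of inputs we thought of...
assert my_reverse([1, 2, 3]) == [3, 2, 1]
assert my_reverse([]) == []

# ...a randomized property check samples many more, but is still not a proof.
for _ in range(1000):
    xs = [random.randint(0, 9) for _ in range(random.randint(0, 20))]
    assert my_reverse(my_reverse(xs)) == xs

# A formal verification would establish the property for *every* input.
```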
So, while interesting in principle, I doubt formal methods will change much in how we do mathematics. Hopefully they'll change how we do software engineering though.
Perhaps it would mean that theorems are sometimes known to be true or false before anyone understands why. Digging through automatically generated proofs looking for interesting insights seems like a rather different experience than groping in the dark, not knowing whether a proof is possible?
If you have a statement that humans don't know how to prove, finding a proof via automated search is kinda like trying to break RSA by factoring the key. In both cases, you're looking for a specific key, and the search space is extremely large. You can put some insights into the tool that searches for the solution, from simple ones (e.g. using a fast multiplication algorithm to verify each candidate quicker) to more sophisticated ones (e.g. using the general number field sieve instead of trial division), but in the end they don't help you much in practice -- the search space is still too large to expect to find the key within a lifetime.
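Here's a toy Python sketch of that search-space point (illustrative only, not a real attack):

```python
def trial_division(n):
    """Return the smallest prime factor of n by brute-force search."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

print(trial_division(101 * 103))  # 101 -- instant on a toy semiprime

# A real RSA modulus is ~2048 bits with two ~1024-bit prime factors, so
# this loop would need roughly 2**1024 iterations. Smarter algorithms
# (Pollard rho, the general number field sieve) cut that down enormously,
# yet still leave the problem intractable at these key sizes.
```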
It's an interesting analogy but I think it proves too much. You could argue that computers will never win at Chess or Go because the search space is too large, and look what happened.
Although it's not proven, we have fairly good reason to believe there are no sufficiently efficient shortcuts for factoring products of large primes, while there are shortcuts for proving many difficult mathematical theorems. After all, humans can do one but not the other.
You make a good point. I agree that the analogy is not perfect, and that if you assume breaking RSA is computationally hard for some intrinsic reason[1], then theorem proving is more like chess and Go than like RSA. However, theorem proving is still much harder than chess and Go, if you consider the time spent on a theorem vs. on a single game, and the number of good mathematicians vs. the number of good chess/Go players. I think we'll have human-level theorem proving solved by machines at some point in the future, though not very soon. Either way, humans will be thoroughly deprecated by then.
[1] - Practically speaking though, the biggest reason we believe that factoring is hard is that we haven't really figured out how to do it, so our belief that it's hard is really built upon our feeling about the hardness of theorem proving. :) I think we have more intrinsic reasons to believe that P != NP than that factoring is hard.
I don't think so. I'm not really competent to say if the NLP part will be solved soon, but (fully) automated theorem proving is very hard, and I don't see deep learning applying to it very well.
What could happen, though (hopefully), is that mathematicians start making more and more use of the various proof assistants (Coq, Isabelle/HOL, lean, …). These allow one to write definitions, statements, and theorems in a more programming-like way, with structured proofs; to develop modular libraries; and even, in some cases, to use automated provers to wave away details of the proofs. The problem of adoption of such provers, however, is a cultural problem, not really a technological one: as said elsewhere, mathematicians do not seem to care too much about formal correctness. It's only if automated tools help with gaining insight (automatic search for (counter)examples, for example; function plotting; a built-in computer algebra system…?) that they will convince more people to use them. Modularity might also be a selling point, but not one sufficient to overcome the steep learning curve of these tools.
I have some hopes for [lean](https://github.com/leanprover/lean), a new proof assistant inspired by Coq, which puts the focus on good UI and automation.
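For a flavor of the "programming-like" style, here is a minimal example in Lean 3 syntax (the statement is arbitrary, just something small enough to read at a glance):

```lean
-- The statement reads like a type signature; the proof is a small program.
theorem and_swap (p q : Prop) : p ∧ q → q ∧ p :=
begin
  intro h,             -- assume h : p ∧ q
  cases h with hp hq,  -- split it into hp : p and hq : q
  exact ⟨hq, hp⟩       -- reassemble the pieces in the opposite order
end
```

Each tactic step transforms the goal explicitly, which is what makes these proofs feel like structured programs rather than prose.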