Hacker News

Cool post! The stuff about Curry-Howard was really interesting and relates to a question I have been thinking about. But I'm a novice when it comes to the theory of type systems or theorem provers. Maybe someone here can enlighten me?

As type systems become more complex (like, say, in Scala or Haskell), they move towards having specialized syntax and weird hacks to express what are arguably simpler logical theorems/invariants about values in a type. So why can't we have a programming language that gives you a full-blown logic system (Coq? or Microsoft Z3? a SAT solver? I don't understand the differences in detail) rather than part of one? Are there attempts at that, or does it turn out to be unworkable?



> so why can't we have a programming language that gives you a full blown logic system

Most dependently typed programming languages, including Coq, give you the ability to basically "do logic" in a mostly unrestricted way.
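For a flavor of what "doing logic" looks like there, here's a tiny Lean 4 sketch of the Curry-Howard reading (Coq looks very similar): a proof is just an ordinary program.

```lean
-- Curry-Howard in action: a proof of "P and Q implies Q and P"
-- is just a function that swaps the components of a pair.
theorem and_swap (P Q : Prop) : P ∧ Q → Q ∧ P :=
  fun ⟨p, q⟩ => ⟨q, p⟩
```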

There are a few challenges that come to mind with making a practical programming language that can also do arbitrary math:

a) For the language to be useful for proving things, it needs to be strongly normalizing. That means all programs halt—otherwise, you could trivially prove any proposition. Unfortunately that limits its utility as a general purpose programming language.
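To see why non-termination would be fatal, here is a Lean 4 sketch: the termination checker rejects the circular "proof" below, and if it didn't, every proposition (including False) would be provable.

```lean
-- A genuine, total proof is accepted:
theorem n_plus_zero (n : Nat) : n + 0 = n := rfl

-- The circular definition below is rejected by the termination
-- checker; were it accepted, it would "prove" any proposition:
-- theorem bogus (P : Prop) : P := bogus P
```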

b) The languages which attempt to be useful for theorem proving are generally pure functional languages (due to the strong correspondence between functional programming and logic). Most programmers don't like working in pure functional languages, although the success of Haskell shows that there are certainly programmers who do (myself included).

c) The more expressive a type system is, the more difficult type inference becomes. Yet, paradoxically, I find myself writing more types in languages like C++ and Java than I do in equivalent Coq programs. I believe Coq uses a variant of a bidirectional type inference engine, which is quite usable in practice.

Idris is an attempt to make a practical programming language with the full power of dependent types. So it's probably the closest real language to what you're describing.

Z3 and other SAT/SMT solvers take a different approach: they try to search for proofs automatically, and for certain kinds of math they are very successful. But these are generally only for first-order theories or other restricted kinds of propositions, and not great for general theorem proving. They are good for things like proving array accesses are within bounds, or that you never try to pop from an empty stack (at least that's my understanding of their capabilities).
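To make the bounds-checking example concrete: an SMT solver establishes such a property symbolically, but over a small finite domain you can see exactly what is being claimed by brute force. A plain-Python sketch with a hypothetical access pattern:

```python
from itertools import product

def in_bounds(index_expr, bound, domains):
    """Check that index_expr(*args) lands in [0, bound) for every
    combination of argument values. An SMT solver proves the same
    kind of property symbolically rather than by enumeration."""
    return all(0 <= index_expr(*args) < bound
               for args in product(*domains))

# Hypothetical access pattern: buf[(i + j) % 16] with i, j in 0..31.
print(in_bounds(lambda i, j: (i + j) % 16, 16, [range(32), range(32)]))
# True: the modulo keeps every access inside the buffer
```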


I should add that something like FStar [0] combines the capabilities of automated theorem proving and a more manual-proof-based dependent type system like that in Idris.

It doesn’t have some of the power of Idris’ elaborator reflection, for example, but it can eliminate many long-winded manual proofs via the SMT solver.

[0] https://www.fstar-lang.org/


> For the language to be useful for proving things, it needs to be strongly normalizing. That means all programs halt—otherwise, you could trivially prove any proposition. Unfortunately that limits its utility as a general purpose programming language.

It’s not true that a strongly normalising language must be limited in utility. See section 4, on codata, of Turner’s 2004 “Total functional programming” for more information.
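The codata idea can be sketched informally with a generator (Python here purely for illustration; Turner's setting is a total language, which Python is not): an infinite stream is acceptable as long as each element arrives in finite time, a property called productivity.

```python
from itertools import islice

# "Codata" such as infinite streams is allowed in a total language,
# provided each element is produced in finite time ("productivity").
def nats():
    n = 0
    while True:      # the stream as a whole never halts...
        yield n      # ...but each element arrives in finite time
        n += 1

print(list(islice(nats(), 5)))  # [0, 1, 2, 3, 4]
```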

https://github.com/mietek/total-functional-programming/blob/...


> I believe Coq uses a variant of a bidirectional type inference engine, which is quite usable in practice.

Edit (too late to edit the original comment): After thinking about the kinds of type errors I've seen in my proofs, I think Coq's inference algorithm is based on unification (not bidirectional?). Whatever it is, it's quite nice to work with in any case.


> The more expressive a type system is, the more difficult type inference becomes.

This is always mentioned and I always fail to see the relevance. Inferable terms stay inferable when we add dependent types.


The basic issue lies with "decidability". Generally we want type systems to tell us whether our program is well typed. However, as your type system becomes "too expressive", it starts to become much harder to determine whether a program is well typed.

When a type system finally becomes so expressive as to be Turing complete, you get "undecidable types". Those are the types that correspond to undecidable programs for Turing machines.

A consequence is that any "sound" type system will have to reject some programs that actually are well typed. This is why many type systems have a "Just trust me, this is fine" option, like casting in C/C++ and unsafe in Rust.
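The escape hatch can be illustrated with Python's `typing.cast` (a sketch; `cast` is a no-op at runtime, so unlike C casts it can't crash, but the static-checking story is analogous):

```python
from typing import cast

# A sound checker must reject programs it cannot prove well typed,
# even ones that never fail at runtime. The "trust me" escape hatch
# asserts what the checker can't derive on its own.
def head(xs: list[int]) -> int:
    return xs[0]

data: list[object] = [1, 2, 3]      # checker only knows list[object]
# head(data)                        # a static checker rejects this call
print(head(cast(list[int], data)))  # "trust me": accepted, prints 1
```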

To fully eliminate those "Just trust me" escape hatches and still accept all well-typed programs, you'd need to accept potentially never-ending type checking (or else you'd have solved the halting problem).


Various difficulties arise in how to have the expressive power play well with type inference [0].

[0]: http://smallcultfollowing.com/babysteps/blog/2016/11/04/asso...


We do have such languages. For example, Java has JML [1], C has the very similar ACSL, and a programming language like Idris has a type system that can express arbitrary logical propositions (there are various tradeoffs that may influence the choice between a rich type system à la Idris or a rich contract system à la JML, but many of the principles are the same).
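The flavor of such code-level specifications can be sketched with a hypothetical runtime-checked decorator (nothing like the real JML tooling, which checks contracts statically; this only illustrates the pre/postcondition shape):

```python
# Hypothetical contract decorator: pair a function with a
# precondition on its arguments and a postcondition on its result.
def contract(pre, post):
    def wrap(f):
        def g(*args):
            assert pre(*args), "precondition violated"
            r = f(*args)
            assert post(r, *args), "postcondition violated"
            return r
        return g
    return wrap

@contract(pre=lambda xs: len(xs) > 0,
          post=lambda r, xs: all(r >= x for x in xs))
def maximum(xs):
    return max(xs)

print(maximum([3, 1, 4]))  # 4, with both conditions checked at runtime
```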

The problem is that we do not yet know how to make effective use of such code-level specifications. Finding a cost/benefit sweet spot is not easy (I personally prefer the JML approach precisely because I think it allows for greater flexibility in the choice of tradeoffs vis-a-vis confidence vs. cost).

[1]: http://www.openjml.org/


Don't forget Ada/SPARK and Dafny. The former probably has the most actual experience and documentation, and the latter is pretty well integrated.



