
I have about half a year's worth of Racket (acquired over around a year and a half) under my belt, and intend to learn Haskell in the coming two years. Is there anything startlingly different in Haskell compared to the Lisps?

EDIT: Thank you for the responses; I have bookmarked the relevant GitHub page and feel like my question has been well answered :) I'm also even more interested in Haskell than I was before.



First, as several people have already mentioned, types, and in particular type-directed programming. In Haskell, I often find the type of a function to be some of the best documentation for it, and I write the type of a function first to describe what I need to write. Partly that's a property of having a strong type system, and partly it's a product of the Haskell culture to want to pack as much semantic content into the type system as possible to help ensure correctness. It's shocking how often you're done the moment you get a new addition to the program to compile and typecheck; it's also shocking how painful it is to go back to C or a dynamically typed language once you're used to that level of static typechecking.

Second, expressivity and modularity. I find that Haskell programs tend to have two types of functions: very short functions (a few lines long at most) implementing the logic of one particular operation (heavily composed of other such functions), and long but extremely simple functions expressing high-level program flow. Good Haskell style encourages writing functions with the most general type they can work with, which results in many functions becoming very general, reusable helpers. As a result, I rarely find myself staring at a half-dozen functions at once trying to figure out what one does and how logic and state get threaded through all of them; each individual piece makes sense in isolation. That's also quite nice when trying to refactor a function.


The single biggest thing you'll notice is how much time you'll devote to thinking about types, and how that makes your code much more declarative.

Even casually skimming through the GetOpts implementation in Cabal should give you an idea of what I mean:

https://github.com/haskell/cabal/blob/master/Cabal/Distribut...


> thinking about types, and how that makes your code much more declarative.

It's not that it's more declarative (though that becomes a side effect), but that thinking about the type forces you to model your problem domain in a formal way, which brings to the forefront all of the problems that would otherwise have been hidden away as undeclared assumptions.

E.g. in Java or C++, the set of states a class can be in often includes invalid ones, and validity is only enforced procedurally via programmer checks, whereas in Haskell you often have to holistically model the problem as a type, and all values that the type can take on must be accounted for. Thus you hit the edge cases right up front, instead of finding out later (as bugs).
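A minimal sketch of that idea (the names here are hypothetical, not from any library): instead of a record with separate nullable error and result fields, where "both set" and "neither set" are invalid, a sum type makes only the valid states representable:

```haskell
-- Hypothetical example: only the two valid states exist, so the
-- edge cases must be handled at every use site.
data Response = Failure String | Success Int
  deriving (Show, Eq)

describe :: Response -> String
describe (Failure err) = "error: " ++ err
describe (Success n)   = "got " ++ show n
```

With -Wall, GHC warns if a pattern match omits either constructor, which is exactly where the hidden assumptions surface.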


I am not a Haskell programmer so I look at this with interest.

The biggest thing I notice is that type specifiers have comments next to them akin to variable names describing what they are for.

This worries me. Comments are just so awful compared to clear meaningful variable/function naming that propagate through the code when used. Does Haskell discourage and even prevent clear naming or am I missing something? It would be a tradeoff I couldn't accept.


Like quchen said, the comments are for the API documentation. They are documenting that function's use of a type. The type alone frequently will not communicate any semantic meaning. Consider this type signature.

    exp :: Double -> Double -> Double
You don't inherently know from the type which argument is the base and which is the exponent. Double is not a name that you have control over, so we write a comment for it that will automatically show up in the API documentation. They only get names in the implementation.

    exp base exponent = ...
But these names are for the programmer, not the API documentation. You actually want these names to be separate from what is shown in the API docs. Consider this function:

    map :: (a -> b) -> [a] -> [b]
    map _ [] = []
    map f (x:xs) = f x : map f xs
This code is very concise and easy to read. It is easy to read because the names are small and the important thing is the pattern, not the actual meaning of the names. This function is also extremely general, which means that there's not much use in names. Position in the function means more than a name. Here's what it would look like written with "clear" OO-style names:

    map :: (a -> b) -> [a] -> [b]
    map function [] = []
    map function (firstElement:restOfList) = function firstElement : map function restOfList
First of all, we see that sometimes we want to pattern match instead of using a name. GHC would actually give a warning with this code saying that the name "function" in the first case is never used. This is actually a very useful warning that has helped me catch bugs on multiple occasions. Secondly, the "clear" names here completely obscure our ability to understand the code. It's just too much noise. Now, I'm not trying to get into the whole naming debate here. The point I want to make is that there can be good reasons to not show the parameter names in the API documentation, which is why you'll see comments on type signatures for the purpose of auto-generated documentation.
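For concreteness, here is roughly what the documentation style being described looks like (the function `power` and its comments are illustrative, not taken from any real library):

```haskell
-- | Raise a base to a power. The "-- ^" annotations below are what
-- appears next to each argument in the generated API documentation.
power :: Double  -- ^ base
      -> Double  -- ^ exponent
      -> Double
power b e = b ** e  -- the short names live only in the implementation
```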


I guess the complaint is that Java-like languages would let you write the type declaration as

  Double exp(Double base, Double exponent)
while the Haskell syntax for types doesn't provide a place to write down names for arguments. Perhaps a nice fix would be to add Agda-style syntax for arrow-types, like

   exp :: (base : Double) -> (exponent : Double) -> Double
At some point we will want to add dependent types anyway. :)


You can relabel Double with a type alias. Really, adding a newtype gives you a sort of partial equivalent of tagged arguments (requires tags, doesn't permit reordering), and might be best practice when there's more semantic content and no clear typical order (eg, you're passing price and quantity to makeOrder, rather than base and exponent to exp). I make a habit of doing this with my C (where a single element struct adds no overhead, much like a Haskell newtype).
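A sketch of the newtype-tagging idea, with hypothetical Price/Quantity wrappers (newtypes compile away, so there is no runtime cost):

```haskell
newtype Price    = Price Double
newtype Quantity = Quantity Int

-- Swapping the arguments is now a type error, not a silent bug.
orderTotal :: Price -> Quantity -> Double
orderTotal (Price p) (Quantity q) = p * fromIntegral q
```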


Yeah, type aliases can be really useful for some kinds of type documentation. But they seem a bit on the heavy side for one-off things like the exp and map examples.


I agree that it doesn't make sense for exp or map, but not because they are one-off; rather because they are already clear enough that it's not going to add much.

For map in particular what are you going to rename? You can name unbound parameters without a newtype:

    map :: (input -> output) -> [input] -> [output]
might add a bit;

    map :: MapFunction a b -> [a] -> [b]
takes more away than it adds I think.


It's usually a matter of taste. You shouldn't have types or values with 20 characters, but you also shouldn't do C-style "no vowels" naming.

It's also worth mentioning that the "-- ^ comment" syntax is so that Haddock, Haskell's automatic documentation generator, can display the comments in the HTML documentation. Example: http://hackage.haskell.org/package/Cabal-1.18.1.2/docs/Distr...


Haskell is a lazy-by-default language, and this has some interesting consequences (you might have seen examples of lazy evaluation in Scheme, but it's not nearly so omnipresent there).

Haskell is pure: there are no mutable values in the core language (though IORefs can be used for that), so you can be sure that a function does not call `set!`, and all data structures are persistent and immutable unless mutability is specifically emulated.

Haskell uses some deeply abstract and powerful concepts like monads, arrows and functors that give a great insight into programming languages theory.

Haskell lacks Lisp-way macros but gets by surprisingly well with its type system magic, laziness + combinators can get you ~90% of macro power.
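As a small illustration of the laziness point: a short-circuiting conditional must be a macro in a strict Lisp, but in Haskell a plain function suffices, since the untaken branch is never evaluated (`myIf` is a made-up name):

```haskell
myIf :: Bool -> a -> a -> a
myIf True  t _ = t
myIf False _ e = e

-- The error branch is never forced, so this evaluates to 42.
safe :: Int
safe = myIf True 42 (error "never evaluated")
```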

Haskell enables you to reason about code formally, making invariants (code properties that must be preserved during its execution) in your code explicit. This makes programs much safer and more predictable.

On the other hand, its core is very close to the lambda calculus (that Lisps are also so close to), so with Haskell you basically get the power of Lisp + a solid and powerful layer of type system that verifies your code + nice, clean and consistent syntax.


"Haskell lacks Lisp-way macros but gets by surprisingly well with its type system magic, laziness + combinators can get you ~90% of macro power."

You can get 90% of the rest with Template Haskell, at the cost of some compile time and (arguably) some prettiness.


Yes. Although I have no experience with Racket (so correct me if I'm wrong), the Lisps are inherently multi-paradigm languages. Haskell, on the other hand, is functional programming in its purest form. I would consider it worth learning if only for that reason, even if you don't end up using Haskell a lot.

EDIT: ...Did I just accidentally paraphrase what ESR used to say about Lisp?


The way I see it is that Racket is like ANSI Common Lisp. The language (or ecosystem, in Racket's case) is designed purely for practicality and not based on any purity of theoretical implementation. Racket gives the programmer immutable values, type safety, laziness, pattern matching and objects. It also gives the programmer all their complements. It doesn't enforce good taste or the one true way.


If by Racket you mean the #racket language, that's not really true. It is a Scheme and embraces (and encourages) the purity of modeling that's associated with that.


   #lang racket 
is just a corner of the Racket ecosystem. If anything underpins it, it's the idea of teaching computer programming and computer science research. For "elegance," it has #:extra-constructor-name for (struct ...); contracts, units, modules, collections, objects, mixins and packages (two types); and two syntaxes for regexes.

That's of course not to say that the Racketeer ethic doesn't favor elegant code. Only that language design is not ideologically bound to it.


You evidently have never used Common Lisp :)


One word: 'loop'.

Common Lisp encourages all sorts of theory derived ideas. The community favors good looking code. But functions that start with 'n' as in "No" and the ability to redefine symbols linked to atomic values are always there for chainsaw juggling foo.


Racket is definitely multi-paradigm, it has a pretty nice line in OOP on the side. Pretty sure at least some OOP is required for any GUI stuff in Racket.


"Is there anything startlingly different in Haskell compared to the Lisps?"

Static types, possibly more purity (depending on the lisp you've been writing), pervasive laziness, slightly awkward (but usually unnecessary) macros.


I'm noticing a distinct lack of "productivity." :)


I'm not convinced that's "startlingly different in Haskell compared to the LISPs."


Meaning you feel it is not that much more productive?

Are there things in Haskell that would make you recommend it over Lisps?


Personally I feel like the gap between Haskell and Lisp is measured less in productivity (the way the gap between X and Lisp is for most Xs) and more in comprehension and robustness. The code I write in Haskell might be only marginally easier to write than the similar Lisp code, but it comes with much nicer, more sustainable structure and 60% of my unit tests for free.


Frankly, I'm starting to feel very strongly that static typing in and of itself makes you much more productive -- in the long run. Not so much because you write stuff faster, but because you spend less time patching it up.


Funny, because I am coming to the other conclusion. Static programming is great when you can build one giant unified model of everything in your code.

When you just want some parts that can be fitted together to get you what you want, dynamic programming is tough to beat.

Consider: how many extensible editors (or other applications) have been written in Haskell? Or, really, any static language.

Contrast that with how far emacs has managed to come. And, among the lisps, elisp is not highly regarded.

For that matter, consider how far javascript has managed to take web programming.


Both of those examples obviously have very little to do with the quality of the language used, and everything to do with the ecosystems that were built around them.


Then give me a compelling example otherwise.

Also... I was not meaning to make an argument to the quality of the languages. If anything, my argument is admittedly about the ecosystems garnered by the different types of languages.


The problem with these massive ecosystems of dynamic code extensions is that when you layer leaky abstraction on top of leaky abstraction, each layer being broken in its own subtle ways, the end result is a ticking time bomb waiting to cost its users thousands or millions of dollars in lost productivity. Your example of JavaScript in web browsers is a perfect demonstration of this.

We are the only engineering discipline where there are a significant number of people who actually think extensibility is more important than stability or robustness. Imagine a civil engineer pooh-poohing the stability of the bridge he's building, pointing instead to how easy it is to add additional lanes. Imagine a mechanical engineer decrying running a formal analysis of a new design for a car, saying "forget about that, look at how easily customers can plug in custom dashboard attachments!" This is lunacy, plain and simple.


This reeks of what I have seen as a common attitude among software developers where we make sweeping statements about other professions with little to back it up.

I will not claim that extensibility is the be all end all attribute. I will claim that it is a valuable attribute. More so for some applications than others.

Similarly, I will make the same claim for formal checked programs. In some fields/industries, why wouldn't this be the norm?

So, if the claim is that static typing can make for a more completely specified application and that we should demand that for some fields. I agree.

If the claim is that static programming is superior to dynamic, I take issue.


"If the claim is that static programming is superior to dynamic, I take issue."

I think this would benefit from a little more clarity about what you mean when you say "static programming" versus "dynamic programming" - I'm sure you don't mean "Dynamic Programming".

If you mean static types, then I think that they are a tremendous win wherever they apply, and I think that sufficiently sophisticated type systems exist that they can apply most places. Trying to express meaningful constraints in a brain dead type system is awkward and you wind up moving between over- and under-constraining yourself - though I've been surprised at what I can express in C (with zero runtime overhead) with a little creativity.

If "static programming" is taken to mean "compiled, with runtime compilation of additional code made difficult to impossible", which often correlates with "statically typed" in existing languages but is technologically orthogonal, then I agree that this kind of "static programming" is not uniformly superior.


Yeah, apologies. I did not mean "dynamic programming." I thought the context made what I meant fairly clear, though.

If you have any examples that show how this is technically orthogonal, I'm all ears. Hence my request for examples of things that are as extensible as emacs.

And to be clear, my understanding is that ghc is actually fairly extensible. I would love if there were more examples. Preferably in more approachable domains than compilers.


'Yeah, apologies. I did not mean "dynamic programming." I thought the context made what I meant fairly clear, though.'

No worries - as I said, I'd understood that didn't mean "dynamic programming".

"If you have any examples that show how this is technically orthogonal, I'm all ears. Hence my request for examples of things that are as extensible as emacs."

Well, Typed Racket would presumably be one example. More generally, as a theoretical proof, one could bundle the entire compiler into the runtime and link in arbitrary new code.

"And to be clear, my understanding is that ghc is actually fairly extensible. I would love if there were more examples. Preferably in more approachable domains than compilers."

I'm not aware of it being exceptionally easy to write plugins for GHC compared to other compilers - it has incorporated a lot of extensions to the Haskell language but that's not the same as a plugin ecosystem (which might itself exist - just "I'm not aware"). It certainly has a plugin interface, but so does GCC. As GHC is itself implemented in Haskell, a lot of pieces of it are also available as libraries.


There are no non-leaky abstractions outside of toy applications. Extensibility is the unique strength of software; otherwise you might as well do it with hardware. Other disciplines are (more) limited by their physical constraints. If air-tight abstractions could solve everything, machines could do the programming and there wouldn't be much need for human programmers.


Yes, clearly the reason that machines aren't doing all of the programming is that there's no such thing as a non-leaky abstraction.

ಠ_ಠ


Well it is pretty self-evident that non-leaky abstractions don't exist -- or I haven't really seen one in all these years. Take functions in functional programming: if tail recursion matters that is one leak; if strict or lazy evaluation matters that is another leak; if you want to share intermediate results well that springs a big leak; etc. Engineering is about trade-offs. We are there to judge what matters to us and therefore "leaks" and abstract away details that may not matter to us (at the level we are working on). Yes fundamentally I do see this as the barrier to complete automation.


Funny, I just swapped out the network card on my work desktop, because the new card has features the old one lacked.


I hear this forms the common theme for PL grants :P


XMonad is an example of a Haskell application that is compellingly extensible.

Edited to add: Also, since you asked specifically about extensible editors: http://www.haskell.org/haskellwiki/Yi


I don't think XMonad is a poster child for advertising how great (read: awkward) haskell is for runtime extensibility. Sure it does the job, but it's based on a giant hack (invoking the compiler on your configuration, forking a new process, etc.), this is pretty specific to xmonad and is non-trivial, so you can't easily reproduce this kind of extensibility for other applications. It's completely different to the kind of extensibility offered by something like emacs/lisp, which is done purely in the runtime and doesn't require transferring some state to a new process.


The only real difference is that you can't extend live - which matters, but not tremendously for a window manager.


Have you seen StumpWM? The sad part is that CLX is terribly unstable.


I actually use ratpoison :-P


I'm pretty sure behemoths like NetBeans and Eclipse rate pretty damn high on extensibility.

Anyhow, you don't have to take an all-or-nothing approach. It's pretty common to write game engines in C++ and then use a dynamic language to provide game-specific scripting on top (e.g. World of Warcraft uses Lua for the UI, Civilization V uses Python for most of the non-engine stuff). Also, Photoshop has javascript builtin.


MS Windows and a lot of Windows applications are extensible via COM. The core of COM is a dynamic cast at runtime to a static interface. You see something similar in a lot of Go code. For me this is the right balance. Static types but extremely late binding. Or static types, dynamic dependencies.


They are high, no doubt. In my view they still fall well short of the heights that emacs reaches.

Take a look at skewer-mode for emacs sometime, and realize that is less than 1.5k lines of javascript and elisp.

I was also avoiding the approach of bundling in specially crafted hooks for extensions. If anything, that really just kind of makes my point. That for some things there is a highly perceived benefit to dynamic languages. (And yes, the converse is true.)


You're not going to get concrete comparable examples because Haskell has not had enough adoption and resources thrown at it to be able to compare it in this way. What I can do is point you at another comment I wrote about this the other day and the responses it got.

https://news.ycombinator.com/item?id=7299034


Amusingly, I was in that thread, too. :)


I've not done extensive development in any lisp, so I'm not able to make a robust comparison. There are clearly ways in which Haskell leads to more productivity (static typing when sufficiently expressive is a tremendous win), and some ways in which LISP has an advantage. It sounds plausible to me that it's a wash; something other than a wash is marginally more likely, but I'm not confident about which direction it would go.


Makes sense. I think that mirrors my view, mostly. I have lately begun to fall on the side of Lisp more so than Haskell. Sadly, I cannot really give a good reason as to why.

I can say that finally going through SICP has been borderline mind blowing. It is hilarious/sad/crazy to see how many hot topics today were covered in a bloody introduction textbook from the 90s.


On a side note, this may be worth looking at: https://www.cs.drexel.edu/~mainland/2013/05/31/type-safe-run...


Types

Haskell being statically typed will outlaw a large-ish number of direct Lispisms until you provide the compiler sufficient type-based justification that the operation is OK. For instance, new Lisp-to-Haskell programmers often want to make arbitrarily nested lists

    [[1,2,3],4,[[5,6],7]]
which is not well-typed, so the compiler complains. Honestly, what you need is a different type from the strict Haskell list, called a rose tree

    data Rose a = Leaf a | Branch [Rose a]
which tells the compiler that you explicitly want arbitrary nesting.
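For instance, the nested list above can be encoded against that definition like so (the `flatten` helper is illustrative):

```haskell
data Rose a = Leaf a | Branch [Rose a]

-- the ill-typed [[1,2,3],4,[[5,6],7]] becomes:
nested :: Rose Int
nested = Branch [ Branch [Leaf 1, Leaf 2, Leaf 3]
                , Leaf 4
                , Branch [Branch [Leaf 5, Leaf 6], Leaf 7]
                ]

-- a fold that recovers the flat list of elements
flatten :: Rose a -> [a]
flatten (Leaf x)    = [x]
flatten (Branch rs) = concatMap flatten rs
```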

---

Laziness

All of Haskell has default laziness and this means that your environment is fully tilted toward laziness. Generally this means that a thing called equational reasoning holds nicely and this forms a MAJOR component of your ability to reason about Haskell code. In particular, given any repeated substatement, you can "lift" it up with a let

    ... e ... e ... e ...
    ==
    let x = e
    in ... x ... x ... x ...
Lazy Racket gives this to you too, but having it everywhere is a new thing. You are also able to rely on control structures as combinators much more, so that something like

    foldr c x . map f . map g
is very common in Haskell since laziness will automatically fuse each step, while in (strict) Racket you'd want to manually fuse them together

    foldr (c . f . g) x
reducing composability.
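A concrete check of that equivalence, using Prelude's foldr with made-up f and g and (+) as the combining function:

```haskell
f, g :: Int -> Int
f = (* 2)
g = (+ 1)

-- the composable pipeline style
composed :: [Int] -> Int
composed = foldr (+) 0 . map f . map g

-- the manually fused version: one traversal, no intermediate lists
fused :: [Int] -> Int
fused = foldr ((+) . f . g) 0
```

Both produce the same result on any input; laziness (plus GHC's fusion rules) lets you write the first without paying for the intermediate lists.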

---

Purity

You can get pretty close to pure in Racket, but Haskell takes this concept much further. Purity and laziness are a driving force which leads to the need for things like Monads... and strong types allow the boundaries of various semantic segments to be very sharp. As an example, the STM libraries in Haskell are remarkably nice due to a combination of strong types and purity. In particular, you tend to build up computations of types like

    STM a, STM b, STM (a -> b -> c)
where the STM marks that these computations only make use of transactionally safe memory and are allowed to be re-run as many times as needed to ensure linearity. Then you use

    atomically :: STM a -> IO a
which "upgrades" STM to IO allowing now the entire set of side-effects by interpreting the STM computation as IO... and thus choosing to run and re-run it until it linearizes.
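A minimal sketch of that workflow, using the stm package that ships with GHC (the transfer scenario is hypothetical):

```haskell
import Control.Concurrent.STM

-- The STM type guarantees this only touches transactional memory,
-- so the runtime is free to retry it if a conflicting write occurs.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)

demo :: IO (Int, Int)
demo = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)  -- the STM -> IO "upgrade"
  (,) <$> readTVarIO a <*> readTVarIO b
```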

---

In each of these cases, Racket, being the flexible language it is, has methods of nicely including the feature—typed racket, lazy racket, base racket without ambient state or io—but Haskell goes fully and confidently into these three choices and that subsequently leads to very interesting combination effects and a community and ecosystem designed in a particular, interesting style.

In other words, I believe that even if you're intimately familiar with each of those things in isolation, Haskell may be the first time you've ever seen them together... and that changes everything.


For List types, you still need the (incomplete) https://ghc.haskell.org/trac/ghc/wiki/OverloadedLists work to make ListLike structures have a friendly syntax, but today you could cobble something almost-readable like

     [ R 0 , [ R 1 , R 2 ] ]


You might also want to play around with Typed Racket

http://docs.racket-lang.org/ts-guide/index.html


Aww, do I have to? I've been putting that off for so long, haha! I'll give it a go.



