The Only Sure Thing In Computer Science (michaelrbernste.in)
97 points by mrbbk on Nov 13, 2013 | 63 comments


Maybe this is true in the ivory towers of academia, but in the real world a large amount of technology is simply worthless - the world would be a better place if it didn't exist: dead code that was never cleaned up, encapsulations of encapsulations of encapsulations, code that exists only to express the astronaut architecture of a dogmatic developer, code based on poorly communicated specifications, code with so many bugs that it's effectively worthless...

To suggest "everything is a tradeoff" is a sort of moral relativism. It implies that all code has value, and it shields even the poorest work from criticism.


I don't think the intention was to imply "everything is equal". I think the intention was closer to "everything has a cost" or "you can't simultaneously optimize for everything". AKA the Project Management Triangle.


I realize what they were trying to say and I agree with it; they just stated it poorly. Is it that hard not to use the word "everything"? I suppose it does not make as good of a headline. If you are going to give advice, you need to consider how your advice can be misused. There exist techniques and technologies that are measurably superior to others. Knowing how to recognize them is a critical skill for a competent developer.


You could go in and clean up or refactor all of that bad code, or you could use that time for something else. So it's still a trade-off.


The fact that everything is a tradeoff does not mean that incompetent developers in the real world made a reasonable tradeoff.


I'm being completely twitchy and pedantic and missing the point entirely but I'm still compelled to say, as a Mathematics and Computer Science teacher:

Computer Science is a branch of mathematics. It consists of axiomatic systems. There are many, many, many sure things in computer science.


It might have been better phrased as "the only sure thing in software engineering" but that doesn't sound as good.


Oh, I understand. I just have a personal quixotic crusade about the distinction between the two. Computer programming and computer science are parallel fields that complement each other. Too many students who want to learn to program major in computer science, because nobody tells them what CS actually is. IS or IT with a CS minor might suit them a lot more.


Or pretty much... all engineering.


There are many things built off of a basic set of axioms which are internally consistent.

Unless you can prove that those axioms are sure things, how can anything be a "sure thing"?


I don't understand this. Not everything is always a trade-off. Take array sorting, a classical example. There are many stupid solutions that have both unnecessary memory/time requirements and unnecessarily large code size and method complexity. Such code can always be traded for a more efficient solution "for free", at least before deployment.


You're right in a way, but I was surprised by how even the worst algorithms you learn in school have some properties that are better than those of the ones considered the best (e.g. bubble sort vs quicksort: quicksort isn't stable). This website makes it pretty clear: http://www.sorting-algorithms.com/
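
For anyone who hasn't bumped into stability before: a stable sort keeps items that compare equal in their original order. A minimal sketch in Python (whose built-in sort happens to be stable; the data is made up):

    # Students who share a grade: a stable sort keeps alice before carol,
    # because they compare equal on the sort key. A textbook quicksort may not.
    students = [("alice", "B"), ("bob", "A"), ("carol", "B")]
    print(sorted(students, key=lambda s: s[1]))
    # -> [('bob', 'A'), ('alice', 'B'), ('carol', 'B')]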

To the larger point: everything is a tradeoff once we refuse to consider the "stupid solutions." Yea, bogosort is absolutely stupid, but we don't really think about it when seriously designing software. Instead, we're left with the smart solutions, none of which can be considered optimal for all applications.

"Everything is a tradeoff" is a tautology, because if one algorithm were proven to completely dominate another, we would no longer seriously consider the weaker one.


It's pretty easy to make quicksort stable. The more interesting comparison is with insertion sort, which is often used for small arrays.
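
Roughly like this, as a sketch in Python (the cutoff of 16 is arbitrary; real libraries tune it, and CPython's Timsort does something similar with insertion sort on small runs):

    def hybrid_sort(a, lo=0, hi=None, cutoff=16):
        """Quicksort that falls back to insertion sort for small slices."""
        if hi is None:
            hi = len(a) - 1
        if hi - lo + 1 <= cutoff:
            # Insertion sort: low overhead on tiny (and nearly sorted) slices.
            for i in range(lo + 1, hi + 1):
                key, j = a[i], i - 1
                while j >= lo and a[j] > key:
                    a[j + 1] = a[j]
                    j -= 1
                a[j + 1] = key
            return a
        # Hoare-style partition around the middle element.
        pivot = a[(lo + hi) // 2]
        i, j = lo, hi
        while i <= j:
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i, j = i + 1, j - 1
        hybrid_sort(a, lo, j, cutoff)
        hybrid_sort(a, i, hi, cutoff)
        return a

    print(hybrid_sort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]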


But whether to ship what you have or spend time searching for a better solution is a tradeoff.


But that's not computer science.


My point exactly. That's a human resources and engineering level tradeoff. It's not a tradeoff on the level of pure computer science.


The grad assistant in my computer structures (programming hardware in assembler) class suggested that every CS student should take a semester of microeconomics to understand trade-offs. He was right.

EDIT: Made microeconomics more explicit.


Liked this point. The notion of a budget frontier and the boundary it defines is really helpful for design decisions of any sort. There are always limits, whether that's time, money, CPU cycles, IO bandwidth, or any other resource. I remember that one of my early econ instructors defined economics as the study of decision-making under limited resources. (Of course, many other disciplines could be defined similarly!)


Exactly. Much of the academic world (humanities?) operates thinking that there are no constraints. I felt economics was refreshing in forcing tradeoffs. I assume these would be equally relevant to engineering too.


"Should"? I'm slightly surprised that CS courses don't have at least an introduction to hardware/low-level level design as a compulsory component.

Then again, I'm probably showing my age...


I certainly would support both microeconomics and some form of introductory systems course (low-level programming/architecture) for any CS program.

Unfortunately, the other end of this CS guarantee is that there are not enough credit hours to provide a foundation that everyone would agree with. University restrictions prevent 'high-unit' majors from exceeding the credit hour requirement for a degree.

We just went through a long process that ultimately resulted in adding a requirement for CS students to take a course which involves concurrent programming. Unfortunately, this came at the cost of removing a requirement for numerical analysis. Even with this update, you can still graduate without ever hearing the term 'OSI Model' or ever being exposed to anything dealing with the interwebs.

There are, however, amazingly good opportunities to expand the horizon outside of the department that also nicely tie into general education requirements, if students are made aware of them. In addition to economics, philosophy is another rather nice area to take courses in. Our philosophy department has a rather nice course in symbolic logic.

It's all about the tradeoffs though.


I did all kinds of crap related to physics, chemistry, calculus, and the arts. I'd much rather have done those kinds of courses than the (almost completely) useless things. There is definitely credit room, just a wrong focus. Also being able to skip early level courses for more advanced ones would have been nice (I challenged a few anyway, but they were expensive and still consumed some time.) Then maybe I would have enjoyed CS and not dropped out.


This is another one of those interesting areas to explore in CS education that comes up a lot in conversation. Of course, there is no one path for everyone and there are many people who would thrive and reach greater levels of proficiency and success outside of this sort of environment.

For my bit, I received a 2-year degree in a programming-centric degree program (systems programming) and my 4-year degree in a CS-focused degree program, which placed a much heavier emphasis on theory and models. Coming from my first college, I was upset over having to 'repeat' a lot of subject areas, only to find that, under a different focus, these courses offered a better perspective on areas where my understanding was lacking.

Looking at the two years I worked as a TA and the time I have spent assisting in an applied programming student organization, I've come across many students who have complained about not being able to skip ahead to the more advanced materials. Several complain vociferously about having to take the theory and logic courses. Others complain about the course in low-level programming (C -- which is not even low-level) or systems programming. Most of the time, from what I've seen, this is a result of simply not understanding the concepts due to a lack of foundational knowledge. Mathematics, as a requirement for 'programming', is a continual complaint amongst the newer students.

I've worked with students who wanted to push straight to the senior-level courses while having no real understanding of basic data structures, fundamental algorithms, or core programming concepts (i.e. they have never learned or used recursion), and no exposure to taking the time to structure their thoughts before diving into the code.

I've always been more of the 'renaissance man' when it comes to education, but I've learned to appreciate, and I do support, the balancing that universities engage in to provide a core CS framework for their degree programs. University degrees should not be about gaining the information needed to go out and program, but should be focused more on theory and the mathematical underpinnings of the field. This should involve some level of useless, pedantic academia.

While we do not offer the ability to challenge courses in this department, one area I do highly push students to get involved in is laboratory research with professors. We had two undergrads with no prior systems education build and write the drivers for custom robots recently, after they came in looking to get involved in such a project. Many professors, in my limited experience, are eager to see students who are looking for additional challenges in their studies and are only too happy to provide additional experience in desired areas for them.


I should explain - I'm in the UK and, at least when I was at University, there was very little that was optional in the CS course I did (at least for 3 of the 4 years). So the class given by the Electrical Engineering department on mucking about with stuff like UARTs was as required as the Lambda Calculus course - all CS students had to pass both.

[Amusingly, networking was one of the optional 4th year classes - but in the pre-intraweb days nobody seemed to regard it as very important...]


I very much enjoyed the 2 logic classes I took through the philosophy department. My school used to offer an interdisciplinary major between Computer Science and Philosophy. Unfortunately not any more.


Microeconomics, not micro-processors. I'll correct the original. It was in the context of tradeoffs.


My CS education at South Dakota State University included a class on digital systems design. I had to build a circuit to control a pop machine as part of it.

Said education also included a class on macro- and microeconomics, incidentally.


Depends on your country, actually; in Germany, all the university-level programs do.


well, aachen does not.


Really? Thanks for correcting me then... thought some basic EE courses were part of the curriculum in at least most places...


CS in Aachen has at least basic introductions to EE and hardware.


I'm better at macro, that's why I play Zerg.


The one thing that micro and macro taught me was that I didn't want to be a Business major.


Accounting is what did it for me. :-)

I disliked how overly simplified micro and macro 101 were. The models were counter to what I knew was obviously true, so I didn't go any deeper until graduate school. Once I took some advanced courses, I realized how valuable the topics were.

Being an econ major is also very different from being a business major. I view it as analogous to being a math major versus being mechanical engineering. One is more theory, the other is more practical. Neither is better or worse in an absolute sense.


This is a bit off topic, but your comment brings up a point that I often see talked about on HN, but doesn't match my own personal experience. You say: "I view it as analogous to being a math major versus being mechanical engineering. One is more theory, the other is more practical." Now here's the thing. I have a degree in electrical engineering, and to get that degree, I had to do a full unit load of maths, physics and computer science. By full unit load, I mean that I had to do as many units in these subjects as a maths major / physics major / CS major would do in their respective courses. And these weren't 'lite' versions of the units, they were the same - we had maths / physics / CS majors in the lecture hall with us, and we did the same exams. The only real difference was that we didn't get to choose which units we did in these subjects - whereas the subject majors got to choose from a wider range of curricula, we were simply told that we would be studying statistics, or quantum mechanics, or operating system design.

Was my university a special case, or is the oft-repeated claim that engineering is less theory and more practice actually just a false generalisation?

I should of course note that after graduation the stereotype probably does hold true - engineers spend much less time worrying about formal proofs or highly accurate physics calculations; approximations are generally fine for what we do. But that is not the case whilst studying, at least not in my experience.


I think the issue is, if it's like the schools I'm familiar with, you took the 1xxx, 2xxx and maybe 3xxx stats/calculus/linear algebra courses. You didn't take their 3xxx/4xxx follow-ups that the math and CS (different courses, same idea) took. At Georgia Tech, CMPEs took 2130 Languages and Translation. The course introduced compilers, it introduced language hierarchies (regular, context-free, etc.), but the 3xxx or 4xxx CS theory and compilers courses were where the material was really taught. And CMPEs rarely took those courses. If they did they were seeking a second major (no minor in CS was offered at the time, IIRC).

The same is true in math. EE/CMPE, probably every E, took through differential equations, maybe another math course or two, but few took number theory, numerical analysis, real analysis, the 4xxx stats or 4xxx linear algebra courses.

At the undergraduate level, the depth an engineering major gets from the other departments (math, physics, CS) is nowhere near the depth that those majoring in those programs will see (depending on the school; I'll grant some schools may not have strong science or math programs beyond what the core engineering majors need).


My impression is that Engineering majors take as many Physics and Math classes, but they tend to be more on the practical side.

For instance, a math major can take a bunch of set theory, number theory and topology that an engineer wouldn't see. The math major could also take a curriculum that is indeed more similar to what the engineer takes.


They should also take a cooking class to understand project management. Seriously.


Yes. Cooking multiple dishes simultaneously for multiple people.


What is "micro"?


Micro-economics, I'd guess.


Yes - micro-economics. It's mostly about tradeoffs, referenced in the OP.


micro-processors?


Very well said. There is no one perfect programming language, no one perfect algorithm and no one perfect data structure for all problems and constraints you will face as a CS practitioner.

Really, a CS education is just preparing you to pick the right solution for the problem/constraints at hand. For example, you can loop through a list. That approach works fine. However, when you begin to scale, you may find that look-ups against a tree-based data structure or perhaps a hash table are much more time efficient at the cost of more complexity, more space and more educated programmers.
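
A rough illustration of that tradeoff in Python (made-up data and sizes):

    import random, time

    haystack = [random.randrange(10000000) for _ in range(100000)]
    needles = [random.randrange(10000000) for _ in range(1000)]

    # Plain linear scan: no setup cost, fine at small scale, O(n) per lookup.
    t0 = time.perf_counter()
    hits_scan = sum(1 for n in needles if n in haystack)
    t1 = time.perf_counter()

    # Hash-based lookup: extra memory and an up-front build, O(1) average per lookup.
    index = set(haystack)
    hits_hash = sum(1 for n in needles if n in index)
    t2 = time.perf_counter()

    print("scan: %.2fs  hash: %.2fs  (%d == %d)" % (t1 - t0, t2 - t1, hits_scan, hits_hash))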


A self-reminder I use to stay humble: If X really is so perfect, I better start looking for a new job because I'm no longer required.

We need look no further than the existence of the programming profession to see that there ain't no such thing as a free lunch.


As usual, silly little phrases like this are just silly. Lots of things are tradeoffs, but not everything. For example, picking good variable names doesn't involve any tradeoffs.


length of name vs completeness
capturing scope vs ignoring it (foo.foo_thing vs foo.thing)
capturing type vs ignoring it (studentList vs students)

and so on. I often struggle with competing desires in naming. Code Complete dedicates an entire chapter to variable naming.
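
To make the tension concrete, a contrived Python example (the names are hypothetical):

    from collections import namedtuple

    Student = namedtuple("Student", "name grade")
    students = [Student("alice", "B"), Student("bob", "A")]

    # Terse names: fine when the scope is a few lines and the context is obvious.
    for i, s in enumerate(students):
        print(i, s.name)

    # Explicit names: self-documenting, but noisy if repeated dozens of times.
    enrolled_students_for_course = students
    for student_index, enrolled_student in enumerate(enrolled_students_for_course):
        print(student_index, enrolled_student.name)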


In the same vein, length of name vs abundance of other candidate tokens of the same length. For example, in C there are only 52 possible one-character names (26 if you stick to the convention of using only lower-case characters), 52×63 two-character names, 52×63^2 three-character names, etc., etc.

It follows that shorter names should be reserved for more restricted scopes where additional context is available, i.e. the use of "i" and "j" as conventional names for counters, "x" and "y" for arithmetic operands, and "N" for the number of elements in a collection is a good thing. On the other hand, the use of terse names like "atoi" in the standard library is terrible, because its scope is all existing programs (or at least all C modules that directly or indirectly include stdlib.h).
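
For what it's worth, a quick check of those counts (assuming 52 letters for the first character and 63 characters -- letters, digits, underscore -- for the rest, and ignoring that C also allows a leading underscore):

    # Distinct C identifiers of exactly n characters, under the assumptions above:
    # 52 choices for the first character, 63 for each later one.
    FIRST, REST = 52, 63
    for n in range(1, 4):
        print(n, FIRST * REST ** (n - 1))  # 1 -> 52, 2 -> 3276, 3 -> 206388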


Picking takes time. It might be a small trade-off or you might be able to pay the cost only once by building up a habit of standardizing your variable names but still.


Picking variable names can definitely involve tradeoffs. For example, how long the variable name will be could be a tradeoff based on how much information the rest of your team has about the context in which the variable will be used. Do you need a variable to be very long so as to explicitly spell out a lot of details about itself? Is a very short name (such as i for iteration) acceptable in the current context?


Well, of course. But what is good? C.f. https://twitter.com/paskow/status/398811660057837568


I get the impression that the priority, among a large crowd here, is to say profound things rather than true things.


And here I thought that the only sure thing in computer science was that all problems can be solved by another level of indirection: http://en.wikipedia.org/wiki/Fundamental_theorem_of_software...


Except throughput of course.


The quote actually does not appear in Concepts, Techniques, and Models of Computer Programming but is Bernstein's summary of the work. Although it is an appropriate summary, it can easily be taken out of context. Van Roy and Haridi, authors of CTMCP, were mainly concerned with the trade-offs of programming paradigms. They observed that functional languages, like Haskell, and object-oriented/shared-state-concurrency languages, like Java, sit at opposite ends of a continuum. The functional paradigm is very easy to formally reason with, but can be very kludgy when expressing certain real world concepts, like state. The OO paradigm and the shared-state concurrency paradigm, on the other hand, are very powerful and expressive, but can be quite a pain to reason over. Van Roy and Haridi believed that programmers, rather than having to make a boolean decision between the two, should be allowed to work anywhere on the continuum, and they designed the Oz programming language to make that easy to do.

CTMCP is the bible for Oz. But it's more than just a language reference book. It is an exploration of this trade-off between language expressiveness and ease of comprehension. CTMCP outlines how programming concepts can be layered to form gradual steps between functional and OO languages by controlling the use of mutable state. The Oz language, like Haskell, starts in a functional paradigm and makes the use of state explicit, but unlike Haskell it also tries to make the use of state easy and intuitive, rather than overly verbose. CTMCP advocates avoiding the use of a single paradigm to solve all problems.

Oz also has concurrency built in natively, and one of the overarching themes of CTMCP is that the use of concurrency greatly benefits from an understanding of this trade-off between expressiveness and ease of comprehension. Concurrency doesn't have to be difficult, and in the case of functional languages (think dataflow concurrency, as seen in Unix pipes) can be quite easy to use and comprehend. CTMCP posits that as one moves from functional to OO, concurrency analogously becomes both more expressive and more difficult to formally reason over. The use of concurrency therefore can become more tractable when programmers are made more aware of these trade-offs.
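
As an aside, that pipe-like dataflow style can be sketched even in a mainstream language. A rough Python illustration with futures (nowhere near as clean as Oz's dataflow variables, but the shape is the same: each stage only reads its input and produces an output, so there is no shared mutable state to race on):

    from concurrent.futures import ThreadPoolExecutor

    def produce():      return list(range(10))      # stand-in for a slow source
    def transform(xs):  return [x * x for x in xs]
    def summarize(xs):  return sum(xs)

    with ThreadPoolExecutor() as pool:
        raw = pool.submit(produce)
        squared = pool.submit(lambda: transform(raw.result()))
        total = pool.submit(lambda: summarize(squared.result()))
        print(total.result())  # 285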

(CTMCP actually goes further and classifies the programming paradigms into an elaborate branching hierarchy, as can be seen here http://www.info.ucl.ac.be/~pvr/paradigmsDIAGRAMeng108.pdf)


However, keep in mind that it is only a tradeoff if you tried at least two things. Otherwise it's just a random choice.


You don't need to try things to make a choice. That's why we have theory.


It's easy to think of programs as existing in the unbounded abstract, but as soon as you run them, you'll find the logic fragments are competing for physical representational resources (bits in RAM, CPU cycles, etc.). So having feature X does cost feature Y for a fixed CPU and RAM budget.

Finite resources imply tradeoffs.


The Only Sure Thing In Software Development: Never Underestimate the Incompetence of the End User


As an econ nerd turned programmer, this is why I feel right at home.


I disagree that there is only one sure thing; it is also certain that:

A or not A


true for all branches of engineering. pretty much holds for all of life.


Economics got there first.



