Parallella, A $99 Supercomputer Running Ubuntu (ubuntuvibes.com)
46 points by vilgax on Sept 29, 2012 | hide | past | favorite | 39 comments


This is hyperbole. A supercomputer in 2012 is a computing platform that can do at least tens or maybe hundreds of teraFLOPS. What is described here is a dual-core ARM platform with some sort of vector-like co-processor.

Even a huge grid of these probably wouldn't qualify as a supercomputer. Gigabit Ethernet has far too high a latency to be a valid interconnect for coupled parallel problems.

Don't get me wrong, it's neat and I'd love to see some benchmarks, but it's not a SUPERcomputer.


'This is hyperbole.'

Agreed, but I think the cost, performance, and power-consumption ratios are very interesting.

Strong possibility the bitcoin miners could be all over this, if it ever makes it out the door.


I'd like to see what instructions are available on the co-processors. It seems like it would be great for physical simulations and other embarrassingly parallel things. I guess there's some need for things in between regular CPUs and expensive computing grids. Offhand, master's and PhD candidates might have simulations to do that this would be useful for, where other platforms would be overkill/too expensive.


I guess there's some need for things in between regular CPUs and expensive computing grids.

Like GPUs?


Yes, GPU-like devices that consume 5 watts.


If you cut a GPU down to only 64 "cores" at only 700 MHz, it may very well consume only 5 W, e.g. AMD Brazos/Kabini.


Come on, are you being difficult on purpose? How are you going to boot the graphics card? Access network or disk resources?


I'm pretty sure the Adapteva chip can't do that either; it requires a host processor just like a GPU does.


... which is on the board.


I'm sure it will perform effectively for more than just embarrassingly parallel problems, but a 2 Gbit interconnect is a rather steep limitation — much lower than the few thousand we're used to with existing GPUs.


A definition which works well outside of any specific timeframe is that a supercomputer is a very expensive computer.

So, worse than hyperbole, it's an oxymoron.


As a rule of thumb, I avoid buying supercomputers from people who state their performance figures in "GHz".


Definitely a good point, but maybe the project creators are trying to appeal to a broader audience (one that only understands GHz as a measure of CPU power)? I'm sure they know that GHz does not directly correlate to performance. Just playing devil's advocate, I have nothing riding on their project, but I would like to see someone shake up the chip industry (if possible).


Haha, exactly. I was thinking about how I could express my doubts about this project. That's it :)


Couldn't agree more.


1. "it's laggy running ubuntu - what's the big deal". It's running Ubuntu with the dual-core ARM CPU on board, not the Epiphany chips. The guy is demonstrating that the boards they are shipping provide a user-friendly environment that you can jump into and use Eclipse to write code for the multicore chips. This has NOTHING to do with the multicore chips themselves, and has no intent of demonstrating the power of the "supercomputer" part of the board.

2. "Why would you use this if you could just use a GPU - they're really parallel right??" - GPUs are very, very different beasts from CPUs. They are great at what they do, but they are tailored for very specific problems. Look up SIMD. A tonne of general-purpose programs which need, for example, a simple 'if' statement quickly break down under SIMD.

3. "This will be great for mining bitcoins" - yeah, but you can do that on a GPU, so stick to that. As far as I can see (and why I backed the project), this board will be great for those problems which are not immediately or easily implementable as a wavefrontable algorithm for the GPU. I'm hoping you can just write a C program utilising pthreads which will be run on the Epiphany's cores.


"Making parallel computing easy to use has been described as "a problem as hard as any that computer science has faced". With such a big challenge ahead, we need to make sure that every programmer has access to cheap and open parallel hardware and development tools."

But the real challenge is in parallelizing the algorithms, reducing data dependencies, and so on. I can get my feet wet with parallel processing on a multi-core PC just fine; making a program run efficiently in parallel is an entirely different challenge, and I don't see how this platform can help me do that.


"I can get my feet wet with parallel processing on a multi-core PC just fine"

No, you can't, unless you pay several orders of magnitude more than $99. (Affordable) multi-core today means 2–4 cores at most. You can get your feet wet with the graphics card, though. I did, and this is the reason I'm backing this project.

Parallel computing is a different paradigm than serial — in fact, almost the opposite: instead of a big central memory, you program for small distributed blocks of memory. Taking this into account can mean 200x faster than not.

Once you have a parallel design you can port it to different platforms or even hardware (which is parallel by nature) very easily. But you need a platform that is flexible (more so than FPGAs) and close enough to software tools for testing, and this is great for that.


This was covered by http://news.ycombinator.com/item?id=4583263

I'm cautiously optimistic the $99 board will succeed.


It's interesting enough and I applaud the effort, but a $750k funding goal is ludicrous. Places like Penny Arcade didn't even crack $600k with an infinitely bigger audience.


$750k is peanuts to many of the people who will be interested in this. The problem is, it's not a B2C sale, it's the kind of thing where you need a few salesmen.

Hit up universities, research groups, Boeing, Ford, the NSA. Tell them it won't just save them costs, but help train the next generation of modellers.

At the very least, they need resources (like a PPT deck) for internal advocates to use.


Size of audience is hardly the only variable in determining what kind of funding goal can be reached.


The FLOPS/watt ratio might be higher than regular servers or desktop machines (might be — it's not clear), but for a lot of small-scale homebrewed deployments it's not realistic to expect linear scaling (given the problems/algorithms/available amount of development time/expertise) — in practice, decent single-core performance is important.

I.e. in many cases, if you have the development and ops expertise to get stuff to scale to many cores, then you probably have the budget to get more serious hardware.


Watched a 20-minute-long interview, watched a 45-second video of a person using Ubuntu (performance was laggy, to be honest), and read numerous articles.

There's only one question unanswered, and it's the most important one: What can we DO with this thing?

It's not about the hardware, it's about the software! Show me demos of things that are not possible without this hardware and I'll be impressed. Show me how this new $99 multicore solution will offer new experiences and I'll be interested.


I'm inclined to really like this, even if the CPUs aren't really open, but one of the videos is a bit odd:

https://dl.dropbox.com/u/1237941/vlcsnap-2012-09-29-01h48m13...

Never mind the dubious use of pure C rather than SIMD instructions... why are they doing benchmarks with a function that has all the arguments marked volatile!?


This chip doesn't have SIMD; in theory that makes it easier to program.


Advantages over multi-gpu core?

Other than the obvious of running a standard OS...


This should excel in the "computation per joule" metric.

I am cautiously hopeful that funding will succeed and I will be using mine in conjunction with a broadband radio front end to simultaneously receive and decode a great number of FM voice channels at a remote, solar-powered location with long periods of clouds.

(I have existing ARM boards whose GPU hardware might have been useful, but they are not openly accessible.)


Cores are actually general-purpose: not SIMT/warps with expensive branches.


I was trying to figure this out. Where did you find the information confirming that the cores are general-purpose? I had inferred that they were GPU-like from the use of the OpenCL programming language. Perhaps ignorantly.


http://www.adapteva.com/introduction/

Each core is a RISC processor with local memory. OpenCL is designed to target heterogeneous architectures and map to whatever compute is available.


"Advantages over multi-gpu core?"

Well, multi-GPU debugging is terrible. You need different cards (you can't use the one that powers the display), and there is only one company that counts there, Nvidia.

Nvidia is married to Microsoft, and the only intuitive tool you can use for debugging is Windows-only — no Mac or Linux support.

No UNIX support in a pro tool is a big no-no for me.

Another problem is that it evolved from graphics, so you need to use graphics concepts whether you need them or not.

The good side of doing that is that we can take advantage of the economies of scale of game tech to get good prices.

The bad side is that you can't use it as a stand-alone tool for what you want, like chemical or physical problems.


I take it you never used CUDA or OpenCL before? Because everything you said is complete bullshit.

>Well, multi GPU debugging is terrible. You need different cards(you can't use the one that powers the display) and there is only one company that counts there, Nvidia.

You can run computations on the same card as the display. You can compile to software emulation to debug logic code.

>Nvidia is married with Microsoft, and the only intuitive tool you can use for debugging is Windows-only, no mac or Linux support.

CUDA works on Windows and Linux; not sure how good the Mac support is.

>Another problem is that it evolves from graphics and you need to use graphic concepts whether you need it or not.

You don't need to understand any graphics concepts. It's parallel programming concepts you need.


CUDA development on UNIX is arguably easier than on Windows. I have no clue where you got your information, but it definitely isn't accurate — none of it. Where did you get that you can't use a card that powers the display? That works just fine, and they have good driver support. You don't need to use graphics concepts — yes, there are still some remnants of that (for instance, in naming conventions), but for the most part GPUs are best described as coprocessors that happen to be able to drive a display.

You can use them for chemical or physical problems just fine (provided you are willing to do the programming).

What a load of nonsense.


> can't use the one that powers the display

Not true. I'm running OpenCL GPU code while reading this on the same machine with one AMD 6570 GPU right now.

> only Nvidia

Not completely true. Nvidia is doing much more, but AMD's cards are more than capable, and OpenCL can work. AMD was/is certainly the favourite of the Bitcoin miners.


Single chip running at 2W.


GPUs are more expensive. I mean, why not just run a stack of FPGAs?


With appropriate peripherals, would this make a great router platform?


Perhaps, if you frequently perform computational fluid dynamics on it.



