It sounds good, but it ultimately fails to comprehend the question: ignoring the word "bandwidth" and just spewing pretty nonsense.

Which is appropriate, given the applications!

I see that they mention it uses LPDDR5X, so the bandwidth will be nowhere near that of something using HBM or GDDR7, even if the bus width is large.

Edit: I found elsewhere that the GB10 has a 256-bit LPDDR5X-9400 memory interface, allowing for ~300 GB/s of memory bandwidth.
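
For anyone checking the arithmetic, a quick sketch in Python (the 256-bit bus and 9400 MT/s data rate are the figures reported above, not something I've confirmed against NVIDIA's spec sheet):

    # Peak theoretical bandwidth = bus width in bytes * transfers per second.
    bus_width_bits = 256      # reported GB10 memory interface width
    data_rate_mtps = 9400     # LPDDR5X-9400: mega-transfers per second
    bandwidth_gbps = (bus_width_bits / 8) * data_rate_mtps / 1000
    print(f"~{bandwidth_gbps:.0f} GB/s")  # ~301 GB/s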



For comparison, the RTX 5090 has a memory bandwidth of 1,792 GB/s. The GX10 will likely be quite disappointing in terms of tokens per second and therefore not well suited for real-time interaction with a state-of-the-art large language model or as a coding assistant.
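
Back-of-envelope for why that matters: single-stream decoding is roughly memory-bandwidth-bound, so tokens per second is on the order of bandwidth divided by the bytes of weights read per token. The 70 GB figure below is a hypothetical example (a ~70B-parameter dense model at 8-bit weights), not a benchmark:

    # Memory-bound decode estimate: tokens/s ~= bandwidth / bytes of weights per token.
    weights_gb = 70  # hypothetical ~70B dense model at 8-bit quantization
    for name, bw_gbps in [("GX10 @ ~300 GB/s", 300), ("RTX 5090 @ 1792 GB/s", 1792)]:
        print(f"{name}: ~{bw_gbps / weights_gb:.0f} tok/s")
    # GX10:     ~4 tok/s
    # RTX 5090: ~26 tok/s (if the weights fit in its 32 GB of VRAM, which they wouldn't)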


It doesn't sound good at all. It sounds like malicious evasion and marketing bullshit.


It gives you a very good idea of the capability of the models you'll be running on it!


It doesn't give a good idea of anything. We already know it has 128GB unified memory from the first bullet point on the page.


GP was subtly implying that the text was written by an LLM (running in the very same Ascent GX10).


Ah! Thanks for explaining. haha


With a little tinkering we can just have the AI gaslight us about its capabilities.


I think the previous user was joking that LLMs spew nonsense on top of AI BS, which makes this product quite fitting.



