Hacker News | skayvr's comments

I've worked in recommender systems for a while, and it's great to see them publicized.

SASRec was released in 2018, just after the Transformer paper, and uses the same attention mechanism but different losses than LLMs. Any plans to upgrade to other item/user prediction models?


I'm not an expert by any means but as far as sequential recommendations go, aren't SASRec and its derivatives pretty much the name of the game? I probably should have looked into HSTUs more. Also this / sparse transformers in general: https://arxiv.org/pdf/2212.04120


There are a few alternatives, but SASRec is a good baseline for next-item recommendation. I'd look at BERT4Rec too. HSTU is definitely a strong step forward, but stays in the domain of ID models. HSTU also seems to rely heavily on some extra item information that SASRec does not use (timestamps).
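For anyone unfamiliar with what "next-item recommendation with attention" actually looks like, here's a minimal numpy sketch of the SASRec idea: run causal self-attention over the embeddings of the user's item history, then dot the last position's output against the item table to score candidates. Everything here is a toy stand-in (random untrained embeddings, no learned Q/K/V projections, positional embeddings, layer norm, or FFN, all of which the real model has).

```python
import numpy as np

rng = np.random.default_rng(0)

n_items, d = 100, 16  # toy catalogue size and embedding dimension
item_emb = rng.normal(size=(n_items, d)) / np.sqrt(d)

def causal_self_attention(x):
    """Single-head causal self-attention (no learned projections, for brevity)."""
    scores = x @ x.T / np.sqrt(x.shape[1])
    mask = np.tril(np.ones(scores.shape, dtype=bool))  # each position sees only its past
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x

def next_item_scores(history):
    """Score every catalogue item as the next item after `history` (a list of item ids)."""
    h = causal_self_attention(item_emb[history])
    return h[-1] @ item_emb.T  # the last position's state predicts the next item

scores = next_item_scores([3, 17, 42])
print(scores.shape)  # one score per candidate item
```

Training then pushes the score of the actually-observed next item above sampled negatives at every position, which is where SASRec's loss differs from an LLM's full-softmax cross-entropy.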

Other models include Google's TIGER, which uses an RQ-VAE to encode items into discrete "semantic IDs" that carry content information. Similar to how modern text-to-speech tokenizes audio.


Thank you for the recommendations. I didn't try BERT4Rec because I assumed it would perform the same as or worse than what I already had, after having read https://dl.acm.org/doi/pdf/10.1145/3699521. The TIGER paper seems interesting - I definitely want to explore semantic IDs in general, partly because I think they could allow including more long-tail items.


I'd recommend OneRec, which improves on HSTU and recently became open source.


We're missing out on the serendipity of search, and possibly duplicating work. Answers are handed out without the work, which leads to bland results.

Treated properly, I think AI proofreading wouldn't necessarily lead to this. Your initial work is like the 'hypothesis'. Then AI does the cleanup and a high-level lit review. Just don't let it change your direction like the writer did in the comic.


I used to think that technology, or lack of it, would solve these problems. More connection. More communication. But it really won't.

It's just as much about the outreach as it is about the writing. Email authors you like. Give them thoughtful feedback. Be generous. It's hard work. It's relationship- and community-building.


This! Not many people will reach out to a blogger to say thank you or give their take on a post. That's the first building block to a thriving community.


Reminds me of Wolfram's 'A New Kind of Science'. Specifically his Principle of Computational Equivalence: https://en.wikipedia.org/wiki/A_New_Kind_of_Science#Principl...


My progression has been st -> kitty -> ghostty. I wanted to love st, but found too many unpolished corners. Kitty was great, but it felt like the exact opposite of st: very large and opinionated. Ghostty, at least originally, was new and something between st and kitty. With Claude Code, I wonder where the landscape of personalized software will land. st and others may be on to something in this era.


[foot](https://codeberg.org/dnkl/foot) is an excellent st alternative on wayland.


Thanks, this is right up my alley. When I make the switch from X to Wayland, I'll check this out.


I recently switched back to kitty from ghostty. Ghostty was taking > 1s to boot, and I really don't have the time to debug it.


Funnily enough, I saw this too. Yesterday I upgraded from ghostty 1.0.0 to 1.2.0 and was hit with a startup delay; 1.0.0 didn't have it. My delay was around 5s on a fresh reboot. However, after I opened a few ghostty windows, the delay went away. I'll be keeping an eye on it.


Odd. I’ve never had a new window or tab take more than maybe 20ms.


I spent a long time on the GTK VTE-based terminals (sakura; I later wrote my own called svte), then moved to st until I had too many patches stacked, then alacritty, though that took a long time to get to because I couldn't figure out why the kerning was different than st's!

Now I am on that and it is fine.

