I've worked in recommender systems for a while, and it's great to see them publicized.
SASRec was released in 2018, shortly after the transformer paper, and uses the same attention mechanism but a different training loss than LLMs. Any plans to upgrade to other item/user prediction models?
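For anyone curious what "different losses" means in practice, here's a minimal PyTorch sketch (my own names and shapes, not the paper's code): an LLM trains with full softmax cross-entropy over the whole vocabulary, while SASRec scores the true next item against a sampled negative with binary cross-entropy.

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes, just to make the contrast concrete:
#   seq_repr: (batch, seq_len, dim)  transformer output at each position
#   item_emb: (num_items, dim)       shared item embedding table

def llm_style_loss(seq_repr, item_emb, next_items):
    # LLM-style training: full softmax cross-entropy over the whole
    # item vocabulary, the way a language model scores every token.
    logits = seq_repr @ item_emb.T  # (batch, seq_len, num_items)
    return F.cross_entropy(logits.flatten(0, 1), next_items.flatten())

def sasrec_style_loss(seq_repr, item_emb, pos_items, neg_items):
    # SASRec-style training: binary cross-entropy on the true next item
    # versus one randomly sampled negative per position.
    pos_scores = (seq_repr * item_emb[pos_items]).sum(-1)  # (batch, seq_len)
    neg_scores = (seq_repr * item_emb[neg_items]).sum(-1)
    return -(F.logsigmoid(pos_scores) + F.logsigmoid(-neg_scores)).mean()
```

The sampled-negative version is what keeps training cheap when the item catalog runs into the millions, where a full softmax would be expensive.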
I'm not an expert by any means, but as far as sequential recommendation goes, aren't SASRec and its derivatives pretty much the name of the game? I probably should have looked into HSTUs more. Also worth a look, this paper and sparse transformers in general: https://arxiv.org/pdf/2212.04120
There are a few alternatives, but SASRec is a good baseline for next-item recommendation. I'd look at BERT4Rec too. HSTU is definitely a strong step forward, but it stays in the domain of ID models. HSTU also seems to rely heavily on extra item information that SASRec does not use (timestamps).
Other models include Google's TIGER, which uses a residual-quantized VAE (RQ-VAE) to turn item content embeddings into discrete "semantic IDs", similar to the residual vector quantization used in modern text-to-voice codecs.
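The core trick is residual quantization: each codebook level quantizes the residual left by the previous one, so an item's content embedding becomes a short tuple of discrete codes. A toy sketch of the idea (hypothetical names; the real TIGER trains the RQ-VAE end to end):

```python
import torch

def residual_quantize(x, codebooks):
    # Toy version of the residual quantization behind TIGER's semantic IDs.
    # x: (dim,) content embedding of one item
    # codebooks: list of (num_codes, dim) tensors, one per quantization level
    residual, semantic_id = x.clone(), []
    for cb in codebooks:
        idx = torch.cdist(residual.unsqueeze(0), cb).argmin()  # nearest code
        semantic_id.append(int(idx))
        residual = residual - cb[idx]  # the next level quantizes what's left
    return tuple(semantic_id)          # e.g. (12, 7, 253) instead of a raw ID
```

Because items with similar content end up with overlapping code prefixes, long-tail items get to share statistical strength with their popular neighbors instead of sitting on an ID nobody has interacted with.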
Thank you for the recommendations. I didn't try BERT4Rec because I assumed it would perform the same as, or worse than, what I already had after reading https://dl.acm.org/doi/pdf/10.1145/3699521. The TIGER paper seems interesting - I definitely want to explore semantic IDs, both in general and because I think they could allow including more long-tail items.
We're missing out on the serendipity of search, and possibly duplicating work. Answers are handed out without the work, which leads to bland results.
Used properly, I don't think AI proofreading would necessarily lead to this. Your initial work is the 'hypothesis'. Then AI does the cleanup and a high-level lit review. Just don't let it change your direction like the writer did in the comic.
I used to think that technology, or lack of it, would solve these problems. More connection. More communication. But it really won't.
It's just as much about the outreach as it is about the writing. Email authors you like. Give them thoughtful feedback. Be generous. It's hard work. It's relationship and community building.
This! Not many people will reach out to a blogger to say thank you or give their take on a post. That's the first building block to a thriving community.
My progression has been st -> kitty -> ghostty. I wanted to love st, but found too many unpolished corners. Kitty was great, but it felt like the exact opposite of st: very large and opinionated. ghostty, at least originally, was something new, in between st and kitty. With Claude Code, I wonder where the landscape of personalized software will land. st and others may be on to something in this era.
Funnily enough, I saw this too. Yesterday I upgraded from ghostty 1.0.0 to 1.2.0 and was hit with a startup delay that 1.0.0 didn't have. Mine was around 5s on a fresh reboot, but after I opened a few ghostty windows, the delay went away. I'll be keeping an eye on it.
I spent a long time on the GTK/VTE-based terminals (sakura; I later wrote my own called svte), then moved to st until I had too many patches stacked up, then to alacritty, though that last switch took a long time because I couldn't figure out why the kerning was different from st!