Hacker News | podlp's comments

Rig sounds cool, I just joined the waitlist! I’m building something similar although with a much narrower purpose. Excited to learn more

Tell me more! Thanks for the waitlist

Sent a LinkedIn request. I’m building a language-specific coding agent using Apple Intelligence with custom adapters. It’s more of a proof-of-concept at this point, but basic functionality actually works! The 4K context window is brutal, but there are a variety of techniques to work around it: tighter feedback loops, linters, LSPs, and other tools to vet generated code, plus mechanisms for on-device or web-based API discovery. My hypothesis is that if all this can work “well enough” for one language/runtime, it could be adapted for N languages/runtimes.
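The vet-and-retry loop described above can be sketched roughly like this (a hypothetical illustration, not my actual code: `askModel` is a stub standing in for the on-device model, and a plain syntax check stands in for a real linter/LSP):

```javascript
// Stub for the model call; a real implementation would invoke the
// on-device model with the prompt plus any linter feedback.
function askModel(prompt, feedback) {
  return feedback ? "const x = 1;" : "const x = ;"; // retry succeeds
}

// Vet generated code: return null if it passes, or an error message.
// `new Function` is a cheap syntax check; a linter/LSP would go further.
function vet(code) {
  try {
    new Function(code);
    return null;
  } catch (e) {
    return e.message;
  }
}

// Tight feedback loop: generate, vet, and feed errors back on retry.
function generateVetted(prompt, maxAttempts = 3) {
  let feedback = null;
  for (let i = 0; i < maxAttempts; i++) {
    const code = askModel(prompt, feedback);
    feedback = vet(code);
    if (feedback === null) return code;
  }
  throw new Error("model could not produce valid code");
}
```

The point is that a small model with a 4K window doesn’t need to be right on the first try; it needs a cheap, fast verifier and a short error message to react to.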

That’s awesome! I’ve got a similar project for macOS/iOS using the Apple Intelligence models and the on-device STT Transcriber APIs. Do you think the models you’re using could be quantized further so they could be downloaded on first run using Background Assets? Maybe we’re not there yet, but I’m interested in a better, local Siri like this with some sort of “agentic lite” capabilities.

> Do you think the models you’re using could be quantized further so they could be downloaded on first run using Background Assets?

I first tried Qwen 3.5 0.8B at Q4_K_S and the model couldn't hold a basic conversation, although I haven't tried lower quants on the 2B.

I'm also interested in the Apple Foundation models, and it's something I plan to try next. AFAIK it's on par with Qwen-3-4B [0]. The biggest upside, as you alluded to, is that you don't need to download it, which is huge for user onboarding.

[0] https://machinelearning.apple.com/research/apple-foundation-...


Subjectively, AFM isn’t even close to Qwen. It’s one of the weakest models I’ve used. I’m not even sure how many people have Apple Intelligence enabled. But I agree, there must be a huge onboarding win long-term using (and adapting) a model that’s already optimized for your machine. I’ve learned how to navigate most of its shortcomings, but it’s not the most pleasant to work with.

Try it with mxfp8 or bf16. It's a decent model for doing tool calling, but I wouldn't recommend using it with 4 bit quantization.

Neat! I’ve actually been building with AFM, including training some LoRA adapters to help steer the model. With the right feedback mechanisms and guardrails, you can even use it for code generation! Hopefully I’ll have a few apps and tools out soon using AFM. I think embedded AI is the future, and in the next few years more platforms will come around to AI as a local API call, not an authorized HTTP request. That said, AFM is still quite immature, and I’m experimenting with newer models that perform much better.

I’m also working on agents in Swift with AFM; just having it already installed locally is a huge selling point. I think narrowly-focused agents with good tooling and architecture could accomplish quite a bit, with tradeoffs in speed and cost. But I’m under the assumption that local models (like frontier models) will only get better with time.

Location: Boston, MA

Remote: true

Willing to relocate: Seattle

Technologies: TypeScript, JavaScript, Java, Ruby, AWS, Docker, Android, React, Svelte, CSS, etc.

Résumé/CV: https://resume.barrasso.me

Email: tom@barrasso.me


Pretty nice to have menu bar integration. For macOS 26+, why not use the already-installed on-device speech transcription models?


I didn’t realize this was a thing: will try it out :)


Love the idea! It would be great to see a mode to switch to a circular interface for the Time Round 2. Definitely lots of potential for rapid iteration on new watch faces and apps!


I tried Elixir a few months back with several different models (GPT, Claude, and Gemini). I’m not an Elixir or BEAM developer, but the results were quite poor. I rarely got them to generate syntactically correct Elixir (let alone idiomatic Elixir). They often hallucinated standard library functions that didn’t exist. Since I had very little prior experience, steering the models didn’t go well. I’ve since been using them for JS/TS, Kotlin/Java, and a few other tasks where I’m much more familiar.

My takeaway was that these models excel at popular languages where there’s ample training material, but struggle where the languages change rapidly or are relatively “niche.” I’m sure they’ve since gotten better, so perhaps my perception is already out of date.


How do sites stop background playback? I’m guessing there’s more than just FocusEvent on the window.


The extension tackles two problems:

1) it deals with the browser normally pausing any media when you switch tabs/apps

2) it tells websites they are still in the foreground

To prevent any potential side-effects, these changes only apply once you start playing media on a website.
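Point 2 can be sketched roughly like this (an assumed illustration, not the extension's actual code; `spoofForeground` is a made-up helper name): a content script patches the Page Visibility properties a site checks before pausing playback.

```javascript
// Make a document-like object always report itself as foregrounded,
// so visibility checks like `document.hidden` never trigger a pause.
function spoofForeground(doc) {
  Object.defineProperty(doc, "visibilityState", {
    get: () => "visible", // site always sees itself as visible
    configurable: true,
  });
  Object.defineProperty(doc, "hidden", {
    get: () => false, // `if (document.hidden) pause()` checks also pass
    configurable: true,
  });
  // A real extension would likely also swallow the related events, e.g.:
  // window.addEventListener("visibilitychange",
  //   (e) => e.stopImmediatePropagation(), true);
}
```

Some sites also watch `blur`/`focus` on `window`, so a complete solution presumably intercepts those too.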


Cool app! It’s quite polished and scrolling is smooth on a 13 Mini. Some quick thoughts:

- I didn’t see any loading indicator, and some videos took multiple seconds to load

- When I scroll up then back down, I get a white gradient at the bottom of the screen that goes away after a few seconds

- I’d like to be able to expand the description text, but clicking it didn’t do anything

- Without signing in, my initial feed was almost entirely ICE videos

