pu_pe | 5 months ago | on: Context is the bottleneck for coding agents now
I'm curious about it too. I think there are two bottlenecks: one is that training a relatively large LLM is resource-intensive (which is why people reach for RAG and other shortcuts); the other is that fine-tuning it to your use case might make it dumber overall.
koakuma-chan | 5 months ago
> fine-tuning it to your use case might make it dumber overall.
LoRA doesn't overwrite weights.
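A minimal PyTorch sketch of that point, assuming a standard LoRA setup (the class name, rank, and alpha here are illustrative, not from any particular library): the pretrained weight is frozen, and only a small low-rank delta is trained on top of it.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        # Wraps a frozen pretrained linear layer with a trainable
        # low-rank update: effective weight = W + (alpha / r) * B @ A.
        # The base weight W itself is never modified.
        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            self.base.weight.requires_grad_(False)  # freeze pretrained weights
            if self.base.bias is not None:
                self.base.bias.requires_grad_(False)
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: adapter starts as a no-op
            self.scaling = alpha / r

        def forward(self, x):
            # Frozen base output plus the trainable low-rank correction.
            return self.base(x) + self.scaling * (x @ self.A.T) @ self.B.T

Dropping the adapter restores the original model exactly, which is the sense in which nothing is overwritten.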
pu_pe | 5 months ago
Do you need to overwrite weights to produce the effect I mentioned above?
koakuma-chan | 5 months ago
Good point