I wonder if there is a path where a model can be trained for variable reasoning layer reuse and determine at a token level how many times to traverse the reasoning blocks. Much like adjustable reasoning levels now, but only repeating thinking circuits instead of running through full output reasoning chains.
While the author mentioned multiple passes through the block didn't help in this instance, I can't help but wonder if it would work if it were built in during training.
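To make the idea concrete, here is a toy numpy sketch of a shared "reasoning block" re-applied a variable number of times per token, with a halting head deciding when each token stops looping (in the spirit of adaptive computation time). Everything here is illustrative: the weights, shapes, threshold, and halting rule are made up, and this is not the author's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 8
W_block = rng.normal(scale=0.3, size=(d, d))  # shared reasoning-block weights (weight tying)
w_halt = rng.normal(size=d)                   # per-token halting score head

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_depth(h, max_passes=6, threshold=0.5):
    """Re-apply the same block until the halting head fires, per token.

    h: (seq_len, d) hidden states. Returns updated states and the
    number of passes each token actually used.
    """
    passes = np.zeros(len(h), dtype=int)
    active = np.ones(len(h), dtype=bool)
    for _ in range(max_passes):
        if not active.any():
            break
        # residual update through the shared block, only for still-active tokens
        h[active] = h[active] + np.tanh(h[active] @ W_block)
        passes[active] += 1
        # tokens whose halting score crosses the threshold stop looping
        halt = sigmoid(h @ w_halt) > threshold
        active &= ~halt
    return h, passes

h0 = rng.normal(size=(5, d))
h1, n_passes = adaptive_depth(h0.copy())
print(n_passes)  # per-token pass counts, between 1 and max_passes
```

Training something like this end to end is exactly the open question: the halting head would need a learning signal (e.g. a compute penalty), which is presumably why bolting extra passes onto an already-trained model didn't help.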
Markdown UI and my approach share the "markdown as the medium" insight, but they're fundamentally different bets:
Markdown UI is declarative — you embed predefined widget types in markdown. The LLM picks from a catalog. It's clean and safe, but limited to what the catalog supports.
My approach is code-based — the LLM writes executable TypeScript in markdown code fences, which runs on the server and can render any React UI. It also has server-side state, so the UI can do forms, callbacks, and streaming data — not just display widgets.
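To make the contrast concrete, here is a minimal Python sketch of the code-based pipeline (the fence tag, function names, and segment labels are my illustrative assumptions, not the actual project's API): the LLM's markdown reply is split into prose segments and executable fences, and only the tagged fences would get handed to a server-side runtime.

```python
import re

# Any fence tagged `tsx` is lifted out for server-side execution
# instead of being rendered as text. A declarative approach would
# instead match widget shortcodes against a fixed catalog here.
FENCE = re.compile(r"```(\w+)\n(.*?)```", re.DOTALL)

def split_markdown(md):
    """Split LLM output into prose segments and executable code segments."""
    parts, pos = [], 0
    for m in FENCE.finditer(md):
        if m.start() > pos:
            parts.append(("prose", md[pos:m.start()]))
        kind = "code" if m.group(1) == "tsx" else "prose"
        parts.append((kind, m.group(2)))
        pos = m.end()
    if pos < len(md):
        parts.append(("prose", md[pos:]))
    return parts

reply = "Here is a form:\n```tsx\nexport default () => <Form/>;\n```\nDone."
segments = split_markdown(reply)
print([kind for kind, _ in segments])  # ['prose', 'code', 'prose']
```

The interesting part (and the risk) is everything after the split: sandboxing the extracted code and wiring its callbacks back to server-side state.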
I tend to think people who argue about the economics or community issues miss the forest for the trees. For the most part, other than biological drive, having kids is stupid. The systems that most people complain about failing - mostly around the community or economic costs of childcare - exist to make having children less stupid. We dramatically reduced teen and early-20s pregnancy rates, when hormones are yelling at us to make babies, and then expected people to have kids later in life, when they're better at self-control?
Then, people who have a child that young are far, far more likely to have additional children. Outside of the first few years, a sibling often reduces the strain on the parents, and provides additional value. Your life starts to orient around the kid(s), and we get a couple of other hormone boosts so we love them and want more of them.
I am consistently confused that this conversation never seems to touch on just how many births are mostly because two people's biology overrode their judgement and that initial failure results in a feedback loop where you have another child or two. If that poor judgement doesn't happen, you don't kick off that loop, and then you're trying to rationally choose to do something that never made all that much sense in the first place.
I think it's clear that the reduction in teen pregnancy is indeed a big contributor to the decreasing fertility rate. I would guess the reason this doesn't get brought up in discussions about how to _increase_ the fertility rate is that reversing the trend on teen pregnancy is just really not a palatable solution to many people. There are some, usually on the religious right, who advocate for banning contraception, teaching abstinence-only sex education, etc., which would most likely have the effect of reversing the teen pregnancy trend.
I think not talking about it skews the conversation towards incorrect remedies - the discourse is about what has changed about the economy, communities, family life, etc, that makes people want fewer kids and then trying to derive solutions from those things as the assumed problem. It makes too much of the discourse a question of “how do we go back to the previous conditions?”
If instead we say this is a biological imperative that we have interrupted and many people don’t rationally want children no matter how perfect those conditions are, then instead of looking back to previous states, we can ask what new conditions must occur to change this behavior.
> Out of all of the different things these agents can do, surely most forms of "routine" customer support are the lowest hanging fruit?
I come from a world where customer support is a significant expense for operations and everyone was SO excited to implement AI for this. It doesn't work particularly well and shows a profound gap between what people think working in customer service is like and how fucking hard it actually is.
Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems.
> shows a profound gap between what people think working in customer service is like and how fucking hard it actually is
Nicely fitting the pattern where everyone who is bullish on AI seems to think that everyone else's specialty is ripe for AI takeover (but not my specialty! my field is special/unique!)
As someone who does support I think the end result looks a lot different.
AI works quite well for a lot of support questions and does solve lots of problems in almost every field that needs support. The issue is that this often removes the roadblocks that kept cautious users from doing something incredibly stupid, which then needs support to understand what the hell they've actually done. Kind of a Jevons paradox of support resources.
AI/LLMs also seem to be very good at pulling out information on trends in support and what needs to be sent for devs to work on. There are practical tests you can perform on datasets to see if it would be effective for your workloads.
The company I work at did an experiment on looking at past tickets in a quarterly range and predicting which issues would generate the most tickets in the next quarter and which issues should be addressed. In testing, the AI did as well as or better than the predictions we had made at the time, and called out a number of things we deemed less important that had large impacts in the future.
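A hypothetical reconstruction of that kind of backtest, with made-up ticket data and category names, to show the scoring idea: give the model one quarter, score its predicted top issues against the next quarter's actual top issues, alongside a naive "next quarter looks like this one" baseline.

```python
from collections import Counter

# Made-up ticket categories for two consecutive quarters.
q1_tickets = ["login", "billing", "login", "export", "billing", "login"]
q2_tickets = ["export", "export", "login", "export", "billing", "export"]

def top_issues(tickets, n=2):
    """The n most common ticket categories."""
    return [issue for issue, _ in Counter(tickets).most_common(n)]

# Naive human baseline: assume next quarter looks like this one.
baseline = top_issues(q1_tickets)

# Stand-in for the model's output (in the real experiment this came
# from the LLM; here it is hard-coded just to show the scoring).
model_prediction = ["export", "login"]

actual = top_issues(q2_tickets)

def hit_rate(pred, actual):
    """Fraction of the actual top issues the prediction caught."""
    return len(set(pred) & set(actual)) / len(actual)

print(hit_rate(baseline, actual), hit_rate(model_prediction, actual))
```

The nice property of this framing is that it is cheap to run retroactively on your own ticket history before trusting the model with anything customer-facing.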
I think that's more the area I'd expect genAI to be useful (support folks using it as a tool to address specific scenarios), rather than just replacing your whole support org with a branded chatbot - which I fear is what quite a few management types are picturing, and licking their chops at the resulting cost savings...
Tickets are a very different domain though. Tickets are the easiest use case for AI (as you have the least constraints on real-time interaction), but reference cases in tickets have ridiculously low true-resolution (customer did not contact you about the same issue again).
The default we've seen is naive implementations are a wash. Bad AI agents cause more complex support cases to be created, and also make complex support cases the ones that reach reps (by virtue of only solving easy ones). This takes a while to truly play out, because tenured rep attrition magnifies the problem.
to be fair at least half of the software engineers i know are facing some level of existential crisis when seeing how well claude code works, and what it means for their job in the long term
and these people are not junior developers working on trivial apps
Yeah, I've watched a few peers go down this spiral as well. I'm not sure why, because my experience is that Claude Code and friends are building a lifetime of job security for staff-level folks, unscrewing every org that decided to over-delegate to the machine
Cleanup is less enjoyable than product building. If every future job is cleaning up a massive pile of AI slop, then that is a less fulfilling world than currently.
Perhaps even more so given the following tagline, "Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems", lol. I suppose it's possible eightysixfour is an upper-middle manager or executive though.
IMO we can augment this criticism by asking which tasks the technology was demoed on that made them so excited in the first place, and how much of their own job is doing those same tasks--even if they don't want to admit it.
__________
1. "To evaluate these tools, I shall apply them to composing meeting memos and skimming lots of incoming e-mails."
2. "Wow! Look at them go! This is the Next Big Thing for the whole industry."
3. "Concerned? Me? Nah, memos and e-mails are things everybody does just as much as I do, right? My real job is Leadership!"
4. "Anyway, this is gonna be huge for replacing staff that have easier jobs like diagnosing customer problems. A dozen of them are a bigger expense than just one of me anyway."
We're working on this problem at large enterprises, handling complex calls (20+ minutes). I think the only reason we have any success is because the majority of the engineering team has been a customer support rep before.
Every company we talk to has been told "if you just connect openai to a knowledgebase, you can solve 80% of calls." Which is ridiculous.
The amount of work that goes in to getting any sort of automation live is huge. We often burn a billion tokens before ever taking a call for a customer. And as far as we can tell, there are no real frameworks that are tackling the problem in a reasonable way, so everything needs to be built in house.
Then, people treat customer support like everything is an open-and-shut interaction, and ignore the remaining company that operates around the support calls and actually fulfills expectations. Seeing other CX AI launches makes me wonder if the companies are even talking to contact center leaders.
There are some solid use cases for AI in support, like document/inquiry triage and categorization, entity extraction, even the dreaded chatbots can be made to not be frustrating, and voice as well. But these things also need to be implemented with customer support stakeholders who are on board, not just pushed down the gullet by top brass.
Yes but no. Do you know how many people call support in legacy industries, ignore the voice prompt, and demand to speak to a person to pay their recurring, same-cost-every-month bill? It is honestly shocking.
There are legitimate support cases that could be made better with AI but just getting to them is honestly harder than I thought when I was first exposed. It will be a while.
Demanding a person on the phone use the website on your behalf is a great life hack, I do it all the time. Often they try to turn me away saying "you know you can do this on our website", I just explain that I found it confusing and would like help. If you're polite and pleasant, people will bend over backwards to help you out over the phone.
With "legacy industries" in particular, their websites are usually so busted with short session timeouts/etc that it's worth spending a few minutes on hold to get somebody else to do it.
Sorry, I disagree here. For the specific flow I'm talking about - monthly recurring payments - the UX is about as highly optimized for success as it gets. There are ways to do it via the web, on the phone with a bot, bill pay in your own bank, set it up in-store, in an app, etc.
These people don't want the thing done, they want to talk to someone on the phone. The monthly payment is an excuse to do so. I know, we did the customer research on it.
Recurring monthly payments I set to go automatic, but setting that up in the first place I usually do through a phone call. I know some people just want somebody to talk to, same as going through the normal checkout lines at the grocery store, but I think an equally large part of this is people just wanting somebody else to do the work (using the website, or scanning groceries) for them.
> but I think an equally large part of this is people just wanting somebody else to do the work (using the website, or scanning groceries) for them.
Again, this is something my firm studied. Not UX "interviews," actual behavioral studies with observation, different interventions, etc. When you're operating at utility scale there are a non-negligible number of customers who will do more work to talk to a human than to accomplish the task. It isn't about work, ease of use, or anything else - they legitimately just want to talk.
There are also some customers who will do whatever they can to avoid talking to a human, but that's a different problem than we're talking about.
But this is a digression from my main point. Most of the "easy things" AI can do for customer support are things that are already easily solved in other places, people (like you) are choosing not to use those solutions, and adding AI doesn't reduce the number of calls that make it to your customer service team, even when it is an objectively better experience that "does the work."
There needs to be some element of magic and push back. Every turn has to show that the AI is getting closer to resolving your issue and has synthesized the information you've given it in some way.
We've found that just a "Hey, how can I help?" will get many of these customers to dump every problem they've ever had on you, and if you can make turn two actually productive, then the odds of someone dropping out of the interaction are low.
The difference between "I need to cancel my subscription!" leading to "I can help with that! To find your subscription, what's your phone number?" or "The XYZ subscription you started last year?" is huge.
>Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems.
Sure, but when the power of decision making rests with that group of people, you have to market it as "replace your engineers". Imagine engineers trying to convince management to license "AI that will replace large chunks of management"?
I wonder if you could set it as a sensor for Home Assistant, then we could build our own smart home automations outside of the app instead of you needing to do anything.
I was thinking about ways for the app to receive webhooks, but being able to send webhooks is an even better idea! I'm going to start writing down some thoughts for creating generic webhook triggers. That should also work for zapier (and possibly IFTTT) as well.
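As a sketch of the outgoing-webhook direction: Home Assistant accepts incoming webhook triggers at `/api/webhook/<webhook_id>`, so the app side could be as simple as a JSON POST. The URL, webhook id, and payload fields below are placeholder assumptions, not anything the app actually implements.

```python
import json
import urllib.request

def build_request(url, payload):
    """Build a JSON POST for a webhook endpoint (HA, Zapier, IFTTT...)."""
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def fire_webhook(url, payload, timeout=10):
    """Send the webhook; returns the HTTP status code."""
    with urllib.request.urlopen(build_request(url, payload), timeout=timeout) as resp:
        return resp.status

# Example (placeholder webhook id; would fire a Home Assistant automation
# that uses a webhook trigger with the matching id):
# fire_webhook("http://homeassistant.local:8123/api/webhook/app-sensor",
#              {"state": "on", "source": "app"})
```

The same generic POST would cover Zapier's catch-hook URLs and IFTTT's webhook service, which is what makes a single "send webhook on event" setting so flexible.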
I have seen this twice in my life. One person freaked out because they stopped at the tracks and then turned onto them; the second I still have no idea how they got there.
I think there are two really big issues with the roll out of self-driving cars that are going to be hard for us to overcome:
1. Their mistakes are going to be highly publicized, but no one is publicizing the infinite number of dumbass things human drivers do every day to compare it to.
2. They're going to make mistakes that are extremely obvious in hindsight or from a third party perspective, that most humans will say no human would have ever done. It is likely that a human has and would have made similar and worse mistakes, and makes them at a higher rate, and we will have to accept these as a reality in a complex world.
> Their mistakes are going to be highly publicized, but no one is publicizing the infinite number of dumbass things human drivers do every day to compare it to.
Idea: "Waymo or Human," a site like Scrandle where we watch dashcam clips of insane driving or good driving in a challenging situation and guess if it's a human or self-driving car.
> 1. Their mistakes are going to be highly publicized, but no one is publicizing the infinite number of dumbass things human drivers do every day to compare it to.
People still complain about that one cat that got run over. As if the Waymo jumped the curb and chased it down.
> 1. Their mistakes are going to be highly publicized, but no one is publicizing the infinite number of dumbass things human drivers do every day to compare it to.
Not driving on railway tracks is a pretty simple requirement. Humans who do this lose their driving licence.
Suddenly Waymo is above the law.
Frustratingly Americans seem to inherently despise public transit (probably because owning a car has become so necessary due to poor city planning ON TOP OF the classist appeal) despite the advantages and local/state govs refuse to give public transit options proper funding and oversight - leading to even more distaste for public transit.
Personally I won't be using one of these cars because I want to contribute to other humans' paychecks, but I would much rather be using public transit over adding more and more cars to more and more roads/lanes.
All of the negative publicity around the autonomous cars is justified IMO because, even if these cars are "safer" than a human, they are still clearly not as safe as they need to be when considering liability, local laws, basic driving etiquette and the cost to other humans' incomes.
> but I would much rather be using public transit over adding more and more cars to more and more roads/lanes.
Good luck rearchitecting the entire way of life of the vast majority of Americans, not to mention somehow tearing out and replacing the entirety of our transportation infrastructure. I'm generally of the persuasion that we should reduce our reliance on cars and I intentionally live in a dense city with half-decent transit but this fever dream that highly individualistic Americans are going to get on board with shared transit is just that, a fever dream.
It would be good for us, but that doesn't mean it is inevitable or even possible at this time. Acknowledging that is important because it means you invest in alternatives that may actually get adopted.
> All of the negative publicity around the autonomous cars is justified IMO because, even if these cars are "safer" than a human, they are still clearly not as safe as they need to be when considering liability, local laws and the cost to other humans' incomes.
So now we come to the other half of your argument. Waymos are safer and it isn't even close. If I am an insurance company and you are asking me to cover a human or a Waymo I'm taking the Waymo 10/10 times. Humans are actually pretty bad at driving and we're getting worse as we're more distracted, not better. The simple math of the liability is going to move us towards self-driving cars more rapidly, not slow it down.
The only other argument I see buried in here is "cost to other human's incomes." Whether you mean gig economy workers, taxi drivers, or transit operators, I have a dozen arguments here but the simplest is maybe you should prioritize the 40k lives lost every year to motor vehicle accidents over income. We'll find other places for productivity and income. You don't get the dead people back.
Americans don’t despise public transit. They despise poorly maintained / insufficient public transit. Outside of New York and San Francisco, public transit is really not sufficient to get you where you need to go.
Many cities could do better to have more robust public transit, but the reality is America is vast and people commute long distances regularly. The cost of deploying such vast amounts of public transit would be prohibitively expensive.
> Americans don’t despise public transit. They despise poorly maintained / insufficient public transit. Outside of New York and San Francisco, public transit is really not sufficient to get you where you need to go.
I used to believe this, but I'm not sure it is actually true for a large percentage of Americans. There is some unmet demand that would be satisfied, but beyond that, most Americans value their individualism and control (even if it is controlling where a driver takes them via an app) too much unless they were raised around good transit. That means that even if we build good transit, it will probably take more than a generation for people to use it fully and effectively.
It also depends a lot on the culture of other riders. It takes relatively few undesirables to cause the preference to swing back to personal transport options.
They specifically call out why the semantics matter in the actual article, in the first paragraph.
> President Donald Trump premised his mass deportation agenda on the idea that he will be “returning millions and millions of criminal aliens.” Department of Homeland Security (DHS) Secretary Kristi Noem has repeatedly claimed that they are arresting the “worst of the worst.”
When speaking to Trump supporting friends who employ illegal immigrants they specifically defend that it is only the "bad ones."
> When speaking to Trump supporting friends who employ illegal immigrants they specifically defend that it is only the "bad ones."
They still feel this way because their news sources don't tell them about restaurants being raided and the entire kitchen being arrested or ICE raids on agriculture.
Problems aren't problems until it happens to them.
Yes- that's a big problem. The business owners are getting away with massive employment fraud, tax fraud, and any number of OSHA/employee law violations. They need to be arrested and brought to trial.
If you can't run a business without breaking the law (including illegal labor), then that business shouldn't exist.
I agree, however the law must be applied uniformly and consistently for an even playing field. It is not, which allows the government to “pick” winners and losers via selective prosecution.
I think so, but it is a loosely held opinion at this point. Fundamentally, I think it is a huge, asymmetric power grab by Flock and local police to install these systems. It only takes one officer looking up their local politician and finding them doing something that could even look like a bad deed (or to fake it in the era of AI videogen...) to enable blackmail and personal/professional gain.
If they're going to exist, it may be better for that to be spread among the public than to be left in the hands of the few.
I don't want these cameras to exist but, if they're going to, might we be better off if they are openly accessible? At the very least, that would make the power they grant more diffuse and people would be more cognizant of their existence and capabilities.
Did you see the other post about this where the guys showed a Flock camera pointed at a playground, so any pedo can see when kids are there and not attended?
Or how it has become increasingly trivial to identify by face or license plate such that combining tools reaches "movie Interpol" levels, without any warrant or security credentials?
If Big Brother surveillance is unavoidable I don't think "everyone has access" is the solution. The best defense is actually the glut of data and the fact nobody is actively watching you picking your nose in the elevator. If everyone can utilize any camera and its history for any reason then expect fractal chaos and internet shaming.
> Did you see the other post about this where the guys showed a Flock camera pointed at a playground, so any pedo can see when kids are there and not attended?
If it's inappropriate for any pedo to see when kids are in a park then certainly it should be inappropriate when those pedos just happen to be police officers or Flock employees. The nice thing about the "everyone has access" case is that it forces the public to decide what they think is acceptable instead of making it some abstract thing that their brains aren't able to process correctly.
People will happily stand under mounted surveillance cameras all day long, but the moment they actually see someone point a camera at them they consider that a hostile action. The surveillance camera is an abstract concept they don't understand. The stranger pointing a camera in their direction is something they do understand and it makes their true feelings on strangers recording them very clear.
We might need a little bit of "everyone has access" to convince people of the truth that "no one should have access" instead.
> so any pedo can see when kids are there and not attended?
Sure. It also lets parents watch. Or others see when parents are repeatedly leaving their kids unattended. Or lets you see some person who keeps showing up without kids and watching them.
> Or how it has become increasingly trivial to identify by face or license plate such that combining tools reaches "movie Interpol" levels, without any warrant or security credentials?
That already exists and it is run by private companies and sold to government agencies. That’s a huge power grab.
> The best defense is actually the glut of data and the fact nobody is actively watching you picking your nose in the elevator. If everyone can utilize any camera and its history for any reason then expect fractal chaos and internet shaming.
This argument holds whether it is public or not. It is worse if Flock or the government can do this asymmetrically than if anyone can do it IMO, they already have enough coercive tools.
I didn't want to get into an argument over whether kids should be unattended at playgrounds or not - I don't know where the other poster is from, and it seems to be based on age, density, region, etc. Where I grew up it would be weird to stay; in the city I am in it would be weird to leave them.
If you leave your kids unattended at a playground I don't see how the camera changes the risk factor in any meaningful way. Either a pedophile can expect there to be unattended children or not.
> Try to think like an evil person with no life and very specific and demonic aims if you’re still having trouble seeing why this would be an issue.
That person already has incredible power to stalk and ruin someone's life. Making Flock cameras public would change almost nothing for that person. It fascinates me how fast people jump to "imagine the worst person" when we talk about making data public.
We have the worst people, they're the ones who profit off of it being private, with no public accountability, who don't build secure systems. The theater of privacy is, IMO, worse than not having privacy.
“almost nothing” is doing a lot of heavy lifting in that sentence.
Stalking someone from your desk vs. IRL is a whole different ball game. Not sure why this needs explanation… anyways, the main difference is how easy it is to do things from your desk. For example, no one sees you when you're stalking someone from your desk. Think of the success of 4chan investigations vs. those by the authorities actually empowered to do so. It's empowering.
We live in a world of strangers, and unfortunately a % of those are the type to kill/rape other strangers. Why enable them?
Not sure who else would be empowered by making all public camera accessible at the click of a button, but I’m interested in who you think that population is.
Certainly we can agree most normal folks will not spend their time looking at camera feeds of strangers?
I’m fascinated by people who stick to their theoretical principles (‘all data should be public’, etc.) no matter the real world implications, but we all have our own interests :).
There are sites that index thousands of public live streaming cameras, with search fields where you can just enter "park" and get live cams with kids playing, because people have specifically arranged for those cameras to exist.
Turns out, 95% of the predators already know exactly where the victims are, usually because it's their kid. Probably we want to worry about that a lot more.
Doubly so since, y'know, this only works if the predator lives close enough to act on the information before it changes - so the tiny possibility of a predator, a tiny possibility that they didn't already know this, and a tiny possibility of being able to act on the information...
I've thought the same regarding license plate readers (and saw considerable pushback on HN) — feeling like you suggest: if they have the technology anyway, why not open it up?
I imagined a "white list" though (or whatever the new term is—"allowlist"?) so that only certain license plates are posted/tracked.
I wonder if such a business model could exist where they were effectively "public" and thus, access was uniformly granted to anyone willing to pay. not sure if this would be net better for society, but an interesting thought.
No, but the same argument could be made for things like open source software. We assume/hope that someone more aligned with our outcomes is actively looking.
Or, at the very least, that we can go back and look later.
I don't think they are similar. Public feeds would enable someone to document and sell people's whereabouts in real time. The fact that I could do the same or go back and look later is no defense.
This is a different argument than what I was responding to.
> I know in theory we all can continuously download and datamine these video feeds but can everyone really?
To which my response is "this is like OSS." What I mean by that is that, in theory, people audit and review code submitted to OSS software, in reality most people trust that there are other people who do it.
> Public feeds would enable someone to document and sell people's whereabouts in real time. The fact that I could do the same or go back and look later is no defense.
This is a different argument to me and one that I'm still torn about. I think that if the feeds exist and the government and private entities have access to them, the trade-offs may be better if everyone has access to them. In my mind this results in a few things:
1. Diffusion of power - You said public feeds would "enable someone to document and sell people's whereabouts in real time." Well, private feeds allow this too. I'd rather have everyone know about some misdeed than Flock or the local PD blackmail someone with it.
2. Second guessing deployment - I think if the people making the decisions know that the data will be publicly available, they're more likely to second guess deploying it in the first place.
3. Awareness - if you can just open an app on your phone and look at the feed from a camera then you become aware of the amount of surveillance you are subject to. I think being aware of it is better than not.
There's trade-offs to this. The cameras become less effective if everyone knows where they are. It doesn't help with the location selection bias - if they're only installed in areas of town where decision makers don't live and don't go, the power is asymmetric again. Plenty of other reasons it is bad. None of them worse than the original sin of installing them in the first place.
Open cameras make information that was previously local and difficult to collect global and easier to collect. Relatively, it reduces the privacy and power of people on the ground in your neighbourhood and increases the power of more distant actors. It doesn't seem very socially desirable as an outcome. It also increases the relative power of people with technical capacity and capital for storage and processing etc.
I do buy your argument that open access could help check the worst abuses. But, if widespread, it'd be so catastrophic for national security that I can't see how it would ever fly.