Maybe countries could tackle such problems twofold:
- first, implement a nationwide social freezing program, where women in their 20s are offered free egg freezing. Such a large-scale program would probably also improve the technology and might make egg collection less invasive.
- combined with this program, let the women who freeze their eggs opt in to an egg donation program, where some of their eggs can be used by women with fertility problems
But as with many things fertility-related, it seems that modern states simply do not have the capacity to seriously try anything. Who knows why that is.
They might also look to Israel to see what they're doing that's working so much better than other OECD countries - see my other comment in this post.
But Israel's advantage seems to be partly cultural and I don't see any time-limited elected government willing to expend that much effort to change their nation's culture.
From what I've read, the immediate effect will likely be worse for CO2 emissions, because the alternative to (liquefied) gas is often coal power. The various inputs needed for global manufacturing are affected too, so even renewable tech may get more expensive.
I'm not saying that the dependence on the Middle East was good, but I think it's worth keeping in mind that this was a pretty stable equilibrium, even with the various questionable countries involved, until the US initiated a global supply shock without a good reason.
There are short term and long term effects. Overall these are good changes.
There are a couple of points to make here. The lead time for new coal/gas plants is years; if a plant isn't already planned, it's unlikely to come online this decade. The supply chains simply can't handle building more turbines, and it takes years to fix that. Also, that investment is super risky in itself.
Another point is that the cheapest and fastest way to add new capacity to grids is via renewables. That's why we see record-breaking amounts of new capacity coming online on a regular basis.
There is indeed a short-term increase in emissions from power plants, because the fastest way to bring more capacity online is to use existing underused plants. A lot of gas and coal plants no longer run full time because they are too expensive to operate, but they haven't been decommissioned either. Some gas plants are actually used as peaker plants; most older coal plants take too long to warm up for this. So yes, short term the expensive but quick way to provide extra power is via these plants. But of course, as soon as something more affordable comes online, they go back to lower utilization. There are many tens or hundreds of GW of renewables and batteries being deployed in the next few years.
Data centers add to all this pressure. Long term that's a good thing, because they too will want to reduce their OpEx by cutting as much dependence on gas/coal as possible.
A final point to make is that despite all these increased emissions, there are also decreased emissions from electrification. Even if the power for an EV comes from an efficient gas/coal plant, that's still better than burning petrol in a combustion engine, so emissions are lower. The same goes for heat pumps: with a COP of 3-4, they deliver 3-4x more heat per unit of electricity than burning gas directly would deliver per unit of gas. Even if that electricity comes from a gas plant operating at 40-50% efficiency, less gas gets burned overall.
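To make the heat-pump arithmetic concrete, here's a minimal back-of-the-envelope sketch. All the numbers (COP, boiler efficiency, plant efficiency, grid losses) are illustrative assumptions in the ranges mentioned above, not measured values:

```python
def gas_needed_boiler(heat_kwh, boiler_efficiency=0.9):
    """Gas energy burned to deliver heat_kwh of heat via a condensing boiler."""
    return heat_kwh / boiler_efficiency

def gas_needed_heat_pump(heat_kwh, cop=3.5, plant_efficiency=0.5, grid_losses=0.05):
    """Gas energy burned at a gas power plant to deliver heat_kwh via a heat pump."""
    electricity = heat_kwh / cop                 # electricity the heat pump consumes
    generated = electricity / (1 - grid_losses)  # extra generation to cover grid losses
    return generated / plant_efficiency          # gas burned at the plant

heat = 100  # kWh of heat demand
print(f"boiler: {gas_needed_boiler(heat):.0f} kWh gas")
print(f"heat pump via gas plant: {gas_needed_heat_pump(heat):.0f} kWh gas")
# With these assumptions the heat pump route burns roughly half the gas.
```

Even under fairly pessimistic assumptions (lower COP, lossier grid), the heat pump route comes out ahead of burning the gas directly.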
So these are all good effects, even if the crisis driving them is sad and unnecessary. But I like that it is helping to kill fossil fuel companies faster. Long term this erodes confidence in the market as a whole and drives decision makers to do exactly what the article suggests: cutting the dependency on fossil fuels as fast as possible. It's already resulting in measurable reductions in oil/gas imports in some countries.
> An expensive AI which simply takes your job or forces you to work harder
But this implies higher productivity, no? That must mean more output that should benefit someone, unless the jobs being automated had little value to begin with. Seems paradoxical.
> AI as it is being developed is likely to centralize it
Access to AI is centralized, but the ability to generate code and customized tools on demand for whatever personal project you have certainly democratizes software.
And even though open source models are a year behind, they address your remaining criticism about the AI being centralized.
I don't use it myself, but I feel like the way Grok is integrated into Twitter is a pretty good thing for discussions, as it is certainly a more objective and rational voice than most human participants. I think it's good that people tag @grok if they don't understand something or want an opinion, even if it looks pretty silly to see "@grok is this true" repeated multiple times in replies.
That said, Musk's attempts at misaligning the thing and making it prefer his opinions of course destroy any trust. It's surprising that it's seemingly as good and helpful as it is despite the corruption attempts.
I also don't quite get how the business model is supposed to work out if its main use case is to serve Twitter. I know they provide API access like all the other model providers, but given how distrusted Musk is and how sensitive a topic reliable model behavior is, they seem to be sabotaging themselves. Which company wants it to go MechaHitler on them?
I disagree, I find that the grok replies are terrible product UX. Not only do they clog up the replies of every popular post, they're also constrained to extremely short answers with no sources. The community notes system, while also flawed in its own ways, is at least not nearly as disruptive and usually provides a link.
Trying to make social media a source of truthful information is always an uphill battle and doubly so for X.
I like that you can ask Grok to search the social graph and comments. Hacker News also has a semantic search engine (https://hackersearch.net/); Reddit has none, which is a pity.
2) was trained to be biased against empathy and understanding (because woke).
3) is customized to spout Elon's opinions as fact.
Claiming it is "objective and rational" seems like a misjudgement to me. If it really is more objective and rational than the average xitter poster, that says more about that platform than it does about Grok.
I guess I was mostly arguing that the integration of something like Grok into Twitter was definitely a net positive for online discussion, as anyone now has a fact checker and explainer at hand to defuse irrational online arguments.
Also, I think you overrate Musk's success in fiddling with the model. As I have written, I also don't like his attempts to tune it to his tastes, but if you look at the outputs people get from Grok, it seems mostly fine except in the specific scenarios Musk has focused his misalignment efforts on.
Of course something like Claude being integrated into Twitter would likely be better.
He doesn't have to fiddle with the model because he gets to inject his own opinion into the context MitM style.
But I get what you're saying now, a fact checker available to query during an online discussion would be helpful. Assuming the checkerbot was actually independent/neutral and backed responses with sources. Definitely not assumptions you can make with grok.
You’re right. But it appears they may have failed with 2) and 3) because I frequently see Grok spit out content that doesn’t agree with the creators’ narrative.
From what I heard it was designed to prefer truth over political correctness. I don't use Grok or Twitter though so I cannot comment on whether that aim was achieved (or even seriously attempted).
I will however note that when I asked ChatGPT for an LLM prompt for truthfulness, it added "never use warm or encouraging language."
It would appear that empathy and truth are in conflict — or at least the machine thinks so!
That "MechaHitler" episode lasted less than a day.
> 2) was trained to be biased against empathy and understanding (because woke).
No, it was trained and instructed to be truthful, even if the truth is deemed politically incorrect.
> 3) is customized to spout Elon's opinions as fact.
Certainly a nugget of truth there.
> Claiming it is "objective and rational" seems like a misjudgement to me.
I do believe it's generally objective, simply because, despite how much Elon tries to push it to the right, it still dunks on right-wingers all the time: they summon Grok to back up a bullshit story and it debunks the story instead.
> Grok is integrated into Twitter is a pretty good thing for discussions, as it is certainly a more objective and rational voice than most human participants
> there's always an infinite supply of new work that could be done
I definitely buy this for the software sector or the economy as a whole, but for an individual company? Seems one would be bottlenecked by various factors quickly.
Perhaps better to let people go so that they can be productive elsewhere?
There's always bugs that can be fixed, there's always optimizations that can be done, there's always a feature that someone wants to build but hasn't had budget to do. There's always improvements that can be done for deployment. There's always ways of reducing memory. There's always ways of reducing ongoing expenses etc.
I have worked for a bunch of companies, and even relatively new and young companies have all these things pile up pretty quickly.
Have you tried looking for a job recently? The job market is cooked and it's not getting better any time soon. The supply of candidates is way up. Salaries are going down. Even mediocre jobs show 100+ applicants on LinkedIn.
> Perhaps better to let people go so that they can be productive elsewhere?
True. Joining thousands of other unemployed developers sending applications into a job posting for a nonexistent role online is very productive. Probably good for the economy too now that I think about it.
I unironically think so, though. I thought this was a great thought-provoking comment by OP.
I think it's a totally legitimate thing to ponder and on most internet forums you'd just be ridiculed for it. OP even qualified it by saying they don't have anything against human connection per se.
The paper does seem to include a section where they check what the AI is used for; in work contexts there was no correlation between depression and AI usage, only in personal contexts.
> "Greater levels of AI use were associated with modest increases in depressive symptoms"
to me ever so slightly implies causality via "increases ...", even though, as they are also very transparent about, this paper isn't about any causal mechanism. I feel like "associated with higher rates of depressive symptoms" might have read more neutrally and would have been in line with the results of their paper.
Not suggesting something intentional by the authors, of course, I just found it interesting how verbs subtly influence the meaning of things, at least for me.
But perhaps I'm also biased because I kind of intuitively believe that the causation is that depressive people enjoy talking to the AI, rather than AI being the cause of anything. I worry that any reverse interpretations will lead to an over-regulation of AI in such contexts.
It's standard academic use of "increased", so I can't fault the authors for using it. Few in the intended target audience would read that as implying causation. One could of course argue that abstracts should be written with a larger audience in mind, but the job of a researcher is first and foremost to communicate as effectively as possible to other researchers.
I don't think replacing "increased" with "greater" or "higher" would compromise communication to researchers at all, but it could cut down on misinterpretation and miscommunication in the wider science reporting world.
Yes, but should we expect researchers to have the lay communication skills to even consider such things, to realize that the phrasing could be misinterpreted? Traditionally that's the job of the institute's PR department writing press releases. Anyone reading an abstract directly from its authors should also be expected to have basic academic reading skills.
Sure, but that is different to "increases", which makes it seem as though participants experienced increases due to AI use. The academic use of "increased" is more standard and, in line with what you said, kind of fine.
To me, the wording doesn't necessarily imply causality, but it does imply a repeated-measures design. Something being "associated with an increase in symptoms" is different than something being "associated with higher symptoms"; the former suggests that participants were measured at multiple time points, and there is a factor that could explain that change over time. But reading through the study, it was just a single time point.
Regardless, you're correct that it also shouldn't be taken to imply a causal relationship.
I noticed how much basic stuff is getting upvoted that confirms people's priors. I guess HN has always been this way, but it doesn't speak well of a community that views itself as thoughtful.
It's frustrating watching this topic turn into culture war.