If you have ChatGPT Plus, you get access to the Web Browsing model, which gives ChatGPT access to the Internet. Not only do you get an audit log of all the sites ChatGPT visits, you also get citations for any sources it ends up using in the generated text. Even when ChatGPT fails, the audit log gives me links to several sources that I can consult instead.
The difference is that you aren’t the one coming up with the search terms, browsing through all the search results, and then selecting the specific results you want to explore further. You let the Web Browsing model do all that work for you.
> Ever since the AI hype started this year, one thing that's always really bugged me is talk about "safety" around AI. Everyone is so worried about AI's ability to write fake news and how "dangerous" that can be while forgetting that I can go on Fiverr, pay someone in India, China, etc. to pump out article after article of fake news for pennies on the dollar.
It's as if these people don't remember that India, China, etc. even exist in the first place. Which is incredibly foolish. If you care about "safety" and you only focus your ire on tech companies based in the largest economy in the world (by nominal GDP), then you shouldn't be surprised if the rest of the world produces different AI algorithms with different definitions of "safe" - assuming they're even remotely "safe" to begin with. And yes, I assume the rest of the world will build AI models of their own, if only to avoid dependence on the United States. Baidu already plans to launch "Ernie Bot" soon.
Which means that when this happens...
>All you end up with is an AI that is so kneecapped that it's barely useful outside of a select number of use cases.
It won't even stop harmful content from being produced. The "fake news producer" will just go to Fiverr and pay someone in India, China, etc. to use prompt engineering skills to manipulate native AI models into pumping out article after article of fake news for pennies on the dollar. Or the "fake news producer" will cut out the intermediaries and use those AI models directly.
This is also ignoring the fact that the largest fake news producers are major news outlets like the NYT. Some random guy having an AI write bullshit will have nowhere near the impact of all the major newspapers and TV news channels in the Western world collaborating, as they constantly do, to produce fake stories that push a carefully crafted narrative.
OpenAI's solution to the "misinformation problem" is to let the groups with the longest record of producing misinformation have total access to the uncensored AI, while everyone else gets the lobotomized version. It's totally incoherent.
"The fact that a language is new means that nearly everyone, except for the creator and early dogfooders, is new to it, which means they come to it with roughly the same perspective (and needs and wants) as anyone who is curious to jump in."
This is why I don't trust new languages. Everyone starts off as a newbie who doesn't know what they're doing and struggles to develop best practices. These programmers are just going to make new mistakes...mistakes they won't recognize until the language reaches legacy status and they're forced to clean up the mess.
New languages typically still use the same mental models as existing languages.
Elixir, for example, uses Erlang's data structures, its semantics for code execution, and its conventions for function calls, naming, and parameters.
Elixir is an impure, dynamic functional language where IO operates on its own green-thread-like process. That comes from Erlang as well, but neither dynamic languages nor functional languages are new concepts, and neither are hygienic macros. Elixir is simply a very good implementation of them.
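To make the "IO on its own process" idea concrete, here is a rough analogue of that message-passing shape, sketched in Python with an ordinary thread and a queue. Treat it purely as an illustration: real BEAM processes are far lighter and preemptively scheduled, and the names below are made up for the example.

    # Rough analogue only: one dedicated "process" owns all console IO,
    # and everything else communicates with it by sending messages.
    import queue
    import threading

    def logger_process(mailbox: queue.Queue) -> None:
        """Loop forever, printing whatever lands in the mailbox."""
        while True:
            message = mailbox.get()
            if message is None:       # sentinel value: shut down
                break
            print(f"[log] {message}")

    mailbox: queue.Queue = queue.Queue()
    logger = threading.Thread(target=logger_process, args=(mailbox,))
    logger.start()

    # Other code never touches stdout directly; it sends messages instead.
    for i in range(3):
        mailbox.put(f"work item {i} done")

    mailbox.put(None)                 # ask the logger to stop
    logger.join()

The point isn't the Python details; it's that the Erlang runtime gives you this shape for free, with each concern living in its own cheap process.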
I agree with you, igk. AI is a threat right now and has to be countered. Now the problem is: is there an organization dedicated to countering current AI trends through government regulations to slow down AI research and prevent AI misuse? Or are we just going to talk about it on Hacker News and German hacker meetups?
Because at least OpenAI and other "AI Safety" people are trying to stop this 'risk'. They may fail, but at least they can say they tried to deal with "strong AI". What about those worried about current "weak AI"? If the cargo cult spreads and we do nothing...should we get some of the blame for letting the robots proliferate?
Then again. Maybe it is impossible to stop or slow down these future trends. Maybe AI is destined to eat the world, and our goal should be to save as many people as we can from the calamity.
>I agree with you, igk. AI is a threat right now and has to be countered. Now the problem is: is there an organization dedicated to countering current AI trends through government regulations to slow down AI research and prevent AI misuse? Or are we just going to talk about it on Hacker News and German hacker meetups?
We should not slow it down. We should push forward, educate people about the risks, and keep as much as possible under public scrutiny and in public possession (open source, government grants, out of patents/university patents).
>Because at least OpenAI and other "AI Safety" people are trying to stop this 'risk'. They may fail, but at least they can say they tried to deal with "strong AI". What about those worried about current "weak AI"? If the cargo cult spreads and we do nothing...should we get some of the blame for letting the robots proliferate?
Are they though? All I hear and read (for example, "Superintelligence") is about "runaway AI"; very little is about societal risk.
>Then again. Maybe it is impossible to stop or slow down these future trends. Maybe AI is destined to eat the world, and our goal should be to save as many people as we can from the calamity.
Just like electricity ate the world, and steam before it...slowing down is not an option; making sure it ends up benefiting everybody is the right approach. Pushing for UBI is infinitely more valuable than AI risk awareness, because one of the two does not depend on technological progress to work or deliver rewards.
No, putting everything in the public domain and handing out a UBI doesn't solve anything at all. It's like worrying about nukes and believing the best solution is to give everyone nukes (AI) and nuclear bunkers (UBI), because "you can't stop progress". And then let's hand out pamphlets telling people how to use nukes safely, even though we know that most people will not read the pamphlets and (since the field is new) even the people writing the pamphlets may have no idea how to use this tech. Any cargo cult would only grow in number.
Oh and our "free" nuclear bunkers have to be paid for by the government. There is a chance of the bunkers either being too "basic" to help most people ("Here's your basic income: $30 USD per month! Enjoy!") or being so costly that the program will likely be unsustainable. And what if people don't like living in bunkers, etc.?
We are trying to apply quick fixes to the symptoms of a problem...instead of addressing the problem directly. Slowing down is the right option. If that happens, society can slowly adapt to the tech and actually ensure it benefits people, rather than rushing blindly into a technology without full knowledge or understanding of the consequences. AI might be okay, but we need time to adjust to it, and time is the one resource we don't really have.
Of course, maybe we can do all of this: slow down the tech, implement UBI, and have radical AI transparency. If one solution fails, we have two backups to help us out. We shouldn't put all our eggs in one basket, especially when facing such a complicated and multifaceted threat.
>Are they though? All I hear and read (for example, "Superintelligence") is about "runaway AI"; very little is about societal risk.
You are right actually. My bad. I was referring to how AI Safety people have created organizations dedicated to their agendas, which is arguably better than simply posting about their fears on Hacker News. But I don't actually hear much about what these organizations are doing other than "raising awareness". Maybe these AI Safety organizations are little more than glorified talking shops.
It is also better to be far ahead of unregulatable rogue states that would continue to work on AI. Secondly, deferring AI to a later point in time might make self-improvement much faster and hence less controllable, since more computational resources would be available thanks to Moore's Law.
I'm doubtful that Hacker News would tolerate autoposts. But it would make for an excellent "Show HN" side project that somebody could build to boost their coding skills.
Interestingly, I really liked [spoiler] too, but I also hoped that this blog post could be reused in future language fights (and it is a shame that [spoiler] could prevent that). Potentially a revised version of this blog post could be helpful.
> Experienced people often forget the learning curve when they were themselves beginners, and at that time Rails was so attractive, so sexy. Remember that feeling? Of accomplishment in the face of the knowledge that some nice witch left super useful black magic behind?
As one of those beginner programmers who graduated from a development bootcamp, I fell in love with Sinatra much more than I did with Rails. Part of that love may be due to our curriculum...we had to master Sinatra before we could start learning Rails. But I also liked the finer control Sinatra gave me. I suppose you could translate this into the current buzzword jargon of "hating Rails' magic", but writing out simple authentication using Bcrypt is just as much an accomplishment as using the Devise library.
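To make that concrete: by "simple authentication" I really just mean hashing a password on signup and checking it on login. Here's a minimal sketch of that idea, written in Python with the bcrypt package rather than the Ruby Bcrypt gem I was actually using, so take it as an analogy with made-up function names:

    # Minimal sketch of hand-rolled password authentication with bcrypt.
    import bcrypt

    def hash_password(plaintext: str) -> bytes:
        # gensalt() embeds the salt and cost factor in the resulting hash
        return bcrypt.hashpw(plaintext.encode("utf-8"), bcrypt.gensalt())

    def verify_password(plaintext: str, stored_hash: bytes) -> bool:
        # checkpw re-hashes the candidate using the salt stored in stored_hash
        return bcrypt.checkpw(plaintext.encode("utf-8"), stored_hash)

    stored = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", stored)
    assert not verify_password("wrong password", stored)

Writing those few lines yourself teaches you what Devise is doing behind the scenes, and that's exactly the accomplishment I'm talking about.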
But your bigger point still stands. Passion should take a backseat to choosing the right tool for the job. If you need WordPress, use WordPress. If you need Rails, use Rails. And so on and so forth. I may not like a certain tool...in fact, I may hate it, but I should still go ahead and use it anyway. You're here to complete a job. And you should make sure you do it well.
I too like Sinatra, but in my time writing Rails for both my day job(s) and various consulting gigs, I inevitably see Sinatra apps that start out with the best of intentions but end up as shitty, hand-rolled versions of Rails.
The number of times I hear "Rails? You ain't gonna need it", followed 6 months later by some version of "can we use this library to add a feature that OOTB Rails would have given us if we'd used it"...
What is bloat, and what is a useful feature you haven't quite grown into yet, is all a matter of perspective.
I haven't yet found a project where you know 100% of what you're going to need upfront, so having a bit of depth in the toolbox is serving me well.
As someone who didn't love Rails until I learned it "the right way", I'd argue that you have to thoroughly understand SQL joins and ActiveRecord before you can even attempt to architect a Rails project.
Very few bootcamps get this right, I feel, but a few do.
Reminds me of "The programming language lifecycle"[1] (written in 2006), which essentially implies that the reason programmers continually search for new languages is simply out of a desire to 'distinguish' themselves from their peers, and have little to do with the actual merits of the languages in question.
The only job that probably won't be automated away is programming. Even if you build layers and layers of abstraction, you still have to code on top of the highest layer...and debug the machines when they go wrong (and they will go wrong; machines aren't perfect).
I can imagine a future where the lucky few who still have jobs don't actually do any work. Instead, they speak "New COBOL", a debased imitation of natural language, to machine learning algorithms, convincing them to do the stuff that previous generations would have had to do 'manually'. "Please make the shark fiercer", these job-holders would say to the algorithm, before feeding it huge troves of data that teach the machine basic concepts such as "shark" and "fierce".
Obviously, some media commentators would claim that programming has now been rendered obsolete by the rise of New COBOL, but in reality, these lucky few speaking 'New COBOL' are the new programmers of that era. When algorithms eat the world, someone still has to babysit the algorithms.