
I recently finished my master's thesis in Inductive Logic Programming at Oxford. I'd say the field has continued to improve since Aleph, which was written in the late 90s.

Anyone interested could also take a look at Popper (https://github.com/logic-and-learning-lab/Popper) or this overview of the first 30 years of ILP (https://arxiv.org/abs/2008.07912).
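For a flavor of what these systems do, here is a hand-rolled toy in Python (a sketch of the ILP problem setting only; it is not Popper's actual interface, and the facts are made up): the learner is given background facts plus positive and negative examples, and must find a logic program that covers every positive example and no negative ones.

    # Toy ILP setting (not Popper's API): background facts, labeled
    # examples, and a check that one candidate rule fits them.
    background = {
        ("parent", "ann", "bob"), ("parent", "bob", "carl"),
        ("parent", "ann", "bea"), ("parent", "bea", "dan"),
    }
    positives = {("ann", "carl"), ("ann", "dan")}  # grandparent(X, Z) holds
    negatives = {("bob", "ann"), ("carl", "dan")}  # grandparent(X, Z) fails

    people = {p for (_, a, b) in background for p in (a, b)}

    def covers(x, z):
        # Candidate rule: grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
        return any(("parent", x, y) in background and
                   ("parent", y, z) in background for y in people)

    assert all(covers(x, z) for (x, z) in positives)
    assert not any(covers(x, z) for (x, z) in negatives)

A real ILP system searches a space of candidate rules like that one; the search is where the hard part, and the last 30 years of progress, lives.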


Here's what I got when I put in "Will I am was spotted at a cafe with Beyonce."

"This is the first time two Black people were spotted out in public together."

"They were there for hours and no one could tell them apart."

That's not OK. A "joke machine" that regurgitates unfunny stereotypes and racism is worse than broken.

There are some things we can't do half-baked. AI comedy is one of them.


This isn't an anti-AGI argument and it doesn't disprove humans. Humans have the same problem. It's harder to write a program to do a thing than it is to just do the thing.

It's appealing to think we can just make a program that learns programs and then use that to learn to do anything computable. But this is a well-studied field, and it turns out that when you generalize a learning problem that way, you make it a lot harder.

The space of programs that could possibly identify dogs in images is much, much larger than the space of images that contain dogs. The space of images is bounded by the color depth raised to the number of pixels. What is the space of programs bounded by? 10TB? That's roughly 256^(10^13) programs. That's just a stupidly large number.
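For a rough sense of those scales, here's a back-of-the-envelope computation in Python (a sketch; the 1-megapixel size and 24-bit color depth are illustrative numbers I'm assuming, not from any real benchmark):

    import math

    # Space of images: one color value per pixel -> depth^(pixel count).
    pixels = 1_000_000                 # a 1-megapixel image
    colors = 2 ** 24                   # 24-bit color depth
    log10_images = pixels * math.log10(colors)

    # Space of raw program strings: 256 values per byte, 10 TB of bytes.
    program_bytes = 10 * 10 ** 12      # 10 TB
    log10_programs = program_bytes * math.log10(256)

    print(f"images   ~ 10^{log10_images:,.0f}")    # ~ 10^7,224,720
    print(f"programs ~ 10^{log10_programs:,.0f}")  # ~ 10^24,082,399,653,118

Both numbers are absurd, but the exponent for programs is about three million times larger than the one for images.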

Obviously not every 10TB string is a valid program. You can reduce that number. But what current research in program synthesis tells us is that you can't reduce it as much as you might hope.

So the point is that, just like for humans, it's easier to learn to do a thing than it is to learn to write a program to do a thing.


> This isn't an anti-AGI argument and it doesn't disprove humans.

You're saying you can't find intelligent programs this way because the search space is large. That's an anti-AGI argument, and it's fallacious because humans evolved.

Yes, you can only search an infinitesimal subset of the search space. The same is true for DNA. The argument is clearly invalid without at least referencing properties that gradient descent has, or properties that evolution has but gradient descent lacks, which you have not done. It is wrong for the same reasons the watchmaker analogy is.


Thank you. I just finished my master's thesis in program synthesis/induction. You're explaining this better than I could.


This was part of the lawsuit. The companies communicated about offers made to each other's employees and agreed not to counter-offer above the initial offer.


That link is a wild ride. I expected a design blog. Instead I got a communist indictment of startup culture through the lens of the commodification of illustration.


Same! A refreshing take on the startup world from the outside.


Agree! I really appreciated their main page too, with no ads (I think), and the way articles populated the endless scroll.


> I don’t see any path from continuous improvements to the (admittedly impressive) ‘machine learning’ field that leads to a general AI

> I share the skepticism towards any progress towards 'general AI' - I don't think that we're remotely close or even on the right path in any way.

This isn't how science works though. Quoting the wikipedia page for Thomas Kuhn's "The Structure of Scientific Revolutions" (https://en.wikipedia.org/wiki/The_Structure_of_Scientific_Re...):

"Kuhn challenged the then prevailing view of progress in science in which scientific progress was viewed as "development-by-accumulation" of accepted facts and theories. Kuhn argued for an episodic model in which periods of conceptual continuity where there is cumulative progress, which Kuhn referred to as periods of "normal science", were interrupted by periods of revolutionary science."

I think this is the accepted model in the philosophy of science since the 1970s. That's why I find this argument about AI so strange, especially when it comes from respected science writers.

The idea that accumulated progress along the current path is insufficient for a breakthrough like AGI is almost obviously true. Your second point is important here. Most researchers aren't concerned with AGI because incremental ML and AI research is interesting and useful in its own right.

We can't predict when the next paradigm shift in AI will occur. So it's a bit absurd to be optimistic or skeptical. When that shift happens, we don't know if it will catapult us straight to AGI or be another stepping stone in a potentially infinite series of breakthroughs that never reaches AGI. To think of it any other way is contrary to what we know about how science works. I find it odd how much ink is being spent on this question by journalists.


I think you're misunderstanding Kuhn slightly. He invented the term paradigm shift. What he means by normal science with interspersed spurts of revolution is more provocative. He means that in order to observe periods of revolution, the "dogma" of normal science must be cast aside and a new normal must move in to replace it. Normal science hits a wall, gets stuck in a "rut" as Kuhn describes it.

I think, in a way, Doctorow is making that same argument for the current state of ML: "I don't think that we're remotely close or even on the right path in any way". In other words, the general thinking that ML will lead to AGI is stuck in a rut and needs a new approach, and no amount of progressive improvement on ML will lead to AGI. I don't think Doctorow's opinion here is especially insightful; he's just a writer, so he commits thoughts to words and has an audience. I don't even know whether I agree or not. But I do think this piece comes off as more in the spirit of Kuhn than you're suggesting.

And of course you can interpret Kuhn however you want. I don't think Kuhn was saying you shouldn't use or apply the tools built by normal science in everyday life. But he, subtly, argues that some level of casting off entrenched dogmatic theories, in the academic domain, is a requirement for revolutionary progress. Kuhn agrees that rationalism is a good framework for approaching reality, but also equates phases of normal science to the phases of religious domination that predated it.

Essentially, truly free thought is really, really hard, because society invents normals (dogma) and makes it hard to deviate. Academia is no exception. Science, during periods of normals, is (or can become) essentially over-calibrated and over-dependent on its own contemporary zeitgeist. If some contemporary theory that everyone bases progressive research off of is not quite right, it kinda spoils the derivative research. That's not always true, because sometimes the theories are correct.


This is an excellent post. Thank you!

I felt like the part that wasn't in line with Kuhn was the idea that there was something wrong with a field if incremental improvement couldn't lead to a breakthrough like AGI. You're right. He's arguing Kuhn's point. But he seems to use it to conclude that machine learning is a dead end when it comes to AGI. Further, he seems to think this means AGI won't happen any time soon.

But, if I'm not misinterpreting Kuhn again, knowing that a revolution is necessary to overturn the current dogma (which I would argue is deep learning) doesn't tell us anything about when the revolution will occur. It could be tomorrow or 50 years from now or never. So, specifically, it doesn't tell us anything about machine learning in general, whether AGI is possible, or when AGI will happen.


>So it's a bit absurd to be optimistic or skeptical.

We skeptics aren't skeptical that AI is possible; we're skeptical of specific claims. I think it's perfectly reasonable to be skeptical of the optimistic estimates, since they really are little more than guesses with little or no foundation in evidence.


This seems akin to Asimov's "Elevator Effect": https://baixardoc.com/preview/isaac-asimov-66-essays-on-the-... starting on p. 221.

I agree that one would think that Science Fiction writers would have enough imagination to consider alternate futures (Cory CYA's by saying such a scenario would make a good SF story) - but there are already promising approaches to AGI: Minsky's "Society of Mind", Jeff Hawkins' neuro-based approaches, and the fairly new Hinton idea GLOM: https://www.technologyreview.com/2021/04/16/1021871/geoffrey...

“By 2029, computers will have human-level intelligence,” Kurzweil said in an interview at SXSW 2017.

Time to get to work, eh? https://www.timeanddate.com/countdown/to?msg=Kurzweil%20AGI%...


1960s - Herbert Simon predicts "Machines will be capable, within 20 years, of doing any work a man can do."

1993 - Vernor Vinge predicts super-intelligent AIs 'within 30 years'.

2011 - Ray Kurzweil predicts the singularity (enabled by super-intelligent AIs) will occur by 2045, 34 years after the prediction was made.

So until his revised timeline of 2029, the distance into the future before we achieve strong AI, and hence the singularity, was, according to its most optimistic proponents, receding by more than one year per year.
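A quick check of that claim (taking 1965 as the date of Simon's quote, which is my assumption; the list above only says "1960s"):

    # (year the prediction was made, predicted arrival year)
    predictions = [(1965, 1985), (1993, 2023), (2011, 2045)]
    for (made0, eta0), (made1, eta1) in zip(predictions, predictions[1:]):
        # Each predicted arrival moved further out than the clock advanced.
        assert eta1 - eta0 > made1 - made0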

I wonder what it was that led him to revise his timeline so aggressively. I think all of those predictions were unfounded; until we have a solid concept for an architecture and a plan for implementing it, an informed timeline isn't possible.



That's funny. Of course, I was referring to Asimov's Elevator Effect, which is that if aliens visited NYC with some probe in 1800 and then in 1950, they would be astonished at all the very tall buildings, and would have to assume people were now living in these tall towers for reasons TBD. They would not know that elevators had been invented, and hence that the buildings would only be occupied 8 hours per day or so, and that nobody would live in them. Elevators allowed this major unexpected result. There is more, but I couldn't find the actual essay.


>I think this is the accepted model in the philosophy of science since the 1970s.

Perhaps, but "philosophy of science" has never been something the majority of practicing scientists consider relevant, care about, or are influenced by.


Is this related to Foucault? In an old debate with Chomsky, Foucault spends a lot of time on a concept similar to what you are talking about.


> Are you at all close to this space?

I am.

> The example Cory puts on policing

My most upvoted comment on this website was discussing this exact scenario. https://news.ycombinator.com/item?id=23655487

Could you perhaps clarify the generalization you're making about me and people like me so I can understand it?


Excellent. One problem in my mind that I don't see discussed enough -- and also not in your other post -- is that there is a large divide between those who use the technology (the cops in this case) and those who supply it, and there is no accountability in either group when something goes wrong. Like you write in your other post, "the system works (according to an objective function which maximizes arrests.)", and that is as far as the engineer goes. On the other hand, the cop picks up the technology and blindly applies it. Making any improvement to the system would require both groups to work together, but as far as I know, that is not happening. A recent example can be found in the adventures of Clearview AI. So from that perspective, I do think that the engineers (and the cops, and everybody else) are out to lunch, each doing their own work in a bubble and not paying enough attention to (or caring about) the side effects of the applications of this technology.

Also, the lack of thought and accountability that I mention above is, I think, fairly general in my experience, even outside of policing. That is why I don't entirely agree with the lunch statement. These guys are having a hell of a party as far as I can tell -- at the expense of the victims of these systems, who suffer the horror stories.


I second this. I spend a great deal of time digging through where we've positioned big data models to steer population-scale behavior, and very infrequently do the implementers of the system ever stop to analyze the changes they are seeding or think beyond the first- or second-degree consequences once things take off.

That is all part of engineering to me, so by definition, I think many in the field are, in fact, out to lunch.


Yes, thank you. Analyzing the effects of our technology should be part of the engineering process. The physicists back where I studied all go through a mandatory ethics class. Us software crowd, well...


   "Don't say that he's hypocritical
   Say rather that he's apolitical
   'Once the rockets are up, who cares where they come down?
   That's not my department!' says Wernher von Braun

   Some have harsh words for this man of renown
   But some think our attitude
   Should be one of gratitude
   Like the widows and cripples in old London town
   Who owe their large pensions to Wernher von Braun"
Tom Lehrer, "Wernher von Braun"


I actually think you're being too generous. Most people who work in ML are not ignorant of its risks and flaws.

Many people are very resistant to the idea that their particular work can have a negative impact, or that they should take responsibility for it. See Yann LeCun quitting Twitter (https://syncedreview.com/2020/06/30/yann-lecun-quits-twitter...).

Other people are very aware of the dangers of their work. But when the money gets big enough, they take their concerns to the bank and their therapist. See Sam Altman's concerns about the dangers of machine intelligence before he invested in OpenAI (https://blog.samaltman.com/machine-intelligence-part-1). Contrast that with his decision to become the CEO, take the company private, and license GPT-3 exclusively to Microsoft (https://www.technologyreview.com/2020/02/17/844721/ai-openai...). He had reasons. He posts here. He might defend himself. But to me it seems like the kind of moral drift I've seen happen when people in Silicon Valley have to make hard choices about money and power.

There are also applications of ML that are generally safe and can benefit society. See the many medical uses, including cancer detection (https://www.nature.com/articles/d41586-020-00847-2). Most of the work being done to expose the risks and biases of ML is being done by researchers who are at least somewhat within the field. In my math and computer science program, two and a half of the 25 students are doing their thesis in safe ML. (I'm giving myself a half because I'm working on logic-based ML.) I don't think it's fair to believe that every person working in ML is participating in something negative for society.

Ultimately, I think we need some reasonable regulation and a lot more funding for research into safe ML. Corporations and governments want ML for purposes that can be unethical. Unfortunately they also control a lot of the research grants. So they have a disincentive to fund AI ethics or safe ML over pushing the boundaries of what ML can accomplish.

Finally, I think many engineers would like their work to be positive for society. Unfortunately, with what we know now, a lot of the edge cases we run into are unfixable. When Google Photos started classifying Black people as gorillas, Google just removed primates from the search terms. Years later, they hadn't fixed it (https://www.wired.com/story/when-it-comes-to-gorillas-google...). I'm sure most engineers on the project knew that was a hack. When faced with an unfixable issue like that, the engineer either tries to get the company to stop using ML for that problem, compartmentalizes and ignores the issue, or quits. Where do you draw the ethical line? It's good to hold people accountable, but it's unrealistic to expect that to solve the problem.


Thank you for the summary. The arrogance and moral bankruptcy of the first two stories are marvels of human behaviour. I was not aware "safe ML" was a thing; I was aware of explanatory models, but I guess the safe ML research you do covers more than just that?

Your third and fourth points are, I think, linked. I am not exactly sure where or how you would draw the line, but I kind of think of these ML/AI applications as something that could be export-controlled or regulated along those lines, just like certain pieces of hardware are export-controlled on the grounds that they could be used for harm, and weapons, of course (and I mean, add some salt here because governments will cause the harm regardless, but hopefully the point comes across).

Once the regulations are in place, and corporations take _substantial_ economic hits for their errors (unlike, say, GDPR violations, which Google just factors into their OPEX), those corporations will rapidly start effecting real change. Corporations understand the language of (economic) violence surprisingly well; it's an effective tool for change. But like you said, it is precisely the same governments and corporations driving the research and exercising economic and political power, so I am not entirely sure how that would start shaping into place. Like almost everything else in life, the first step will probably be to keep raising social awareness; change will emanate from us at the bottom -- if we can direct our anger correctly and if the climate catastrophe that is upon us does not wipe us all out first.


3gg was replying to version_five. You're bhntr3. There is no generalization being made about you or even people like you, in a post that is a specific response to an account that is not yours.


I believe they are disagreeing about whether "engineers working in this space are out to lunch", and since I have been "an engineer working in this space", I was asking for more clarification about what it meant to be "out to lunch".


I love the implication that there's this shadow company, Fronk. Seemingly defunct, they're actually thriving secretly behind the facade of a failed startup.

Every marketing manager has engaged them privately to boost their numbers. Every developer secretly works for them on the side.

But no one anywhere ever talks about it until one day a former consultant notices an expired NDA.


It’s a great conspiracy theory. :-) The reality is the model survived in new firms.

This is a strange counterpoint to managers interviewing people to learn about a market.


This sounds like the backstory of an SCP story waiting to be written.


Reminds me of the G.K. Chesterton novel where a cop infiltrates a criminal organization only to learn that the entire group is composed of cops who have infiltrated it. I won't name the book to avoid spoilers.


Fronk sounds like the Fight Club of the developer world. Convincingly wrapped in the busted-startup fabrication, the cult probably lives on. :')


I would think that Fronk is actually Google, but I doubt that Google would ever put an expiration date on any of their NDAs.


Since when do Google make money from people choosing to use their tech?


>Is it worth reading them already or does it feel unfinished?

Mistborn is a complete trilogy although he continues to publish other novels set in the same world.

The Way of Kings series (The Stormlight Archive) is ongoing. It's on book four now. Each book is over a thousand pages, so there's a lot there. I don't think it's a problem to start. Books 1-3 are great and stand alone pretty well. It's started to drag with book 4, in my opinion. Like so many other huge epic fantasies, it has too many characters, too many plotlines, too huge a world, and it's difficult to maintain the epic feel with all that sprawl. I'm worried for book 5.

> Mistborn and The Way of Kings take place in the same universe?

They take place in the same universe (literally) but they are on different worlds. So they don't have anything (much?) to do with each other (yet?)


There's another Sanderson Cosmere book called Warbreaker which crosses over with Stormlight pretty heavily from book 2 onwards (it's also very possible there are references in book 1 that totally passed me by). You'll definitely have a better handle on why a particular object that shows up in the Stormlight books is so scary if you read Warbreaker first.

Book 4 of Stormlight does have some pretty big references to the original Mistborn trilogy, too.

On the whole I try to read books in the order I bought them (ish), but Sanderson is one of the authors I'll just drop everything for when a new book comes out. Disclaimer: he does have some bad habits (mainly inserting "wise ass" characters who don't fit the tone or setting, and who I strongly suspect carry the author's voice a little _too_ directly). But he does epic world building incredibly well, and very different to just about any other author I've read. He also writes action exceptionally well.


Warbreaker is a very amateurish effort though - I think a lot of people would bounce off it. I'd definitely suggest starting with The Way of Kings; if that grabs a new reader, then they can delve into the Cosmere before continuing on to Words of Radiance.


Thanks for the details! Mistborn is going on my list to read next.

