Hacker News | onetrickwolf's comments

I don't know if Amazon should have the right to terminate anyone's account, to be honest. People seem afraid to regulate anything these days, but some people's entire lives depend on these corporate accounts. You can't let me build up a livelihood on Amazon, or run my whole house on Amazon smart products, and then just terminate the account.

If there were more competition, or if these things were based on open standards so you could easily switch over, I might be more inclined to agree. But right now these companies have way, way too much power.

For many people, the threat of a company deleting their account would be detrimental to their lives. We are way, way too dependent on corporations, and I'm kind of tired of people being scared to regulate.


If Amazon and Google and other Big Companies were prevented from deleting accounts, what else would happen? How would the Big Companies game the system? Would that harm smaller companies (as regulation usually does)? Would we even see the benefits that were promised?

Regulation often makes things worse, so we can't just look at the potential benefits without considering the potential (and likely) negative consequences.

That doesn't mean folks are scared of regulation. It means they're _smart_ about it.


I used to agree with you, but it's getting out of hand. These companies have more control than most states and governments over our communication, privacy, money, and lives in general, yet we don't get to vote on who runs them. I'd rather start shooting and ask questions later than get consumed by these behemoths. We need another Teddy Roosevelt and another antitrust sweep, in my opinion.


I mean, why aren't power lines locked up, buried underground, or secured in locked steel cages?

Because some things work better with trust than with convoluted security.

I think this is something a lot of computer nerds don't get (myself included at one point). It's almost as if, because something can be accessed, we're allowed to access it, and it's the fault of the person securing it. But a lot of our society runs on trust, and I think we'd live in a much more difficult world if everything had to be secure enough to resist any attack.

If this thing were connected to the internet, I'd get it. But you already need physical access to the meter, so why add another layer of security on top of that? If someone wants to mess with your power and has physical access, there are plenty of ways to do it without wireless communication.


I would just add a simple device-ID-based password generation function that is hard to reverse engineer. The devices used by authorized people would auto-generate the password transparently, yet it would keep many people out. Add a rate limiter on top of that, and brute-forcing becomes impractical.

If Philips can secure its Sonicare brush heads this way to prevent tampering and counterfeiting, a utility company or meter manufacturer enabling much more important infrastructure can be a little more mindful about what it's doing.

Other than that, I agree 100% with your viewpoint.


Definitely agree with you here. The parent has a very valid point about not over-securing things that don't need it, but physical line cutting and wireless shutoff are very different threats.

Someone walking around your neighborhood cutting every single electric line on the side of a house, risking electric shock and trespassing on private land, is much more likely to get caught than somebody rolling through with a Flipper Zero and a high-power antenna, turning off all of your meters.

Imagine someone with a grudge against you starts "releasing the magic smoke" from your meter once a week. Now the power company is upset with you, your HVAC system doesn't work anymore, and the compressor in your AC is toast from the circuit being energized and de-energized so rapidly. You're out thousands of dollars, and on top of all that, no matter how many cameras you put up, you'll have a hard time figuring out who's doing it.


Which is exactly how you end up with more e-waste when a company goes out of business.

Also, you've just made replacement, repair, and support far more complicated and dangerous for everyone than they need to be. You must be 10% smarter than any piece of equipment you operate to use it safely, and stay "ahead of the machine".

I truly believe we have suffered greatly as a civilization for losing sight of that, and for letting the siren call of "abstraction" charm us into making things so absurdly complicated that, short of never-ending population growth bringing into existence more people to solve all the new problems people have created, one is hard-pressed even to read everything necessary to understand why most things are the way they are.


When done with proper contracting and documentation, losing a company is not a problem: either you put the spec and the algorithm on the table and people implement them to get certified, or you get the technical docs to use when/if the company goes out of business.

Practically, it doesn't make anything more complicated. The device provides its ID without a password but requires a password for everything else. In many countries, if not all, infrastructure equipment is already protected property; nobody except the utility company touches, repairs, or reconfigures that meter anyway.

Overcomplicating stuff is indeed a problem, and in most cases it's a combination of poor engineering and monetary greed. It's also a side effect of the evolution of technology. I would love to discuss it to death, but this isn't the place, and I don't have much time for it either.


Very good insight here. This is something I’ve been thinking a lot about.

Case in point: the electric substation attacks earlier this year.

Or on a micro scale, just walking into a store, taking what you want and then walking out.

I don't think anyone really wants to live in a society that is fully secure, but to avoid that, we need to stop the breakdown of trust.

Arguably, the only reason we have society and not total anarchy is that everyone kinda tacitly agrees to "act right".


Yeah, a lot of this infrastructure was built for a trust-based society, so we're slowly learning that isn't possible with our current culture and population size. It's sad.


Yeah, I had this experience too. I figured there'd at least be static checking to make sure we aren't going off-spec, but there really isn't. So you just have a spec that slowly drifts out of sync with the code until it's basically useless. It seems like double the work for almost no benefit.


It depends on the tools you use. You can use testing proxies that validate all requests and responses against the spec, or you can do server code generation (with interfaces/subclasses, for example, in Java) so you are forced to adhere to the spec.


It would be nice to have a collection of ad-free, non-spammy games and websites in general.


Web development has trained me to Google (now often ChatGPT) everything, even if I already know how to do it, because things change so much, haha.

It makes me look stupid in coding interviews because I commit very little to memory, but professionally it's helped me catch a lot of syntactic upgrades.


I've been using GPT-4 to code, and these explanations are somewhat unsatisfactory. I have seen it seemingly come up with novel solutions in a way that I can't describe any other way than that it is thinking. It's really difficult for me to imagine how such a seemingly simple predictive algorithm could lead to such complex solutions. I'm not sure even the people building these models really grasp it either.


I've started to suspect that generating code is actually one of the easier things for a predictive text completion model to achieve.

Programming languages are a whole lot more structured and predictable than human language.

In JavaScript the only token that ever comes after "if " is "(" for example.


On the other hand, if you want to use an external library on line 80, you need to import it at the top.

I once asked it for a short example of something, no longer than 15 lines, and it said "here's a code that's 12 lines long" and then added the code. Did it already have the specific code "in mind"? Or was 12 just a reasonable-sounding length, with the code then written to match that self-imposed constraint?


The latter option is closest, but neither is quite right. It would have "known" that the problem asked, combined with a phrase imposing a 15-line limit, is associated with a length of 12 lines (perhaps most strongly 12, though depending on temperature it could have given other answers). From there it is constrained to (complete) solutions that lead to 12 lines, drawn from the several (partial) solutions that already exist in the weights.


I loved your example. I think that may be an obvious advantage for LLMs: humans are poor at learning new languages after adolescence, but an LLM can continue to learn and build new connections. Studies show that multilingual people have an easier time making connections and producing new ideas. In the case of programming, we may build something that knows all programming languages and all design patterns and can merge that knowledge to come up with better solutions than the ordinary programmer.


The more constraints there are (as in your example), the better it should perform. So it disappoints me when Copilot, knowing what libraries are available in the IDE it's running in, hallucinates up a method call that doesn't exist.

Separately (and apologies for going on a tangent), where do you think we are in the Gartner cycle?

Around GPT-3 time I was expecting the trough of disillusionment to come, particularly once we saw the results of it being implemented everywhere, but it hasn't really arrived yet. I'm seeing too many examples of good usage (young folks using it for learning, ESL speakers asking for help and revisions, high-level programmers using it to save themselves additional keystrokes; the list is long).


> hallucinates up a method call that doesn't exist

I actually think it helps to reframe this: it hallucinates up a method call that predictively should exist.

If you're working with boto3, maybe that's not actually practical. But if it's a method within your codebase, it's actually a helpful suggestion! And if you prompt it with the declaration and signature of the new method, very often it will write the new helper method for you!


If you have a long iterative session, by the end it will have forgotten the helpful hallucinations from the beginning, so the phantom methods evolve in name and details.

I wonder if it is better at some languages than others. I have been using it for Go for a week or two, and it's OK but not awesome. I am also learning how to work with it, so I'll probably keep at it, but it is clearly a generative model, not a thinking being, that I am working with.


No idea about Go, but I was curious how GPT-4 would handle a request to generate C code, so I asked it to help me write a header-only C string processing library with convenience functions like starts_with(), ends_with(), contains(), etc. I told it every function must only work with String structs defined as:

struct String { char *text; long size; };

...or pointers to them. I then asked it to write tests for the functions it created. Everything... the functions and the tests... worked beautifully. I am not a professional programmer so I mainly use these LLMs for things other than code generation, but the little I've done has left me quite impressed! (Of course, not being a professional programmer no doubt makes me far easier to impress.)


Interesting. I haven't tried it with C. Hopefully the C training code is of higher quality than that of other languages (because bad C kills). Do you have a GitHub repo with the output?


Hah, hadn't thought of this but kind of love that take!


Are you using it with static types at all? With TypeScript, I've found that it's quite good at producing the imperative logic, but can struggle with types once they reach a certain level of abstraction. It's interesting that even in the realm of "structured languages", it's a lot stronger at some kinds of inference than others.


> In JavaScript the only token that ever comes after "if " is "(" for example.

I'm pretty sure whitespace (" ") is a token as well, and it could also come after an `if`. I think your overall point is a pretty good one, though.


> I've started to suspect that generating code is actually one of the easier things for a predictive text completion model to achieve.

> Programming languages are a whole lot more structured and predictable than human language.

> In JavaScript the only token that ever comes after "if " is "(" for example.

But isn't that like saying that it's easy to generate English text, all you need is a dictionary table where you randomly pick words?

(BTW, keep up the blog posts, I really enjoy them!)


One thing to bear in mind is that GPT's training set for code is supposedly skewed very heavily toward Python.


This!


The advanced capabilities of scaled-up transformer models fed oodles of training data have burdened me with pseudo-philosophical questions about the nature of cognition that I am not well equipped to articulate, and make me wish I'd studied more neuroscience, philosophy, and comp sci earlier in life. A possibly off-topic thought dump:

- What is thinking, exactly?

- Does human (or superhuman) thinking require consciousness?

- What even is consciousness? Why is it that when you take a bunch of molecular physical laws and scale them up into a human brain, a signal pattern emerges that feels things like emotions, continuity between moments, desires, contemplation of itself and the surrounding universe, and so on?

- Why and how does a string predictor on steroids turn out to do things that seem so close to a practical definition of thinking? What are the best evidence-based arguments supporting and opposing the statement "GPT4 thinks"? How do people without OpenAI's level of model access try to answer this question?

(And yes, it's occurred to me that I could try asking GPT4 to help me make these questions more complete)


> has burdened me with pseudo-philosophical questions about the nature of cognition that I am not well equipped to articulate, and make me wish I'd studied more neuroscience, philosophy, and comp sci earlier in life

Welcome to the club. There pretty much are no answers, just theories, primarily played out as thought experiments. It's one of those areas where you can pick out who knows less (or is being disingenuous) by seeing who speaks most confidently about having answers.

We don't know what consciousness is, and we don't know what it means to "think". There, I saved you a decade of reading.

Edit: my theory of choice is panpsychism, https://plato.stanford.edu/entries/panpsychism/ but again, we don't yet know how to verify it (or any other theory).


It's interesting to me how many commenters on HN are absolutely convinced that GPT4 is incapable of thought or understanding or reasoning, it's "just" predicting the next word. And then they'll insist that it'll never be able to do things that it's already capable of doing...

Interestingly, more than one of these folks have turned out to be religious. I wonder if increasingly intelligent AI systems will be challenging for religious folks to accept, because it calls into question our place at the pinnacle of God's creation, or it casts doubt upon the existence of a soul, etc.


> because it calls into question our place at the pinnacle of God's creation, or it casts doubt upon the existence of a soul

I think this is a very simplistic view, one that suggests you possibly haven't talked to many religious people.

I've never known a religious person who thought "thought" was the same as "soul", or that God is necessarily a requirement for reasoning. Nor is any of this thought about much, considering how new it is.

Although, I suppose that if someone did say God was a requirement for reasoning, a perspective that is "logical within that context" might be that AI is some vicarious creation, since it wouldn't have been possible without us being able to reason.

I subscribe to the belief that reasoning is an eventual emergent law of nature/information. But, even that could, and does, fit into many "religious" perspectives perfectly well.


If we could create a sentient being, it would be the first evidence of it being possible at all. If this casts doubt in the mind of a believer, then it tells us more about what belief is than anything else.


"Interestingly, more than one of these folks have turned out to be religious."

The guy fired by Google for announcing that LaMDA was sentient was religious.

I don't really see a meaningful distinction between declaring a machine is "thinking" for hand waving religious reasons and hand waving non-religious reasons, I'm afraid.


It's less unsettling when you think of LLMs as an approximation to a kind of "general intellect" recorded in language. But then the surprising thing is that we as "individual intellects" tend to operate the same way, perhaps more than we imagined.


The hypothesis that I find most compelling and intuitive is that language is thought and vice versa. We made a thing that's really good at language, and it turns out it's also pretty good at thought.

One possible conclusion might be that the only things keeping GPT algorithms from going full AGI are a loop and small context windows.


Add the strange loops and embed it in a body that interacts with a real or rich virtual world; that should do the trick. Of course, there should ideally be an emotional-motivational context as well.


- Does human (or superhuman) thinking require consciousness?

I was going to write this exactly. I believe these things think. They're just not alive.

- What even is consciousness?

My advice: stay as far as you can from that concept. Wittgenstein already noticed that many philosophical questions are nonsense, and specifically observed that consciousness as felt from the inside is hopelessly incompatible with any observation we make from the outside.

BS concepts like qualia are all the rage now, but ultimately useless.


My views:

The best definition of "intelligence" is "the degree of ability to correctly predict future outcomes based on past experience".

Our cortex (the part of the brain used for cognition/thinking) appears to be literally a prediction engine, where predicted outcomes (what's going to happen next) are compared to sensory reality and updated on that basis (i.e., we learn by surprise, when we are wrong). This makes sense as an evolutionary pressure, since the ability to predict the location of food sources, the behavior of predators, etc., is obviously a huge advantage over being directly reactive to sensory input in the way simpler animals (e.g., insects) are.

I'd define consciousness as the subjective experience of having a cognitive architecture with particular feedback paths/connections. That there is an architectural basis to consciousness seems to be demonstrated by impairments such as "blindsight", where one is able to see but not conscious of that ability (e.g., the ability to navigate a cluttered corridor while subjectively blind).

It doesn't seem that consciousness is a requirement for intelligence ("ability to think"), although that predictive capability can presumably benefit from more information, so these feedback paths may well have evolutionary benefit.

The reason a "string predictor on steroids" turns out to be able to do things that seem like thinking is that prediction is the essence of thinking/intelligence! Of course, a lot is missing internally from GPT-4 compared to our brain, for example basics like working memory (any internal state that persists from one output word to the next) and looping/iteration. But feeding its own output back in provides somewhat of a substitute for working memory, and external scripting/looping (AutoGPT, etc.) goes a long way too.


I think, since the mechanisms are different, we should arrive at a distinction between:

organic thinking (i.e., the process our squishy human brains carry out)

and mechanical thinking (the computational and stochastic processes that computers carry out).


I don't think the substrate defines the nature of the thinking, but the form of the process does.

It is entirely possible to build mechanical thinking in organic material (think Turing machines built on growing tissue), and it could also be possible to simulate on electronic hardware the complex self-referential processes of the kind high-level brains run, with their rhythms of alpha and beta waves.


> What even is consciousness? Why is it that when you take a bunch of molecular physical laws and scale them up into a human brain, a signal pattern emerges that feels things like emotions, continuity between moments, desires, contemplation of itself and the surrounding universe, and so on?

I doubt we'll ever be able to answer this, even after we create AGI.


Any overly simple "it's just predicting the next word" explanation really misses the point. It seems more accurate to regard that merely as the way they are trained, rather than as a characterization of what they are learning, and therefore of what they are doing when they generate.

There are two ways of looking at this.

1) In order to predict next-word probabilities correctly, you need to learn something about the input, and the better you want to get, the more you need to learn. For example, if you just learned part-of-speech categories for words (noun vs. verb vs. adverb, etc.) and what usually follows what, you would be doing better than chance. If you want to do better than that, you need to learn the grammar of the underlying language(s). If you want to do better than that, you start to need to learn the meaning of what is being discussed, and so on.

If you want to correctly predict what comes next after "with a board position of ..., Magnus Carlsen might play", then you'd better have learned a whole lot about the meaning of the input!

The "predict next word" training objective and feedback don't themselves limit what can be learned; that's up to the power of the model being trained, and evidently large multi-layer transformers are exceptionally capable. Calling these huge transformers "LLMs" (large language models) is deceptive, since beyond a certain scale they are certainly learning a whole lot more than language/grammar.

2) In the words of one of the OpenAI developers (Sutskever), what these models have really learned is some type of "world model": a model of the underlying generative processes that produced the training data. So they are not just using surface-level statistics to "predict the next word"; rather, they are using the (often very lengthy/detailed) input prompt to "get into the head" of whatever generated it, and predicting on that basis.


To be deliberately unfair, imagine a huge if-else block, a few billion entries big, where each branch played out a carefully chosen and well-written string of text.

It would convince a lot of people with the breadth, despite not really having much depth.

The real GPT model is much deeper than that, of course, but my toy example should at least give a vibe for why even a simple thing might still feel extraordinary.


This is absolutely not viable, because exponential growth kills the concept.

Such a system would already struggle with multi-word inputs, and it would be completely impossible to scale it to even a paragraph of text, even with ALL of the observable universe at your disposal for encoding the entries.

Consider: if you just have simple sentences consisting of 3 words (subject, object, verb, with 1000 options each; very conservative assumptions), then 9 sentences already give more options than there are atoms (~10^80) in the observable universe.


α: most of those sentences are meaningless so they won't come up in normal use

β: if statements can grab patterns just fine in most languages, they're not limited to pure equality

γ: it's a thought experiment about how easy it can be to create illusions without real depth, and specifically not about making an AGI that stands up to scrutiny


> most of those sentences are meaningless so they won't come up in normal use

Feel free to come up with a better entropy model, then. Stack Overflow gives me confidence that it will be between 5 and 11 bits per word in any case [https://linguistics.stackexchange.com/questions/8480/what-is...].

> if statements can grab patterns just fine in most languages, they're not limited to pure equality

This does not help you one bit. If you want to produce 9 sentences of output per query, then regular expressions, pattern matching, or even general intelligence inside your if statements will NOT save the concept.


> What is the entropy per word of random yet grammatical text?

More colourless green dreams sleep furiously in garden path sentences than I have

> This does not help you one bit.

Dunno, how many bits does ELIZA? I assume more than 1…


> What is the entropy per word of random yet grammatical text?

That is what these 5-11 bit estimates are about. They correspond to a choice among 32 to 2048 options per word, which is far fewer than there are words in English (a native speaker's active vocabulary is somewhere around 10,000).

Just consider XKCD's "Thing Explainer", which limits itself to a 1,000-word vocabulary and is very obviously not idiomatic.

If you want your big if-block to produce credible output, there is simply no way around the entropy bounds on the input and desired output, and those bounds render the concept absolutely infeasible even for I/O lengths of just a few sentences.

ELIZA is not comparable to GPT because it doesn't hold up even to very superficial scrutiny; it's not really capable of even pretending to intelligently exchange information with the user. It just relies on some psychological tricks to keep a "conversation" going...


> ELIZA is not comparable to GPT because it doesn't hold up even to very superficial scrutiny; it's not really capable of even pretending to intelligently exchange information with the user. It just relies on some psychological tricks to keep a "conversation" going...

That's kind of the point I was making: tricks can get you a long way.

The comparison with GPT is not "and therefore GPT is bad" but rather "it's not necessarily as smart as it feels".

Perhaps I should've gone with "Clever Hans", or "why do horoscopes convince people?"


It's a fallacy to describe what the machine does as "thinking" just because that's the only process you know of that achieves the same outcome.

When you initiate the model with some input for which you expect a particular correct output, that means there exists some completed sequence of tokens that is correct; if that weren't true, then either you wouldn't ask or you wouldn't blame the model for being wrong. Now imagine a machine that takes in your input and in one step produces the entire correct answer. In all nontrivial cases there are many more _incorrect_ possible outputs than correct ones, so this appears to be a difficult task. But would you say such a machine is "thinking"? Would you still consider it thinking if we could describe the process mathematically as drawing a sample from the output space? That it draws the correct sample implies it has an accurate probability model of the output space conditioned on your input. Does this require "thought"?

GPT is just like this machine except that instead of one-step, the inference process is autoregressive so each token comes out one at a time instead of all at once. (Note that BERT-style transformers _do_ spit out the whole answer at once.)

It's possible that this is all that humans do. Perhaps we are mistaken about "thinking" altogether: perhaps the machine thinks (like a human), or perhaps humans do not think (like the machine). In either case, I do feel confident that human and machine are not applying the same mechanism; the jury is still out on whether we're applying the same process.


Now consider the case when you tell GPT to "think it out loud" before giving you the answer - which, coincidentally, is a well-known trick that tends to significantly improve its ability to produce good results. Is that thinking?


Maybe. Mechanically, we might also describe it as causing the model to condition more explicitly on specific tokens derived from the training data, rather than on the implicit conditioning in the raw model parameters. This tends to constrain the output space more tightly, making a smaller haystack in which to look for a needle, and it leverages the fact that "next token prediction" implies some consistency with preceding tokens.

It could be thinking, but I don’t think that’s strong evidence that it is thinking.


I would say it's very strong evidence that it is thinking, if that "thinking out loud" output affects later outputs in ways consistent with logical reasoning based on it. Which is easy to test by editing the outputs before they're submitted back to the model and seeing how its behavior changes.


Perhaps it’s more productive to go the other direction and consider how the concept of ‘thinking’ could be reconsidered.

It’s not like we all agree on what thinking is. We never have. It may not even be one thing.


I have only seen GPT generate imperative algorithms. Can it work with concurrency and asynchrony?


I've tried posing a concurrency problem to GPT-4. The output was invalid code, though it would likely have looked correct to the untrained eye. Only after I spelled out the limitations could it account for them.


I tried point-free solutions, which threw it off.


Care to post a full example?


I used GPT-4 to build this tool https://image-to-jpeg.vercel.app using a few prompts the other day - my ChatGPT transcript for that is here: https://gist.github.com/simonw/66918b6cde1f87bf4fc883c677351...


See, my problem with virtually every single example is that we talk about "I can't describe it in any other way than that it is thinking" and "such complex solutions", but in the end we get a 50-line "app" that you'd see in a computer science 101 class.

It's very nice, it's very impressive, it will help people, but it doesn't align with "you're just about to lose your job", "Skynet comes in the next 6 months", etc.

If these basic samples are a bottleneck in your day-to-day life as a developer, I'm worried about the state of the industry.


The concern is the velocity. GPT-4 can solve tasks today that it couldn't solve a month ago. And even a month ago, the things it could do made GPT-3.5 look like a silly toy.

Then there's the question of how much this can be scaled further simply by throwing more hardware at it to run larger models. We're not anywhere near the limit of that yet.


This took me 3 minutes to build. Without ChatGPT it would have taken me 30-60 minutes, if not longer thanks to the research I would have needed to do into the various browser APIs.

If it had taken me longer than 3 minutes I wouldn't have bothered - it's not a tool I needed enough to put the work in.

That's the thing I find so interesting about this stuff: it's causing me to be much more ambitious in what I choose to build: https://simonwillison.net/2023/Mar/27/ai-enhanced-developmen...


Love how you didn't care about styling this at all, lol. BTW, if you ask GPT to make it presentable, by using Bootstrap 5 for example, it can style it for you.


One man's "presentable" is another man's bloat. It looks perfectly fine to me: simple, useful and self-explanatory. It doesn't need more flash than that.


Sure, but presentation and UX basics are not "bloat".


What "basic UX" principles are being violated here exactly? And how would adding Bootstrap solve those?


I'm assuming the bits that say

> // Rest of the code remains the same

Are exactly as generated by GPT-4, i.e. it knew it didn't need to repeat the bits that hadn't changed, and knew to leave a comment like this to indicate that to the user.

It gets confusing when something can fake a human so well.


Yes, it will do that routinely. For example, you can ask it to generate HTML/JS/SVG in a single file to render some animated scene, and then iterate on that by telling it what looks wrong or what behaviors you'd like to change - and it will answer by saying things like, "replace the contents of the <script> element with the following".


What is the time delta between fixing GPT-generated code and writing it all yourself? Is it a reasonable scaffold that will grow over time?


It's not thinking, plain and simple.

Anything it generates means nothing to the algorithm. When you read it and interpret what was generated you're experiencing something like the Barnum-Forer effect. It's sort of like reading a horoscope and believing it predicted your future.


What gives you any confidence that the way GPT4 comes up with answers is qualitatively different from humans?

Why should the emulation of human thought, a result of unguided evolution, require anything more than properly wired silicon?


That's highly reductive of our capacities. We are not weighted transformers that can be explained in an arxiv paper. GPT, at the end of the day, is a statistical inference model. That's it.

It's not going to wake up one day, decide it prefers eggs benny and has had enough of your idle chatter because of that sarcastic remark you made last week.

Could we simulate a plausibly realistic human brain on silicon someday? I don't know, maybe? But that's not what GPT is and we're nowhere near being able to do that.

You can scale up the tokens an LLM can manage and all you get is a more accurate model with more weights and transformers. It's not going to wake up one day, have feelings, religion, decide things for itself, look in a mirror and reflect on its predicament, lament the poor response it gave a user, and decide it doesn't want to live with regret and correct its mistakes.


> That's highly reductive of our capacities.

I'm not saying that GPT4 is as capable as a human-- it can not be, by design, because its architecture lacks memory/feedback paths that we have.

What I'm saying is that HOW it thinks might already be quite close in essence to how WE think.

> We are not weighted transformers that can be explained in an arxiv paper. GPT, at the end of the day, is a statistical inference model. That's it.

That is true but uninteresting-- my counterpoint is: if you concede that our brain is "simulatable", then you have basically ALREADY reduced yourself to a register-based VM-- the only remaining question is what resources (cycles/memory) are required to emulate human thought in real time, and what the "simplest" program to achieve it is (which might be something not MUCH more complicated than GPT-4!).


> What I'm saying is that HOW it thinks might already be quite close in essence to how WE think.

How would one be able to prove this? Nobody knows how we think, yet.

All one can say is that what GPT-4 outputs could plausibly fool another human into believing another human wrote it. But that's exactly what it's designed to do, so what's interesting about that?

> If you concede that our brain is "simulatable",

It could be. Maybe. It might be that's what the universe is doing right now. Does it matter?

We're talking about writing an emulator on a Harvard-architecture computer that can fully simulate the physics and biological processes that make up a human brain. By interpreting this system in our emulator we'd be able to witness a new human being that is indistinguishable from one that isn't simulated, right?

That's not what GPT is doing. Not even close.

It turns out there's more to being human than being a register VM. Ever get punched in the face? Bleed? Fall in love? Look back on your life and decide you want to change? Write a book but never show it to anyone? Raise a child? Wonder why you dreamt about airplanes on Mars with your childhood imaginary friend? Why you hate bananas but like banana bread? Why you lie to everyone around you about how you really feel and are offended when others don't tell you the truth?

It's not so simple.


> We're talking about writing an emulator on a Harvard-architecture computer that can fully simulate the physics and biological processes the make up a human brain. By interpreting this system in our emulator we'd be able to witness a new human being that is indistinguishable from one that isn't simulated, right?

My point is: if you don't believe that there is magic pixie dust in our brains, then this would NECESSARILY be possible.

It would almost certainly be HIGHLY inefficient-- the "right way" to do AGI would be to find out which algorithmic structures are necessary for human level "performance", and implement them in a way that is suitable for your VM.

I'm arguing that GPT4 is essentially the second approach-- it lacks features for full human level performance BY DESIGN (e.g. requires pre-training, no online learning, etc.), but there is no reason to assume that the way it operates is fundamentally different from how *parts* of OUR mind work.

> It turns out there's more to being human than being a register VM. Ever get punched in the face? Bleed? Fall in love? Look back on your life and decide you want to change? Write a book but never show it to anyone? Raise a child? Wonder why you dreamt about airplanes on Mars with your childhood imaginary friend? Why you hate bananas but like banana bread? Why you lie to everyone around you about how you really feel and are offended when others don't tell you the truth?

I do not understand what you are getting at here. I consider myself a biological machine-- none of this is inconsistent with my worldview. I believe that a silicon-based machine could emulate all of this if wired up properly.

PS: I often talk with people who explicitly DON'T believe in the "pixie dust in our brains" (call it a soul if you want), but who on the other hand strongly doubt the feasibility of AGI-- this is internally inconsistent and simply not a defensible point of view IMO.


> I'm arguing that GPT4 is essentially the second approach

Ok, so then it is an algorithm that simulates a specific behaviour that produces plausibly human-level results.

My point is that this is not thinking, smart, or "general intelligence."

Let's say I write an algorithm that can also produce text. It's not an implementation of the specification for GPT-4 but something novel. It takes the exact same inputs and produces outputs that I share with you and claim were produced by GPT-4. And lo, success: you can't tell if it was produced by GPT-4 or my algorithm.

You claim it's the same thing as having GPT-4, right? If you can't tell the difference it must be the same thing.

Big deal. We can write computer programs that perform better than humans at chess and go, and now they can write more text than us. We knew this was possible before we even began this endeavour. It's still not intelligent, conscious, smart, or anything resembling a complete human.

It's merely an algorithm that does one specific task.

> I don not understand what you are getting at here.

I've proven my point then.

There's more to the human experience than what can be simulated on a silicon chip, and it doesn't have to do with hand-waving away all the complexity of reality as "magical pixie dust."

Take physical trauma. The experience of it by one human is not merely a fact. It is felt, it is reflected upon, and it is shared, through the DNA of the person who experienced it, with their descendants. We have science investigating how trauma is passed through generations and the effects it has on our development.

You are more than a machine with inputs and outputs.


> My point is that this is not thinking, smart, or "general intelligence."

Why not? I would already, without hesitation, describe GPT4 as strictly more intelligent than my cat and also all gradeschoolers I've ever known... Maybe some adults, too- depends on your exact definition of intelligence.

> Let's say I write an algorithm [...], you can't tell if [input] was produced by GTP-4 or my algorithm.

Sure, I'd call your algorithm just as clever as GPT4 and approaching adult human levels of intelligence.

> It's still not intelligent, conscious, smart

Why not? What do these mean to you?


> I would already, without hesitation, describe GPT4 as strictly more intelligent than my cat

Well, if we're going to define intelligence based on what you believe it is, then why don't you explain it?

I'm not the one claiming to know what intelligence is or that we can even simulate a system capable of emulating this characteristic. So if you hold the specification for human thought I think you ought to share it with us.

> Why not?

By definition. ChatGPT is designed for a single function, described by its specification and the code that implements it. Nothing in this specification implies it is capable of anything except what is described.

Calling it, "intelligent," is a mischaracterization at best and anthropomorphism at worst. The same follows for calling it "smart" or claiming it is, "skilled at X."


You're the one claiming that GPT is not in any sense, shape, or form intelligent. Such claim inevitably carries a very strong implication that you know what intelligence is.


One doesn’t have to know how thoughts are formed to have good theories and reasonable hypotheses.

Science makes progress with imperfect information all the time, including incomplete models of neurological phenomena, intelligence, and consciousness.


My explicit definition for "intelligence" would be something with an internal model of <reality> that you can exchange information with.

Cat is better at this than the robot vacuum, gradeschooler is better still and GPT (to me) seems to trump all of those.


"Nobody knows how we think, yet."

Then how can you confidently say we don't think 'like' Transformers/Attention/Statistical models/etc/etc?


I think you would love to read Mark Rowlands’ The Philosopher and the Wolf. He asks these questions and, like all of us, struggles with the answers.

https://www.goodreads.com/book/show/8651250


> If you concede that our brain is "simulatable", then you basically ALREADY reduced yourself to a register based VM-- the only remaining question is: what ressources (cycles/memory) are required to emulate human thought in real time

We haven't emulated brains yet, so we don't know. The OpenWorm project is interesting, but I don't know to what extent they've managed to faithfully recreate an accurate digital version of a nematode worm. I do know they had it driving around a robot.

Thing is, our brains are only part of the nervous system, which extends throughout the body. So I don't know what happens if you simulate just the brain part. Seems to me that the rest of the body kind of matters for proper functioning.


I personally believe that while interesting, projects like OpenWorm or humanbrainproject are extremely indirect and unpromising regarding AGI (or even for improving our understanding of human thinking in general).

To me, these are like building an instruction set emulator by scanning a SoC and then cobbling together a SPICE simulation of all the individual transistors-- the wrong level of abstraction and unlikely to EVER give decent performance.

People also like to point out that human neurons are diverse and hard to simulate accurately-- yeah sure, but to me that seems completely irrelevant to AGI, in the very same way that physically exact transistor modelling is irrelevant when implementing emulators.


I read this and can't help but chuckle... To say that we are nowhere near being able to have AGI is quite a bold statement. It was, after all, only a few months ago that many people also believed we were a long way away from GPT-4.

The confidence with which you claim we are not weighted transformers or statistical inference models is also puzzling. How could you possibly know that? How do you know that that's not precisely what we are, or something closely adjacent to it?

Perhaps if you keep going you do get something that begins to have feeling, religion and understand that it's a self and perhaps that's precisely what happened to humans.


Ah yes, the old: you can’t prove my deity doesn’t exist argument.

Puzzling that I don’t share your faith or point of view? Why?

The point is to not ascribe properties attributed to a thing we know doesn’t have them. We can teach people how ChatGPT works without getting into pseudo-philosophical babble about what consciousness is and whether humans can be accurately simulated by an LLM with enough parameters.


IMO the big blindside of your argument is that you MUST either accept that some magic happens in human brains (which is HARD to reconcile with a science-inspired worldview), OR that achieving human-level cognitive performance is a pure hardware/software optimization problem.

The thing is that GPT4 already approaches human level cognitive performance in some tasks, which means you need a strong argument for WHY full human-level performance would be out of reach of gradual improvements to the current approach.

On the other hand, a very strong argument could be made that the very first artificial neural networks had the absolutely right ideas and all the improvements over the last ~40 years were just the necessary scaling/tuning for actually approaching human performance levels...

This is also where I have to recommend V. Braitenberg's "Vehicles: Experiments in Synthetic Psychology" (from 1984!), which aged remarkably well and shaped my personal outlook on the human mind more than anything else.


What faith? I never made the claim you're attributing to me. Smug idiots like you are wrong all the time.


> What gives you any confidence that the way GPT4 comes up with answers is qualitatively different from humans?

For a start, GPT-4 doesn't include in its generation the current state of its internal knowledge used so far; any text it builds can use at most the few words already generated in the current session as a kind of short-term memory.

Biological brains OTOH have a rhythm with feedback mechanisms which adapt to the situation where they're doing the thinking.


> For a start, GPT-4 doesn't include in its generation the current state of its internal knowledge used so far

Sure. But are you certain that you NEED write access to long term memory to think? Would your thinking capabilities degrade meaningfully if that was taken away?


Yes, I would say that a brain without the capacity to form new memories has degraded thinking capabilities.


Except for when, as an expert in a field, you ask it questions that are subtle and it answers in a cogent and insightful way, and as an expert you are fully aware of that. It's not reasonable to call that a Barnum-Forer effect. It's perhaps not thinking (but perhaps we need to more clearly define thinking), but it's not a self-deception either.


What’s novel to you could just be material it was trained on.


We completely privatized payments without realizing it. Credit cards are no longer some extra benefit on top of your bank account; they're how basically all commerce is conducted now. Private companies collect what is basically a tax on almost all transactions in our economy.

I don't know how we let them get away with this. Facilitating commerce should 100% be a government operation and be free. To accept cash I pay the US government a fraction of a percent in taxes, compared to credit card fees.
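To put rough numbers on that comparison, here's a toy calculation. The "2.9% + 30¢" card rate and the cash-handling figure are assumptions for illustration, not any network's actual pricing:

```python
# Illustrative annual cost of accepting cards vs. cash for a small merchant.
# All rates here are assumptions for the sake of comparison, not real quotes.

def card_fees(volume: float, txns: int,
              rate: float = 0.029, per_txn: float = 0.30) -> float:
    """Processor-style pricing: a percentage of volume plus a flat per-transaction fee."""
    return volume * rate + per_txn * txns

def cash_cost(volume: float, handling_rate: float = 0.005) -> float:
    """Assume cash handling (counting, deposits, shrinkage) costs ~0.5% of volume."""
    return volume * handling_rate

volume, txns = 500_000.0, 10_000          # hypothetical small business
print(round(card_fees(volume, txns), 2))  # 17500.0
print(round(cash_cost(volume), 2))        # 2500.0
```

Even granting generous assumptions for the cost of handling cash, the card rail comes out several times more expensive on these numbers.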

The fact that, on top of all this, they are essentially policing what can and cannot be sold and bought should be outrageous to people. It's a complete conflict of interest that companies can shut down payments to and from competitors to their investments.

People can say "oh just don't use or accept visa etc" but good luck staying in business or buying anything online.

We absolutely need some kind of government-backed digital payment system, in my opinion. Credit card rewards should also be illegal; they are nothing but hidden fees passed on to you by charging the business more.


> Private companies collect basically a tax on almost all transactions in our economy.

Forget transactions, they collect a tax on almost all money in the economy. Something like 97% of money in circulation is simply bank loans (like mortgages) that haven't been paid back yet.
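That tracks with the textbook money-multiplier story. A toy sketch of how lend-and-redeposit cycles expand broad money (the 10% reserve ratio is an assumption for illustration, not a claim about any actual banking system):

```python
# Toy model: deposits created by repeated bank lending from $100 of base money.
# Assumes banks lend out everything above a 10% reserve; illustrative only.

def broad_money(base: float, reserve_ratio: float, rounds: int) -> float:
    """Total deposits after `rounds` lend-and-redeposit cycles."""
    total, deposit = 0.0, base
    for _ in range(rounds):
        total += deposit              # each redeposit counts as money...
        deposit *= 1 - reserve_ratio  # ...and most of it is lent out again
    return total

# Geometric series: in the limit, $100 / 0.10 = $1000 of deposits,
# so roughly 90% of the money in this toy economy is outstanding loans.
print(round(broad_money(100.0, 0.10, 200), 2))  # ~1000.0
```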

I encourage anyone to walk around a financial district somewhere and really think about what it's all for. On the inside of some of those buildings there might be people, but they aren't doing much. On the inside of many there aren't even any people.

Finance definitely has value, but is it really that much? It's a fraction of what they take.

How they get away with it is the interesting part. There's just something about money. There will always be a very large part of the population who just don't get it. Then there are some who excel with it. It brings out the worst in many and it downright scares the rest. All that's clear is we need to get a grip on it. Bitcoin was the latest attempt.

I can't recommend enough the book Other People's Money by John Kay.


I have a feeling that a government-run payment system would also forbid illegal activity (e.g. laundering). I agree that a private payments network is not ideal, but the question at hand is how transactions facilitating illegal activity should be handled.

To be clear, I’m not passing moral judgement on sex work - just pointing out the fact that it’s presently illegal in many forms and jurisdictions, and that fact is relevant.


The problem isn't that laws are being enforced. The problem is private institutions are enforcing extra-legal penalties on legal activity.


Well, Visa, Mastercard, American Express, and Discover combined have almost a $1.5T market cap - so if you think the government is going to put them out of business, you must have a different perception of what the government does than I do...


> is going to put them out of business

Going to put them out? Probably not. Can they? Absolutely.


A dystopian digital currency as CBDC will come after FedNow.


If you are just hosting text, popular blockchains might be viable.


I'd like to see something like torrenting for websites: a way that, if a certain site is important to you, you could seed it yourself to 1. help with the hosting costs and 2. preserve it if the original host decides to take it down or change it.


Something like IPFS?

https://ipfs.tech/


To quote another comment but "Instead of replacing crappy jobs and freeing up peoples time to enjoy their life, we’re actually automating enjoyable pursuits."

I think this isn't just a simple discussion of competition and copyright; it's a much larger question about humanity. It just seems like a potentially bleak future if enjoyable and creative pursuits are buried and even surpassed by automation.


If the pursuit is enjoyable, it should continue to be enjoyable as a hobby, no?

Meanwhile, where is my levy of custom artists willing to do free commission work for me? It’s enjoyable, right?

I see a lot of discussion about money and copyright, and little to no discussion about the individual whose life is enriched by access to these tools and technologies.

As for your bleak future… will that even come to pass? I don’t know. Maybe it depends on your notion of “surpass”, and what that looks like.


> If the pursuit is enjoyable, it should continue to be enjoyable as a hobby, no?

I think for most people the enjoyable and fulfilling part of life is feeling useful or having some expression and connection through their work. There's definitely some people who can create in a vacuum with no witness and be fulfilled, but I think there's a deep need for human appreciation for most people.

> As for your bleak future… will that even come to pass? I don’t know. Maybe it depends on your notion of “surpass”, and what that looks like.

I don't know either; maybe it will be fine. Maybe this will pass like the transition from traditional to digital art. But something about this feels different... like it's actually stealing the creative process rather than just being a paradigm shift.


Some people enjoy looking at images more than creating them.


Yeah maybe, but I think we also already have a problem with overconsumption of media though. I am not sure this is helping.

It seems inevitable and I don't think we can stop it, but I just am kind of worried about the collective mental health of humanity. What does a world look like where people have no jobs and even creative outlets are dominated by AI? Are people really just happy only consuming? What even is the point of humanity existing at that point?


People could always work on higher-order creation: stitch together AI paintings into collages, try styles the AI has not mastered, etc…

