Short 3 minute excerpt of a talk on fundamental properties of IT as well as fundamental properties of humans that influence design decision making in complex (IT) landscapes. Only slightly tongue-in-cheek.
Why, if the author asks it to summarise a single web page and provides the link, does ChatGPT go out and load five more (one of which is the same page again; the others are short overview pages, so they won't have influenced the result much)?
And why all this talk about engineering a prompt so that the result ends up good? Should an actually usable system not simply handle "Please summarise [url/PDF]"? That is, I suspect, what people expect to be able to do.
'Summarise' clearly means something different to the author than to the people who think the model's results are good. Everyone expects different things. Most people are used to others knowing their preferences and adjusting over time. Models do not, unless you tell them.
Definitely. Otherwise it would have required a lot more than a single blog post. It is an observation, not anything rigorous with a large number of examples and decent statistics.
Generative AI models 'memorise' some training data. This can be desirable, since it gives us reliable answers, but it is also a bad thing: training-data leakage and plagiarism. The problem? The good and the bad are the same thing.
I would consider this realistic service design, just as Meta's Cicero (which plays blitz Diplomacy) is smart design. It might work as a service.
What the answer glosses over is that even with only 3% of the time requiring human assistance (roughly 2 minutes out of every hour), the term 'autonomous vehicle' is not really applicable anymore in the sense in which everybody uses and understands it. The idea behind that term assumed 'full' autonomy: self-driving cars. And there is no reason to assume that this is still in sight. The answer puts the 'self-driving car' on the shelf.
PS. Human assistant seems to me a difficult job, given the constant speed and concentration requirements.
By this logic Tesla does not have "autonomous vehicles" either. They just make adjustments after the car crashes and kills someone, instead of doing so online.
Bridges the gap between all the explanation of the internals of transformer architectures etc. and the (good/bad/imagined) uses. Suitable for non-technical people, but does explain what GPT does and what that means.
Article using insights from psychology research on the subject of convincing others, e.g. IT architects convincing management of what the 'right thing to do' is.
Because loans are generally a safer investment than stock (ownership) with more certain returns.
And when you can get good returns from loans (high interest), stock becomes relatively less attractive, and hence stock prices drop until the risk-versus-reward equilibrium is restored (cheaper stock means less money invested, which means less risk for the same returns, such as dividends or stock growth).
This is why stock prices react almost immediately to interest rises.
The key investment calculus is always risk versus reward.
Secondary: the cost of doing business also goes up.
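The mechanism above can be sketched with the textbook Gordon growth (dividend-discount) model: a stock's fair price is the next dividend divided by the required return minus expected growth, where the required return is the risk-free rate plus a risk premium. All the numbers below (the 4-unit dividend, the 5% premium, the rate move from 1% to 3%) are made-up illustrations, not from the original text.

```python
# Simplified dividend-discount (Gordon growth) sketch of why stock prices
# fall when interest rates rise. Illustrative numbers only.

def fair_price(dividend: float, risk_free_rate: float,
               risk_premium: float, growth: float) -> float:
    """Price = next dividend / (required return - expected growth).

    Required return = risk-free rate (what loans/bonds pay) plus a
    premium for the extra risk of owning stock instead of lending.
    """
    required_return = risk_free_rate + risk_premium
    return dividend / (required_return - growth)

# Same stock, same dividend of 4 and 2% growth; only the interest rate moves.
price_low_rates = fair_price(4.0, 0.01, 0.05, 0.02)   # 4 / 0.04, about 100
price_high_rates = fair_price(4.0, 0.03, 0.05, 0.02)  # 4 / 0.06, about 67

print(price_low_rates, price_high_rates)
```

A 2-percentage-point rate rise knocks roughly a third off the fair price in this toy setup, which is why the reaction in actual markets is so immediate: the arithmetic changes the moment the rate does.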
One of the more interesting aspects is that the idea of what (enterprise) 'architecture' is, depends on who you talk to. There is an interesting relation between agile and architecture.
A while back Grady Booch was asked on Twitter what architecture books he would suggest. He replied with three images of sets of covers of what was on his bookshelf (some 45 books in all, I think). All of them were more software-architecture-style books. Only one EA book: Chess and the Art of Enterprise Architecture. Which I personally found notable.