hodgehog11's comments

This is the point where most of the public would probably acknowledge that digital privacy is worth seeking. If you're in a fascist or communist state, announcing your political opinions online without anonymity is generally not advisable.

The interesting thing is that the time to oppose these encroachments was somewhere between 2001 and, say, 2015 (some events stand out, but nothing in particular other than general acceptance by the general populace). And now the masses are crying foul? Now is absolutely not the time to try to get an online invisibility cloak.

Some couldn't vote then, broskie. In particular because of things like age, and school, and parents being spoonfed propaganda and having their desire for vengeance stoked for that Middle East invasion. So since opposing it then wasn't an option, it seems the next best time to address the issue is now.

Here is the thing: I am not sure anyone really can at this point. I am not being hyperbolic.

Not with the assholes we have running the tech industry right now, trying to groom every nation state into letting them turn it into a technocratic hellhole, no.

FWIW, I don't mean to extinguish your fire. It is useful, but I do want you to see the sheer enormity of the issue.

7 deckchairs on the left, 456 on the right, that might help right the ship.



> Until all of these things are addressed, I certainly won't support the freedom of speech for people that won't support mine.

That means you don’t support freedom of speech. But we already knew that because you already explained your authoritarian views.

What I don’t understand is why authoritarians such as yourself (as well as many of what I call the “blue MAGA” authoritarian counterparts on the left) still pay lip service to concepts such as free speech and the rule of law. These concepts fundamentally require that they be applied equally. If you don’t support them for your enemies (and criminals, and immigrants, and trans people, etc.), then you simply don’t support them, period. “Free speech for my side” simply isn’t. “The rule of law (but only for citizens)” isn’t support for the rule of law.

I find all forms of government censorship to be abhorrent, regardless of which party is in power. I support free speech and freedom from government interference for the MAGA crowd as I do for everyone else, despite their active and continued efforts to curtail my legal rights to same (as Trump has repeatedly said out loud).


Republican propagandists were quite successful at spinning the emergent corporate infringement of natural rights as a bona fide illegal government action led by "the left", to fool enough useful idiots into supporting their "alternative" of a wannabe-dictator planning a full scale governmental assault on our rights.

>How about when Amazon engineers colluded with the federal government to shutdown Parler? It would be like Trump working with hosting servers for Blue sky and getting it shutdown.

Private companies can't choose their customers anymore? Interesting.

>Twitter and Facebook were caught colluding with the Biden administration to censor Americans. There weren't 10 posts a day on HN about it, and it was pretty quickly ignored and forgotten.

The laptop? They didn't. The FBI warned Facebook etc. about possible Russian fake stories, and they decided to suppress it on their own until it was fact-checked. Biden did ask Twitter to take down nudes of his son, as it was against Twitter's revenge-porn rules.

You are deluded if you think this is anywhere close to what Trump is doing.


I disagree that the majority of it is anti-LLM ranting; there are several subtle points here that are grounded in realism. You should read on past the first bit if you're judging mainly from the (admittedly naive) first few paragraphs.

> You should read on past the first bit...

Not GP, but... the author said explicitly "if you believe X you should stop reading". So I did.

The X here is "that the human mind can be reduced to token regurgitation". I don't believe that exactly, and I don't believe that LLMs are conscious, but I do believe that what the human mind does when it "generates text" (i.e. writes essays, programs, etc.) may not be all that different from what an LLM does. And that means that most of humanity's creations are also "plagiarism" in the same sense the author uses here, which makes his argument meaningless. You can't escape the philosophical discussion he says he's not interested in if you want to talk about ethics.

Edit: I'd like to add that I believe that this also ties in to the heart of the philosophy of Open Source and Open Science... if we acknowledge that our creative output is 1% creative spark and 99% standing on the shoulders of Giants, then "openness" is a fundamental good, and "intellectual property" is at best a somewhat distasteful necessity that should be as limited as possible and at worst is outright theft, the real plagiarism.


So do you believe the seahorse emoji exists?

I read the rest of it. It was intellectually lazy.

It's more intellectually lazy to think Boolean logic at a sufficient scale crosses some event horizon wherein its execution on mechanical gadgets called computers somehow adds up to intelligence beyond human understanding.

It is intellectually lazy to proclaim something to be impossible in the absence of evidence or proof. In the case of the statement made here, it is provably true that Boolean logic at sufficient scale can replicate "intelligence" of any arbitrary degree. It is also easy to show that this can be perceived as an "event horizon" since the measurements of model quality that humans typically like to use are so nonlinear that they are virtually step function-like.

Doesn't seem like you have proof of anything but it does appear that you have something that is very much like religious faith in an unforeseeable inevitability. Which is fine as far as religion is concerned but it's better to not pretend it's anything other than blind faith.

But if you really do have concrete proof of something then you'll have to spell it out better & explain how exactly it adds up to intelligence of such magnitude & scope that no one can make sense of it.


> "religious faith in an unforeseeable inevitability"

For reference, I work in academia, and my job is to find theoretical limitations of neural nets. If there was so much as a modicum of evidence to support the argument that "intelligence" cannot arise from sufficiently large systems, my colleagues and I would be utterly delighted and would be all over it.

Here are a couple of standard elements without getting into details:

1. Any "intelligent" agent can be modelled as a random map from environmental input to actions.

2. Any random map can be suitably well-approximated by a generative transformer. This is the universal approximation theorem. Universal approximation does not mean that models of a given class can be trained using data to achieve an arbitrary level of accuracy, however...

3. The neural scaling laws (first empirical, now more theoretically established under NTK-type assumptions), as a refinement of the double descent curve, assert that a neural network class can get arbitrarily close to an "entropy level" given sufficient scale. This theoretical floor is far below any error level that humans can reach. Whether "sufficiently large" is outside of the range that is physically possible is a much longer discussion, but bets are that human levels are not out of reach (I don't like this, to be clear).

4. The nonlinearity of accuracy metrics comes from the fact that they are constructed from the intersection of a large number of weakly independent events. Think the CDF of a Beta random variable with parameters tending to infinity.
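
To make point 4 concrete, here is a throwaway numerical sketch (mine, not from any reference; it assumes scipy is installed): as the Beta parameters grow, the CDF sharpens into something indistinguishable from a step function, which is exactly the "event horizon" feel that nonlinear accuracy metrics produce.

    # Toy illustration: the CDF of Beta(n, n) sharpens into a near-step function
    # as n grows, mimicking how benchmark-style accuracy metrics can look like a
    # sudden jump even when the underlying "skill" improves smoothly.
    import numpy as np
    from scipy.stats import beta

    x = np.linspace(0, 1, 5)          # evaluation points for the CDF
    for n in [2, 20, 200, 2000]:
        print(f"n = {n:5d}:", np.round(beta.cdf(x, n, n), 3))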

Look, I understand the scepticism, but from where I am, reality isn't leaning that way at the moment. I can't afford to think it isn't possible. I don't think you should either.


As I said previously, you are welcome to believe whatever you find most profitable for your circumstances, but I don't find your heuristics convincing. If you do come up with or stumble upon a concrete constructive proof that 100 trillion transistors in some suitable configuration will be sufficiently complex to be past the aforementioned event horizon, then I'll admit your faith was not misplaced & I will reevaluate my reasons for remaining skeptical of Boolean arithmetic adding up to an incomprehensible kind of intelligence beyond anyone's understanding.

Which part was heuristic? This format doesn't lend itself to providing proofs, it isn't exactly a LaTeX environment. Also why does the proof need to be constructive? That seems like an arbitrarily high bar to me. It suggests that you are not even remotely open to the possibility of evidence either.

I also don't think you understand my point of view, and you mistake me for a grifter. Keeping the possibility open is not profitable for me, and it would be much more beneficial to believe what you do.


I didn't think you were a grifter, but you only presented heuristics. If you have formal references then you can share them, & people can decide on their own what to believe based on the evidence presented.

Fine, that's fair. I believe the statement that you made is countered by my claim, which is:

Theorem. For any tolerance epsilon > 0, there exists a transformer neural network of sufficient size that follows, to within epsilon, the policy that optimally achieves arbitrary goals in arbitrary stochastic environments.

Proof (sketch). For any stochastic environment with a given goal, there exists a model that maximizes expected return under this goal (not necessarily unique, but it exists). From Solomonoff's convergence theorem (Theorem 3.19 in [1]), Bayes-optimal predictors under the universal Kolmogorov prior converge with increasing context to this model. Consequently, there exists an agent (called the AIXI agent) that is Pareto-optimal for arbitrary goals (Theorem 5.23 in [1]). This agent is a sequence-to-sequence map with some mild regularity, and satisfies the conditions of Theorem 3 in [2]. From this universal approximation theorem (itself proven in Appendices B and C of [2]), there exists a transformer neural network of sufficient size that replicates the AIXI agent to within epsilon.

This is effectively the argument made in [3], although I'm not fond of their presentation. Now, practitioners still cry foul because existence doesn't guarantee a procedure to find this particular architecture (this is the constructive bit). This is where the neural scaling law comes in. The trick is to work with a linearization of the network, called the neural tangent kernel; its existence is guaranteed by Theorem 7.2 of [4]. The NTK predictors are also universal and are a subset of the random feature models treated in [5], which derives the neural scaling laws for these models. Extrapolating these laws out as per [6] for specific tasks shows that the "floor" is always below human error rates, but this is still empirical because it works with the ill-defined definition of superintelligence that is "better than humans in all contexts".
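
To show what I mean by "extrapolating these laws out", here is the shape of the exercise (the numbers below are invented for illustration, not taken from [5] or [6]): fit L(N) = a * N^(-b) + c to observed (size, loss) pairs and read off the fitted floor c.

    # Sketch of scaling-law extrapolation: fit L(N) = a * N**(-b) + c and read
    # off the irreducible floor c. Data points are invented for illustration only.
    import numpy as np
    from scipy.optimize import curve_fit

    N = np.array([1e6, 1e7, 1e8, 1e9, 1e10])   # parameter counts (made up)
    L = np.array([4.8, 3.9, 3.2, 2.7, 2.35])   # observed losses (made up)

    def scaling_law(N, a, b, c):
        return a * N ** (-b) + c

    (a, b, c), _ = curve_fit(scaling_law, N, L, p0=(10.0, 0.1, 1.0), maxfev=10000)
    print(f"fitted exponent b = {b:.3f}, estimated floor c = {c:.3f}")
    print(f"extrapolated loss at 1e12 params: {scaling_law(1e12, a, b, c):.3f}")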

[1] Hutter, M. (2005). Universal artificial intelligence: Sequential decisions based on algorithmic probability. Springer Science & Business Media.

[2] https://arxiv.org/abs/1912.10077

[3] https://openreview.net/pdf?id=Vib3KtwoWs

[4] https://arxiv.org/abs/2006.14548

[5] https://arxiv.org/abs/2210.16859

[6] https://arxiv.org/abs/2001.08361


How do you reconcile that w/ the fact that optimal probabilistic planning¹ is actually undecidable?

¹https://www.sciencedirect.com/science/article/pii/S000437020...


Good question. It's because we don't need to be completely optimal in practice, only epsilon close to it. Optimality is undecidable, but epsilon close is not, and that's what the claim says that NNs can provide.

That doesn't address what I asked. The paper I linked proves undecidability for a much larger class of problems* which includes the case you're talking about of asymptotic optimality. In any case, I am certain you are unfamiliar w/ what I linked b/c I was also unaware of it until recently & was convinced by the standard arguments people use to convince themselves they can solve any & all problems w/ the proper policy optimization algorithm. Moreover, there is also the problem of catastrophic state avoidance even for asymptotically optimal agents: https://arxiv.org/abs/2006.03357v2.

* - Corollary 3.4. For any fixed ε, 0 < ε < 1, the following problem is undecidable: Given is a PFA M for which one of the two cases hold:

(1) the PFA accepts some string with probability greater than 1 − ε, or (2) the PFA accepts no string with probability greater than ε.

Decide whether case (1) holds.


Oh yes, that's one of the more recent papers from Hutter's group!

I don't believe there is a contradiction. AIXI is not computable and optimality is undecidable, this is true. "Asymptotic optimality" refers to behaviour for infinite time horizons. It does not refer to closeness to an optimal agent on a fixed time horizon. Naturally the claim that I made will break down in the infinite regime because the approximation rates do not scale with time well enough to guarantee closeness for all time under any suitable metric. Personally, I'm not interested in infinite time horizons and do not think it is an important criterion for "superintelligence" (we don't live in an infinite time horizon world after all) but that's a matter of philosophy, so feel free to disagree. I was admittedly sloppy with not explicitly stating that time horizons are considered finite, but that just comes from the choice of metric in the universal approximation which I have continued to be vague about. That also covers the Corollary 3.4, which is technically infinite time horizon (if I'm not mistaken) since the length of the string can be arbitrary.
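
To pin down the finite-horizon version of the claim I had in mind (my own notation; the metric d is still left generic, as above):

    \[
      \forall \epsilon > 0 \;\; \exists \theta : \quad
      \sup_{h \in \mathcal{H}_T} d\bigl(\pi_\theta(\cdot \mid h),\, \pi^*(\cdot \mid h)\bigr) \le \epsilon ,
    \]
    % \mathcal{H}_T : interaction histories of length at most T (finite horizon)
    % \pi^*         : a return-maximising (AIXI-style) policy on that horizon
    % d             : a suitable metric on action distributions (e.g. total variation)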


> "...it would sometimes regurgitate training data verbatim. That’s been patched in the years since..."

> "They are robots. Programs. Fancy robots and big complicated programs, to be sure — but computer programs, nonetheless."

This is totally misleading to anyone less familiar with how LLMs work. They are only programs inasmuch as they perform inference from a fixed, stored statistical model. It turns out that treating them theoretically in the same way as other computer programs gives a poor representation of their behaviour.

This distinction is important, because no, "regurgitating data" is not something that was "patched out", like a bug in a computer program. The internal representations became more differentially private as newer (subtly different) training techniques were discovered. There is an objective metric by which one can measure this "plagiarism" in the theory, and it isn't nearly as simple as "copying" vs "not copying".

It's also still an ongoing issue and an active area of research; see [1] for example. It is impossible for the models to never "plagiarize" in the sense we think of while remaining useful. But humans repeat things verbatim too in little snippets, all the time. So there is some threshold where no one seems to care anymore; think of it like the % threshold in something like Turnitin. That's the point that researchers would like to target.
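
If it helps, here is the Turnitin-style idea in toy form (my own sketch; production memorisation audits are far more involved than this): count what fraction of the output's n-grams appear verbatim in a reference text and only flag above some threshold.

    # Toy sketch of a verbatim-overlap threshold (illustrative only, not how
    # production systems measure memorisation).
    def ngrams(text, n=5):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap_fraction(generated, corpus, n=5):
        gen = ngrams(generated, n)
        if not gen:
            return 0.0
        return len(gen & ngrams(corpus, n)) / len(gen)

    corpus = "the quick brown fox jumps over the lazy dog near the riverbank"
    output = "a quick brown fox jumps over the lazy dog in my garden"
    frac = overlap_fraction(output, corpus)
    print(f"verbatim 5-gram overlap: {frac:.0%}")
    print("flag for review" if frac > 0.2 else "below threshold, ignore")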

Of course, this is separate from all of the ethical issues around training on data collected without explicit consent, and I would argue that's where the real issues lie.

[1] https://arxiv.org/abs/2601.02671


The plagiarism by the models is only part of it. Perhaps it's in such small pieces that it becomes difficult to care. I'm not convinced.

The larger, and I'd argue more problematic, plagiarism is when people take this composite output of LLMs and pass it off as their own.


> But humans repeat things verbatim too in little snippets, all the time

Also, it's possible, although statistically improbable, for a human to generate the exact same thing another human generated (and copyrighted) without even knowing it.


To a large extent both "hallucinations" and "plagiarism" can be addressed with the same training method: source-aware training.

https://arxiv.org/abs/2404.01019

At the frontier of science we have speculations which, until proper measurements become possible, are not known to be true or false (or even known to be equivalent to other speculations, regardless of whether they are true or false, or truer or falser). Once settled, we may call the earlier but wrong speculations "reasonable wrong guesses". In science it is important that these guesses or suspicions are communicated, as they drive the design of future experiments.

I argue that more important than "eliminating hallucinations" is tracing the reason a statement is or was believed by some.

With source-aware training we can ask an LLM to give answers to a question (answers which may contradict each other), but to provide the training source(s) justifying the emission of each answer. Instead of bluffing, it could emit multiple interpretations and go like:

> answer A: according to school of thought A the answer is that ... examples of authors and places in my training set are: author+title a1, a2, a3, ...

> answer B: according to author B: the answer to this question is ... which can be seen in articles b1, b2

> answer ...: ...

> answer F: although I can't find a single document explaining this, when I collate the observation x in x1, x2, x3; observation y in y1,y2, ... , observation z in z1, z2, ... then I conclude the following: ...

so it is clear which statements are sourced where, and which deductions are proper to the LLM.
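
Roughly, the mechanism (as I understand the paper; the exact format below is my own sketch, not the authors') is to inject a document identifier into each pretraining example and then teach the model to emit those identifiers when asked for provenance:

    # Rough sketch of how source-aware training data could be laid out
    # (my own illustrative format; see arXiv:2404.01019 for the real recipe).
    documents = [
        {"doc_id": "author_a_2019_title", "text": "School of thought A holds that ..."},
        {"doc_id": "author_b_2021_article", "text": "Author B argues instead that ..."},
    ]

    # Stage 1: pretraining examples with the source identifier appended,
    # so the model learns to associate content with its provenance.
    pretrain_examples = [f'{d["text"]} <source>{d["doc_id"]}</source>' for d in documents]

    # Stage 2: instruction examples that ask the model to cite its sources.
    citation_examples = [
        {
            "prompt": "What is the answer to X? Cite your training sources.",
            "target": "According to school of thought A, ... <source>author_a_2019_title</source>",
        }
    ]

    for ex in pretrain_examples:
        print(ex)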

Obviously few to none of the high-profile LLM providers will do this any time soon, because when jurisdictions learn this is possible they will demand that all models be trained source-aware, so that they can remunerate the authors in their jurisdiction (and levy taxes on their income). What fraction of the income will then go to authors and what fraction to the LLM providers? If any jurisdiction would be first to enforce this, it would probably be the EU, but they don't do it yet. If models are trained in a different jurisdiction than the one levying taxes, the academic in-group citation game will be extended to LLMs: a US LLM will have an incentive to cite only US sources when multiple are available, and an EU-trained LLM will prefer to selectively cite European sources, etc.


In addition to providing training sources, it's important to identify overlaps among the fragments used in the answer. For me, overlap doesn't mean simply identical expression, but conceptually identical.

We are much more likely to find conceptual overlap in code than in language and prose because many of the problems we solve, as mathematicians say, reduce to previously solved problems, which IMO means substantially identical code.

A related question is how much change is necessary to a work of art, image, prose, or code for it to escape copyright. If we can characterize it and the LLM generates something that escapes copyright, I suggest the output should be excluded from future copyright or patent claims.


I wasn't aware of source-aware training, so thank you for the reference! It does seem a bit too good to be true; I believe in a system of tradeoffs so I feel like this must have an issue with reducing creativity. That's at first glance though, so I could be wrong.

> This is totally misleading to anyone with less familiarity with how LLMs work. They are only programs in as much as they perform inference from a fixed, stored, statistical model. It turns out that treating them theoretically in the same way as other computer programs gives a poor representation of their behaviour.

Can you share any reading on this?


Maybe it is to a child or average citizen, but I don't believe that "not understanding the consequences" is the case here on HN. This is just a difference in philosophy, the old "freedom vs. security" tradeoff that everyone comes down on a little differently. Giving up your data to a company (and therefore the government) in exchange for services is a trust exercise, and there are ways to avoid making it, but they have significant unavoidable costs. It's an easier decision when you don't fear your own government, but where you fall on the spectrum rapidly changes when your government makes you the target. Of course you can say "the government is always going to turn on you, so you should never trust them!", but you'll sound like a loon to many native citizens of a Western nation who have had little to fear for decades.

The US is just experiencing a little more of what the citizens of communist and fascist nations have experienced. Over time, that might lead to rapid societal change, or maybe it's too late.


>Over time, that might lead to rapid societal change, or maybe it's too late.

Seeing how things are going, not to mention Microsoft blocking a European politician's e-mail account on Trump's orders, it is already past too late.


This is the way I think. C is "nice" because it is constructed to satisfy so many "nice" structural properties simultaneously; that's what makes it special. This gives rise to "nice" consequences that are physically convenient across a variety of applications.

I work in applied probability, so I'm forced to use many different tools depending on the application. My colleagues and I would consider ourselves lucky if what we're doing allows for an application of some properties of C, as the maths will tend to fall out so beautifully.


Not meaning to derail an interesting conversation, but I'm curious about your description of your work as "applied probability". Can you say any more about what that involves?

Absolutely, thanks for asking!

Pure probability focuses on developing fundamental tools to work with random elements. It's applied in the sense that it usually draws upon techniques found in other traditionally pure mathematical areas, but is less applied than "applied probability", which is the development and analysis of probabilistic models, typically for real-world phenomena. It's a bit like statistics, but with more focus on the consequences of modelling assumptions rather than relying on data (although allowing for data fitting is becoming important, so I'm not sure how useful this distinction is anymore).

At the moment, using probabilistic techniques to investigate the operation of stochastic optimisers and other random elements in the training and deployment of neural networks is pretty popular, and that gets funding. But business as usual is typically looking at ecological models involving the interaction of many species, epidemiological models investigating the spread of disease, social network models, climate models, telecommunication and financial models, etc. Branching processes, Markov models, stochastic differential equations, point processes, random matrices, random graph networks; these are all the common objects used. Actually figuring out their behaviour can require all kinds of assorted techniques though, you get to pull from just about anything in mathematics to "get the job done".


In my work in academia (which I’m considering leaving), I’m very familiar with the common mathematical objects you mentioned. Where could I look for a job similar to yours? It sounds very interesting

Sorry, I'm in academia too, but my ex-colleagues who left found themselves doing nearly identical work: MFT research at hedge funds, climate modelling at our federal weather bureau, and SciML in big tech. I know of someone doing this kind of work in telecoms too, but I haven't spoken to them lately. Having said that, it's rough out there right now. A couple of people I know looking for another job right now (academia or otherwise) with this kind of training are not having much luck...

Just a heads up in case you didn't know, taking the Hessian over batches is indeed referred to as Stochastic Newton, and methods of this kind have been studied for quite some time. Inverting the Hessian is often done with CG, which tends to work pretty well. The only problem is that the Hessian is often not invertible so you need a regularizer (same as here I believe). Newton methods work at scale, but no-one with the resources to try them at scale seems to be aware of them.

It's an interesting trick though, so I'd be curious to see how it compares to CG.
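
For anyone who hasn't seen it, this is roughly what a regularised (damped) Newton step via CG looks like, using only Hessian-vector products. This is purely a toy sketch on logistic regression, not taken from the references below, and the damping term lam plays the role of the regularizer mentioned above.

    # Bare-bones damped Newton step via conjugate gradients and Hessian-vector
    # products (toy illustration of the "Stochastic Newton + CG + regularizer"
    # idea; details differ from the referenced papers).
    import numpy as np

    def loss_grad_hvp(w, X, y):
        # Logistic regression on a mini-batch: returns the gradient and a
        # function computing Hessian-vector products H @ v without forming H.
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) / len(y)
        d = p * (1 - p)
        hvp = lambda v: X.T @ (d * (X @ v)) / len(y)
        return grad, hvp

    def cg_solve(hvp, b, lam=1e-2, iters=20):
        # Solve (H + lam * I) x = b by conjugate gradients; lam is the
        # regularizer that keeps the system invertible.
        x = np.zeros_like(b)
        r = b.copy()
        p = r.copy()
        for _ in range(iters):
            Ap = hvp(p) + lam * p
            alpha = (r @ r) / (p @ Ap)
            x += alpha * p
            r_new = r - alpha * Ap
            beta = (r_new @ r_new) / (r @ r)
            p = r_new + beta * p
            r = r_new
        return x

    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 10))
    y = (X @ rng.normal(size=10) > 0).astype(float)
    w = np.zeros(10)
    for step in range(5):
        g, hvp = loss_grad_hvp(w, X, y)
        w -= cg_solve(hvp, g)          # Newton step: w <- w - (H + lam I)^{-1} g
        print(f"step {step}: grad norm = {np.linalg.norm(g):.4f}")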

[1] https://arxiv.org/abs/2204.09266

[2] https://arxiv.org/abs/1601.04737

[3] https://pytorch-minimize.readthedocs.io/en/latest/api/minimi...


For solving physics equations there are also Jacobian-free Newton-Krylov methods.


Yes, the combination of Krylov and quasi-Newton methods is very successful for physics problems (https://en.wikipedia.org/wiki/Quasi-Newton_method).

IIRC, GMRES for example is a popular Krylov subspace method.


I've used these methods lately, and BFGS worked better than CG for me.


Absolutely plausible (BFGS is awesome), but this is situation-dependent (no free lunch and all that). In the context of training neural networks, it gets even more complicated when one takes implicit regularisation coming from the optimizer into account. It's often worthwhile to try an SGD-type optimizer, BFGS, and a Newton variant to see which type works best for a particular problem.
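
With scipy, for instance, trying a couple of these families on the same problem is a one-line swap (toy Rosenbrock objective here, purely to show the mechanics; which method wins is entirely problem-dependent):

    # Quick comparison of optimizer families on the same toy problem.
    import numpy as np
    from scipy.optimize import minimize, rosen, rosen_der, rosen_hess_prod

    x0 = np.full(10, 2.0)
    for method in ["CG", "BFGS", "Newton-CG"]:
        kwargs = {"hessp": rosen_hess_prod} if method == "Newton-CG" else {}
        res = minimize(rosen, x0, jac=rosen_der, method=method, **kwargs)
        print(f"{method:10s} iters={res.nit:4d} f={res.fun:.2e}")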


I also see the shrinking sentence length celebrated among my scientific colleagues who abhor the dreaded "run-on sentence". Maybe it is because I have no formal literary or linguistic training, but I mourn this loss; older, classical novels used to have a tremendous flavor in their sentence structure by prioritizing the long form. Some English translations of Russian literature can run into the absurd (sentences at half a page long), but even then there is a beauty to it.

I see this much less in modern novels and articles. Where is the flavor from pausing. all. the. time?


Yes. A long sentence can be thought of as a room, not a hallway.

I learned in high school lit that sentence length is an artistic choice as meaningful as word selection: long sentences can reflect stream of consciousness, recursive thought, associative or digressive exploration. Short sentences can reflect anxiety, urgency, vigilance, cognitive compression.

There are a lot of factors that have led to the decay of long sentences. Scientific writing norms, ubiquitous style guides like Strunk & White, modern distraction/multitasking/short(er)-form content, and my favorite, impoverished education - and the concomitant lack of trust in the reader on the part of the author.


> concomitant

Thanks for the new word! Native speaker but I’ve never seen/heard that one before. Might be more common in a commonwealth country though tbf.


> Yes. A long sentence can be thought of as a room, not a hallway.

The irony of this post having an initial sentence consisting of one word is either a sublime statement regarding the topic at hand or an unintentional affirmation of the subsequent factors enumerated.


I recently read "The Sense of Style", which explained the actual principle behind making an understandable run-on: the trick is to let the brain mentally store away the earlier parts of the sentence and move them out of the parsing context into the logical-connections context. I'm not going to try to remake the point from scratch; if you're curious, go read the book!

(as a sidenote, trying to make a point about grammar made me very self-conscious about mine, this is why I had to read a good book!)


Thanks for the reference! I think this very neatly puts into words some impressions that I've had about these long sentences. There is certainly nuance to it, as long sentences can feel exhausting if constructed inappropriately.


The vogue for artificially-short sentences removes not just shape and color, but also logical relationships. Writers and readers are unburdened of tracing chains of cause-and-effect or the dreaded wondering "why". It's part of the larger societal craving to shrug off reality and one's place in it.


I definitely agree there is a strong element of this, especially in the last few decades.

Perhaps it is also due to a widening of the audience that can provide literary criticism back to the author. Only educated, wealthy individuals with connections could offer critiques of fiction in the Victorian era; now it is anyone with a social media account. Judging by the failure of widespread peer review in "hype" research fields, I'm not sure this is a good thing.


Russian is much more conducive to long sentences because it's highly inflected. Adjectives have to agree with the nouns, and verbs can carry the grammatical gender and person markers. This all helps to keep the context clearer, the reader doesn't have to strain their brain to connect the clauses. So long-winded descriptions fit really well into the flow of the text.

It just feels more artificial and self-indulgent in English. As if the author wants to show off how well they can string together longer sentences, and it's up to you, the reader, to keep up with the magnanimousness of the author allowing their readers to glimpse upon their greatness.

Chinese novels are on the other side of the spectrum. The sentences simply can't be very long and often don't have any connecting words between sentences. The readers have to infer.


> Chinese novels are on the other side of the spectrum. The sentences simply can't be very long and often don't have any connecting words between sentences. The readers have to infer.

There is no grammatical ceiling on sentence length in Sinitic languages: Chinese languages (all of them) can form long sentences, and they all possess a great many connecting words. Computational work on Chinese explicitly talks about «long Chinese sentences» and how to parse them[0].

However, many Chinese varieties and writing styles often rely more on parataxis[1] than English does, so relations between clauses are more often (but not always) conveyed by meaning, word order, aspect, punctuation, and discourse context, rather than by obligatory overt conjunctions. That is a tendency, not an inability.

[0] https://nlpr.ia.ac.cn/2005papers/gjhy/gh77.pdf

[1] https://hub.hku.hk/bitstream/10722/127800/1/Content.pdf


Sure. You can try to create arbitrarily long sentences with nested clauses in Chinese. Just like in English you can create arbitrarily long sentences like: "I live in a house which was built by the builders which were hired by the owner who came from England on a steamship which was built...".

But it feels unnatural. So most Chinese sentences are fairly short as a result. And it's also why commas, stops, and even spacing between words are a fairly recent invention. They are simply not needed when the text is formed of implicitly connected statements that don't need to be deeply nested.

To give an example, here's our favorite long-winded Ishmael: "Yes, here were a set of sea-dogs, many of whom without the slightest bashfulness had boarded great whales on the high seas—entire strangers to them—and duelled them dead without winking; and yet, here they sat at a social breakfast table—all of the same calling, all of kindred tastes—looking round as sheepishly at each other as though they had never been out of sight of some sheepfold among the Green Mountains." The Chinese translation is: "是的,这里坐着的是一群老水手,其中有很多人,在怒海中会毫不畏怯地登到巨鲸的背上——那可是他们一无所知的东西啊——眼都不眨地把鲸鱼斗死;然而,这时他们一起坐在公共的早餐桌上——同样的职业,同样的癖好——他们却互相羞怯地打量着对方,仿佛是绿山山从未出过羊圈的绵羊"

Or word-for-word: "Yes, here sitting [people] are the group of old sailors, among them there are many people, [who] in the middle of the raging sea can/will without fear on the whale's back climb. That whales were something they knew nothing about".

The subordinate clauses become almost stand-alone statements, and it's up to the reader to connect them.


I can see your point now, and we are in agreement that nested clauses are uncommon and at the very least sound unnatural in Sinitic languages, but it is distinct from «The sentences simply can't be very long and often don't have any connecting words between sentences».

Strictly speaking, complex nested clauses are slowly on the way out of English as well due to the analytical nature of its present form, which is what the cited article partially laments, and remain a distinctive feature of highly inflected languages (German, Scandinavian, Slavic, etc.).


When I was a kid, I learned that a run-on sentence was a sentence without adequate conjunctions or punctuation to mark and separate the clauses. E.g.: "My wife and I went to a concert we saw The Cure they were terrific." I still have a tendency to write long sentences, but sometimes when I go overboard (e.g., a whole paragraph turns out to be one long sentence) I might break it in two, for clarity. But I don't go to grug-speak extremes.

I think the preference for short sentences in today's prose is a lot like vocal fry among North American women: a deliberate attempt to sound young.


> Some English translations of Russian literature can run into the absurd (sentences at half a page long), but even then there is a beauty to it.

C. K. Scott Moncrieff and Terence Kilmartin’s translation of Marcel Proust’s «In Search of Lost Time (Remembrance of Things Past)» contains nearly half-page long sentences.

Many modern readers complain about the substantial difficulty in following such sentences, although I personally find them delightful.


Likewise. They are staggeringly beautiful when your mind is in "the zone". It's like a kind of focused meditation, with images just flooding the mind.


I started reading The Melancholy of Resistance after the author won the Nobel Prize this year. The sentences are very long; the book is really difficult to read, IMO.


Not the OP, but I have a 2015 MacBook Pro and a desktop PC both running Linux. I love Fedora, so that's on the desktop, but I followed online recommendations to put Mint on the MacBook and it seems to run very well. However, I did need to install mbpfan (https://github.com/linux-on-mac/mbpfan) to get more sane power options and this package (https://github.com/patjak/facetimehd) to get the camera working. It runs better than macOS, but you'll need to really tweak some power settings to match the efficiency of the older macOS versions.


I permanently switched from Windows to Linux about five years ago. I had the same issue as you with Dropbox, so I switched to using the Maestral client for Dropbox instead which has support for selective sync. Works like a charm for me.


+1 for Maestral, have been using it for about a year on my Linux install and it works seamlessly.


As always, it's the intent that matters.

For the sake of argument, what if Amazon decided tomorrow that they would secure exclusive contracts with all food suppliers and then hoard all the food to starve out the people they don't want to have it? Or at least drive up the price of food so it becomes completely unaffordable? I know people can simply grow their own food, so it's a bit different, but hopefully it gets the point across. It's an antitrust problem on an unprecedented level.


But OpenAI legitimately needs HBM. Amazon in this instance doesn't need the food and is doing it purely to create artificial scarcity. If OpenAI were actually not going to use the HBM, then the comparison could mean something.


That's the whole problem: it's unlikely that OpenAI will actually use all of that HBM. It seems probable that they are using it to create artificial scarcity for their competitors.


"needs" is doing a lot of heavy lifting in your argument...


"As always, it's the intent that matters."

That's certainly not a universal Legal Standard. If I'm harmed, but you didn't "intend" to harm me, does that nullify my Claim?

Hardly.


Voluntary manslaughter, involuntary manslaughter, degrees of murder, hate crimes.


Lack of intent doesn’t mean your claim is nullified. “Intent matters” means it’s taken into account when deciding what damages were wrought.


IANAL, but yes, I believe it can nullify the claim. Bumping into someone on the sidewalk is only battery if the prosecution demonstrates intent to harm.


> I know people can simply grow their own food

Small thing, but this is not simple or realistic at all. How does someone in an apartment grow enough food for their family?


Yeah it would definitely still be a problem, but history shows that life finds a way. Even if everyone has to eat nothing but planted potatoes from any patch of grass that one can lay eyes on.


What history has taught us is that life finds a way by staying together and each person having their function within society, only some of which is growing or producing food.

