Hacker News | karagenit's comments

True, but the article also says:

> That's it. No rate limiting. No account lockout.

To me, if he confirmed that there’s no rate limiting on the auth API, this implies a scripted approach checking at least tens (if not more) of accounts in rapid succession.


Granted. I guess, unless it's applied very aggressively, assessing the existence of rate limiting may require some sort of automation (and probably some heuristics: how many data points do you actually need? Do you have to retrieve any data at all while looking for a single signal? The article doesn't say.) The same goes for lockout.

On the other hand, as mentioned already, all that's really required is looking at a return code, not at any data. Is accessing an API endpoint the same as retrieving data? Is there proof, or evidence of intent, of the latter? I guess much remains to be defined, especially if the point is not so much protecting reputation as protecting data and ensuring trust, and the intent is to protect and secure those in the first place.


I would highly recommend giving this excellent LessWrong post a read: https://www.lesswrong.com/posts/rarcxjGp47dcHftCP/your-llm-a...

It isn't a perfect fit, since the article talks a lot about the scientific method which doesn't apply super well to philosophy+math, but I think there are some strong parallels here.


Yes, there are parallels. Thanks a lot for sharing this! Especially on emergence and on explaining quantum effects. The better news for me:

- I did follow through their steps 1 and 2 from the start, and I noticed that the LLMs hallucinate, confirming some apparent nonsense, ChatGPT specifically. So I cross-checked Grok, Sonnet, Gemini, and DeepSeek, with zero context on me.
- I did the derivation of the logic independently of LLMs; the only thing I asked for was "cosmetic", and even that I checked with cross-references.

I'd be glad to see the other "geniuses" and their work, to understand what exactly I'm dealing with here. Because the "quantum-consciousness" crowd was there forever, long before LLMs; they're just likely amplified by LLMs telling them how unique they are.


The worst part here is that if this LLM-science goes mainstream right now, then people will dismiss anything in that direction regardless of quality.


Looks cool! Does it support prompt caching? And do you have any data showing how your latency compares to going directly to the model providers? I’m thinking about trying it out but those are my two big reservations.



What if the number of game critics just hasn’t increased, and since they can only play/review a fixed number of games each year due to time constraints, the number that they acclaim each year hasn’t grown? Not saying this is necessarily the case, just suggesting the possibility.


Yeah, I'd probably go that route if I wanted to scrape more than the handful of pages I needed for this project. I wonder if it would work on Zillow or not. Even my simple workflow of "click the next button, save the request in devtools, repeat" was suspicious enough to trigger a captcha-type "are you a bot?" challenge. Maybe it was just too many requests quickly like you mentioned, or maybe they're doing something more advanced like mouse movement tracking.


If you use something like playwright, you can inject events arbitrarily during the scraping session to simulate things a human might do.

https://playwright.dev/docs/api/class-mouse#mouse-move
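A rough sketch of what "injecting human-like events" might look like. The `human_mouse_path` helper is made up for illustration (it just interpolates between two points with random jitter), and whether jittered paths actually defeat mouse-movement tracking is an open question; the Playwright replay at the bottom is commented out since it needs a live browser session.

```python
import random

def human_mouse_path(start, end, steps=25, jitter=3):
    """Interpolate a straight line between two points, adding small
    random jitter to each waypoint so the path looks less robotic."""
    (x0, y0), (x1, y1) = start, end
    path = []
    for i in range(steps + 1):
        t = i / steps
        x = x0 + (x1 - x0) * t + random.uniform(-jitter, jitter)
        y = y0 + (y1 - y0) * t + random.uniform(-jitter, jitter)
        path.append((x, y))
    path[-1] = (float(x1), float(y1))  # land exactly on the target
    return path

# With Playwright installed, the path could be replayed during a
# scraping session (hypothetical `page` object, sketch only):
#   for x, y in human_mouse_path((100, 100), (640, 400)):
#       page.mouse.move(x, y)
#   page.mouse.click(640, 400)
```

Adding random delays between the `mouse.move` calls would presumably help too, since perfectly uniform timing is its own tell.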


Do you have a citation for this? The only relevant study I saw on the LISTEN website was a preprint of a study showing data on self-reported post-vaccine symptoms, but didn’t really talk about causes or gene edits (Krumholz et al. 2023).


It did discuss causes at a surface level: continuous spike protein production, T-cell exhaustion, and Epstein-Barr reactivation. And they're investigating post-vaccine syndrome, so the root cause there would be clear, as the study authors discussed in the LISTEN press release.

It's easy to find papers discussing the problem; just search Google Scholar. An example:

https://bpspubs.onlinelibrary.wiley.com/doi/full/10.1002/prp...

Integration was proven in 2024, unfortunately :(

https://www.medrxiv.org/content/10.1101/2024.03.24.24304286v...

"Of the S1 positive post-vaccination patients, we demonstrated by liquid chromatography/ mass spectrometry that these CD16+ cells from post-vaccination patients from all 4 vaccine manufacturers contained S1, S1 mutant and S2 peptide sequences"

They can tell the difference between vaccine spike and virus spike, as the vaccine spike was modified for stability. The exact pathway is speculated to be DNA contamination from manufacturing-process defects. Sequencing of vaccine vials has shown far higher levels of DNA contamination than is considered safe, and the lipids would bring DNA into cells just as well as they do mRNA, making the "safe" levels much lower still.

https://osf.io/b9t7m_v1/download/


> A significant limitation of this study was the lack of approved testing to 100% rule out previous infection and it is possible the persistent S1 protein detected in the CD16+ monocytes of some of the patients in this study is from SARS-CoV-2 and not from the vaccine. There also exists the possibility that some of these new-onset symptoms post-COVID vaccination are unrelated to the vaccines. The data from this study also cannot make any inferences on epidemiology and prevalence for persistent post-vaccine symptoms. Thus, further studies and research need to be done to understand the risk factors, likelihood and prevalence of these symptoms.

https://www.medrxiv.org/content/10.1101/2024.03.24.24304286v...

But yes, further study has to be done.


Yep, been waiting for the same thing. Maybe at some point it’ll be possible to use a large multilingual model to translate the dataset into one programming language, then train a new smaller model on just that language?


Isn't Microsoft Phi specifically trained for Python? I recall that Phi-1 was advertised as a Python coding helper.

It's a small model trained only on quality sources (i.e., textbooks).


Hah, at least it's two or three letters; my personal site is at a .software domain and most people get really confused by an eight-letter TLD.


In terms of total petroleum products (including crude, gasoline, and diesel), the US has become a net exporter in the last few years.

> In 2020, the United States became a net exporter of petroleum for the first time since at least 1949. In 2022, total petroleum exports were about 9.52 million barrels per day (b/d) and total petroleum imports were about 8.33 million b/d, making the United States an annual net total petroleum exporter for the third year in a row.

https://www.eia.gov/energyexplained/oil-and-petroleum-produc...
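The net balance follows directly from the two EIA figures quoted above; a trivial check:

```python
# EIA figures for 2022, in barrels per day (b/d)
exports_bpd = 9.52e6  # total petroleum exports
imports_bpd = 8.33e6  # total petroleum imports

# Positive net exports means the US was a net exporter that year.
net_exports_bpd = exports_bpd - imports_bpd
print(f"Net exports: {net_exports_bpd / 1e6:.2f} million b/d")
```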

