The README shows how to run it assuming you can run a Python program on the device, so I expect it works with laptops and PCs. But there's a note at the end of the page saying that the iOS app has fallen behind the Python version, so it's not clear to me how to get this running on your iPhone or other such devices.
The "device" in question must be Apple Silicon, or at least an ARM machine, because the `mlx` package is a hard dependency (I don't have any Apple Silicon MacBooks or ARM machines to run this). I tried tweaking this before realizing that calls to this library are littered all over the repo. I don't really understand the AI ecosystem very well, but it seems the use of the `mlx` library should be supplanted by some other library depending on the platform. Until then, and until the iOS code is actually released somewhere, "everyday devices" is limited to premium devices that almost no one has more than one of. I'm looking forward to running this on other machine platforms and squeezing out what I can from old hardware lying around. Otherwise I doubt the tagline of the project.
Edit: to add on, the only evidence that this runs anywhere but Apple Silicon is the maintainer's Twitter, where they show it running on two MacBook Pros as well as other devices. I'm not sure how many of those devices are not ARM.
I'm not throwing shade at the concept the author is presenting, but I'd appreciate it if he could slow down on functional commits (he is writing them right now as I type) and truthfully update the documentation to state which targets are actually able to run this.
Also, possibly a hot take, but many people have been on caffeine since their early teens. I am convinced the average person's anxiety is 30 or 40% higher when they're on several caffeinated beverages a day.
If it really was about "safety," then why wouldn't Ilya have made some statement about opening the details of their model to at least some independent researchers under tight controls? This is what makes it look like a simple power grab: the board has said absolutely nothing about what actions they would take to move toward a safer model of development.
But if you really cared about that, why would you be so opaque about everything? Usually people with strong conviction try to convince other people of that conviction. For a nonprofit that is supposedly acting in the interests of all mankind, they aren't actually telling us shit. Transparency is pretty much the first thing anyone does who actually cares about ethics and social responsibility.
Mr. Yudkowsky is a lot like Richard Stallman. He’s a historically vital but now-controversial figure whom a lot of AI Safety people tend to distance themselves from nowadays, because he has a tendency to exaggerate for rhetorical effect. This means that he ends up “preaching to the choir” while pushing away or offending people in the general public who might be open to learning about AI x-risk scenarios but haven’t made up their mind yet.
But we in this field owe him a huge debt. I’d sincerely like to publicly thank Mr. Yudkowsky and say that even if he has fallen out of favor for being too extreme in his views and statements, Mr. Yudkowsky was one of the 3 or 4 people most central to creating the field of AI safety, and without him, OpenAI and Anthropic would most certainly not exist.
I don’t agree with him that opacity is safer, but he’s a brilliant guy, and I personally only discovered the field of AI safety through his writings, in which I read about and agreed with the many ways he had identified by which AGI could cause extinction. I, along with a college friend of mine, decided to heed his call for people to start doing something to avert that potential outcome.
He’s not always right (a more moderate and accurate figure is someone like Prof. Stuart Russell) but our whole field owes him our gratitude.
Great idea and demo but tough to see many municipalities refitting their street lighting to keep astronomers happy. Might be easier to persuade them to just turn streetlights off completely for a few hours a night, at least then there’s some cost saving.
Maybe in the future when we all have smart glasses with night vision mode and self driving cars we’ll look back at citywide streetlights as a quaint and inefficient solution
A common mistake technologists make is to conflate the technologically illiterate with the entirety of the population. I can't overstate how technologically stratified we are, and I believe this trend will only worsen. As technology advances, the literate will move forward while the majority stay relatively still. We will only see further stratification. We must assimilate this truth into our strategy.
I think for many people, “bug” can imply something that crawled in from outside and messed up the system, while “defect” implies there was an avoidable deficiency in the specification or the implementation. The engineering mindset might prefer “defect” since it implies we can fix the process and improve the quality of future products, whereas “bug” implies these things are just a fact of life and you can’t expect to eliminate them.
I’ve started using “defect” instead of bug for these reasons. The “bug” euphemism implies the software was once correct, but then problems crawled in from somewhere external to infest the otherwise good software.
That’s really not how 99% of software problems happen. They are defects because the software was defective from the moment it was conceptualized or typed in.
“Bug” tries to soften/downplay the developer’s role in producing a defective program.
SRT is a text file with an awk-able format, so you can just write a script to update all the timestamps. There are also a bunch of online services that do it. VLC has it built in (Track Synchronization: you can set delays on video, audio, and subtitle tracks individually).
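To illustrate how simple the format makes this, here's a minimal Python sketch that shifts every SRT timestamp by a fixed offset. The function names (`shift_srt`, `_shift_match`) are my own, not from any library; SRT timestamps have the form `HH:MM:SS,mmm`, so a single regex substitution covers the whole file:

```python
import re

# Matches SRT timestamps like "00:01:23,456" (hours:minutes:seconds,milliseconds).
TS = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def _shift_match(match, offset_ms):
    """Rebuild one matched timestamp, shifted by offset_ms milliseconds."""
    h, m, s, ms = (int(g) for g in match.groups())
    total = h * 3_600_000 + m * 60_000 + s * 1_000 + ms + offset_ms
    total = max(total, 0)  # clamp so shifted timestamps never go negative
    h, rem = divmod(total, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def shift_srt(text, offset_ms):
    """Return the SRT text with every timestamp shifted by offset_ms."""
    return TS.sub(lambda m: _shift_match(m, offset_ms), text)
```

For example, `shift_srt(open("subs.srt").read(), 1500)` would delay all subtitles by 1.5 seconds; a negative offset makes them appear earlier.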
curious what exactly you're referring to by market regime detection / classification, anything I've seen on this (several models from bulge sellside) has been backward looking and fairly useless.
My understanding of this is that you want to classify what all of the other traders are doing, basically. That is, intro investment discussions build on the intrinsics of whatever you are trading; as you trade more, you also want to trade on the behavior of everyone else who is trading.
Sadly, all the market discussions I've seen are "backward looking" and fairly useless for most folks. High-frequency trading is basically cheating by making much smaller forecasts that can be acted on quickly and profitably. But if you can't react fast enough, the information is effectively useless.
It's like knowing tomorrow's rain forecast when you are trying to plant for the season: it's of little help, even if it's far more accurate than the forecast for the whole year.
It’s a bit hard not to be cynical when he himself describes how he developed his channel by optimising and A/B testing everything “like a psychopath,” including the number of views per dollar given away. He says himself that $100k seems to be the inflection point: you don’t get that many more views by going from $100k to $500k or $1m. Not once did he appear to have thought about the impact on the lives of the people receiving this random lump sum, which plenty of research on lottery winners shows is often very disruptive and negative for their overall long-term wellbeing. As for his “donating” for blindness surgery etc., there are plenty of actual charities staffed by volunteers working hard day after day; he could easily donate quietly to any of those, and he chooses not to.
I’d buy a Pavlok for sure if I could easily configure it to buzz me when I exceed a certain number of phone pickups, or just phone screen time, in a given period. I see some people have made this work with Shortcuts and Zapier, but I’d want it to work out of the box.