By that logic all technology is "natural" and there is no distinction between the inventions of man and the creations of nature/god. Such logic sidelines self reflection, leading to horrible places.
No, it just means that self reflection must be based on something other than an arbitrary and meaningless distinction between things that existed before humanity and things that existed after humanity.
If we see ourselves as part of nature, but with a unique ability to impact more of nature than most other forces within it, I think we would have a greater imperative to rectify the mistakes we have made, vs. seeing issues like climate change as mostly impacting “others.”
It's a philosophical work about Leavers and Takers, about the nature of both nature and humans, and about their role in it. It's told from the perspective of a man taking instruction from a talking ape.
It is something I've read many times, and I get something new out of it every time. Given the context of this thread, I'm not sure I could recommend a more salient book.
All evidence and theories point to climate change being caused by humans. This interpretation does nothing to change that conclusion. My view is that it is an entirely useless distinction to make. If climate change were instead caused by natural sources of carbon dioxide, it would be no less a crisis, and would require no less of an intervention.
Only if you derive your morals in a nature good/synthetic bad frame, which is terribly naive. Nature is filled with awful things and awful acts.
Self-reflection is the ultimate unnatural act, one which is (almost if not) completely unique to our species, and it's not too hard to argue it isn't entirely ubiquitous within us.
When I was 19 I messed my back up very badly on the job. The company tried very hard not to take responsibility for it, going so far as to fight it in court with dueling doctors.
The fact that I refused opioids for my pain was used against me: "If it's so bad, why aren't you taking the pain medications?"
My lawyer told me that if I had taken them I likely would have gotten 50% more in the settlement. I don't regret it, though. Addiction is a major problem in my family.
Arguably, as Fascism was an actual political movement in Italy, Japan could no more have been considered Fascists than they could have been considered Nazis. They were Imperialists.
However, both 'fascist' and 'nazi' have commonly understood meanings beyond an affiliation with their respective political parties, so while datawarrior was technically correct (making them the best kind of correct), they were also being needlessly pedantic.
That's largely because Rekall is a fork of Volatility, not because Python was better suited to the task for both projects. Python is used heavily in security/forensics work so it was just a natural choice for the developers.
I'm not sure what your argument is otherwise. If there was a tool written in C or Rust to benchmark against then we could better argue which is better for the task. I don't think anyone said Python was "failing."
My point is that both of them are big, successful projects written in Python, in contrast to hasenj's comment that anything that isn't "a small program being easy and quick (and pleasant) to write... fails in some way."
I'm not so sure they will sell a ton. Most of the talk I've heard has been "meh" regarding it. I'm personally holding off until I see more hands-on reviews and feel comfortable that I won't be a beta tester for Apple.
I'm really excited for the screen (not the notch). True blacks are where Apple has really suffered. I love reading at night on my iPad, but it's a much worse experience than on my Samsung tablet with an OLED screen. Apple's move to OLED is really the biggest thing here that people aren't paying much attention to.
That’s thought to be one of the reasons why. There have been 3 years of phones that are effectively identical-looking, so if you want to show off that you have the ‘new shiny’, you really can’t, because without close inspection no one can tell.
The X obviously fixes that. There are rumors that a big percentage of the supply will be going to China, more than an even distribution would give it.
Ya, the X was definitely designed to solve that problem. I don't think it will be nearly as successful in the west as it is in the east (just like larger phones in general).
AMD's open-source drivers (amdgpu for new GPUs, radeon for old ones) are way better than NVIDIA's open-source driver (nouveau), and they are competitive with AMD's own closed-source drivers (amdgpu-pro for new GPUs, catalyst for old ones), while still losing to NVIDIA's closed drivers [1].
If you want good performance using open-source drivers, AMD is the way to go with Linux [2].
[2]: Open-source drivers are important not only because of the FSF and all, but because the open-source drivers follow advancements in the Linux graphics stack much more closely. Things like KMS and Wayland work much better with open-source drivers than with the NVIDIA proprietary drivers. Even something as simple as Xrandr is randomly broken in the NVIDIA drivers.
Open-source-only is an artificial comparison to make; nvidia's open-source drivers get very little development attention precisely because their official drivers on Linux are so good that most people have no reason to use the open-source ones. What matters to most users is performance, stability, and features under the best available driver - and in those terms nvidia still wins on Linux. (At least in my experience - e.g. Xrandr was much more reliable for nvidia than for AMD.)
It depends: the open-source drivers bring a better desktop experience in general, even compared against the NVIDIA binary drivers, since you don't need the extra performance in 90% of cases. KMS only became available in the NVIDIA binary drivers in recent releases; it's still somewhat buggy, and it's more difficult to use (installing a binary driver and including the module in the initramfs vs. completely plug-and-play support with the open-source drivers; distro maintainers can automate that work, but it's still not the same thing). Wayland runs so slowly on the NVIDIA binary drivers that it's simply unusable. Another example is GPU switching: Linux has had very good dynamic GPU switching via PRIME for some time, but the NVIDIA proprietary drivers still don't support it (maybe they can't?), so NVIDIA ended up reinventing the wheel, and their implementation is really bad (vsync issues and no dynamic switching - you have to start your whole X11 session on either the iGPU or the dGPU, so it's kind of useless).
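To make the KMS setup difference concrete, here is a hedged sketch of what the binary-driver path typically involves (exact file paths and initramfs commands vary by distro; this assumes a Debian/Ubuntu-style system with the proprietary driver already installed):

```shell
# With the NVIDIA binary driver you have to opt in to kernel modesetting
# via the nvidia-drm module and then rebuild the initramfs yourself:
echo 'options nvidia-drm modeset=1' | sudo tee /etc/modprobe.d/nvidia-drm.conf
sudo update-initramfs -u   # Debian/Ubuntu; other distros use dracut/mkinitcpio

# With the open-source amdgpu/radeon/nouveau drivers, KMS is enabled
# by default and none of the above is needed.
```

This is the "not plug-and-play" gap the comment describes: the steps are simple, but the user (or the distro maintainer) has to do them, and getting them wrong can leave you at a black screen.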
The NVIDIA binary drivers really only win on performance. Even the features in the NVIDIA drivers tend to be buggier. For example, I use compton as my compositor; it only needs a very old OpenGL version (1.1 or 2.0 capability), yet the NVIDIA drivers still mess up, and I need to activate workarounds in compton to get a usable desktop. Mesa (used by the open-source drivers) has a much better OpenGL implementation, and I can basically get a glitch-free desktop without workarounds. I also remember getting random glitches in Chrome and GNOME Shell, to cite more examples.
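For reference, the kind of compton workarounds meant here look roughly like this (a hypothetical invocation; which flags are actually needed depends on the driver and compton version):

```shell
# Workaround flags sometimes needed on the NVIDIA binary driver:
#   --glx-no-stencil      skip the stencil-buffer path, which glitches
#                         on some driver/GPU combinations
#   --xrender-sync-fence  use X Sync fences to avoid tearing/corruption
compton --backend glx --glx-no-stencil --xrender-sync-fence
```

On Mesa the plain `compton --backend glx` invocation typically works without any of these flags, which is the contrast being drawn.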
So it is not an artificial comparison. Want performance and CUDA? Yeah, go with NVIDIA. Want just a stable, modern desktop (Wayland and glitch-free compositing)? The open-source drivers are the way to go.
The AMD open-source driver is just about on par with their closed-source Windows driver. One of the benefits of open source is also being realized now: game companies like Feral Interactive, who develop many Linux ports, have been improving the driver's performance for their games.
I just wanted a comparison between the NVIDIA binary drivers and the AMD open-source drivers. I know the comparison is old; I even added a footnote explaining this and noting that the current performance of the open-source drivers is even better.
Nvidia can play the game however they want. But they were not pushed into that situation. Their choice, they get to own it. I think it’s a fair comparison.
Nvidia isn't involved in nouveau; it's developed entirely by the community. Since it can't use the hardware fully, there's no point comparing it to radeonsi performance-wise.
I don't think anyone would say this with any negativity towards the folks who are developing the nouveau driver. Given the circumstances that they are working under (no useful help from the vendor), they have done heroic work.
But I think you are missing the point here and that is: if you want to buy a GPU now for a Linux machine and you don't want to use a big proprietary blob, AMD is by far the best choice, because the amdgpu driver outperforms the nouveau driver.
> if you want to buy a GPU now for a Linux machine and you don't want to use a big proprietary blob, AMD is by far the best choice, because the amdgpu driver outperforms the nouveau driver.
No doubt - I said so explicitly elsewhere in this thread, i.e. that on Linux Nvidia will lose its current dominance.
Sure, it makes sense, since for most people the graphics driver is a means to an end. Few users are fans of a graphics driver based on how commendable its progress is relative to the circumstances, though that is of course a good and valid POV too.
In The Witcher 3 under Wine, AMD with Mesa beats the Nvidia blob by a huge margin. Nvidia usually tries to optimize individual titles by cheating, substituting shaders, and so on. In conformant OpenGL tests, AMD/Mesa is already close, if not better. The Mesa developers did a great job in the past year, and they are still working on improving performance.
I think he's basing it on a price-performance ratio. Phoronix tests show that NVIDIA's closed OpenGL/Vulkan drivers are still ahead of AMD/Mesa's open graphics stack. -- On the bright side the "relative" transparency of AMD with their graphics development does make maintaining the FOSS drivers easier as a kernel maintainer (although I rarely send patches to the graphics subsystem I have friends who do).
While we probably won’t make big improvements in aging, once we address a few major diseases, and devise better ways to monitor the body, it’ll be a lot more common for people to live to 100.
Hopefully, Blue Zone research will give us some insight.
> While we probably won’t make big improvements in aging
Why? Ageing is molecular deterioration. In fact, all disease can be described as molecules not being in the place they need to be, or not in the structure/configuration they need to be in.
Right now we have very few tools able to do anything more than assist the body's self-healing capabilities (including e.g. vaccines "training" the immune system), mount some targeted biochemical attacks, or provide missing molecules. These are powerful, but it's like trying to fix an intricate detail while wearing boxing gloves. Actually, it's that same metaphor but with boxing gloves the size of a city block.
In the coming decades we will develop molecularly precise tools able to manipulate individual molecules in situ. I expect that at that point it will become quaint to talk of "ageing" rather than of an amalgam of age-correlated diseases that are each addressable by molecularly precise treatments, available for a price. A complete cocktail of such treatments, which might take a century to develop, could take any elderly person and give them a biological age of 25 with perfect health.
There is no biological reason why this is not possible. And every year we get closer to achieving molecular-scale tools.
> it’ll be a lot more common for people to live to 100.
Would you really want to though? I'm relatively young, but after about 70 I don't think I'd really want to live anymore.
This is mainly due to genetics, as the body and brain start to give out around that time (in my family).
I'm sure it's different for others, but for many, just the fact that their bones are brittle enough to prevent basic things would be too much of a constraint to handle.
There's also the mental aspect of things. At some point I just want to "move on." As you lose loved ones (assuming we haven't cured all of death) and experience great sadness in life, you may not want to live in the torment of all your memories. Or maybe the positive memories outweigh the bad ¯\_(ツ)_/¯
>> Would you really want to though? I'm relatively young, but after about 70 I don't think I'd really want to live anymore.
All the people I know who are in their 70s say that it all went by really fast. And they don't look like they are tired of living. This reminds me of the phrase "Youth is wasted on the young."
I guess I was assuming that it’s possible to have a good life in my 90s. Your sample size of two people seems a bit anecdotal. Know anyone famous in their 90s?
Jimmy Carter
Henry Kissinger
Dick Van Dyke
Stan Lee
Chuck Yeager - broke the sound barrier
Anyway, that’s your choice. I suppose if we do learn how to treat aging to delay some of the symptoms, you can always change your mind.
It's more than just those two; they were for reference. At that age, most people are either dead or dying. In addition, my 94-year-old neighbour is housebound, has leukaemia, and just wishes someone would kill him. My grandmother is 93 and can't even recognise members of her own family any more; she just wants to go to sleep forever. That's more the state of things.
The above are minor and major celebrities. They have a higher standard of care.
If we can increase quality of life significantly, I'm all for it but until then, shoot me when I get to 80.
Counter-anecdote: I just had the pleasure of meeting a 90-year-old man who was still very mobile, very compos mentis (we had a nice chat that I could have had with someone of any age), and who was surrounded by a loving, social family. Who would complain about that?
No one would complain about that, but few people are that fortunate. In my academic field, I know of some elderly scholars who managed to reach their 80s and 90s. Only two are still in a condition to do scholarship. The rest had, at best, to give it up because they could feel their minds deteriorating, or, at worst, they just completely vanished from the scene and it turned out they were suffering from dementia and their families had to take care of them.
I’d love to live to an advanced age if I could still do what I do now, but I agree with the OP that old age in a deteriorated state would suck.