photon-torpedo's comments | Hacker News

Minio recently started removing features from the community version. https://news.ycombinator.com/item?id=44136108


How awful. It seems to be a pattern nowadays?

Some former colleagues still using gitlab ce tell me they also removed features from their self-hosted version, particularly from their runners.


Yeah, there's a trend of people who don't actually believe in software freedoms releasing a subset of their proprietary software under free software licenses and pretending.

It's really just a bait and switch to try to get free community engagement around a commercial product. It's fundamentally dishonest. I call it "open source cosplay". They're not real open source projects (in the sense that if you write a feature under a free software license that competes with their paid proprietary software, there's zero percent chance it will be upstreamed, even if all of the users of the project want it) so they shouldn't get the credit for being such just because they slapped a free software license on a fraction of their proprietary code.

Invariably they also want contributors to sign a rights-assignment CLA so they can reuse free software contributions (that they didn't pay for) in their for-profit proprietary project. Never sign a CLA that assigns rights.

Some open source projects flat-out illegally "relicensed" open source contributions under a proprietary license when they wanted to start selling software (CapRover). Some just start removing features or refuse to integrate features (Minio, Mattermost, etc.). Many (such as Minio) use nonfree fake open source licenses like the AGPL[1].

It's all a scam by people who don't care about software freedoms. If you believe in software freedoms, you never release any software that isn't free software.

[1]: https://sneak.berlin/20250720/the-agpl-is-nonfree/


> the anti-privacy misfeature the AGPL requires that the software furnish its own source code to users over the network

This statement in the linked article is incorrect. It overlooks the "through some standard or customary means of facilitating copying of software" clause in section 13.

The software does not have to serve the source code _itself_; it only has to offer users access to it. A link to the GitHub repository on the about page, for example, would fulfill the requirement.


Good question. Something to do with interactive mode?

  $ cat l.sh
  alias l=ls
  l
  $ sh l.sh
  file1  file2  l.sh
  $ bash l.sh
  l.sh: line 2: l: command not found
  $ bash -i l.sh
  file1  file2  l.sh
Edit: Ah yes, the man page says so.

> Aliases are not expanded when the shell is not interactive, unless the expand_aliases shell option is set using shopt
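For scripts that do want aliases, the opt-in looks like this (a minimal sketch with a made-up `greet` alias):

```shell
#!/usr/bin/env bash
# Non-interactive shells skip alias expansion unless this option is set:
shopt -s expand_aliases
alias greet='echo hello'
# This works because bash parses the script line by line, so the alias is
# already defined by the time this line is read:
greet
```

Note the alias still has to be defined on an earlier line (or in a sourced file); defining and using it inside the same compound command won't work, since the whole command is parsed before the alias exists.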


Thanks, I had no idea. I guess I've never used aliases in scripts, but I would've assumed that they'd just work the same as in interactive mode. Good to know.


Why use an alias in a script? Or in general? Functions work everywhere.


Functions don't work everywhere. Bash functions only work in the current shell context unless exported via an `export -f myfun' statement in between the function declaration and downstream sub-shell usage.

Working example:

  pzoppin() {
      printf 'echo is for torrent scene n00bs from %s\n' "$*"
      # pass the args to printf directly; a here-string would only feed the
      # (unread) stdin of printf and print nothing
      trap "printf '%s\n' \"$*\"" RETURN EXIT SIGINT SIGTERM
  }
  export -f pzoppin

  echo -e 'irc\0mamas donuts\0starseeds' \
      | xargs -0 -n 1 -I {} /usr/bin/env bash -c '
  echo hi
  pzoppin "$*"
  echo byee
  ' _ {}
The above will fail miserably without the magic incantation:

  `export -f pzoppin'
Why'd they design an otherwise perfectly usable, mapless language without default c-style global functions? :)


I suppose aliases predated functions. Can't find a reference to support that, though. Just a possible reason.

BTW aliases come from csh, and there they support arguments, which makes them similar to functions.
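In bash the difference shows: an alias is pure textual prefix substitution, so arguments can only trail the expansion, while a function (like a csh alias using `!*`) can put an argument anywhere. A contrived sketch:

```shell
#!/usr/bin/env bash
shopt -s expand_aliases
# Alias: plain text substitution, arguments always land at the end.
alias suffixed='echo tail:'
suffixed report           # expands to `echo tail: report`
# Function: the argument can land in the middle of the command.
wrap() { echo "<<$1>>"; }
wrap report
```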


Aliases are like C macros

    $ alias foo='seq 3 | '
    $ foo cat
    1
    2
    3
Functions are functions


So then use $variables for evil syntactic tricks? That works everywhere. Functions where hygiene counts? Aliases never?


>use $variables for evil syntactic tricks

Can't do that without eval, which is another can of worms. Aliases are fine
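Right: a variable holding a pipeline fragment only becomes syntax again through eval (sketch):

```shell
#!/usr/bin/env bash
snippet='seq 3 |'
# $snippet cat            # fails: after expansion, "|" is just an argument
eval "$snippet cat"       # re-parses the text, so the pipe takes effect
```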


Indeed, here are two lines atop my "util.sh" to regularize some strange non-conformant behaviours:

  [ "${BASH:-}" ] && shopt -s expand_aliases
  [ "${ZSH_VERSION:-}" ] && setopt SH_WORD_SPLIT


Report from the Taiwan agency:

https://www.cwa.gov.tw/V8/E/E/EQ/EQ113019-0403-075809.html

And a list of all recent earthquakes, showing the aftershocks:

https://www.cwa.gov.tw/V8/E/E/index.html


Whenever I look at those reports, I keep wondering how many automated systems have to be in place to generate all that. The waveform records, the intensity maps, etc. should all be auto-generated, and likely verified by humans afterwards? I'd be super curious about the IT setup and deployment behind such things.

This also likely feeds into the automatic warning systems (sent to mobile phones to warn of an incoming earthquake, tsunami, or something else), which is likely going to be discussed afterwards, as loads of people didn't get a warning. (As opposed to the recent Chinese satellite launch, where _everyone_ got the overly scary rocket alert.)

Edit: now they are saying their calculation has to project a minimum "peak ground acceleration" (PGA) of 25 (what units?) for an alert, and a lot of places didn't hit that, in part due to underestimating the intensity at the epicentre. I guess they will be revising this criterion, as it was overly conservative on the "less noise" side, while people are likely more forgiving in the other direction (getting an alert when they didn't need one).



> is for them to have broken the encryption algorithms behind TLS/HTTPS.

Or if they have access to, or can subpoena, a MitMaaS for HTTPS. Like Cloudflare.


True. Given how widespread Cloudflare has become, I would be surprised if they haven't got a tap there already.


This is nice, but unfortunately

> Rosetta doesn’t support the bootstrapping or installation of Intel Linux distributions on Mac computers with Apple silicon using the Virtualization framework.

So I guess it still won't be possible to run RHEL (or derivative) VMs on Apple Silicon. (Their aarch64 images don't work, something obscure with page size IIRC. Odd because Debian/Ubuntu's aarch64 work fine.)

Edit: Looks like RHEL9 has changed page size so it can run as a VM on Apple Silicon. Unfortunately my common use case for VMs is to prototype things for production, and that's all on RHEL7/8. :(


You install ARM Linux in a VM, then copy some binaries into that ARM Linux which magically allow you to run x86 binaries inside it. Those binaries are translated to ARM machine code by Rosetta's algorithms and run inside that ARM Linux.

A fast emulated x86 VM is probably too hard even for Rosetta, so they decided not to bother with it. I agree that it would make things much easier if one could just run x86 Linux at proper speed.


> do they consider the memory is faulty when there are correctable errors?

It depends on the frequency. Occasional CEs are somewhat expected (on a large enough scale) and one can live with them, after all that's what ECC is for. When CEs start happening frequently on one machine, most likely a DIMM is going bad and will worsen over time, so one should replace it.
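For the monitoring side, a rough sketch of tallying correctable-error counts the way one might read them from the Linux EDAC sysfs tree (on real hardware the counters live under `/sys/devices/system/edac/mc/mc*/ce_count`; a throwaway directory stands in here so the loop is demonstrable anywhere):

```shell
#!/usr/bin/env bash
# Build a stand-in for the EDAC sysfs layout (hypothetical values):
root=$(mktemp -d)
mkdir -p "$root/mc0" "$root/mc1"
echo 3 > "$root/mc0/ce_count"
echo 0 > "$root/mc1/ce_count"

# Sum correctable errors across memory controllers:
total=0
for f in "$root"/mc*/ce_count; do
    total=$(( total + $(cat "$f") ))
done
echo "total CEs: $total"   # an alerting threshold would hang off this
```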


thanks for the info. this is exactly what I am doing. it does provide extra peace in mind knowing that my odds of having silent corruption is further reduced by doing such monitoring.


> The question was: which DIMM should we replace?

On server-class machines, ECC errors often also show up in the system event log, so one can run "ipmitool sel list" and inspect the most recent messages, and they often point to the failing DIMM in a nomenclature that corresponds to how the slots are labelled on the mainboard or in its manual.

In this case, they are using a "gaming" mainboard, so this strategy probably doesn't work (no nice system event log).


System firmware can (but does not always) include a mapping between DIMM identifiers as exported by the Linux EDAC subsystem and DIMM sockets on the mainboard. In the absence of such a mapping, you can provide one yourself via `edac-ctl --register-labels`. Of course, someone will have to have figured out what that mapping actually is first (but one can do that oneself, given a little patience) :)


Most modern systems (since 2014-2016?) support WHEA, which allows the OS to get notifications and write them to the system log.

Not sure if this would be seen in dmesg.


The author mentions that they also wrote a backup program (bup), and for backup programs it would be very convenient if directory mtimes got updated like this (recursively up to the root), as it would allow skipping a scan of the entire filesystem for changed files (which in my experience is where backup programs spend most of their time).
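A hypothetical sketch of the pruning a backup scanner could do if directory mtimes really were bumped recursively: a single timestamp check would rule out a whole untouched subtree.

```shell
#!/usr/bin/env bash
# scan DIR STAMP: print files changed since the STAMP file was written,
# assuming (hypothetically) that any change deep in a subtree also bumps
# the mtime of every ancestor directory.
scan() {
    local dir=$1 stamp=$2
    # Under the wished-for semantics, this one test prunes the subtree.
    if [ "$dir" -ot "$stamp" ]; then return; fi
    local entry
    for entry in "$dir"/*; do
        if [ -d "$entry" ]; then
            scan "$entry" "$stamp"
        elif [ "$entry" -nt "$stamp" ]; then
            echo "changed: $entry"
        fi
    done
}
```

On real filesystems this would miss changes: only the immediate parent directory's mtime is updated, and only on create/delete/rename, not on content writes — which is exactly why backup tools end up walking everything.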


I don't remember if mtime is updated on each write call or just on open, but I could see this being a huge performance overhead, especially for applications that are FS-bound. I wonder if io_uring would help the situation, though, since it's mainly geared toward filesystem operations.


No reason a hypothetical recursive mtime needs to be atomic. The kernel could just stick it in a buffer somewhere and deal with that sort of thing out-of-band and in batches. You'd probably need some filesystem journaling trickery if you want to make sure the recursive mtime always updates eventually when a file is modified.


> pow(pow(pow(pow(pow(2, 2), 2), 2), 2), 2)

The Julia equivalent is:

    julia> ((((2^2)^2)^2)^2)^2
    4294967296
Julia's ^ is right-associative.
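Bash's arithmetic `**` is right-associative as well (since 3.1, if I recall correctly), so the same explicit parenthesization is needed there:

```shell
#!/usr/bin/env bash
echo $(( 2**2**2**2 ))        # right-associative: 2^(2^(2^2)) = 2^16
echo $(( ((2**2)**2)**2 ))    # forced left-to-right: (4^2)^2 = 256
```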

