From the news I hear (which is quite slanted, I will admit, since I write very little JavaScript), the issue with npm is that it's too easy to publish packages and use them, which leads to a dependency mess and breakage when packages are removed or get hacked. Is this ease of publishing something that the JavaScript community actually needs?
I don't think that npm itself should be blamed for being too easy to use; that's a good thing in most cases. I think the main problem is that a couple of years ago some very vocal members of the Node.js community were promoting a hard-line philosophy around publishing and using tiny modules.
The consequence of that is that projects ended up with hundreds of tiny dependencies (and sub-dependencies) which increased the attack surface and introduced their own bugs and/or vulnerabilities.
I think that the Node.js community is wiser now. Vulnerability detection tools like Snyk.io have been useful in encouraging module authors to remove unnecessary dependencies from their modules.
Now the trend seems to be to use fewer modules that offer more functionality, more closely matched to the use case.
OK, but this behavior can be observed in the Python and Rust communities, too (maybe other communities as well, but I am not in touch with them). Do they promote "a hard-line philosophy around publishing and using tiny modules", too? I had to cargo build a few projects independently (e.g. parity-ethereum, c2rust), and it took a while because they each had over 300 dependencies. That is a lot. What is the reason for this phenomenon?
On the spectrum, Rust is not as extreme as npm but is closer to it than not. It just really depends.
Smaller dependencies are easier to maintain, test, and understand. Rust also has a relatively small standard library, so you tend to rely on packages (some produced by the Rust project itself) for things you might use the stdlib for in other languages.
Compared to PyPI, npm packages are much more granular. You'd often see the functionality of a single popular Python package spread out across multiple npm packages.
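A loose sketch of what that granularity looks like in practice (the package names below are real single-purpose npm packages, but the comparison to Python is illustrative and assumes a TypeScript setup with esModuleInterop):

```typescript
// Each of these npm dependencies exposes roughly one function. In Python the
// same functionality would typically come from the standard library
// (str.rjust, numeric checks) or from a single utility package.
import isNumber from 'is-number';   // numeric check for numbers and numeric strings
import leftPad from 'left-pad';     // pad a string on the left to a given length
import camelcase from 'camelcase';  // convert kebab/snake case to camelCase

const id = '42';
if (isNumber(id)) {
  console.log(leftPad(id, 5, '0'));          // "00042"
}
console.log(camelcase('user-profile-id'));   // "userProfileId"
```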
Both approaches have their advantages. I'd say that for security and reliability, you really need to know what packages you are running. Often you can delegate the responsibility to bigger upstream projects/groups.
For example if Facebook works with and on React, you can put a good lower bound on the reliability/security of React and the packages it pulls in. I'd be a lot more suspicious of packages which are rarely used by significant other projects.
There is a cost to relying on someone else's project. If instead of relying on 1 library you rely on 10, that is a pure negative in terms of complexity, risk, communication, and potential breakage.
Contrary to your statement, this is a pure disadvantage.
"For example if Facebook works with and on React, you can put a good lower bound on the reliability/security of React and the packages it pulls in."
I don't think this is true. You could easily depend on something that React pulls in which they later drop, months before it turns into a vector for malware.
I don't see how trust translates down the dependency graph AT ALL.
Bugs and security risk seem to be mostly correlated with the number of lines of code. If you split a package while keeping the total volume constant, the overall risk shouldn't increase that much, and each small package individually has less of a chance to cause problems.
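A back-of-the-envelope version of that argument, with made-up numbers just to illustrate (the defect rate and package count below are assumptions, not data):

```typescript
// Toy model: expected defects scale with lines of code, so splitting the same
// volume of code across more packages doesn't change the total by itself.
const defectsPerKloc = 0.5;   // assumed rate, purely illustrative
const totalKloc = 100;        // total volume of code, held constant

// One monolithic package vs. the same code split into 20 small packages.
const monolith = totalKloc * defectsPerKloc;
const split = Array.from({ length: 20 }, () => (totalKloc / 20) * defectsPerKloc)
  .reduce((sum, d) => sum + d, 0);

console.log(monolith, split); // 50 50 -- same expected defect count either way
```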
Nothing is perfect. npm and PyPI try to mitigate this problem with security audits and notifications; npm checks your project for known vulnerabilities at every install.
If you're paranoid, you just don't upgrade packages unless you really need to, and you audit stuff yourself. That comes with its own costs, as does writing the software all by yourself or buying it from commercial vendors, where similar tradeoffs apply.
I'm not sure why you're being downvoted. I've lost days to debugging issues introduced by modules deep in transitive dependency chains. It wouldn't be so bad if package maintainers respected semver, or if those upstream took care to lock their dependencies to a specific version. In practice, neither happens.
That's what it does. It does this because npm modules distributed using the "modern" (as in recently added) module system can be delivered to users more efficiently: you can import only the parts of a package that you actually use.
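A minimal sketch of what "import only the parts you use" looks like, assuming a package published as ES modules (lodash-es is a real example) and a bundler that can tree-shake:

```typescript
// With an ES-module build, a bundler can statically see which exports are used
// and drop the rest ("tree-shaking"), so only debounce ends up in the bundle.
import { debounce } from 'lodash-es';

// With a classic CommonJS build, the whole library is loaded at runtime,
// because require() can't be analysed the same way:
//   const debounce = require('lodash').debounce;

const onResize = debounce(() => {
  console.log('window resized');
}, 250);

window.addEventListener('resize', onResize);
```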
The reason not all packages support this, besides legacy, is that your runtime environment also has to support it for there to be any benefit. That is, it's useful when you're targeting modern browsers. When a package can potentially also be used in Node projects, or in projects that still need to support relatively widely used browsers such as Internet Explorer, supporting this module system might not be possible or worth the effort.
In other words, it has absolutely nothing to do with it being too easy to publish to npm.