Socket and I are solving the same problem: behavioral analysis of npm packages before install. We just take different approaches.
Socket uses static analysis plus LLM-based threat assessment. Dependency Guardian is fully deterministic: 26 regex- and AST-based detectors plus a correlator with 53 cross-signal amplifiers. No LLM in the loop. Scans are reproducible, run in ~38ms, and avoid hallucination and prompt-injection issues. The tradeoff is that I may miss novel patterns an LLM could generalize to.
Socket had to introduce three alert tiers because of noise. I handle noise at the detection layer instead, by correlating signals like ci_secret_access plus network_exfil into higher-confidence amplifiers, which lets me hard-block PRs at 99.95% precision across 11,356 real packages.
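To make the correlation idea concrete, here's a minimal sketch of cross-signal amplification. The signal names, base scores, and amplifier weights are illustrative, not Dependency Guardian's actual rules: the point is just that individual hits stay low-confidence, while certain combinations jump above a blocking threshold.

```python
# Base confidence for a single detector firing on its own.
BASE_SCORES = {"ci_secret_access": 0.4, "network_exfil": 0.4, "eval_dynamic": 0.3}

# Combinations of signals that, seen together, amplify to a
# block-worthy confidence. (Illustrative values only.)
AMPLIFIERS = {frozenset({"ci_secret_access", "network_exfil"}): 0.95}

def score(signals: set[str]) -> float:
    """Return the highest confidence supported by this set of signals."""
    best = max((BASE_SCORES.get(s, 0.0) for s in signals), default=0.0)
    for combo, amplified in AMPLIFIERS.items():
        if combo <= signals:  # every signal in the combo fired
            best = max(best, amplified)
    return best
```

A single `ci_secret_access` hit stays at 0.4 (advisory at most), but together with `network_exfil` it crosses a 0.9 blocking threshold, which is how the noise problem gets handled before it ever reaches an alert tier.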
Shai Hulud exploited Bun runtime APIs and legitimate GitHub API traffic to evade Node focused scanners. I built dedicated detectors for those gaps, normalize string escapes before matching, and track import aliases per file.
There is a free tier at 200 scans per month, an open source thin client, a self-hosted option, and support for GitHub Actions or any CI via the CLI. Socket validated the category and raised $65M. My bet is that a tighter deterministic engine with lower noise wins for teams that want a true CI gate, not just an advisory dashboard.
A camera! The 2.5mm TRS remote trigger jack just needs one of the pins connected to the sleeve to trigger the camera, which is very easy to do with an optocoupler or even a relay.
I've always wanted to look into writing my own Prettier plugins, how'd you feel about getting started working with their little mini DSL (fill, join, hardline, line, etc.)?
The DSL itself is pretty nice. However, one thing I found annoying is that the printing function itself is hard to unit test: there is no easy way to instantiate its parameters (in particular the `ASTPath` class). So I relied on testing it by calling `Prettier.format()`, which is more of an end-to-end test of the whole plugin. Maybe I'll figure out another solution at some point.
This is very important to keep in mind when implementing OAuth authentication! Not every SSO provider is the same. Even if the SSO provider tells you that the user's email is X, they might not even have confirmed that email address! Don't trust it and confirm the email yourself!
You mean if I decline enough, I’ll stop getting invited? A dream!
I have a bone to pick with our “ticket estimations” meeting. It’s a waste of time; we don’t even use the estimations for any planning purposes. We just sit there, reading tickets out loud as a group, saying pseudorandom numbers that inevitably average out to a 2, 3, or 4.
I’m convinced we’d have better estimations if we just nixed the meeting and sampled estimations from a normal distribution.
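In the spirit of the proposal, a tongue-in-cheek sketch: skip the meeting entirely and sample from a normal distribution centered on 3, clamped to the 2-4 range the room converges on anyway. Mean and stddev are made up, obviously.

```python
import random

def estimate_ticket(mean: float = 3.0, stddev: float = 1.0) -> int:
    """Replace the estimation meeting with a draw from N(mean, stddev),
    rounded and clamped to the usual 2-4 story-point range."""
    points = round(random.gauss(mean, stddev))
    return max(2, min(4, points))
```

Zero meeting-hours per sprint, statistically indistinguishable output.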
More like, if you don't attend you'll get expelled from the group. Meaning you'll get fired.
But then again, if you don't like the group, or your job, go on a quest to find someone else to spend your time with. That is probably the best investment you can make.
Those kinds of ticket estimation meetings are always a waste of time. Have one person do the estimate and then follow up (in aggregate) on how correct the estimations were. Hopefully you will get better over time.
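The follow-up step could be as simple as comparing one estimator's points against actuals in aggregate; a minimal sketch (mean absolute error, nothing fancy):

```python
def mean_absolute_error(estimates: list[float], actuals: list[float]) -> float:
    """Average |estimate - actual| over completed tickets, so the
    estimator can see whether their calibration improves sprint over sprint."""
    pairs = list(zip(estimates, actuals))
    return sum(abs(e - a) for e, a in pairs) / len(pairs)
```

Track this per sprint and the feedback loop does the calibrating that the group meeting never did.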
> BleepingComputer has been told that the Akira ransomware operation is behind the attack on Tietoevry, coming soon after the Finnish government warned about their ongoing attacks against companies in the country.
> "The incidents were particularly related to weakly secured Cisco VPN implementations or their unpatched vulnerabilities. Recovery is usually hard," warned the Finnish NCSC.
I wonder what the entrypoint was back in 2021 when they were attacked around the same time?
Thanks! Looks like this project has some high ambitions:
> What's after that?
>
> Tons of stuff. Tons and tons and tons of stuff.
> Debuggers have not substantially evolved since the first Turbo Debugger in 1988!
> For example, we have had GUI debuggers for 20 years now, and we can't see bitmaps! We can't hear sound buffers. We can't view vertex arrays.
> We can't graph values over time. We can't even load a debugging session from yesterday and review it! We have a long way to go.
> Debugging is in desperate need of updating, and we see this as a long-term project. We'll be adding visualizers, new debugging workflows (step through code on multiple platforms at the same time, for example), and new features for a long time.
Fun fact, you can plot variable values over time with gdb. You just have to create a gdb extension to do it: https://github.com/jyurkanin/gdb_extensions
This blew my mind when I figured out how to make these extensions, and everyone should be making their own. It's a force multiplier. Literally life-changing.
The archived page claims they started in 2013. Time travel debugging was commercially available (by other vendors [1]) for over a decade by that point.
When I used Xcode for the first (and so far only) time I was surprised I could actually view bitmaps in the debugger and I've wondered why that's not commonplace ever since.
I suppose one reason is that images can be stored in many different ways in memory. For OpenCV images in C++ the Visual Studio extension Image Watch is quite nice, I think it's been around for more than 15 years:
Most people don't know what they're missing. None of the pages linked (in either the submission itself or the comments here) or other comments actually show any of these graphical features in use.
This is a problem that extends beyond this submission and talk of debuggers. It's basically insurmountable for anyone who didn't use Google Reader to know what the experience of using it looked and felt like (and in a few months, when it will no longer be possible to use Google Podcasts, there'll be another casualty).
Reading your comment, all I know is that Xcode lets you "view bitmaps in the debugger", but that's a pretty varied range of possibilities for someone who hasn't actually seen it.
Folks who are looking to have an impact would do well to document their setups and show how they actually get work done, lest we end up in a future where people don't know how to run a Python program[1] because nobody was ever explicit about it, instead relying on tacit familiarity.
Haven't fully dug into it, but firstly, the executable is tiny, the source code seems sane, performance on average-sized projects appears to be better than existing solutions, and there are also some interesting features such as viewing texture/image buffers within the debugger.
x64dbg is primarily made for reverse engineering, but RAD Debugger is made more for the context of game development. Those are two different use cases with different priorities. I would expect source-level debugging and debug-symbol-related features to be much more prioritized in RAD Debugger compared to x64dbg. Even x64dbg's own feature list calls its PDB support basic. RAD Debugger supports natvis (the same format used by VS for describing how to visualize more complex data structures); I don't think x64dbg supports that.
A more relevant question might be what RAD Debugger offers or plans to offer compared to the Visual Studio C++ debugger. Based on http://web.archive.org/web/20230923095510/http://www.radgame... it seems an important part of the motivation was providing a Visual Studio-quality or better debugging experience for Linux and console targets, and doing that with a single tool.
The article mentions that it's "usually white horseradish, dyed green", and that the real thing "can cost more per pound than even the choice tuna it sits on."
It would be interesting to verifiably taste real wasabi. Who knows, maybe I've never actually tasted the real thing?
They give you a root and a grater. It tastes milder than what we get here.
I have heard that it is impossible to domesticate (it has to be harvested wild on certain mountains). However, I watched a program where a guy in Hawai'i said he'd figured out how to domesticate it.
It's a difficult plant to grow, as it requires specific conditions. Though there are farms in the Pacific Northwest that have been able to cultivate it successfully.
Here is a decent paper that discusses the challenges[PDF]:
It's still very hard to grow, with high failure rates, but you can definitely find plants for affordable prices, a bit less than $10 -- but then you need to manage to keep it alive for 3 years if you want to enjoy it or propagate it.
There are a number of good YT videos on how to cultivate it, and some documentaries on professional plantations.
To get it at its spiciest you should wait a good ten minutes or more; the spiciness is activated by the process:
"The chemical in wasabi that provides for its initial pungency is the volatile compound allyl isothiocyanate, which is produced by hydrolysis of allyl glucosinolate [...]; the hydrolysis reaction is catalyzed by myrosinase and occurs when the enzyme is released on cell rupture caused by grating" (adapted from wikipedia)
It is difficult to grow but there are multiple producers in the US, and I've occasionally seen it for sale in grocery stores. Some sushi places use it. It is more expensive than the green horseradish/mustard powder, but not prohibitively so.
You can taste the difference but in most contexts it is pretty substitutable with the fake wasabi, hence the ubiquity of the latter.
I'm curious as to why it's not possible to grow it locally, given the growing number of startups growing food in cities in fully controlled environments. For example: https://youtu.be/VxRNoSSkLkE?t=191
I’ve found real wasabi to actually be a bit milder than some of the horseradish imitators. I suppose I always assumed it’d be even stronger, but that hasn’t been the case in my experience.
If you don't see the sushi chef grinding a little green root fresh on a wasabi grinder and adding a little bit to your rice, it's safe to assume it's not Japanese wasabi.
If you have a Japanese food market near you, they might have it. Some specialty grocers can carry it too. If you happen to be in the Bay Area, IIRC there's a wasabi farm in Half Moon Bay.