For all of these newfangled TLDs that are springing out of the woodwork with strictly for-profit interests, yes. Even some ccTLDs have seen rapid price hikes in recent years.
I think the safest bet is to pre-renew the domains you really want to keep for as far out as you can (most registries allow you to renew a domain for up to 10 years). That way, if there is some major change to cost structures, you have a decade to either weather the storm or come up with a migration strategy.
Resolution isn't the only problem. SD resolution (particularly PAL) is quite tolerable, if it's well-encoded from a good source.
DVDs are not well-encoded, and the sources are typically poorer, too.
DVDs store MPEG-2 Part 2 (H.262) video streams. It's an extremely old, inefficient codec. (It was published in 1996! Next month, it'll be 30 years old!) It looks best when the encoder is given a bitrate limit north of 20 megabits per second, but DVD-Video has a hardware limit of 10 Mbps, and that includes the audio and subtitle streams. Most video streams on DVDs get 4-5 Mbps. MPEG-2 also isn't a very good codec; no matter how much bandwidth you give it, it's never really considered to be “transparent” (that is, encoding artifacts are always visible).
If you take a Blu-ray copy of a film (FHD or UHD, doesn't really matter), scale it down to SD resolution, and run it through a good HEVC (H.265) encoder, you'll usually find that a DVD-equivalent encoding takes about a third, maybe a quarter of the space. Or, if you go the other way and let the encode take as much space as the MPEG-2 one on the DVD, you'll almost certainly see an obvious difference, particularly in action scenes.
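To put rough numbers on that, a back-of-the-envelope sketch (the figures are the typical ones assumed above, not measurements from any specific disc):

```python
# Rough stream sizes for a 2-hour film at DVD-typical MPEG-2 bitrate
# (~4.5 Mbit/s) vs. an HEVC re-encode at roughly a third of that bitrate.
runtime_s = 2 * 60 * 60  # 2-hour film

def stream_size_gib(bitrate_mbps):
    """Size of a constant-bitrate stream over the full runtime, in GiB."""
    return bitrate_mbps * 1_000_000 / 8 * runtime_s / 2**30

mpeg2_gib = stream_size_gib(4.5)      # ~3.8 GiB: most of a single-layer DVD
hevc_gib = stream_size_gib(4.5 / 3)   # ~1.3 GiB at a third of the bitrate
```

Even with the crude constant-bitrate assumption, the scale is clear: the modern codec buys you either a fraction of the size or a big jump in quality at the same size.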
Starting a physical media collection? Fantastic. Good for you (seriously). But get Blu-rays wherever possible. You'll mostly have to forego the thrift shop, fine, but if you're ever actually going to watch the film, you'll vastly prefer it.
I have both Blu-rays and DVDs, and I've found it's the content that determines which is good enough. Kids don't care one bit about image quality. Obviously: people still like retro games, too. But then other movies, like anything by Villeneuve or Nolan, or Baraka, really want to be on 4K Blu-ray. But kids' movies on DVD are perfectly fine, and so are sitcoms like Community. (Personally I'd pay extra to NOT see Pierce in 4K).
I recently purchased the Firefly Blu-ray, and it was an interesting case: its image quality isn't that much better than the DVD's (but it is definitely better), yet its sound quality was astonishingly better. I imagine this has a lot to do with the source material, how it was mastered, etc. I still stream, but I like that I have a core collection that will never disappear without warning, or be edited behind my back (which happens all the time, without notice, especially on YouTube and Amazon Prime).
Yes, for older TV titles, the main reason to opt for Blu-ray is the better sound quality. Although DVD supports uncompressed audio (LPCM), that was rarely used outside Japan, and regular stereo audio typically used pretty mediocre compression.
When using subtitles, another reason is the higher-resolution fonts.
Still using my RX 5700 XT. The amdgpu driver had a major issue resuming from suspend a few months ago[0], but other than that, I'm not aware of (nor have I experienced) any stability issues. Maybe you had a bad card.
SolveSpace is a wonderfully different take on parametric CAD, but development has really slowed, and it seems fundamentally incapable of some pretty rudimentary features (like chamfers[0]). Dune 3D[1] seems like a pretty effective spiritual successor.
Chamfers and Fillets are my next major undertaking. Don't expect them any time soon, but they've moved to the top of my list. They are extremely difficult to do in the general case - so we will not cover all cases. Several years ago I tried an experiment:
That could only do the top or bottom of a straight extrusion. This time it will be more general than that. Not looking forward to doing corners where 3 fillets meet ;-)
Aren't there licensing issues with porting a proprietary implementation into an open-source one that could open up the project to legal issues with the proprietary vendor?
Hi! Thanks for your hard work! I just want you to know it's definitely worth it!
I am using SolveSpace for my 3D prints because I just don't have time to learn anything else. With SolveSpace I've been productive in like 2 hours after launching it the first time.
So far you've saved me like $500 in things I've printed instead of bought. Just last week I printed a nasal manifold for my DIY sleep monitoring setup. Replacement specs legs a month ago.
If you really make fillets and chamfers a reality, please don't forget to open donations.
I would imagine there are a few different possible options (preferably a settable parameter):
* Intersection. Conceptually the simplest, the chamfers would just be joined by the solid addition of all three fillet surfaces, creating three new sharp corner edges that meet at a single vertex.
* Rolling sphere. Imagine an idealized spherical "thumb" smoothing out caulk. The middle would be joined by a new spherical concave surface, tangent to all three fillets. Also generalizable to convex fillet intersections, smoothing out sharp corners.
* NURBS, with adjustable parameters or even control points, e.g. when you want a little more "meat" in a corner for strength of a part.
* Flat corners, for chamfers (what to do when N>3 corners meet?)
* What else?
Ideally you might be able to set the corner type separately for inside vs outside corners, or on a per-vertex or (in the most granular case) per-incoming-edge basis? Is that crazy?
How do saddle corners[0] behave? Does it just "work out" and (by some miracle) uniquely resolve for all permutations and corner types?
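For the very simplest case, the rolling-sphere option has a closed form. A toy sketch (my own illustrative numbers, not SolveSpace code) for three mutually perpendicular planar faces meeting at a box corner, all filleted with the same radius r:

```python
import math

r = 2.0                 # fillet radius (arbitrary example value)
center = (r, r, r)      # center of the spherical corner patch

# Tangency: the sphere of radius r centered at (r, r, r) touches each of
# the three face planes (x=0, y=0, z=0), since its distance to each is r.
for dist_to_face in center:
    assert math.isclose(dist_to_face, r)

# Each edge fillet is a quarter-cylinder of radius r whose axis runs at
# distance r from two of the faces (e.g. the line x=r, y=r); the sphere
# center lies on all three axes, so the patch meets the cylinders smoothly.
# Closest approach of the patch to the original sharp vertex at the origin:
gap = math.sqrt(3) * r - r
```

The general case (arbitrary face angles, varying radii, saddle configurations) has no such closed form, which is presumably where the pain described elsewhere in this thread begins.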
That’s not even the complex part. Most of what you describe is a user interface issue, not a geometric kernel issue.
The hard part of three-corner fillets is the tolerances. Each of those fillet operations has its own compounding float errors, and when they meet, the intersection is so messy that they often do not intersect at all. This breaks almost every downstream algorithm, because they depend on point classification to determine whether an arbitrary point is inside the manifold, outside, or sitting on an edge or vertex.
And that description of the problem is just scratching the surface. Three-corner fillets create a singularity in UV space at the common vertex, so even when you find a solution to the tolerance problem you still have to deal with the math breaking down and a combinatorial explosion of special cases, almost each of which has to be experimentally derived.
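To make the point-classification dependence concrete, a hedged toy sketch (illustrative Python, nothing to do with any real kernel's code): the inside/outside/on decision is only meaningful up to a tolerance band, and accumulated float error can land an analytically on-surface point on the wrong side of it.

```python
def classify(signed_distance, tol):
    """Inside / outside / on, decided only up to a tolerance band."""
    if abs(signed_distance) <= tol:
        return "on"
    return "inside" if signed_distance < 0 else "outside"

# A point that lies exactly on the plane z = 1 analytically, but whose
# z-coordinate came out of a chain of float operations:
z = sum([0.1] * 10)   # 0.9999999999999999, not 1.0
d = z - 1.0           # tiny negative residue, about -1.1e-16

tolerant = classify(d, tol=1e-9)  # "on" -- correct
exact = classify(d, tol=0.0)      # "inside" -- misclassified
```

Pick the tolerance too small and exact-coincidence cases misclassify; pick it too large and genuinely distinct features merge, which is one way to read the "tolerance expansion steps" mentioned below.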
when i did openscad, i just did a minkowski sum with a 4-sided bipyramid (aka rotated cube) to get chamfers for my cubes.
bonus: a minkowski sum with a round pyramid adds chamfers in the vertical and fillets in the horizontal, which is what i want for 3d printing most of the time. additionally it closes small overhangs, and it makes fonts smoother (i.e. fonts don't extrude at a 90-degree angle, and get 45 degrees instead, and print better on vertical faces)
disclaimer: i haven't used openscad for about a year and my memory may be fuzzy
edit: i am not saying the minkowski sum would directly solve your problem, but maybe the algorithm gives you inspiration to solve your numerical issues
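A 2D analogue of that trick, as a hedged sketch in Python rather than OpenSCAD: for convex polygons the Minkowski sum is just the convex hull of all pairwise vertex sums, and summing a square with a small diamond chamfers every corner at 45 degrees.

```python
def cross(o, a, b):
    """Z-component of (a - o) x (b - o): > 0 for a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices, no collinear points."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def half(seq):
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        return chain[:-1]
    return half(pts) + half(list(reversed(pts)))

def minkowski_sum(p, q):
    """Minkowski sum of two convex polygons given as vertex lists."""
    return convex_hull([(a[0] + b[0], a[1] + b[1]) for a in p for b in q])

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
diamond = [(0.1, 0), (0, 0.1), (-0.1, 0), (0, -0.1)]  # small "rotated square"
chamfered = minkowski_sum(square, diamond)
# Every 90-degree corner of the square is replaced by a 45-degree chamfer,
# so the result is an octagon (8 vertices).
```

The 3D version with a bipyramid is the same idea one dimension up; as the reply below notes, this operates on meshes, not on the b-rep surfaces a kernel has to handle.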
OpenSCAD is mesh based so it's not even in the same universe as a proper brep geometric kernel. Everything is easier when you give up on the math entirely, but that’s not good enough for real world manufacturing and simulation.
All of the major commercial geometric kernels have been working on these problems for thirty years, and I’m sorry, but your five minutes of experience with a glorified tessellator isn’t going to make progress on long-standing computational geometry problems.
>that’s not good enough for real world manufacturing and simulation
Dumb question: why not?? It's working for that guy and his 3D printer apparently, which is "real world" (though one could certainly argue it's not proper "manufacturing").
In theory pi has infinitely many digits, sure. In real-world practice (vs math-lympics) you never need more than 100 digits, and indeed you rarely ever actually need more than 5.
Why doesn't it work to "just" throw more bit-width and more polygons at it? Who out there actually needs more than that (vs who just thinks they do)?
The answer boils down to “floating point math” and “discontinuities”.
> indeed you rarely ever actually need more than 5.
That’s not how math works. With every operation the precision falls, and with floats the errors accumulate. What was five digits quickly becomes three, and now you’ve got three surfaces that are supposed to intersect but technically don’t, because their compounding errors don’t overlap even though the equations that describe them are analytically exact. Modern geometric kernels have 3 to 7 tolerance-expansion steps that basically brute-force this issue when push comes to shove.
Once you have these discontinuities, a lot of critical math like finite element modeling completely breaks down. The math fundamentally depends on continuous functions. Like I mentioned above, three-corner fillets create a singularity in parametric space by default, so even the core algorithms that kernels depend on to evaluate surfaces break regularly on basic everyday operations (like a box with smoothed edges, aka almost every enclosure in existence).
> Who out there actually needs more than that (vs who just thinks they do)?
I can’t stress this enough: almost everyone. CAD isn’t one of those fields where you can half ass it. Even the simplest operations are bound to create pathological and degenerate cases that have to be handled, otherwise you have a pile of useless garbage instead of a 3d model.
Slicers deal with meshes, like video game renderers, not boundary representations like CAD kernels. There is effectively zero overlap. Even just tessellation, the step that converts brep to mesh, is significantly harder than anything 3d printing software has to do.
>> I'm curious why you didn't go with OCCT for Solvespace.
I didn't start SolveSpace, but Jonathan was apparently in a DIY mode after developing his take on constraint-based sketching. It's also very easy to go from NURBS curves to NURBS surfaces; the challenge begins at boolean operations, which continue to be a source of bugs for us. This is really the only option other than OCCT, and the code is small and approachable, so I try to make it better.
Yeah. To quantify, OCCT is >1M lines of code, and SolveSpace's NURBS kernel is <10k. This general smallness is what subsequently made stuff like the browser target feasible, though it obviously comes with downsides too.
We'd welcome contributions, and it's much easier to contribute to the smaller codebase. I think there's potential for coding agents to accelerate this work since robust point-in-shell and shell-is-watertight tests are mostly sufficient to judge correctness, allowing the agent to iterate; loosely you could define your geometric operation as a function of whether a point should lie within the output region, then ask the agent to convert that to b-rep. I wouldn't currently expect useful progress without deep human effort and understanding though.
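As a hedged illustration of what one such oracle can look like: here is a watertightness test for triangle meshes (not b-reps; my own sketch, not SolveSpace code). In a closed shell, every undirected edge must be shared by exactly two triangles.

```python
from collections import Counter

def is_watertight(triangles):
    """True iff every undirected edge is used by exactly two triangles."""
    edge_use = Counter()
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            edge_use[tuple(sorted(e))] += 1
    return all(n == 2 for n in edge_use.values())

# A tetrahedron over vertices 0..3 is closed; drop one face and it isn't.
tet = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
closed = is_watertight(tet)        # True
opened = is_watertight(tet[:-1])   # False: three edges now have one user
```

A b-rep version has to answer the analogous question about trimmed NURBS faces meeting along shared edges, which is much harder, but the pass/fail shape of the oracle is the same.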
Would it be worthwhile to consider switching to OCCT (or make it optional)? It would make certain things such as fillets/chamfers much easier, I suppose, and it would make those boolean operation bugs go away. And exporting to various formats would be easy.
>> Would it be worthwhile to consider switching to OCCT
It would, and it has been considered. The sketch elements in solvespace are significantly decoupled from the solid model. That means we could substitute (via wrapper maybe) an OCCT object instead of our SShell class. Then you'd have to change a set of solvespace curves to OCCT curves to make extrusions from them and such. But that would be most of the work.
We do tag all triangles in the mesh with a sketch entity handle for flat surfaces so you can constrain points to a face. I'm not sure how that would be handled. We will also be tagging edges of the solid with sketch entity handles in the future so we can do chamfers and fillets - say by selecting a line entity and applying a modifier to it which gets applied to the NURBS shell. I'm not sure how that would go with OCCT.
Perhaps you could create both the shell and the OCCT object. Then when an edge is chamfered, you could look it up in the OCCT object (simply find all segments which are sufficiently close to the chamfered edge). And then call the OCCT chamfer function. Or something along those lines.
I’m looking for a recommendation to get beyond TinkerCAD (for 3d printing). I learned it in 2019 and came back in 2025 when I got my own printer. It is comfortable and fine for my purposes but lacks basic things like chamfer and fillets.
Anytime I try to jump into Fusion or FreeCAD I immediately hit a wall (like trying pirated Maya when I was a kid).
Try FreeCAD one more time, if you haven't tried 1.0+, and it might stick. I've finally, in the past 6 months moved all my work to FreeCAD and KiCad after trying both many times over the past decades.
I highly recommend watching one of MangoJelly's beginner videos for FreeCAD, even if you have CAD experience. It made it very clear how to adapt my Fusion360 skills.
OnShape is pretty approachable, and has lots of good tutorial videos. They offer free accounts for non-commercial use with the caveat that all of your documents must be public.
If you haven't tried FreeCAD recently, it's gotten a lot better in the past couple of years. It seems to have hit escape velocity, so to speak, and is improving rapidly in a way it hadn't for a long time.
> They offer free accounts for non-commercial use with the caveat that all of your documents must be public.
Major caveat! Also online access required.
And if you decide to upgrade, the next tier is 1,410€ per year.
For that amount of cash, FreeCAD can abuse and torture me quite a bit. Lol.
Also, at the rate FreeCAD is developing and improving now, if more people would drop just 1k€ in donations into FreeCAD/OCCT, chances are your pains will ease sooner rather than later.
I'm not spending weeks learning a proprietary, online-only software that will lock me out as soon as they need more money. Been burnt before on that kind of stuff.
I would recommend pirating SOLIDWORKS and learning with that. It has the easiest UX of the parametric CAD modellers, and once you know the general sketch-extrude methodology you will find the others a lot easier.
Actually I think they have a hobbyist subscription which isn't totally extortionate now if you want to stay legal. Maybe get it for a year.
You may try Onshape, which is supposed to be more accessible than Fusion 360, but unfortunately it doesn't seem that a CAD package exists with complexity intermediate between TinkerCAD on one end and FreeCAD and the pro CAD software on the other.
Some years ago I tried to learn CAD by doing some FreeCAD tutorials, and failed. But I hear 1.0 was a big step forward, and the recently released 1.1 is also a big step, and it should be somewhat decent nowadays. Maybe I need to try again one day.
Yeah it's vastly better in 1.0 than it used to be. I still think you might be a bit lost if you aren't familiar with parametric CAD, but it's no harder than Blender for example.
There are more than a dozen different viewport navigation modes, and the latest version added two more (SolidWorks and Siemens NX). You can pick whichever behaves closest to the program you are most used to.
Yeah, I tried all of them with all the combinations of presets and orbit styles, and the closest I could get was the TinkerCAD one, but I couldn't match the orbit style correctly.
Dune3D is more like Solvespace with a few improvements and bug fixes vs being anywhere near FreeCAD in terms of capability. Improvements include using STEP files in assemblies and having some ability to make Fillets or Chamfers. Bugs fixes would be due to using OCCT for NURBS surfaces - solvespace frequently fails with NURBS boolean operations.
As for overall capability, FreeCAD does everything these others do but also supports lofting and other modeling options, BIM for architecture, I think it does pre- and post-processing for FEA, and maybe some other "big tool" things.
Indeed. I would love for it to be true, but aside from opencascade^1 all the professional kernels are proprietary and not in the training set, so LLMs can't just regurgitate them.
^1: Which I really appreciate, but let's be real, it is far behind eg. parasolid.
FreeCAD is perfectly good user interface for opencascade. The problem is that as your geometry gets more complicated you start running into the kernel limitations.
Vibe coding only seems to work, insofar as it does when the training data includes multiple exemplars of solutions to a given problem.
As noted elsethread, there's only one geometric kernel which is decently far along and opensource and it's over 1 million LOC --- I doubt it's being included in any training data, and I doubt that an LLM could regurgitate such a large project which would then compile w/o errors and then work as expected --- the number of tokens required to get such a project to an initial state is a marked hurdle as well.
In 2006, Ars Technica published an April Fool's article[0] declaring that the perennially-forthcoming Duke Nukem Forever would finally see the light of day... as... a browser game! Ho ho, how droll.
PCPartPicker are also publishing charts showing the astronomic rise in DDR5 prices over time: https://pcpartpicker.com/trends/price/memory/. Those charts don't cover any kits with 64 GB sticks, but they're a good demonstration of the general scale.
When I read the book as a teenager, I was awed by the technical achievement of the team.
When I re-read the book ten or fifteen years later as an adult facing burnout, I was awed again – by the human cost of the project. Grim stuff.
Fantastic book, but it bothers me that technologists who talk about it almost universally want to talk about the product, and often don't even notice how well the book depicts a real meat grinder of a process. Kidder was not a technologist, and I think that gave him a wonderful ability to really see everything that was happening around him, and to not simply fixate on the computer they were building.
Nowadays I think grinding is just a big part of the project, like the ancient Chinese wisdom that says half of the road is the first 90% and the other half is the last 10%. I guess by grinding you also meant (1) how the engineers were treated and (2) how quickly they were burnt out, which I agree with. But I really like the pinball analogy, and I believe the reason most people feel that way is that they never got the chance to really play pinball -- they just play games they don't enjoy, so when they read the book they say "Oh, those guys' lives really suck".
The documentation addresses that[0]. Basically, Dune 3D uses solvespace's solver, but it can do fillets and chamfers, and has a slightly more approachable user interface.
It's true. The bloom on the eggs protects them from whatever nastiness is on the outside.
This includes salmonella, which may be present in the poop on the outside of the shell if your flock is infected (remember, hens only have one egress port), plus any other sources of environmental pathogens, of which there are many.
When the bloom is washed off the egg, pathogens have an easier time penetrating the shell and consuming the nutritious yummy bits inside. At room temperature, they can multiply rapidly. Refrigeration slows the rate of growth.
An unwashed egg retains the barrier, and stays fresh longer without refrigeration.
YMMV on household acceptance of dirty eggs on countertops, but they are cleaner than many other items within arm's reach that we are conditioned not to think about. :)
I happily keep eggs in the box on my kitchen worktop for maybe a couple of weeks without them going bad. They'll happily last longer, but the eggs won't be at their best.
Incidentally, I heard somewhere that cracking eggs on a ridge (like the edge of a frying pan) isn't best, as that can drive a bit of poopy shell into the interior, though if it's just about to be cooked and eaten then that's less problematic. I use the flat kitchen top to crack the shell instead, which leads to the occasional amusing outcome of cracking it too hard and dumping the whole egg onto the worktop.
When we've had "too many" hens, we've had multiple large salad bowls full of eggs on the countertops. And that's after overwhelming our friends and neighbors.
They will easily last 4-6 weeks with no major degradation (i.e. still good for omelets, but use the freshest ones for poaching).
The forest predators eventually help moderate our egg surplus. Free range comes with risks, alas.
Interesting point about ridge-cracking. I'd never thought about it, but it makes sense and I will mend my ways! :)