Hacker News | dwheeler's comments

I prefer the term "assistant". It can do some tasks, but today's AI often needs human guidance for good results.


I also made a list of tips on writing code with AI, with a special focus on security. Others may find the tips useful. Here they are: https://openssf.org/blog/2026/01/05/ai-software-development-...


This has many similarities to the Heartbleed vulnerability: it involves trusting lengths from an attacker, leading to unauthorized revelation of data.
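The pattern can be sketched briefly (an illustrative simulation, not OpenSSL's actual code): a Heartbleed-style handler trusts an attacker-supplied length field instead of the actual payload length, so it reads past the payload into adjacent data.

```python
def heartbeat_response(memory: bytes, payload_len: int, claimed_len: int) -> bytes:
    # Vulnerable: `memory` stands in for process memory (the request
    # payload followed by unrelated secret data). The handler trusts
    # the attacker-supplied `claimed_len` instead of `payload_len`.
    return memory[:claimed_len]

def heartbeat_response_fixed(memory: bytes, payload_len: int, claimed_len: int) -> bytes:
    # Fixed: reject any request whose claimed length exceeds the
    # length of the data the client actually sent.
    if claimed_len > payload_len:
        raise ValueError("claimed length exceeds actual payload")
    return memory[:claimed_len]

memory = b"ping" + b"SECRET_PRIVATE_KEY"  # 4-byte payload + adjacent secrets
leaked = heartbeat_response(memory, payload_len=4, claimed_len=22)
assert b"SECRET" in leaked  # the over-read reveals adjacent data
```

The fix in both cases is the same: validate attacker-supplied lengths against the data actually received before using them.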


Many people use Octave (https://octave.org/), which is generally compatible with Matlab, supports this simple syntax, and is open source software. Indeed, I've taken at least one class where the instructor asked people to use Octave for these kinds of calculations.


Yep -- Octave was very helpful for me in school.

Octave is not particularly fast.

RunMat is very fast (orders of magnitude faster -- see its benchmarks).


That's only true if future improvements are as easy to create as past ones, if customers care as much about those improvements, and if there are no other differentiators.

For example, many companies do well by selling a less capable but more affordable and available product.


I love having built-in local natural language translation implemented by AI, which Firefox provides. Local models have different properties than remote models, and natural language translation is a useful thing. AI should be added when it solves a real need and the risks can be minimized (or at least controlled). The goal shouldn't be to use AI; the goal should be to solve problems for humans.


The Linux Foundation's Open Source Security Foundation (OpenSSF) has released a free online course "Secure AI/ML-Driven Software Development (LFEL1012)". It discusses protecting your software development environment, creating more secure software, and reviewing changes.


Yes, you need training if you want something good instead of slop. For example, when AI assistants are asked to write functions that could be implemented securely or insecurely, they choose the insecure way about 45% of the time, and that rate has been stable for years. We in the OpenSSF are going to release a free course "Secure AI/ML-Driven Software Development (LFEL1012)". Expected release date is October 16. It will be here: https://training.linuxfoundation.org/express-learning/secure...
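As one illustration of a secure-vs-insecure choice (my own sketch, not material from the course): building a SQL query by string concatenation is injectable, while a parameterized query treats the input as a plain value.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_insecure(name: str):
    # Insecure: attacker-controlled input is spliced into the SQL text.
    return conn.execute(f"SELECT name FROM users WHERE name = '{name}'").fetchall()

def find_user_secure(name: str):
    # Secure: the driver passes the value separately from the SQL text.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

evil = "' OR '1'='1"
assert find_user_insecure(evil) == [("alice",)]  # injection matches every row
assert find_user_secure(evil) == []              # input treated as a literal string
```

An untrained user (or model) may produce either version; training is about reliably recognizing which one you got.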

Fill in this form to receive an email notification when the course is available: https://docs.google.com/forms/d/e/1FAIpQLSfWW8M6PwOM62VHgc-Y...


Summarizes what's happened in the Open Source Security Foundation (OpenSSF) since its founding five years ago.


Using AI assistants != Vibe Coding.

AI assistants can be helpful, but they are nowhere near ready to be let loose when the results matter.


Exactly this: if you're babysitting the AI, you are, by definition, _not_ vibe coding. Vibe coding means not reading the resulting code, and accepting that things will break down completely in four or five iterations.


Brother, most of them ain't even assisting. Management just forces it.

