Hacker News | dent9's comments

You should be using the email address "username@users.noreply.github.com" or similar

There's never been an obligation to use a real email address for git
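As a quick sketch, here is how you'd point git at a noreply address (the ID and username below are made-up placeholders; GitHub shows your actual noreply address under Settings → Emails):

```shell
# Set up a throwaway working repo for the example.
tmp=$(mktemp -d)
cd "$tmp"
git init -q

# Use GitHub's noreply address so your real email never lands in
# commit history. "12345+username" is a placeholder -- substitute
# the address GitHub assigns you.
git config user.email "12345+username@users.noreply.github.com"

# Confirm the address future commits in this repo will carry.
git config user.email
```

Use `--global` instead if you want it for every repo on the machine.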


Amazon did this to me. Their recruiters started hounding me at an email address that I only ever used to sign git commits on some repos used on GitHub. When I asked them how they got my email address they said "it was in [our] database"

True story: I wanted to make a tiny update to our CI/CD to upload copies of some artifacts to S3. It took one minute for the LLM to remind me of the correct aws cli syntax for the upload and the syntax to plug it into our GitHub Actions. It then took me the next three hours to figure out which IAM roles needed to be updated to allow the upload, before it was revealed that actually uploading to S3 requires company IT to adjust the bucket policies, which requires filing a ticket with IT, waiting 1-5 business days for a response, then potentially having a call with them to discuss the change and justify why we need it. So now it's four days later and I still can't push to S3.

AI reduced this from a 5-day process to a 4.9-day process
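For what it's worth, the one-minute part really is that small. A hypothetical Actions step (bucket name, region, path, and secret names are all made up; the real blocker is the IAM and bucket-policy side):

```yaml
# Illustrative only -- bucket, region, and secrets are placeholders.
- name: Upload build artifacts to S3
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    AWS_DEFAULT_REGION: us-east-1
  run: |
    aws s3 cp ./dist/ "s3://example-artifact-bucket/builds/${{ github.sha }}/" --recursive
```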


It's ironic to me because I'm the Luddite who refuses to adopt agentic AI and still uses only the chat interface, with Codex and Claude inside the VS Code extensions, to help me with both work and personal projects. And I've had amazing results with only this. "Look at this codebase and tell me the best ways to integrate some new feature," "look at this source code file and tell me what's wrong with it," "show me how to implement this thing I want." Then I copy and adapt the results as needed and integrate them with the rest of my work. This has worked great, and I've shipped a ton of projects much faster and easier. Clearly the AI could have written a lot of it itself, but I'm not sure I'm really missing out on any benefits with this method. So the whole agentic push especially seems like some kind of overhyped gimmick.


> I am extremely out of touch with anti-LLM arguments

Wow I know that feel.

I'm here using LLM for daily work and even hobbies in very conservative manners and didn't think much of it.

Now when I have casual discussions with other folks, especially non-tech people, the visceral hatred I get for even mentioning AI, and the fact that I use it, is insane. There's an entire subgroup of people so out of touch with these tools that they think they're the devil, like the anti-GMO crazies and the PETA psychos.


> When you say “LLMs did not fully solve this problem” some people tend to respond with “you’re holding it wrong!”
>
> I think they’re sometimes right! Interacting with LLMs is a new skill, and it feels pretty weird if you’re used to writing software like it’s 2020. A more talented user of LLMs may have trivially solved this problem.

So one thing I only recently figured out is that using ChatGPT via the web browser chat is massively different from using OpenAI's code-focused Codex model / interface. Once I switched to using Codex (via the VS Code extension + my own ChatGPT subscription) the quality of answers I got improved massively.

So if you're trying to use an LLM to help with debugging, make sure you're using the right model!! There are apparently massive differences between models of the same generation from the same company.


I appreciate the author's work in doing this and writing it all up so nicely. However, every time I see someone doing this, I cannot help but wonder why they are not just using SLURM + Nextflow. SLURM can easily cluster the separate computers as worker nodes, and Nextflow can orchestrate the submission of batch jobs to SLURM in a managed pipeline of tasks. The individual tasks submitted to SLURM would be the user's own R scripts (or any other scripts they have). Combine this with Docker containers executed on the nodes to manage the dependencies each task needs, and possibly Ansible to manage the nodes themselves (installing the SLURM daemons, packages, etc.).

Taken together, this creates a FAR more portable, system-agnostic, and language-agnostic data analysis workflow that can seamlessly scale over as many nodes and data sets as you can shove into it. This is a LOT better than trying to write all the inter-node communication and data passing in R itself. It's not clear to me that the author actually needs anything like that, and what's worse, I have seen other authors write exactly that in R and end up reinventing the wheel of implementing parallel compute tasks.

It's really not that complicated: 1) write an R script that takes a chunk of your data as input, processes it, and writes output to some file; 2) use a workflow manager to pass chunks of the data to discrete parallel instances of your script and submit those tasks as jobs to 3) a hardware-agnostic job scheduler running on your local hardware and/or cloud resources. This is basically the backbone of HPC, but it seems like a lot of people "forget" about the 'job scheduler' and 'workflow manager' parts and jump straight to gluing data-analysis code to hardware.
Also important to note that most robust workflow managers such as Nextflow already include the parts like "report task completion", "collect task success/failure logs", "report task CPU/memory resource usage", etc., so that you, the end user, only need to write the parts that implement your data analysis.
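To illustrate the shape of this, a minimal hypothetical Nextflow DSL2 pipeline that farms chunked R jobs out to SLURM. The script name, container image, and resource numbers are all made-up placeholders, not anyone's actual setup:

```nextflow
// Sketch only: analyze_chunk.R, the container, and the resource
// settings are placeholders.
nextflow.enable.dsl = 2

process RUN_CHUNK {
    executor 'slurm'              // submit each task as a SLURM job
    container 'rocker/r-ver:4.3'  // R dependencies live in the image
    cpus 4
    memory '8 GB'

    input:
    path chunk

    output:
    path "result_${chunk.baseName}.csv"

    script:
    """
    Rscript analyze_chunk.R ${chunk} result_${chunk.baseName}.csv
    """
}

workflow {
    // One parallel task per chunk file; Nextflow handles the retries,
    // per-task logs, and resource/trace reports for you.
    Channel.fromPath('data/chunks/*.csv') | RUN_CHUNK
}
```

The R script itself stays oblivious to the cluster: it reads one chunk, writes one result, and everything else is the workflow manager's job.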


Keep doing tech work, but work for more meaningful organizations. Look into STEM, science, and health fields that need help with their technology. Shifting your career away from tech is a massive mistake; you need to shop your skills to organizations that are more meaningful to you. Non-tech science and health companies and orgs won't pay as well as pure tech, but you get the satisfaction of knowing that your work changes the world for the better and possibly saves lives.


This author's problem isn't GitHub, it's the fact that they used Rust when they should have used Go. They never would have had this issue.


Adderall XR + to-do lists

For work purposes I keep handwritten to-do lists that I rewrite every week or so

This is in addition to the team's Jira tickets and Scrum etc.

There is no "switching off", you're just f-ed

