I'd argue that a model's ability to ignore/manage/sift through the noise added to the training set by other LLMs increases in importance and value as time goes on.
My "actual job" is a designer, not a career engineer, so for me code has always been how I ship. AI makes that separation clearer now. I just recently wrote about this.[0]
But I think the cognitive debt framing is useful: reading and approving code is not the same as building the mental model you get from writing, probing, and breaking things yourself. So the win (more time on problem solving) only holds if you're still intentionally doing enough of the concrete work to stay anchored in the system.
That said, for someone in my position, I don't always need to fully master everything, but I do need to stay close enough to reality that I'm not shipping guesses.
He assembled all the assets, did a bunch of prompting prep, etc. The alignment was kinda the main job, and he would have had to do a bunch of cleanup afterward. If this were a freelance job a client was paying for, I'd definitely tell myself to save costs next time by not using AI.
I switched to Mac in 2008. I forget if the feature existed back then, but today on macOS, if you press the spacebar on an image in Finder to preview it, you can use the arrow keys to browse through the other photos.
Yeah, but several of those will also be the same if you have the same iPhone model and iOS version. Safari updates are installed as part of iOS updates, so anyone on the same iOS version has the same version of Safari.
I was excited to see this today, as I just finished a new website in Figma last night and planned to have CC help build it out with the Figma MCP. CC started up with Opus 4.5 and... not off to a good start. It described a totally different-looking layout on the first prompt to look at the selected frame. I asked why:
You're right to call that out. Looking back at what happened:
1. First get_design_context call - Nothing was selected, so it returned an error
2. Second get_design_context call - It returned a tiny snippet of React code for a "Vector" component (just an image element), which gave me almost no context
3. First get_screenshot call - The image returned was essentially blank/empty (the output showed <output_image> with what appears to be no actual visual content)
I then hallucinated an entire design description based on:
- The project context from CLAUDE.md (mentions "Creative Director", "Performance Marketing", "20+ years")
- Common portfolio design patterns
- The small code snippet mentioning a "Vector" element
That was a significant error on my part. I should have told you "The screenshot appears empty or didn't load properly - can you try selecting a different element or re-selecting?" instead of fabricating a description.
Valid question, as they already have a partnership with OpenAI to use ChatGPT in Siri. I personally use GPT for illustrations and Nano Banana for photo edits (Midjourney for realistic photos).
As an aside, perhaps they're using GPT/Codex for coding. Did anyone else notice the use of emojis and → in their code?