You could run a DNS server and configure the server with a whitelist of allowed IPs on the network level, so connections are dropped before even reaching your DNS service.
For example, any Red Hat-based Linux distro comes with firewalld. You could set rules that block all external connections by default and only allow your kids' and their friends' IP addresses to connect to your server (and only on port 53), so your DNS server will only receive connections from the whitelisted IPs. The downside is that if an IP changes, you'll have to troubleshoot and whitelist the new one, and there is a small possibility that a kid is behind CGNAT, sharing an IPv4 address with some random person who happens to be looking to exploit DNS servers.
But I'd say that's a pretty good solution: no one will even know you're running a DNS service except the whitelisted IPs.
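A minimal sketch of what those firewalld rules could look like, assuming the server's public interface is `eth0` and using placeholder addresses (`203.0.113.10` stands in for one kid's IP):

```shell
# Put the public interface in the "drop" zone, which silently drops
# everything that isn't explicitly allowed.
firewall-cmd --permanent --zone=drop --change-interface=eth0

# Allow DNS (port 53, UDP and TCP) only from a whitelisted source IP.
# Repeat these two rules for each kid's address.
firewall-cmd --permanent --zone=drop \
  --add-rich-rule='rule family="ipv4" source address="203.0.113.10" port port="53" protocol="udp" accept'
firewall-cmd --permanent --zone=drop \
  --add-rich-rule='rule family="ipv4" source address="203.0.113.10" port port="53" protocol="tcp" accept'

# Apply the permanent rules to the running firewall.
firewall-cmd --reload
```

When an IP changes, you'd add a new pair of rich rules (and remove the stale ones with `--remove-rich-rule`) and reload again.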
Correct me if I misunderstand what you're trying to do:
What you want to do is, on each LAN that has a Switch that you want to play on your specific Minecraft server, have DNS report that the hostname of the Minecraft server the Switch would ordinarily connect to resolves to the server that you're hosting?
If you're using OpenWRT, it looks like you can add the relevant entries to '/etc/hosts' on the system and dnsmasq will serve up that name data. [0] I'd be a little shocked (but only a little) if something similar were impossible on all non-OpenWRT consumer-grade routers.
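As a sketch of that OpenWRT approach, run on the router itself (the hostname and IP below are placeholders; substitute the hostname the Switch actually looks up and your server's LAN address):

```shell
# Map the Minecraft server's hostname to the locally hosted server.
# dnsmasq on OpenWRT reads /etc/hosts and will answer DNS queries
# from LAN clients with this address.
echo '192.0.2.50 mc.example.com' >> /etc/hosts

# Restart dnsmasq so it picks up the new entry.
/etc/init.d/dnsmasq restart
```

Since the Switch takes its DNS server from DHCP (usually the router), it will then resolve that hostname to your server without any per-device configuration.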
My Switch 1 is more than happy to use the DNS server that DHCP tells it to. I assume the Switch 2 is the same way.
I can do that for my network - but the group is multiple kids who play from their own homes. I'm not going to teach all of those parents how to mess with their networks; there are just way too many things that can go wrong. It also won't work if a kid is traveling.
> We should be very concerned for the next generation. When you have the constant temptation of digging yourself out of a problem just by asking an LLM how will you ever learn anything?
This is just the same concern whenever a new technology appears.
* Socrates argued that writing would weaken memory and create only superficial knowledge, incapable of real understanding. But it didn't destroy memory; it allowed us to store information and share it with many others far away.
* The internet and web indexers made information instantly accessible, letting you search for exactly the information you need. The fear was that people would just copy from the internet, yet researching information became much faster, and anyone with internet access could find that information and learn on their own. Just look at the number of educational websites offering courses.
Each time a new technology arrived, people feared it would degrade knowledge, yet the tools only helped us increase it.
Just like with books and the internet, people can simply copy and not learn anything; that's not exclusive to LLMs. The issue isn't the tool itself, but how we use it. Instead of learning how to search, the new generation will probably need to learn how to prompt, ask, and evaluate whether the LLM is hallucinating.
I'm not sure what you mean by Socrates was proven dead wrong.
The study you linked doesn't show that people are becoming dumber because of LLMs; it just shows that when you offload tasks to these tools, your brain engages less in that specific task. The same happens with a calculator (instead of doing complex calculations on paper), a spell-checker, or a search engine (instead of opening a book and searching). The question is whether long-term cognitive capacity is reduced, and like I said before, this argument predates LLMs (all the way back to Socrates).
Also, take the study with a grain of salt: it's a short-term study with a small sample of only 54 participants doing a single task.
Personally, I believe LLMs just allow us to work at a higher level of abstraction.
Though you can browse and download the latest version, 3.0A (1996), there is a directory with older versions, but it's a bunch of files from different versions mixed together. https://www.w3.org/Daemon/old/
Well, Google has marketed Android as an open source operating system (AOSP), touted the openness of the system [1], and encouraged manufacturers and developers to build on it on the premise of openness and, of course, being "free". People advocated for Android because it was open source compared to the alternatives. But with this change they are simply ending that openness. The people who developed F-Droid and other alternative stores have contributed to the platform's value (such as making it possible to de-Google a phone), and the same goes for many other developers who have spent countless hours building for Android.
To say they don't owe you anything seems like a betrayal of the promise that Android was an open platform (and open source).
> You are free to not use their products or start a company to compete
That's not as simple an option as you are making it out to be. For a user, switching means buying a new phone, repurchasing apps (if you bought any), and some apps may not even be available on the new system; for developers, it means all their knowledge of the system is gone. Building a mobile operating system requires millions if not billions of dollars, years of work, and convincing developers and businesses (hardware makers) to adopt it. The barrier to entry is so high that telling people to just compete with Google is not a realistic solution.
> 1. Existing guidelines already handle low-value content. If an AI reply is shallow or off-topic, it gets downvoted or flagged.
>
> 2. Transparency is good. Explicitly citing an AI is better than users passing off its output as their own, which a ban might encourage.
>
> 3. The community can self-regulate. We don't need a new rule for every type of low-effort content.
>
> The issue is low effort, not the tool used. Let downvotes handle it.
My best guess would be that plugins are limited in what they can do within VSCode, and rewriting a whole IDE/text editor just for an AI agent seems like a lot of work.
For example, a while back the vscode-pets[1] plugin became popular; I tried it and noticed that the pet can only live within a single window, whether that's the Explorer section or its own panel. I thought it would be more of a desktop pet that could roam anywhere within VSCode, but apparently there are limitations (https://github.com/tonybaloney/vscode-pets/issues/4).
So my guess is that forking VSCode and customizing it makes it much easier to do things you can't do with a plugin, while still not having to build and maintain an IDE/text editor from scratch.