I'm assuming it also logs some debugging information regarding your git client's behavior, though. So you might still be able to get to that contact form.
My point is chances are you can't visit https://github.com/contact to paste your debugging result, if you're "Having trouble connecting to github.com".
The main principle here is to reuse as little shared infra as possible: a different DNS provider, different hosting, a different cert authority, etc. So debug.github.com is probably worse than github-debug.com (github.debug should be fine, though, and even slightly better than github-debug.com).
You could use two independent DNS providers for the same domain (I call it inverse split horizon). I am a proponent of DNS service discovery (using NAPTR and SRV records), so that by going to e.g. www.github.com, the DNS provides a list of servers, and the final server would be your debug site (kind of like how there are MX records with different priorities, except NAPTR/SRV is service agnostic).
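To make the MX analogy concrete, here's a hypothetical zone fragment sketching what SRV-based discovery could look like (the record format is priority, weight, port, target; the hostnames are made up for illustration):

```
; Hypothetical SRV records -- lower priority is tried first, like MX.
; The last entry is the fallback/debug target.
_https._tcp.www.github.com.  300  IN  SRV  10 60 443 primary.example-github.com.
_https._tcp.www.github.com.  300  IN  SRV  10 40 443 secondary.example-github.com.
_https._tcp.www.github.com.  300  IN  SRV  20  0 443 debug.github-debug.com.
```

Note that browsers don't actually consult SRV records for HTTP(S) today, which is part of why this stays a "proponent" position rather than current practice.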
Hi, I assume you're the writer or at least work at Dropbox.
Just want to let you know (as I mentioned somewhere): this blog link doesn't load at all if I click from here (i.e. with the referer of news.ycombinator.com).
To isolate possible issues with the gTLD, DNS + glue records, the registrar, and domain blocking (corporate firewalls, restricted/compromised user DNS, malware) that affect .com. (That's the case for github.debug; debug.github.com is not a good solution, for the above reasons.)
Good find, I like this pattern. I could see a niche startup or open source solution providing this as a service, similar to statuspage (acquired by atlassian). It could also help keep the layout consistent, for user familiarity.
> DNS TTL is a lie. Even though we have TTL of one minute for www.dropbox.com, it still takes 15 minutes to drain 90% of traffic, and it may take a full hour to drain 95% of traffic.
Interesting. A while back both Google and AWS engineers replied to an HN thread[1] saying a TTL of 30 seconds or so works pretty reliably and can be trusted and used. Seems to be some disagreement on this.
It really depends who your clients are. If they are servers, 30s can work okay. If they are end users, caching happens all over. It's a huge PITA, especially since you'll run into podunk ISPs that have their own custom caching setup, but you have a customer with a shop there. Not that I'm still bitter.
It really depends on the clients, here is an excerpt from the article:
> Here we also need to mention the myriad embedded devices using Dropbox API that range from video cameras to smart fridges which have a tendency of resolving DNS addresses only during power-on.
With GeoDNS, users' requests are resolved to a single IP address with which they can establish a TCP connection.
How will connection establishment (TCP specifically) work with anycast, where requests (packets?) can be routed to different machines? Is there some other network protocol to use with anycast?
Anycast route election is done by the connecting router (typically the client's ISP), and it works over BGP like normal route advertising - it's transparent to your applications, and they only get directed to one endpoint destination (typically the one with the lowest latency, congestion, or cost). Once the router gets a request from a client for a destination, it does the same thing it usually does: it checks the possible routes and chooses the "best" based on the rules defined by the owner of the router. The difference here is that not all of those proposed routes lead to the same physical destination. There is some stickiness to make sure you're arriving at the same destination for the duration of that connection.
Really it's like asking your favorite maps app where McDonalds is - it'll know to give you results close to you, since it knows there are many McDonalds. It would make sense for you to choose the closest, but it's not enforced by McDonalds. Similarly, if I do the same and we live in different cities, we'll get different results. The end result is that we both got to different physical places by specifying the same thing, and it's up to McDonalds to make sure the menu is the same :)
Read more on anycast. Anycast lets different computers advertise the same IP address from multiple places. When your computer tries to talk to that IP address (over whatever protocol you chose) the network sends that traffic to the “closest” computer advertising that IP address.
TCP builds a reliable stream on top of unreliable IP packets. Since which computer is "closest" rarely changes, your stream will keep being sent to the same server. When that server stops advertising the IP, the data will get sent to a different server, which will say "I don't know what you are talking about, reset your stream", and the TCP connection will close, basically.
I wonder which definition of "Edge" is going to win out because right now it's being used interchangeably to mean either: 1) CDN or 2) On-premise machines/IoT.
Are these really two competing definitions or just manifestations of the same concept? What is considered an edge device depends on the boundaries of whatever network is under discussion.
I've never heard it used with that second definition. It's pretty consistently used to refer to running close to the user, as opposed to having a big data center which most of your users aren't near.
The second definition could be a confusion of ownership — i.e. are you paying a CDN to do higher-level service or running the services yourself?
> Most CDN providers will provide their services over a varying, defined, set of PoPs [...]. These sets of PoPs can be called "edges", "edge nodes" or "edge networks" as they would be the closest edge of CDN assets to the end user
I work on an Edge Platform team as part of an Edge Foundation that manages both external CDN and internal Tier 1 WAF/ingress systems. We do edge computing at both the CDN and Tier 1 layers via tenant plugins running Lua/Go. We also have an SDN team building Tier 2 solutions, so basically systems operating at the edge of each layer of the HTTP stack.
This article is about neither of those things, though. https://en.m.wikipedia.org/wiki/Edge_device is the term that's relevant to this article. It's about the network edge itself.
Those aren't completely distinct. We've started calling the edge everything between app servers and user devices. The boundary is "where your users are in control", but the edge itself is pretty fat.
That's an interesting implication. If they distribute the same cert so widely geographically, any host country could technically request it for "lawful intercepts".
You don't have to keep keys on boxes in random countries if you use a TLS oracle [1]. Another option is deploying the keys onto an HSM and pointing your frontends at that.
Dropbox doesn't offer privacy. Anything done on the service is visible to them and whoever else convinces them to hand over the data.
They're exactly the type of cloud service that shouldn't be used by businesses or privacy-conscious individuals.
Security's also questionable. A long time ago they had an incident where one could log into any account. More recently they were presenting a fake admin dialog on macOS to siphon the admin password and perform some admin tasks on the machine.
Ouch, that was embarrassing -- thanks for spotting =) Sorry about that -- ESL and stuff. Editors did a hell of a job fixing our English, but errors still slipped in. We'll be fixing grammar (and probably adding a link to the presentation PDF) later this week.
Oh, right, I'd forgotten about https://news.ycombinator.com/item?id=9224 In retrospect it's definitely comical for how badly it misread the state of the technology at the time.
I know the comment is today taken as the height of HN negativity, but to me it seems very reasonable.
- Back then there were FTP clients that automatically kept server and client in sync, which is the main feature of Dropbox. Dropbox adds a website, but Windows Explorer already supports FTP natively. Of course, easily creating shared links turned out to be a major thing, but I don't think we can blame people for not predicting that (especially since public folders are a feature of FTP servers, so it's not a new feature, just a lot more convenience). And of course Dropbox makes all that convenient and approachable, but that's easily overlooked by the technical user.
- The comment points out that contrary to the headline Dropbox will not replace USB drives. And here we are, a decade later, and Dropbox indeed didn't replace USB drives.
Of course in hindsight it's clear that Dropbox was a great idea with great execution, but that wasn't obvious at the time at all.
I was just thinking that having used various FTP/SFTP-as-a-filesystem, not to mention NFS and SMB, over a decade or so before Dropbox arrived made the sales pitch immediately obvious: do you want everything to be slow and unreliable, with frequent jank even on fast networks, or not?
“Works” in the sense that the experience is acceptable but anyone who's used it knows that while things have gotten a little better over the years there are still a wide range of programs which handle latency by blocking. If you use a network home directory, you just get used to Outlook, Word, etc. sporadically hanging for a few seconds before the UI paints, etc.
That's going to be worse as a function of latency and packet loss so it's far more tolerable in an enterprise environment using wired networks with tons of bandwidth and, at least theoretically, a professional support team. Over WiFi or consumer-grade internet (i.e. probably a strong majority of Dropbox's customers) the gap in experience is going to be more substantial.
It's heavily used over VPNs. I have seen SMB over the internet by [ill-advised] companies but the various waves of exploits have probably put an end to that.
The main point was just that something like SMB or NFS is not a good fit for a network which is not extremely fast and highly reliable because too many programs do blocking I/O. Dropbox works really well in that situation because it's asynchronous and that advantage was huge when they came out because everything was even worse back in 2007.
This has to be HN at its worst. Reducing a complicated file sharing and collaboration tool to an insecure and highly technical protocol.
Dropbox: I can upload a file super easily and share a simple & secure link with someone who just has a web browser.
FTP: I can upload a file to an FTP server I've either configured on my server or rented online. I'll then provide an FTP URL to a friend with instructions on how they should log in and what FTP client they should use on their chosen device.
EDIT: This could be sarcasm, if I didn't pick up on then feel free to downvote me to hell.
EDIT 2: Thanks to the comments, this is sarcasm. I messed up. Sorry rakoo.
It does indeed, but I feel like it's an important part of HN (new people arrive here all the time), and retrospecting about it is something all engineers should do, so I felt the need to point it out again.
It's probably in reference to an HN comment on Dropbox's original announcement post that said the product was a glorified version of rsync. To be fair to that commenter, he congratulated the company in its IPO post.
Would be great to know how exactly they store all the customer data on this edge network. Is it encrypted with a customer-specific key? If yes, when and how do they decrypt it?
They were using AWS. They have moved off of it within the last couple years because they now have the scale where it makes financial sense to build their own infrastructure and also to provide a better, faster service. They've had improved read/write + sync speeds since switching over to their own infrastructure. Having those checkboxes in a table showing that you have the fastest cloud storage works really well in B2B, which has been a big focus for them recently.
They did, but I imagine it was really expensive vs building their own stuff at that scale. Plus they probably don't want to be reliant on a competitor in many ways.