This work has been slow to take off, though, as the OSM community has traditionally been stuck in time-wasting debates about whether opening hours displayed on the wall of a shop are copyrighted (just the raw data, not a photo of their presentation), and about the merits and pitfalls of armchair mapping vs. on-the-ground mapping. At least these historical roadblocks now seem to be mostly resolved.
For OsmAnd, you might be able to use the OBF import feature (see https://www.osmand.net/docs/user/personal/import-export/) to add the raw ATP dataset, or potentially other open data such as Overture Maps if that is more to your liking. Data is mostly sourced directly from brand websites, APIs, etc. (as if you were using a store-finder map on their website).
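As a rough illustration of that workflow, here is a minimal sketch, assuming ATP's published GeoJSON layout, that converts a dataset dump into OSM XML, which a tool like OsmAndMapCreator could then compile into an .obf file for OsmAnd's import feature. The filenames are placeholders, and individual spiders vary in which properties they emit:

```python
import json
from xml.sax.saxutils import quoteattr

with open("alltheplaces.geojson", encoding="utf-8") as f:
    features = json.load(f)["features"]

with open("alltheplaces.osm", "w", encoding="utf-8") as out:
    out.write('<?xml version="1.0" encoding="UTF-8"?>\n<osm version="0.6">\n')
    for i, feat in enumerate(features, start=1):
        geom = feat.get("geometry")
        if not geom or geom["type"] != "Point":
            continue  # skip features without a usable point location
        lon, lat = geom["coordinates"]
        # Negative ids mark locally created objects, not real OSM elements.
        out.write(f'  <node id="{-i}" lat="{lat}" lon="{lon}" version="1">\n')
        for k, v in feat["properties"].items():
            if isinstance(v, str) and v:
                out.write(f"    <tag k={quoteattr(k)} v={quoteattr(v)}/>\n")
        out.write("  </node>\n")
    out.write("</osm>\n")
```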
Interesting project. OsmAnd user here, mainly in Germany.
In some cities OSM data is far more accurate than Google Maps when it comes to opening hours or whether a shop actually still exists. However, searching for places is a pain; that part needs a big improvement.
Since I can't rely on the search, I usually try to find the POI category and click through the results: supermarkets, restaurants, pharmacies, ATMs, etc. It works, but with so many clicks and caveats. Search needs massive improvement.
Absolutely. Improving this would be a great boost in usability.
I love OsmAnd and I've been using it ever since I've been using phones that can navigate. That's why I've acquired a lot of arcane knowledge on how to find places in the search function. But I could never explain to anyone what I am doing there.
It starts with the mere fact that entering a street name will always search around the currently selected search location, which is usually not where you are, but the city where you last ran a lookup.
If you want to change the city, there is a tab for that. But consider searching by postal code, because sometimes the place's official name differs from what people call it. Sometimes the same postal code appears multiple times, each entry covering a subset of the place's streets, so you'll have to go through each one and look for your street. That just happened to me for Avignon (postal code 84000).
Another fun activity OsmAnd introduces is half-leaving the main carriageway of the German Autobahn onto the parallel side roads that can be used to exit but that also lead back onto the main carriageway, just with more crossing traffic. It just loves to do that.
None of these disadvantages outweighs the level of detail and the possibilities in OsmAnd, and in OSM more broadly. I love knowing that I could use the same app if I one day had to use a wheelchair. I love being able to add notes to a place and getting an email update months later that someone fixed an issue I reported.
And when I use Google Maps every once in a full moon, I run into weird little glitches that surprise me a lot because the one thing I'd expect from this marvel of our monopolistic dystopia is that it "just works" - but it really doesn't. Don't ask me what issues I ran into last time. I forgot and they've probably been replaced by more confusing ones by now :)
Also, one of the best projects helping with that "last mile" is StreetComplete (https://streetcomplete.app/, available on Google Play and F-Droid); it makes it quite easy to add e.g. opening hours to shops.
Nothing ready-to-go that I'm aware of. ATP will just observe in the next weekly crawl that a shop is no longer returned by the storefinder API call or sitemap crawl, and that shop will simply not be present in the next weekly dataset generated.
To set up archives of shop-specific pages (e.g. a record of opening hours, address, etc. at a point in time), one could monitor the latest builds at https://alltheplaces.xyz/builds.html and, when a new build completes, compare it against the previous (second-newest) build. Then for any feature whose attributes have changed (address, phone number, opening hours, etc.), archive the `website` and/or `source_uri` attribute pages again to ensure the latest snapshot is captured. Any new feature would get the same treatment, so the page for a newly observed shop/feature is archived for the first time.
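A minimal sketch of that diff-and-archive loop, assuming the two builds have been downloaded locally as GeoJSON; the feature `id` field, the watched property names, and the filenames are assumptions to verify against real build output:

```python
import json
import requests

WATCHED = ("addr:full", "phone", "opening_hours", "website")

def index_build(path):
    # Map each feature's id to its properties for quick comparison.
    with open(path, encoding="utf-8") as f:
        return {feat["id"]: feat["properties"] for feat in json.load(f)["features"]}

old = index_build("build_previous.geojson")
new = index_build("build_latest.geojson")

to_archive = set()
for fid, props in new.items():
    before = old.get(fid)
    # New feature, or a watched attribute changed since the previous build.
    if before is None or any(props.get(k) != before.get(k) for k in WATCHED):
        for url_key in ("website", "source_uri"):
            if props.get(url_key):
                to_archive.add(props[url_key])

for url in sorted(to_archive):
    # Ask the Wayback Machine to snapshot the page; mind their rate limits.
    requests.get("https://web.archive.org/save/" + url, timeout=60)
```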
I'm also aware ArchiveTeam projects tend to commence once the impending collapse of a retail chain is known and someone realises there is an unarchived website that would be useful to preserve. Monitoring ATP feature counts for brands across time may give some hint of how a brand is performing and whether it is growing or shrinking, without having to find press releases and financial statements for the brand. Even if a brand suddenly announces bankruptcy (it happens all the time), the website will generally remain online for at least a few months whilst a new buyer is sought or whilst each retail location has a fire sale to get rid of remaining merchandise. It's also worthwhile being aware of acquisitions of retail chains, as these often result in the new parent company changing websites soon after the acquisition closes, possibly removing useful content that once existed. Websites also change "just because", and this could be observed after the fact by seeing when ATP spiders break and get replaced/fixed.
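The count-monitoring part is simple if you keep historical builds around; a sketch, where the directory layout and the spider filename are hypothetical:

```python
import glob
import json

# One GeoJSON file per spider per archived weekly build (assumed layout).
for path in sorted(glob.glob("builds/*/hypothetical_burger_chain.geojson")):
    with open(path, encoding="utf-8") as f:
        print(path, len(json.load(f)["features"]), "locations")
```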
Your best bet is probably to look for Wikidata entries that are marked defunct, and match them up to something like name-suggestion-index to get broad categories.
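A sketch of the Wikidata half of that, querying the public SPARQL endpoint for businesses with a "dissolved, abolished or demolished date" (P576); the class filter on Q4830453 ("business") is an assumption about how broadly to cast the net, and name-suggestion-index entries carry Wikidata QIDs you could match the results against:

```python
import requests

QUERY = """
SELECT ?brand ?brandLabel ?dissolved WHERE {
  ?brand wdt:P31/wdt:P279* wd:Q4830453 ;  # instance of (a subclass of) business
         wdt:P576 ?dissolved .            # dissolved/abolished/demolished date
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 100
"""

resp = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "defunct-brand-matcher/0.1 (contact@example.org)"},
    timeout=120,
)
for row in resp.json()["results"]["bindings"]:
    print(row["brand"]["value"], row["brandLabel"]["value"], row["dissolved"]["value"])
```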
True, I accidentally posted the date of the comment (1), not the commit. The only strange thing seems to be that he used a smiley in the referenced commit message, which doesn't seem to be his style.
There appears to be no plausible link between the SANs other than the very obvious implausibility of each website. They're mostly pretend (or knock-off) business websites in random countries (everywhere from Trinidad and Tobago, Germany, mainland USA, Hawaii...) in various languages, and all the ones I checked have no verifiable substance to them. For example, one domain is a supposed USA shipping/logistics company whose website states they have 1949 customers and have only delivered 7126 packages, and claims a head office at a house in Renton WA, an office at a different house in Stockbridge GA, and a supposed warehouse at a third house in Portland OR. Most domains don't include any valid contact or business information, even a supposed restaurant, where you'd want people to find your location easily!
There does appear to be heavy use of Google Firebase, and many of the sites share the same IP address(es) for hosting. A reverse IP lookup of domains hosted at those IP addresses reveals more random suspicious domains beyond those listed at https://crt.sh/?q=andrewjdillon.com
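For anyone wanting to repeat the certificate transparency side of this, crt.sh exposes the same search as JSON; a minimal sketch that collects every name seen across the matching certificates:

```python
import requests

resp = requests.get(
    "https://crt.sh/",
    params={"q": "andrewjdillon.com", "output": "json"},
    timeout=60,
)
names = set()
for entry in resp.json():
    # name_value holds newline-separated DNS names (CN and SANs).
    names.update(entry["name_value"].splitlines())
print("\n".join(sorted(names)))
```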
I contribute to ATP and can confirm that the author of the wildberries spider was deliberately trying to collect https://wiki.openstreetmap.org/wiki/Tag:shop%3Doutpost (online order pickup locations). It's not a common occurrence within the current set of ATP spiders to capture such features. A quick search indicates that OSM doesn't appear to have tags designed to capture pickup/dropoff partnerships between retail brands, for example, an agreement from a pet supply shop to allow collection of parcels from select fuel stations of a partner brand. Thus I think the author of the wildberries spider has used shop=outpost as the closest tag available in OSM, and Overture Maps' filters wouldn't be able to omit these features from their dataset unless Overture Maps adds wildberries to their exclusion list.
Ideally ATP's `located_in` and `located_in:wikidata` fields would be populated for these wildberries pickup locations, making it clear the pickup location is part of a parent feature (e.g. fuel station, supermarket). These fields are specific to ATP and are not OSM fields. OSM would expect features to be merged and a hypothetical field such as `pickup_brands:wikidata=Q1;Q2;Q3` be used instead on the parent feature.
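For illustration, a wildberries pickup feature with those fields populated might look roughly like this; every value below is a hypothetical placeholder, not real spider output:

```json
{
  "type": "Feature",
  "properties": {
    "shop": "outpost",
    "brand": "Wildberries",
    "located_in": "Hypothetical Fuel Co",
    "located_in:wikidata": "Q00000001"
  },
  "geometry": { "type": "Point", "coordinates": [37.62, 55.75] }
}
```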
ATP has a much more inclusive set of features it can extract than what Overture Maps, TomTom et al care about. As Overture Maps is more opinionated about what they aggregate, they will filter out ATP-extracted features such as individual power poles, park bench seats, local-government-managed street and park trees, stormwater drain manholes, cemetery plots, weather stations, tsunami buoys, etc. I think there might be some exceptions where it helps TomTom et al with their products, such as speed camera locations, national postal provider drop-off/pick-up locations within other branded retail shops, etc.
BitLocker encrypts data on a disk using what it calls a Full Volume Encryption Key (FVEK).[1][2] This FVEK is encrypted with a separate key which it calls a Volume Master Key (VMK), and the VMK-encrypted FVEK is stored in one to three (for redundancy) metadata blocks on the disk.[1][2] The VMK is in turn encrypted one or more times, each time with a key that is derived/stored using one of several methods identified by VolumeKeyProtectorID.[2][3] These methods include what I think would now be the defaults for modern Windows installations: 3 "Numerical password" (a 128-bit recovery key formatted with checksums) and 4 "TPM And PIN". Previously, instead of 4 "TPM And PIN", most Windows installations (without TPM use being forced) would probably have used just 8 "Passphrase". Unless things have changed recently, in mode 4 "TPM And PIN" the TPM stores one partial key, the PIN supplied by the user is the other partial key, and both partial keys are combined to produce the key used to decrypt the VMK.
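To make the layering easier to follow, here is a purely conceptual sketch of that key hierarchy in Python. This is not the real on-disk format, and the algorithm (AES-GCM), key sizes, and protector names are illustrative only:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap(key, secret):
    """Encrypt `secret` under `key`, returning (nonce, ciphertext)."""
    nonce = os.urandom(12)
    return nonce, AESGCM(key).encrypt(nonce, secret, None)

fvek = AESGCM.generate_key(bit_length=256)  # Full Volume Encryption Key
vmk = AESGCM.generate_key(bit_length=256)   # Volume Master Key

# Sector data is encrypted with the FVEK...
sector = wrap(fvek, b"disk sector contents")

# ...the FVEK is stored wrapped by the VMK in the metadata blocks...
fvek_blob = wrap(vmk, fvek)

# ...and one wrapped copy of the VMK exists per key protector, e.g. a
# TPM+PIN-derived key (mode 4) and a recovery-key-derived key (mode 3).
protector_keys = {"tpm_and_pin": os.urandom(32), "numerical_password": os.urandom(32)}
vmk_blobs = {name: wrap(k, vmk) for name, k in protector_keys.items()}

# Removing a protector (cf. Remove-BitLockerKeyProtector) deletes only its
# wrapped copy of the VMK; other protectors can still unlock the volume.
del vmk_blobs["numerical_password"]
```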
Seemingly once you've installed Windows and given Microsoft your BitLocker keys in escrow, you could then use Remove-BitLockerKeyProtector to delete the copy of the VMK which is protected with mode 3 "Numerical password" (recovery key).[4] It appears that the escrow process (possibly the same as used by BackupToAAD-BitLockerKeyProtector) might only send the numerical key, rather than the VMK itself.[5][6] From a quick Internet search I couldn't find anyone who has reverse engineered fveskybackup.dll to confirm this is the case, though. If Microsoft are sending the VMK _and_ the numerical key, then they have everything needed to decrypt a disk. If Microsoft are only sending the numerical key, and all numerical-key-protected VMK copies are later securely erased from the disk, the numerical key they hold in escrow wouldn't be useful later on.
Someone did however ask the same question I first had: what if I had, for example, a billion BitLocker recovery keys I wanted to ensure were backed up for my protection, safety and peace of mind? This curious person already knew the limit was 200 recovery keys per device, found out re-encryption would fail once this limit had been reached, then realised Microsoft had fixed this bug by adding a mechanism to automatically delete stale recovery keys in escrow, and finally reverse engineered fveskybackup.dll and an undocumented Microsoft Graph API call used to delete (or "delete") escrowed BitLocker recovery keys in batches of 16.[7]
It also appears you might only be able to encrypt 10000 disks per day, or change your mind about your disk's BitLocker recovery keys 10000 times per day.[8] That might sound like a lot, particularly for an individual, but the API also perhaps applies a limit of 150 disks being encrypted every 15 minutes across an entire organisation/tenancy. It doesn't look like anyone has written up an investigation into the limits that might apply to personal Microsoft accounts, or whether limits differ if the MS-Organization-Access certificate is presented, or what happens to a Windows installation if a limit is encountered (does it skip BitLocker and continue the installation with it disabled?).
Not really, but it's quite complex for Linux because there are so many ways one can manage the configuration of a Linux environment. For something high security, I'd recommend something like Gentoo or NixOS because they have several huge advantages:
- They make it easy to set up and maintain immutable and reproducible builds.
- You only install the software you need, and even within each software item, you only build/install the specific features you need. For example, if you are building a server that will sit in a datacentre, you don't need to build software with Bluetooth support, and by extension, you won't need to install Bluetooth utilities and libraries (see the sketch after this list).
- Both have a monolithic Git repository for packages, which is advantageous because you gain the benefit of a giant distributed Merkle tree for verifying you have the same packages everyone else has. As observed with xz-utils, you want a supply chain attacker to be forced to infect as many people as possible so more people are likely to detect it.
- Sandboxing is used to minimise the lines of code during build/install which need to have any sort of privileges. Most packages are built and configured as "nobody" in an isolated sandbox, then a privileged process outside of the sandbox peeks inside to copy out whatever the package ended up installing. Obviously the outside process also performs checks such as preventing cool-new-free-game from overwriting /usr/bin/sudo.
- The time between a patch hitting an upstream repository and that patch being part of a package installed in these distributions is short. This is important at the moment because there are many efforts underway to replace and rewrite old insecure software with modern secure equivalents, so you want to be using software with a modern design, not just 5 year old long-term-support software. E.g. glycin is a relatively new library used by GNOME applications for loading untrusted images. You don't want to be waiting 3 years for a new long-term-support release of your distribution for this software.
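On the feature-trimming point above, Gentoo expresses this through USE flags; a minimal sketch of what that looks like in /etc/portage/make.conf (flag availability varies per package, and NixOS has analogous per-package override mechanisms):

```
# /etc/portage/make.conf -- globally drop features a headless server doesn't need
USE="-bluetooth -wifi -X"
```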
No matter which distribution you use, you'll get some common benefits such as:
- Ability to deploy user applications using something like Flatpak, which ensures they run within a sandbox.
- Ability to deploy system services using something like systemd, which can confine them within a sandbox, as sketched below.
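The systemd confinement is opt-in per unit, via the hardening directives documented in systemd.exec(5); a minimal sketch for a hypothetical service unit:

```ini
[Service]
# Run as a transient unprivileged user allocated for this service
DynamicUser=yes
# Mount the file system tree read-only apart from /dev, /proc and /sys
ProtectSystem=strict
# Hide user home directories from the service
ProtectHome=yes
# Give the service a private /tmp and a minimal private /dev
PrivateTmp=yes
PrivateDevices=yes
# Prevent the service and its children from gaining new privileges
NoNewPrivileges=yes
```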
Microsoft have long underinvested in Windows (particularly the kernel), and have made numerous poor and failed attempts to introduce secure application packaging/sandboxing over the years. Windows is now akin to the horse and buggy when compared to the flying cars of open source Linux, iOS, Android and HarmonyOS (v5+ in particular which uses the HongMeng kernel that is even EAL6+, ASIL D and SIL 3 rated).
Sadly, Linux still has many small issues for day-to-day desktop usage. I encounter different small bugs almost every day, something I don't see on Windows that often. These bugs and UI inconveniences are tolerable for me, but not for everybody. Today the bug was Firefox not starting on the first click of its shortcut, and a mysterious case where keystrokes were not registering in the Firefox address bar until a Firefox restart.
The public source of this data (Transpower's ArcGIS Feature Server account) shows the data was last modified by Transpower in October 2025 for pylons and February 2025 for substations. At NZ's rate of development, you wouldn't expect major changes to any of this data unless it's a major transmission upgrade project identified years in advance in hundreds of public announcements and documents.
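For anyone wanting to pull this kind of layer themselves, ArcGIS Feature Servers share a common REST query convention; a sketch, where the service URL below is a placeholder rather than Transpower's real endpoint:

```python
import requests

# Placeholder layer URL; real ones follow .../ServiceName/FeatureServer/<layer id>
LAYER = "https://services.arcgis.example/arcgis/rest/services/Pylons/FeatureServer/0"

resp = requests.get(
    LAYER + "/query",
    params={"where": "1=1", "outFields": "*", "f": "geojson"},
    timeout=120,
)
print(len(resp.json()["features"]), "features returned")
```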
Australia is minuscule by global standards and Alice Springs is minuscule by Australian standards. Alice Springs isn't connected to the grid servicing most of Australia's population crammed up along the east coast, and doesn't have much in the way of heavy industrial users nearby. The difficulty for OSM mappers is that the low-capacity above-ground power lines in Alice Springs occupy no more pixels than the trunk of any 20 year old tree, so at satellite imagery resolutions of >30cm you may need to find an image taken at sunrise or sunset where the long shadow of a pole is visible on the ground. I also think it is preferred in remote locations such as Alice Springs to run lines underground (particularly along roads) due to the decreased total cost of ownership from not having to worry about bushfire and flood damage to infrastructure.
The ACT government provides ~10cm aerial imagery of Canberra and surrounds a few times a year, and from this imagery, unless a minor power pole is obscured by trees or a building, it is generally easy to identify most poles. Evoenergy (the distribution operator for the ACT) also publicly provides detailed maps of poles and lines, no matter how minor they are. The reason this detail won't be mapped in OSM is the lack of interest and availability of mappers to micro-map every minor power pole from aerial imagery, and OSM's very conservative approach to importing datasets, particularly from a licensing perspective (e.g. attempting to apply European database directive concerns in countries like Australia which don't have equivalent laws, and even have opposing case law precedents).
Australia is one of the most open countries when it comes to supplying electrical grid data. Even underground conduit locations are available publicly for most distributors, as well as designed summer/winter constraints for each transmission line (e.g. maximum kA per line). See [1] for some links to maps and other data that is made publicly available.
I was looking for @marklit (Mark Litwintschik / https://tech.marksblogg.com) in the list, as he's a geospatial-focussed blogger I've seen regularly on HN with interesting blog posts where he finds and presents open source datasets I'd never thought about, walks through some basic processing/querying steps, and provides some examples of what can be produced with the data. Many of the blog posts have left me thinking about possibilities for setting up bots to upload maps to Wikimedia Commons (for embedding within Wikipedias etc.) based on these interesting datasets, or at least automating via scripts the production/upload of maps on a country-by-country basis (or by other criteria) for static once-off datasets.
Unfortunately he doesn't show in the top 100. Also unfortunately, there is no blogger described in the top 100 as having a geospatial interest/focus.