The census data is available for bulk download, mostly as CSV (for example [1]). Scraping census.gov is worse both for the Census Bureau (which might have to run an expensive database query for each page) and for the scraper (who has to parse the page).
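A bulk pull is also much less code on the client side. Roughly, in Python (the URL here is a made-up placeholder; the real ACS file names vary by year and table):

    import csv, io, urllib.request

    # Hypothetical bulk CSV URL; actual files live under the Census Bureau's FTP/data area
    url = "https://www2.census.gov/programs-surveys/acs/data/example_table.csv"

    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")

    # One request, no HTML to parse: just read the rows
    for row in csv.DictReader(io.StringIO(text)):
        print(row)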
Blocking scrapers in robots.txt is more of a way of saying, "hey, you're doing it wrong."
It's also worth noting that the original article is out of date. The current robots.txt at census.gov is basically wide-open [2].
Scrapers don't care about robots.txt. I scraped multiple websites in a previous job, and robots.txt meant nothing. Bigger sites might detect and block you, but most don't.
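For what it's worth, honoring robots.txt is entirely opt-in: a polite client has to go check it itself, e.g. with Python's stdlib robotparser. Nothing enforces the result.

    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://www.census.gov/robots.txt")
    rp.read()

    # A rude scraper simply skips this check and fetches the page anyway
    if rp.can_fetch("MyBot/1.0", "https://www.census.gov/some-page"):
        print("allowed by robots.txt")
    else:
        print("disallowed by robots.txt")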
[1] https://www.census.gov/programs-surveys/acs/data/data-via-ft...
[2] https://www.census.gov/robots.txt