Hacker News | awsthro00945's comments

I think I've seen you post something similar on r/aws about how Rick was "top DynamoDb person at AWS" (apologies if that wasn't you). I think you are overestimating Rick's "rank".

I just looked him up (I had not heard of him before seeing his name mentioned on r/aws a few days ago) and he was an L7 TPM/Practice Manager in AWS's sales organization. That's not really a notably high position, and in the grand scheme of Amazon pay scales, isn't that high up. An L7 TPM gets paid about the same as, or sometimes less than, an L6 software dev (L6 is "senior", which is ~5-10 years of experience).

Also, him being in the sales org means he had practically nothing to do with the engineering of the service. AWS Sales is a revolving door of people. I mean no offense towards Rick (again, I didn't know him or even know of him before I read his name in a comment a few days ago), but I would not read anything at all into the fact that an L7 Sales TPM left for another company.


Actually, I was a direct report to Colin Lazier (https://twitter.com/clazier) who is the GM for DynamoDB, Keyspaces, and Glue Elastic Views. I was the original TPM for DocumentDB before joining the Professional Services team as a Senior Practice Manager to head up the NoSQL Blackbelt team, which led the architecture/design effort for Amazon's RDBMS->NoSQL migration. I was brought back to the service team by Jim Scharf to lead the technical solutions team for strategic accounts, but I maintained the org chart role of Senior Practice Manager until I left for MongoDB.

Compensation was a minor issue. I was an org chart aberration already and AWS pulled out all the stops to retain me. I will always appreciate the opportunity that AWS provided me and my time at DynamoDB will always hold a special place in my heart. I really do believe that MongoDB is poised to do great things and my decision had more to do with being a part of that than anything else.


Whoa straight from the source!


You've never heard of Rick Houlihan? He does 90% of DynamoDB evangelism... At the same time, you're able to do these internal lookups? Do you work with DynamoDB?

AWS re:Invent 2018: Amazon DynamoDB Deep Dive: Advanced Design Patterns for DynamoDB (DAT401) https://youtu.be/HaEPXoXVf2k

AWS re:Invent 2019: [REPEAT 1] Amazon DynamoDB deep dive: Advanced design patterns (DAT403-R1) https://youtu.be/6yqfmXiZTlM

AWS re:Invent 2020: Amazon DynamoDB advanced design patterns – Part 1 https://youtu.be/MF9a1UNOAQo

AWS re:Invent 2020: Amazon DynamoDB advanced design patterns – Part 2 https://youtu.be/_KNrRdWD25M

AWS re:Invent 2021 - DynamoDB deep dive: Advanced design patterns https://youtu.be/xfxBhvGpoa0

Amazon DynamoDB | Office Hours with Rick Houlihan: Breaking down the design process for NoSQL applications https://www.twitch.tv/videos/761425806


Do you expect the engineers on your team to know the top sales person at your company?

This person might be responsible for the majority of evangelism and revenue for the company. Do you expect the SDEs to know about him?

Again, no shot against Rick - he is amazing, smart, technical, competent, and a deep owner.

But the average SDE on the team won't know about these or watch these talks. There are too many deep internal engineering challenges to solve.


Maybe that was the problem. He cited that there was seemingly not enough effort going into making DynamoDB better, as evidenced by the many closely overlapping databases that AWS promotes. If Rick had his ear to the ground listening to customers and sending back feedback, but it was falling on deaf ears, that's enough grounds for someone as high up, influential, and productive as him to leave. It also speaks to inner AWS turmoil, at least at DynamoDB.


Based on what I know, that's not the case.

DDB is a steady ship. The explanation on https://news.ycombinator.com/item?id=30009611 is likely the best explanation. L7 TPMs make the same money as L6 SDEs.

Getting promoted to L8 - director - is a monumental effort and likely seemed much harder than pursuing a comparable position at MongoDB.

Good for him for doing it, and for making Amazon take a long hard look at every way they failed in not keeping him.


>It also speaks to inner AWS turmoil at least at DynamoDB.

How? Rick wasn't part of the DynamoDB service team. He wasn't an engineer, nor a manager on the team, nor even a product manager. He was a salesperson who specialized in DDB. He most likely had very little interaction, if any, with the engineering team. I don't see how him leaving speaks at all to anything about the inner workings of the engineering teams.

Rick seems cool, and after skimming some of his chats he seems really knowledgeable about the customer-facing side of DDB, and I mean absolutely no disrespect to him. But I think you're making way too many assumptions about his "rank" and "influence" within the company.


I have watched almost all of those talks, as they are technically dense and full of very good and very useful technical knowledge that I would be much poorer for not watching. These are not sales videos but highly complex instructional content meant for developers on the ground.


Are you calling the person who has given the core DynamoDB technical deep dive sessions at re:Invent, for the last four years in a row, a salesperson?


There are over a thousand breakout sessions at re:Invent every year. Some of the speakers are salespeople, some are engineers, some are managers. There are L5 or junior engineers who give re:Invent session talks. It's a fun gig, but it doesn't mean that the speaker is some top executive or anything like that.

Rick was in the sales org. His primary job was sales. re:Invent is a sales conference. Speaking at re:Invent is a sales pitch. He was a salesperson. I'm not sure why you're so offended by that. Being a salesperson isn't bad, it's just an explanation for why engineers wouldn't have heard of him.


What do you think Solutions Architects and Developer Advocates (the two groups that, between them, do most re:Invent sessions) are?

Hell, what do you think re:Invent is? It's a sales conference.

In any company you have two groups of people: Those that build the product, and those that sell it. Ultimately, solutions architects and developer advocates are there to help sell the product.

Of course Amazon is customer obsessed. And genuinely interested in ensuring customers have a good experience, and their technical needs are met - through education, support, and architectural guidance. But ultimately, that's what it is.


I think I understand now why he left...


No, I haven't. There are thousands of re:Invent sessions every year. I don't watch them all (I hardly watch any of them, and most people I know at Amazon watch a couple of breakout sessions at most. Some don't even watch the keynotes). Their target audience is AWS customers, not internal engineers. re:Invent itself is a sales conference. If internal Amazonians want to learn about something like DDB, there are internal talks and documents given by the engineering leaders that we watch.

>At the same time, you're able to do these internal lookups?

I looked him up on LinkedIn. Nothing internal about it.


was not me at r/aws

unless he posts here about it we can't really know -- we can only speculate, but I think he had more influence than his title/rank might suggest. I think Rick's influence with respect to DynamoDB is akin to Kelsey Hightower's influence over k8s at Google.


>No one uses DynamoDB alone

Almost every single team at Amazon that I can think of off the top of my head uses DynamoDB (or DDB + S3) as its sole data store. I know that there are teams out there using relational DBs as well (especially in analytics), but in my day-to-day working with a constantly changing variety of teams that run customer-facing apps, I haven't seen RDS/Redis/etc being used in months.


The thing about Amazon is that it is massive. In my neck of the woods, I've had the complete opposite experience. So many teams have the exact DDB-induced infrastructure sprawl described by the GP (e.g. supplemental RDBMS, Elastic, caching layers, etc.).

Which says nothing about DDB itself. It's a god-tier tool if what you need matches what it's selling. However, I see too many teams reach for it by default without doing any actual analysis (including young me!), thus leading to the "oh shit, how will we...?" soup of ad-hoc supporting infra. Big machines look great on the promo-doc tho. So, I don't expect it to stop.


"My logs and firewall are less cluttered" is not at all the correct metric to measure the security of your box.

IP address spoofing is a thing. Blocking CIDR ranges might protect you from low-effort, drive-by botnets that constantly scan the entire internet (which all should be completely mitigated by using certificate based auth anyway), but blocking based on IP address is absolutely not an effective control against a determined hacker.

You must consider your threat model. For your personal instance that you host hobby things on, you probably won't be targeted via IP spoofing. For any type of company, you should not be relying on CIDR blocking as part of your security layers. CIDR blocking is only effective at reducing the clutter of your logs, which is a convenience, not a security control. The real security control is using proper auth methods, which are so easy to do at this point that it's ridiculous for even a hobbyist to not do them.
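At its core, CIDR blocking is just a set-membership test on the source address, which is also why an attacker with a diverse pool of addresses can sidestep it. A minimal sketch of that test using Python's stdlib ipaddress module (the blocked ranges here are made-up documentation prefixes, not a real blocklist):

```python
import ipaddress

# Hypothetical blocklist of CIDR ranges (RFC 5737 documentation prefixes).
blocked = [ipaddress.ip_network(c) for c in ("203.0.113.0/24", "198.51.100.0/24")]

def is_blocked(addr: str) -> bool:
    """Return True if addr falls inside any blocked CIDR range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in blocked)

print(is_blocked("203.0.113.7"))  # True: inside 203.0.113.0/24
print(is_blocked("192.0.2.1"))    # False: not in any blocked range
```

The check only ever sees the claimed source address, so it reduces log noise from known-bad ranges but says nothing about who is actually connecting; authentication still has to do the real work.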


my understanding is that spoofing only works for sessionless protocols or situations -- eg a single udp packet or a series of packets that do not rely on any kind of response, since the response (like a tcp ack, or a dh handshake) is routed to the spoofed address. this would not apply to ssh. what contexts are you thinking of?


Yeah, the parent comment is not accurate. IP spoofing is only possible if you control the entire L4 stack.


There's network-level "IP spoofing" and then there's just routing traffic through an IP-diverse botnet.


Why are you assuming that a determined attacker doesn't control your L4 stack? MITMs are a threat, your network could be compromised, routers (especially consumer routers) are rife with vulnerabilities. This is the entire reason "zero trust" is pushed.


> ... determined attacker doesn't control your L4 stack?

Let's face it, for most businesses and pretty much all home users*, the best they can hope for is not to get owned by various automated attacks.

If some determined attacker is trying to get in, he will get in.

* Sure there are exceptions


people who control the l4 stack probably aren't brute forcing my ssh server


"the attacker probably won't do that" is not a security control.


"the attacker probably won't do that" is very much part of threat modeling, the #1 step in any serious security design.


In any serious security design, "the attacker probably won't do that" would and should be shot down immediately. If your security strategy is hoping that an attacker will be kind enough to not exploit your open vulnerability, you've already failed at threat modeling and at security.

If an attacker can do it, you must assume they will do it. Because they will. That should be the starting point for any threat model.


"the attacker probably won't intercept the mail and install rootkits on brand new hardware"

"the attacker probably won't read my password through the wall from the radiation off my keyboard"

if your starting point is APT-level adversary then you might as well give up


What brand of locks do you have on your doors at home and does the lockpicking lawyer have a video of going through them in a few seconds?


that's cool man, i'm still going to block the 99.9999% of attackers that don't own my isp. you are conflating "bad idea in extremely exotic scenario" with "counterproductive"; ever heard of defense in depth?


Financial markets are more than just the S&P 500. The sentence before that is talking about both stock markets and crypto markets. Crypto (BTC specifically, but also others) dropped ~30% in December.


And how much did crypto go up this year compared to the S&P 500?


That's not the point. There was a selloff in December. How crypto fared compared to anything else at any time period is irrelevant.


This is a false equivalency. Being vaccinated or not is the difference between requiring days/weeks in the hospital or only spending 1-2 days with a mild headache. It's "mildly inconvenienced if you do, damned if you don't".

I was at the hospital yesterday (for something unrelated to covid) and there were 0 rooms available. The hallways are still packed with unvaccinated people with covid lying in every open space they can find. Nurses and doctors are still worked past their breaking point.

We cannot move on until the thick-skulled members of society realize that their unwillingness to get vaccinated is the number one thing stopping us from moving on.


It ain't happening though so we need to move on with it. There are too many stubborn people in the United States and probably elsewhere.

There are people who would rather die than take the vaccine for whatever ridiculous reason so why are we sitting around waiting for them.


I'll say it again: we cannot move on with it until people are vaccinated. It's not a choice. It's not something where we just say "eh well it looks like it won't get better so let's move on". It physically cannot happen.


I encourage everyone eligible to protect themselves by getting vaccinated, but moving on is an entirely separate issue. We can move on as soon as people stop panicking and decide to accept the risks. In fact that's already happening in some states.

Strong circumstantial evidence indicates that another coronavirus HCoV-OC43 caused another worldwide pandemic starting in 1889. It killed a lot of people. There were no vaccines or effective treatments. The same virus is still endemic today; the only reason it doesn't kill many people today is that most of us get infected as youths and the resulting immunity protects us later in life. People moved on.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7252012/

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC544107/


Circumstantial evidence - there's no direct evidence for HCoV-OC43. In fact, it's simply conjecture.

It would be instructive to compare the virulence factors encoded in SARS-CoV-2 vs the common cold coronas.

People can move on when Covid stops making people seriously ill and compromising our healthcare systems. That is not yet, indeed the way we are going it may be never. Even if Omicron turns out to be 'mild', the next variant may not be.


I think there's a possibility that omicron may mark the end of this pandemic. Anybody who refuses to get vaccinated will very likely get omicron within the next few weeks. So your immune system will either develop antibodies as a result of being vaccinated, or as a result of being infected. Well, there is the third option of dying from covid, but the current evidence seems to indicate that the risk of hospitalization or death from omicron is lower than from the original strain or delta.

To be sure, the descendants of the novel coronavirus that appeared in Wuhan in 2019 will float around the human population indefinitely. Omicron isn't the end of covid, but it could be the end of widespread hospitalizations and deaths. At least until the next crisis comes along.


So let's assume these people never get vaccinated. We wait forever?


You're still not understanding. We are not "waiting". Waiting implies that we are making some type of conscious choice to put things on hold. But there is no choice. We cannot simply choose to stop waiting. We cannot move on until people are vaccinated. We are blocked, not waiting.


Lol, you act like this is the first time in humanity's history that we've had a virus. Humanity continues despite it, and we will continue despite many people choosing not to get vaccinated.

Eventually people will move on. It's the human condition.


Humanity "continuing" or "moving on" naturally due to the passage of time (which will be quite a long time) is a completely different thing than humanity "choosing" to move on. Your original comments imply/ask that humanity collectively "chooses" to move on and stop letting covid affect us, but again, that is simply not possible. It's not something we choose.

Individuals can individually choose to pretend covid isn't a thing, but society as a whole cannot simply choose to suddenly restore our medical infrastructure, fix supply chains, grow the labor market, etc. Covid's effect on those things won't magically go away just because someone says "you know what, I'm tired of waiting on covid! I'm going to be normal now!"

Adjustments to these will happen over time and humanity may "continue", but when that happens is not a choice we make.


Most of those issues are due to choices like quarantine and lockdowns.

In the UK we have issues with driving tests because everything closed down "cus Covid". Except the virus had nothing to do with it and now most of the instructors have already had their mild cold anyway.

0.2% of the population dying, heavily weighted towards the elderly, does not break supply chains.


I mean, there is a third option that neither of you have presented. We could fix the emergency room and infectious disease ward situation with federal money, and then move on instead of using federal money to prolong lock downs. Addressing infrastructure isn't always popular, or the fastest, but it lasts longer and solves the problem of overcrowding.

Yes, it might mean another 9 months to a year of lockdown, but it would be there still when (not if) another disaster occurs. Now, people will get angry at subsidizing private hospitals... but I could go on about how emergency care should be publicly funded anyway. But I digress.


> Yes, it might mean another 9 months to a year of lockdown

Oh goodness no. There’s a supply limit for medical professionals that will take half a decade to solve even with unlimited funding. And as this is a worldwide issue, not just an American one, you can’t just outbid the rest of us for migrant healthcare workers.


Is there a fix for this? Is it just a payment of loans issue, or is it a working conditions issue or is it a lack of interest issue? Or is it some combination of the three, or another, unknown thing (or a known, unmentioned thing)?

Granted, I have zero power. I ask because I'm curious and just want to know.


It takes a long time to train medical professionals, and the current headcount reflects training pipelines that started 5-ish years ago (the delay depends on the actual role, but that looks to me like the common one). If you want to have enough to cope with the extra demand from COVID, it will just take a long time.


See! We agree.


> I was at the hospital yesterday (for something unrelated to covid) and there are 0 rooms available. The hallways are still packed with unvaccinated people with covid laying in every open space they can find. Nurses and doctors are still worked past their breaking point.

Anecdote: I had to go to the ER in 2017 in San Francisco and my experience was exactly like this back then too. It was a ~4 hour wait in the ER waiting room, then another several hours on a bed in a bright loud busy hallway, then some tests, back to the hallway for a few hours, and then emergency inpatient surgery.


Unfortunately that can vary from hospital to hospital, it also depends on how you're triaged.

If you go to SF General, yes, you're in hell. It's an extremely poorly run city hospital that is where most GSW victims go; it's busy. If you go to UCSF or CPMC, you'll get world class care.


It was UCSF


Interesting, I guess it could be related to CV.

Whenever I've had to go there its very speedy.


It's not that. Like most niche AWS services, this was likely the "pet project" of a major AWS customer that wanted something like this as part of a major business agreement [0]. And then after building it for that customer, AWS also expanded offering it to anyone else who wants to use it.

0: https://blog.maxar.com/earth-intelligence/2018/sending-data-... (posted the same day Ground Station was launched in 2018)


I had no idea this was the case. I've often wondered why these niche services have marketing pages which appear to appeal to the masses.


>This a) is not new

This specific announcement is new, because this specific announcement is about Outposts in a new, smaller form factor that just went GA today.

>One use case is wanting to run the same cloud stack globally, but having a market where there is no local region and local law requires that data stay in country.

The use case mentioned in the announcement is more about running EC2 instances in small branch offices or retail stores where you 1) still want to run AWS, 2) need the servers to be in very close proximity, and 3) don't have the room or infrastructure for a full rack.


Why do you "need the servers to be in very close proximity" in a retail store? It's not high frequency trading


Poor connectivity to the internet?


Given that AWS Outposts behave badly without connection to mothership, well...


Why do you want it close then?

Purely local latency?


https://news.ycombinator.com/item?id=29398880 summarizes it best when it comes to my answer :)


For some customers, I imagine this will start off as an emotionally-driven request...until they see the price.


I think the closest you can get is using a VPS like DigitalOcean where you pay $X for a server and there's no autoscaling to worry about. But even with those, if you go over the bandwidth limit (although with DO the bandwidth limit is a lot higher) you would be charged more.

The unfortunate reality is that hobby developers that just want to pay $20/month aren't the target audience for GCP etc. They don't really care if you're using them less for your personal hobby projects. They target large enterprises, and those large enterprises would have very little use for something like "cap my spend at $20".

Even as an AWS employee, I sometimes use non-AWS hosting providers for my own projects. Even outside of the billing situation, AWS is often too complicated for my use cases. It's just not targeted at me and my hobby development projects.

disclaimer: am AWS employee but the above is my own opinion and not official position of the company, etc etc.


> and those large enterprises would have very little use for something like "cap my spend at $20".

For the enterprise as a whole, probably not. But it would still be useful to be able to create sandbox accounts for experimentation with a hard limit on spend. Or give developers their own cloud accounts to run development and testing infrastructure that they can control, without having to worry about them accidentally spending way too much.
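For what it's worth, the closest built-in mechanism today is AWS Budgets, which only alerts when a threshold is crossed rather than hard-stopping spend. A hedged sketch of the payload one might hand to boto3's budgets.create_budget (the account ID and email address are placeholders; the dict is built locally here rather than sent to AWS):

```python
import json

# Hypothetical sandbox account and notification target.
account_id = "111111111111"
alert_email = "dev@example.com"

# A $20/month cost budget. Note: Budgets *alerts* past a threshold;
# it does not cap or stop spending.
budget = {
    "BudgetName": "sandbox-cap",
    "BudgetLimit": {"Amount": "20", "Unit": "USD"},
    "TimeUnit": "MONTHLY",
    "BudgetType": "COST",
}

# Email the subscriber once actual spend exceeds 80% of the limit.
notifications = [
    {
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,
        },
        "Subscribers": [
            {"SubscriptionType": "EMAIL", "Address": alert_email},
        ],
    }
]

print(json.dumps(budget, indent=2))
```

This would then be passed as `Budget=budget, NotificationsWithSubscribers=notifications` to the create_budget call; the gap the parent is pointing at is exactly that there's no first-party equivalent that actually enforces the limit.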


+1 to this; also an AWS employee, also use other VPS providers for personal projects (Hetzner and Vultr). Don't need the full breadth of AWS services to tinker with Caddy and Tailscale, but more than that I simply follow the practice of "don't shit where you eat".


I think AWS drastically needs to create some type of "sandbox account" flag that severely locks down the services you can use and the amount you can scale up, exactly for reasons like you said.

However, I also think a big problem is that many people on the internet and especially people who try to sell AWS tutorials or learning courses push AWS as some toy that every developer should sign up for on a whim without understanding what they are doing. An AWS account is an industrial-grade tool, it's not a toy, and it should be treated as such. It's like renting a backhoe when you don't even know how to use a shovel yet, and then being surprised when you completely screw up your yard.

Sites like acloudguru that offer ephemeral sandbox AWS accounts are becoming more popular, and people new to AWS should really be steered towards those.


You're misreading it. Only some parts of the free tier "age out". Other parts of the free tier are free forever (it's really stupidly confusing). The things announced in this announcement are free forever.


Thanks, that wasn't clear to me. I'll edit my post.


It's stupidly confusing because the regional transfer of 1GB free per month isn't technically part of the "Free Tier" as advertised on this page [0], it's just part of the normal egress pricing model of the individual services [1] [2]. So I guess really this announcement post is misleading/confusing because the regional transfer increase is just a change in the normal pricing model, not the "Free Tier"... but really that's just semantics. AWS really needs to fix the "Free Tier" to make it less confusing.

0: https://aws.amazon.com/free/

1: https://aws.amazon.com/ec2/pricing/on-demand/

2: https://aws.amazon.com/s3/pricing/


Thanks, really appreciate the detail. Edited my post.

