
Does this introduce more latency compared to running a LAMP stack on EC2, or does it target larger databases? My database is about 300 megabytes.


You probably aren't the target audience if your database fits in RAM.


You can see notable changes in performance if your database gets fragmented, and tweaking my.cnf feels like a massive distraction from my actual work. Having somebody handle this and improve performance for long queries would be wonderful.


With a database that small, performance problems shouldn't ever be noticeable. Are you running the DB on EBS?

1) If you are having trouble tweaking my.cnf, give the Perl script `mysqltuner` a go. It is pretty good.

2) The biggest improvement would be moving the storage to SSDs. I personally prefer Linode to DO, but both are cheap and good.

My Linode servers can do full, non-blocking backups at about 100MB per second, so depending on your outage-recovery requirements, you may be able to get away with a simple cron job.
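A minimal sketch of such a cron job, assuming `mysqldump` is available, credentials live in `~/.my.cnf`, and a hypothetical database named `appdb`:

```crontab
# Nightly logical backup at 03:30; --single-transaction keeps it
# non-blocking for InnoDB tables. (% must be escaped in crontab.)
30 3 * * * mysqldump --single-transaction appdb | gzip > /var/backups/appdb-$(date +\%F).sql.gz
```

At 300MB the dump finishes in seconds, so even a daily schedule costs almost nothing.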


If you manage to have "long queries" with a 300MB dataset, you're doing something seriously wrong, unless your definition of "long" is very different from mine. Or you're running on hardware from the '90s.

Run "explain" on all your queries and review your indexes to begin with.
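For instance (the `orders` table and `customer_id` column here are made-up names, just to illustrate the pattern):

```sql
-- Check whether a slow lookup is using an index
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- If the plan shows type: ALL (a full table scan), add one:
CREATE INDEX idx_orders_customer ON orders (customer_id);
```

On a dataset this size, turning a table scan into an index lookup is usually the entire fix.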

That dataset is so small that there should also be no need to continuously tweak the config file. Ten minutes with a performance guide, plus ensuring you have enough RAM to load everything into cache, should be enough to get your configuration into "good enough" shape. There's just nothing reasonable you'll be doing with a 300MB dataset that should need anything beyond getting the very basics of the config right to perform decently.
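Something like the following my.cnf fragment covers "the very basics" for InnoDB; the values are assumptions sized for a ~300MB dataset, not recommendations:

```ini
# Illustrative my.cnf fragment (sizes are assumptions for a ~300MB dataset)
[mysqld]
# Large enough to hold the entire dataset in RAM with headroom
innodb_buffer_pool_size = 512M
# Redo log sized so steady writes don't force constant checkpointing
innodb_log_file_size    = 128M
```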


You can just use RDS m1.smalls then. We have databases that are 20-40GB running off smalls with relatively high throughput (more than I expected when building it, for sure).

A $200/mo minimum price tag is extremely high for 300MB of data, or even 10GB of data.

We're looking at migrating a product of ours to Aurora and it has an operational dataset on the order of terabytes.

(inb4 NoSQL: why not NoSQL? It needs transactions that don't suck (I'm looking at you, Cassandra). We use C* for other large datasets; I do wish we could just use that.)


It must introduce a millisecond or two of latency, because cross-AZ quorums aren't free. That will likely be trumped by whatever differences in implementation quality exist between MySQL and Aurora.


I have to agree with what others said. This is most likely targeted at the several-GB-and-up crowd.




