Perhaps that's the trick -- if you make an ad look less like an ad, it's 0.1% more likely to convert.
And Google still hasn't found a business that beats its search advertising network, so I assume that people do click on ads, even if I'm not in that demographic.
You should really watch a child play a game that has interstitial ads. It's quite obvious that they often click on ads because they want to learn more (maybe not fully convert, but intentionally click).
> If you attempt to read the script in your browser first, and everything's great, then go pipe to bash, the server can send alternate content based on your user agent.
curl | less
Or copy the request "as curl" from the network tab of any modern browser.
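A way to sidestep the user-agent trick (the URL below is a placeholder, not a real installer): fetch once to a file, inspect that file, and then execute the exact bytes you inspected. Fetching twice - once in the browser, once piped to bash - is what lets the server vary the response.

```shell
# Hypothetical installer URL; fetch once, then inspect and run the SAME bytes.
url="https://example.com/install.sh"
curl -fsSL "$url" -o install.sh
less install.sh     # read what you're about to run; the server can't swap it now
sh install.sh
```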
We (actor.im) also moved from Google Cloud to our own servers + k8s. Shared persistent storage is a huge pain. We eventually stopped trying to do this, and will try again when PetSets reach beta and support updating their images.
We tried:
* GlusterFS - a cluster can be set up in seconds, really. Just launch DaemonSets and manually (though you can automate this) create a cluster. But we hit the fact that CoreOS can't mount GlusterFS shares at all. We tried to mount NFS instead and hit the next problem.
* NFS volumes from k8s don't work at all, mostly because the kubelet (the k8s agent) needs to run directly on the machine and not via rkt/Docker. Instead of updating all our nodes, we mounted the NFS share directly on the nodes.
* PostgreSQL on shared storage we haven't tried yet, but if pods get killed occasionally, resyncing the database can become a huge issue. We ended up running pods dedicated to specific nodes and doing manual master-slave configuration. We haven't tried other solutions yet, but they look questionable in a k8s cluster too.
* RabbitMQ - the biggest nightmare of them all. It needs stable DNS names for each node, and here we have a huge problem on the k8s side: we don't have static host names at all. The documentation says it's possible, but it isn't - you can open the kube-dns source and there's no code for it at all. For pods we only get IP-like domain names such as "10-0-0-10". We ended up not clustering RabbitMQ at all. This dataset isn't very important to us and can be easily lost.
* Consul - while working around the RabbitMQ problems in k8s and fighting DNS, we found that Consul's DNS API works much better than the built-in kube-dns. So we installed it, and then our cluster just went down when we killed some Consul pods, because their host names and IPs changed. And there's no straightforward way to pin IPs or hostnames (hostnames don't work at all, only the IP-like ones that change easily on pod deletion).
So the best approach is to have some fast(!) external storage and mount it over the network into your pods. This is much, much slower than direct access to the node's SSD, but it gives you flexibility.
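The "external storage mounted over the network" approach boils down to a PersistentVolume plus a claim. A minimal sketch, assuming a hypothetical NFS server at 10.0.0.5 exporting /exports (names, sizes and addresses are all illustrative):

```shell
# Sketch: expose an external NFS export to pods as a PV + PVC.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteMany"]
  nfs:
    server: 10.0.0.5        # hypothetical NFS server reachable from all nodes
    path: /exports
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-claim    # pods reference this claim in their volumes section
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 10Gi
EOF
```

Pods then mount `shared-nfs-claim` like any other volume; the NFS round-trip is where the slowdown relative to local SSD comes from.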
As long as you associate a separate service with each RabbitMQ pod, you can make it work without PetSets. (Setting the hostname inside the pod is trivial; just make sure it matches the service name.) Then you can create a "headless" service for clients to connect to, which matches all the pods.
If you set it up in HA mode, then in theory you don't need persistent volumes, although RabbitMQ is of course flaky for other reasons unrelated to Kubernetes -- I wouldn't run it if I didn't have existing apps that rely on it.
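The per-pod-service trick above can be sketched like this (all names and labels are hypothetical): one Service per RabbitMQ pod gives each node a stable DNS name, and one headless Service spans all pods for clients.

```shell
# One stable-named Service per pod, plus a headless Service for clients.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-0          # stable DNS name for one pod; repeat for rabbitmq-1, ...
spec:
  selector:
    app: rabbitmq
    instance: "0"           # label carried by exactly one pod
  ports:
  - port: 5672
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq            # headless: resolves to all pod IPs, no load-balancer VIP
spec:
  clusterIP: None
  selector:
    app: rabbitmq
  ports:
  - port: 5672
EOF
```

Each pod then sets its hostname to match its own service (rabbitmq-0, rabbitmq-1, ...), which is what RabbitMQ's node naming needs.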
RabbitMQ doesn't have a good clustering story. The clustering was added after the fact, and it shows. I've written about it on HN several times before, e.g. [1]. Also see Aphyr's Jepsen test of RabbitMQ [2], which demonstrates the problem a bit more rigorously.
With HA mode enabled, it will behave decently during a network partition (which can be caused by non-network-related things: high CPU, for example), but there is no way to safely recover without losing messages. (Note: The frame size issue I mention in that comment has been fixed in one of the latest versions.)
We have also encountered multiple bugs where RabbitMQ will get into a bad state that requires manual recovery. For example, it will suddenly lose all the queue bindings. Or queues will go missing. In several cases the RabbitMQ authors have given me a code snippet to run in the Erlang REPL to fix some internal state table; however, even if you know Erlang, you have to know the deep internals of RabbitMQ in order to think up such a code snippet. There have been a couple of completely unrecoverable incidents where I've simply ended up taking down RabbitMQ, deleting its Mnesia database, and starting up a new cluster again. Fortunately, we use RabbitMQ in a way that allows us to do that.
The bugs have been getting fewer over the years, but they're not altogether gone. It's a shame, since RabbitMQ should have been a model showcase for Erlang's tremendous support for distribution and fault-tolerance. You're lucky if you've not had any issues with it; personally, I would move away from RabbitMQ in a heartbeat, if we had the resources to rewrite a whole bunch of apps. We've started using NATS for some things where persistence isn't needed, and might look at Kafka for some other applications.
Yeah, this is tricky. I'm not a huge fan of all this network-attached storage - EBS, NFS and the rest - it doesn't make much sense for most DBs.
I prefer to have a DB which is "cluster-aware". In that case, you can tag your hosts/nodes and use node affinity to scale your DB service so that it matches the number of tagged nodes. Then you can just use a hostDir directory as your volume, so the data is stored directly on the tagged hosts/nodes. This ensures that dead DB pods are respawned on the same hosts and can pick up the hostDir [with all the data for the shard/replica] from the previous pod that died.
If your DB engine is cluster-aware, it should be able to reshard itself when you scale up or down.
I don't think it's possible for a DB to not be cluster-aware anyway - Since each DB has a different strategy for scaling up and down.
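The tagging/affinity/hostDir pattern described above can be sketched roughly like this (node name, labels, image and paths are all hypothetical; hostPath is the current name for what older k8s docs called hostDir):

```shell
# Tag a node, then pin a DB pod to it and store data on the node's own disk.
kubectl label node node-1 db-shard=shard-0
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: db-shard-0
spec:
  nodeSelector:
    db-shard: shard-0        # respawned pods always land on the tagged node
  containers:
  - name: db
    image: postgres:9.5
    volumeMounts:
    - name: data
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: data
    hostPath:
      path: /mnt/db-shard-0  # lives on the node, so it survives pod death
EOF
```

The trade-off: data locality and SSD speed, at the cost of losing the shard/replica if the node itself dies - which is why this only makes sense when the DB engine handles replication.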
I've found this to be a pain when setting up a container-based environment. The easiest approach is just to avoid it as much as possible - hopefully your cloud provider has some managed services (e.g. AWS RDS) that will handle most things for you.
Otherwise you need to separate your available container hosts into clusters - an Elasticsearch cluster, a Cassandra cluster, etc. - and treat those differently from the machines you deploy your other apps to. Which, to be fair, makes sense: they are different and need to be treated differently.
GlusterFS was pretty much the only damn thing that I could get to work in a reasonable amount of time (Tried NFS, Ceph, Gluster, Flocker).
Basically the solution (until we get PetSets at least) is to:
1. manually spin up two pods (gluster-centos image) without replication controllers, because if they go down we need them to stay down so we can manually fix the issue and bring them back up.
2. each pod should have a custom gcePersistentDisk mounted under /mnt/brick1
3. from within each pod, probe the other (gluster peer probe x.x.x.x)
4. each pod should preferably be deployed to its own node via nodeSelector
5. once the pods have been paired on gluster, create and start a volume
6. make a service that selects the two pods (via some label you need to put on both)
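The gluster-side commands behind steps 3 and 5 look roughly like this, run from inside one of the two pods (peer IP, volume name and brick paths are hypothetical, matching the /mnt/brick1 mount from step 2):

```shell
# Step 3: pair this pod with the other one.
gluster peer probe 10.244.1.5
gluster peer status          # should show 1 peer, State: Peer in Cluster (Connected)

# Step 5: create a 2-way replicated volume across both bricks, then start it.
gluster volume create gv0 replica 2 \
  10.244.0.4:/mnt/brick1/gv0 10.244.1.5:/mnt/brick1/gv0
gluster volume start gv0
```

Client pods can then mount gv0 via the service from step 6 as their glusterfs volume endpoint.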
They could be a shelf full of $2 novels. Owning a few hundred books doesn't cost very much (thanks, printing press), and only implies an interest in reading, not a high socioeconomic status.
Yeah, that's the case with my family. I was in the "lower middle class" when growing up, but we had a library completely filled with books. Most of them were cheap, used books, or were just collected over the years.
In context of the article as anecdata, I don't think the books themselves had anything to do with mine or my brother's successes. We never even read any of them. It was more of a reflection of what my parents were interested in, which obviously had effects in how we were raised.
Owning books is not the hard part, it's reading them. Reading a novel takes a lot of time and is a singularly self-centered leisure activity. People lower on the SES spectrum generally lack for leisure time. Heck, the image of a person lounging with a book has a very strong cultural link to (non-working class) status.
"leisure" was probably the wrong word, in context. The sort of 'leisure' (or pastime) of watching TV can be completely passive. One can 'watch' TV for 2-3 hours and literally not have to think about anything difficult. Reading a book is far more of an 'activity - requiring active thought processing - than TV watching. And for many people, reading a book is anything but pleasure.
My main argument is that reading books is the hard part. Leisure time is an example I gave of a contributing factor. There are other contributing factors, such as cultural ones, which I alluded to.
Beyond that, though, it's not merely the case of an individual having time to read. A parent who does not read to their child is raising a child to be less literate or even illiterate.
I'd very much appreciate it if both of you present some evidence, because I now realize my vague assumptions about this are not really based on anything...
The general phenomenon here is very well known; just searching around for this I found stuff like "UK time use data for the period 1961-2001 do indeed indicate a reversal of the previously negative leisure/status gradient". ( https://www.iser.essex.ac.uk/files/iser_working_papers/2005-... ). But that paper is obviously more concerned with the UK than the US.
Data on this is directly available from the American Time Use Survey, if you want to tabulate it yourself -- while they do collect various data on employment status, they don't publish a summary of time use by employment status.
It's weird to assume that all time spent outside a paying job is leisure time. People who have money can afford to spend a lot less time cooking, taking care of children and elderly parents, etc.
This makes me wonder if ebooks have the same effect --- because these days, owning a few thousand of them or more, though perhaps not completely legally, costs nothing more than the price of an Internet connection.
I wonder about this too. Having read from physical books most of my life, I found that switching to ebooks/PDFs (reading on a laptop or iPad) was not as fun or convenient as I thought it would be. It saves resources, but the feeling I get when holding a book and writing in it (jotting down thoughts or deriving equations) is irreplaceable by electronic files.
Because worse is better. https://www.jwz.org/doc/worse-is-better.html