You can also set up Cloud Scheduler to push to Pub/Sub, which can trigger your function. This is helpful if you don't want your function to be available via a public HTTPS endpoint.
Your intuition around concurrency is correct: Cloud Functions has a per-instance concurrency of 1, while Cloud Run lets you go significantly higher (the default is 80). This means our infrastructure will generally create more instances to absorb a request spike on Cloud Functions than on Cloud Run.
Creating an instance incurs a cold start. Part of that cold start is due to our infrastructure (generally this part is small), but the other part is in your control: any time your code spends initializing manifests as part of the cold start. For example, if you create a client that takes X seconds to initialize, your cold start will be at least X seconds.
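One common way to keep that cost under control is to pay it once per instance rather than once per request. A sketch of the lazy-initialization pattern in Go, with a hypothetical `expensiveClient` standing in for a real (slow-to-construct) client:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// expensiveClient stands in for something like a database or storage
// client that is slow to construct (hypothetical).
type expensiveClient struct{ ready bool }

var (
	client     *expensiveClient
	clientOnce sync.Once
)

// getClient initializes the client lazily, exactly once per instance.
// The first request on a new instance pays the cost (the cold start);
// every later request on the same instance reuses the client.
func getClient() *expensiveClient {
	clientOnce.Do(func() {
		time.Sleep(10 * time.Millisecond) // simulate slow construction
		client = &expensiveClient{ready: true}
	})
	return client
}

func main() {
	start := time.Now()
	getClient() // pays the init cost
	getClient() // free: same instance, same client
	fmt.Println("initialized in", time.Since(start))
}
```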
This has a few practical implications:
* writing code for Cloud Functions is generally more straightforward, since a per-instance concurrency of 1 sidesteps many problems around shared variables. You may also see some benefits in monitoring/metrics/logging, since you only need to think about one request at a time.
* you will likely see a higher incidence of cold starts on Cloud Functions during rapid scale-up, such as in response to a sudden traffic spike
* the impact of a given cold start will depend heavily on what you're doing in your container
* though I haven't validated this experimentally, I would expect that the magnitude of any given cold start (i.e., total latency contribution) would be roughly the same on Cloud Run as Cloud Functions IF you're running the same code
Ah, thanks for the details there! So, given that my Cloud Functions project is a Go app (and would be the exact same code between Functions and Run), if I were to run that in a very minimal container (something like Alpine), I could get roughly the same cold start time as Cloud Functions, but fewer of them since I can respond to multiple requests using the same instance.
I'll probably do some experimentation on my end as well to test. Any suggestion how long I should wait between tests to ensure a cold start on both Cloud Functions and Cloud Run?
I think you can force cold starts between your tests by re-deploying your function/container. You could (optionally) leave a small buffer (<1 minute) after the deployment to ensure that traffic has fully migrated.
I spoke too soon. The deploy itself brings up an instance, so your first request after a fresh deploy won't hit a cold start. To force a cold start, you could set concurrency to '1' and send two concurrent requests. You should see a log entry such as the following when a new instance starts up:
"This request caused a new container instance to be started and may thus take longer and use more CPU than a typical request."
Alternatively, you could set up an endpoint that shuts down the server (which will shut down the instance - not advised for production code).
As an aside, the "K_REVISION" environment variable is set to the current revision. You can log or return this value to test whether traffic has migrated to a new version (instead of waiting a minute).
Indeed. If you want to compile the binary yourself/don't want to upload source code, we have a serverless containers product currently available as an early preview (sign up at g.co/serverlesscontainers). This would allow you to compile your binary locally, write a simple Dockerfile and then build/deploy the resulting container.
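As a sketch, a multi-stage Dockerfile like the following would produce a small image containing just your compiled binary (base image tags are illustrative):

```dockerfile
# Build stage: compile a static Go binary.
FROM golang:1.12 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /app

# Run stage: minimal image, nothing but the binary.
FROM alpine:3.9
COPY --from=build /app /app
CMD ["/app"]
```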
This is great. It sounds like what people were hoping would come from Zeit Now, though in their latest major release they moved away from serverless docker deployments. Their community was quite upset at that decision I think. I will have to check this out!
> I'm curious how the implementation of Go has affected the ease of integrating other languages.
In some ways, it helps. You start to see similar issues arise and know what to look out for when you're launching a new runtime. In other ways, every language has its peculiarities and its own set of design considerations.
Launching/polishing a completely new language still takes a decent amount of work. Launching a new version of an existing language tends to be much quicker.
Would you consider a container that could be run like Cloud Functions? This container would run the binary that you create. It's not something we support today, but I'm curious whether this would meet your needs.
> running a container that could be run like Cloud Functions
Does this mean we actually run the container ourselves on our GKE cluster or in a VM? Or do you mean a "container" runtime for Cloud Functions? Both would be interesting, but we'd prefer the latter since there would be less to manage. I'd be interested to see the performance of it.
This is something Cloud Native Buildpacks (buildpacks.io) are intended to make easy. We hang out on buildpacks.slack.com, if you'd like to come pick our brains.
* We've been running a private early access preview/alpha since last August.
* This was our first compiled language on Cloud Functions, which came with its own set of challenges.
* It took us a while to find the right approach for supporting dependencies (both Go Modules and vendoring are supported). Unlike other providers, when you deploy your source code, Cloud Functions will automatically install dependencies listed in your go.mod file.
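So a deployment only needs a `go.mod` alongside the source; a minimal example (module path and versions are made up):

```
module example.com/myfunction

go 1.11

require cloud.google.com/go v0.36.0
```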
* Our testers gave us a ton of feedback that helped us polish the developer experience -- we identified and fixed many rough edges related to error messages during deployment/build and errors at runtime. Serverless products can be a bit opaque (since you can't just ssh into a machine), so getting this right is important.
I'd like to say that there was one big, interesting challenge that we had to tackle. But the reality is that we worked through many small details that only became apparent during testing. We wanted to address these so that we could offer a high quality experience for our public beta launch. We owe our alpha testers major credit for helping us find and solve issues.
Speaking of testers -- if you have feedback on the runtime, we'd love to hear from you in our Cloud Functions Beta Tester group [1].
Is there a reason dependency management happens like this? We currently deploy Go Lambda functions on AWS with the help of the Serverless Framework, and it just uploads the cross-compiled binary, not the whole project.
Why wouldn't the binary be the deployed unit in this case?
While it's a lot more work for them, and some teams may already have infrastructure set up for deploying their own binaries, I think having GCP handle the end-to-end flow is more user friendly in general. I can quickly write a Cloud Function from any computer without having to set up the toolchain. If you just want to run binaries, it sounds like Cloud Functions isn't what you're looking for.
Thanks for the response.
Would like to see more granular triggers (event types). Especially with Firebase.
Also, I would like to see more examples with Firestore.
These are not specific to Go.
* Firebase Authentication: a user account goes from disabled to enabled.
* Firebase Authentication: a new phone number is associated with an account.
* Firestore: field-level triggers. Right now we only have a document-level trigger.
Well, with an onUpdate trigger, my function is going to be triggered for every update on the document, even for fields that I don't care about.
You could argue that the extra invocations shouldn't cost much, but that's not the right way to do it. Correct me if I'm wrong.
My use-case is simple: I want my Cloud Function triggered only when the value of a particular field in a document changes. Thanks.
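As a workaround until field-level triggers exist, the function can bail out early unless the field of interest actually changed between the old and new snapshots. A simplified Go sketch (the event structs here are heavily flattened compared to the real Firestore payload):

```go
package main

import "fmt"

// FirestoreValue is a simplified stand-in for the document snapshot a
// Firestore trigger delivers; real payloads carry nested typed values.
type FirestoreValue struct {
	Fields map[string]string
}

// FirestoreEvent carries the document state before and after the write.
type FirestoreEvent struct {
	OldValue FirestoreValue
	Value    FirestoreValue
}

// fieldChanged implements the workaround: the function still fires on
// every document update, but the caller returns early unless the one
// field it cares about actually changed.
func fieldChanged(e FirestoreEvent, field string) bool {
	return e.OldValue.Fields[field] != e.Value.Fields[field]
}

func main() {
	e := FirestoreEvent{
		OldValue: FirestoreValue{Fields: map[string]string{"status": "draft"}},
		Value:    FirestoreValue{Fields: map[string]string{"status": "published"}},
	}
	if fieldChanged(e, "status") {
		fmt.Println("status changed: run the real work")
	}
}
```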
Does this effort take you closer to the (supposed) goal of running arbitrary X86/ARM Linux binaries as cloud functions, or is that a completely different direction?
At the sandboxing level, it's already possible (you can upload any binary and fork/exec it from one of the supported languages). That's made possible by gVisor, which is the underlying sandbox technology used in GAE and GCF.
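For example, from a Go function you could fork/exec a binary bundled with your deployment; `/bin/echo` stands in here for any binary you uploaded alongside your source:

```go
package main

import (
	"fmt"
	"os/exec"
)

// runBinary fork/execs an arbitrary binary and returns its combined
// stdout/stderr output.
func runBinary(path string, args ...string) (string, error) {
	out, err := exec.Command(path, args...).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := runBinary("/bin/echo", "hello from a bundled binary")
	if err != nil {
		fmt.Println("exec failed:", err)
		return
	}
	fmt.Print(out)
}
```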
As for making that an actual product, we're working on that, too. Sign up for the alpha here:
You're not wrong. Cloud Functions Product Manager here. We've been running a private early access preview since we first showed this at GopherCon in August. I'm really happy to see this first step but we have a lot of work ahead of us.
Do you have suggestions on whether to deploy simple APIs (purely functional -- no state or external resources, except perhaps usage tracking) as Cloud Functions or on App Engine? What would be the turning point at which it's best to start considering App Engine?
Launching these new unmodified Second Generation runtimes required us to develop new security and isolation technology (based on gVisor [1]). This allows us to securely run arbitrary code on shared data centers with isolation guarantees. This took us significantly longer than expected. The good news is, now that we have this new stack in place, we should be able to deliver runtime updates significantly faster.
That said, they don't quite go into the details of what type of isolation is missing from standard containers - I'm curious. It does seem like it would have been ideal for everyone if LXC would have had better isolation, rather than having to run a userspace kernel emulator thingy for each container, but c'est la vie!
I work on gVisor. The answer is that having a separate kernel is required to achieve a high degree of isolation and by definition Linux containers share a kernel with the host. A separate Linux kernel could work as well, but gVisor tries to achieve a different set of trade-offs.