Last week the tech world gathered at the Moscone Center in San Francisco for Google Cloud Next, the annual event focussing on Google's latest cloud solutions and the future of AI and machine learning. A lot of what was announced has an impact on our work, so let's take a look at the exciting news!
Google Cloud versus the rest
This was the first edition of Google Cloud Next with Thomas Kurian as CEO of Google Cloud, having taken over from Diane Greene at the end of last year. He made it clear that Google will focus more on enterprise - while still embracing startups - and on open source to strengthen its multi-cloud infrastructure offering.
We’re committed to substantially expanding the scale of our go-to-market teams in order to help more customers use our technology
As we can see from the graph below, Google Cloud is not the biggest public cloud provider, trailing some distance behind AWS and Azure. Its cutting-edge technologies like Kubernetes and TensorFlow, however, alongside its commitment to open source and competitive pricing, definitely swayed us into becoming a Google Cloud Partner and leveraging Google Cloud's services for the digital products we create for our clients. One thing we, and many others, want to mention is the high demand for more support from Google to enable enterprise adoption of Google Cloud services.
One of the biggest announcements during the keynote was the release of a new open platform called Anthos, formerly known as Google Cloud Services Platform. Anthos aims to deliver on the promise to truly write once, run anywhere by allowing businesses to run and manage their applications on existing on-prem hardware investments or in any public cloud. It will not only work with Google Cloud Platform, but even with several other cloud providers, including some of Google’s biggest competitors: Amazon’s AWS and Microsoft’s Azure.
Many businesses have been eager to have the ability to migrate their applications between cloud providers, particularly as concerns about lock-in have served as an obstacle to further cloud adoption. By leveraging the Google-developed but open source Kubernetes container technology, along with a number of other enterprise-ready open source tools, Anthos offers a surprisingly flexible way to shift workloads from either AWS or Azure to GCP, and it even lets companies move in the opposite direction if they so choose. With Anthos, Google will offer a single managed service that lets you manage and deploy workloads across clouds, all without having to worry about the different environments (on- and off-prem) and APIs.
That’s a big deal and one that clearly separates Google’s approach from its competitors.
What is hybrid cloud again?
The term hybrid cloud describes a setup in which common or interconnected services are deployed across multiple computing environments: at least one based in the public cloud and at least one on-premises.
Alongside the Anthos platform, which can manage your hybrid cloud, Google also announced Apigee Hybrid API Management (beta), which gives you a single, full-featured API management solution across all your environments.
Another new product is Cloud Run, which brings the serverless concept to containers. It's a managed compute platform that runs stateless containers invocable via HTTP requests. Serverless means not having to worry about infrastructure, with optimal resource utilization: you only pay while your code is running, which keeps costs transparent. No more scaling management needed either, as the underlying serverless infrastructure automatically scales your application and (micro)services.
Interestingly, Cloud Run gives you the flexibility to run your services in Google Cloud or on Google Kubernetes Engine, which means more freedom to move workloads across different environments and platforms:
- Fully managed on Google Cloud Platform
- On Google Kubernetes Engine
- Anywhere Knative runs
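To give an idea of what this looks like in practice: the contract for a Cloud Run container is simply a process that serves HTTP on the port given in the PORT environment variable. A minimal sketch in Python (the greeting text and handler are our own illustration, not an official Google sample):

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def greeting(name: str) -> str:
    """Build the response body; kept as a pure function so it is easy to test."""
    return f"Hello, {name}!"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = greeting("Cloud Run").encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__" and os.environ.get("PORT"):
    # Cloud Run injects the port to listen on via the PORT env var; the guard
    # also keeps the server from starting when the module is merely imported.
    HTTPServer(("", int(os.environ["PORT"])), Handler).serve_forever()
```

Packaged into a container image, the exact same code can run fully managed, on GKE, or anywhere Knative runs, with no code changes.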
New dev tools for our favourite IDEs
Google definitely didn't forget about developers: with Cloud Code it brings excellent tooling to help us write, deploy, and debug cloud-native applications quickly and easily. Extensions for IDEs such as VS Code (beta) and IntelliJ (alpha) will allow us to rapidly iterate, debug, and deploy code to Kubernetes.
In three years, Google Cloud has opened 15 new regions and 45 zones across 13 countries, and it continues to expand its global footprint to support its growing customer base around the world.
Google Cloud announced two new additions to their global infrastructure, with new regions in:
- Seoul, South Korea
- Salt Lake City, Utah, USA
Osaka, Japan will also go live in the coming weeks, with Jakarta, Indonesia following early next year. This will bring the total number of global regions to 23 in 2020.
The open-source database market is big, and growing fast:
More than 70% of new applications developed by corporate users will run on an open source database management system
In that light Google Cloud announced strategic partnerships with leading open source-centric companies in the area of data management and analytics, including Confluent, Elastic, MongoDB and Redis Labs.
With partnerships like these Google can bring a very compelling story to the table:
- Fully managed services running in the cloud, with best efforts made to optimize performance and latency between the service and application.
- A single user interface to manage apps, which includes the ability to provision and manage the service from the Google Cloud Console.
- Unified billing, so you get one invoice from Google Cloud that includes the partner’s service.
- Google Cloud support for the majority of these partners, so you can manage and log support tickets in a single window and not have to deal with different providers.
AI as a platform
With AI Platform, Google continues to lower the barrier for machine learning engineers to deploy AI applications into production. Most of the underlying components like TensorFlow, TensorFlow Extended, Kubeflow, AutoML, BigQueryML and Cloud ML Engine have been around for some time, but AI Platform brings it all together nicely.
With the arrival of the AI Platform, the Deep Learning VM Images that came at a premium cost a couple of months ago have been price-aligned with the rest of the VM images.
No AI platform would be complete without data labeling functionality. Google announced its own data labeling service, and it seems pretty competitive: you simply upload your data to a storage bucket and create a labeling job via the API. It doesn't come cheap though: $35 to classify 1,000 images, for example, or $870 to label 1,000 objects in images pixel by pixel.
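For budgeting, those unit prices translate into a quick back-of-the-envelope estimate (prices as quoted above and subject to change; the task names in the lookup table are our own):

```python
# Rough cost estimate for the data labeling service, using the
# per-1,000-unit prices mentioned above (our own hypothetical task names).
PRICE_PER_1000_USD = {
    "image_classification": 35.0,   # $ per 1,000 classified images
    "pixel_labeling": 870.0,        # $ per 1,000 pixel-by-pixel labeled objects
}

def labeling_cost(task: str, units: int) -> float:
    """Estimated cost in USD for labeling `units` items of the given task."""
    return PRICE_PER_1000_USD[task] * units / 1000.0
```

So a modest dataset of 50,000 images to classify would already run to $1,750 before any model has been trained.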
BigQueryML and AutoML also got a couple of nice extensions. Aside from linear and logistic regression models, you can now run k-means clustering on huge BigQuery tables. If you've ever tried to cluster 10 million examples with 100 features each, you'll appreciate just how great that scalability is.
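The appeal is that the clustering happens inside BigQuery itself, via a single CREATE MODEL statement. A sketch of building such a statement from Python (the dataset, table, and column names are hypothetical; actually submitting it would require the google-cloud-bigquery client and credentials):

```python
def kmeans_ddl(model: str, table: str, num_clusters: int) -> str:
    """Build a BigQuery ML statement that clusters every row of `table`.

    BigQuery ML treats all selected columns as features, so we exclude the
    (hypothetical) id column from the SELECT.
    """
    return (
        f"CREATE OR REPLACE MODEL `{model}` "
        f"OPTIONS(model_type='kmeans', num_clusters={num_clusters}) AS "
        f"SELECT * EXCEPT(user_id) FROM `{table}`"
    )

# Submitting the statement (requires credentials, hence commented out):
# from google.cloud import bigquery
# client = bigquery.Client()
# client.query(kmeans_ddl("mydataset.user_segments",
#                         "mydataset.user_features", 8)).result()
```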
Those BigQuery tables can also be funneled into the new AutoML Tables, which allows you to create classification and regression models. As data scientists, we always have random forests or xgboost up our sleeves to hit the ground running, but the real power of AutoML Tables lies in direct deployment and not having to worry about the underlying algorithms. That being said, AutoML Tables also reports metrics like RMSE, precision, and recall, which is pretty nice indeed.
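For reference, those reported metrics are the standard ones; a minimal sketch of how they are computed (labels encoded as 0/1 for the classification metrics):

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error, the headline metric for regression models."""
    return math.sqrt(
        sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
    )

def precision(y_true, y_pred):
    """Of the examples predicted positive (1), the fraction that really are."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fp)

def recall(y_true, y_pred):
    """Of the truly positive examples, the fraction the model found."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn)
```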
Once again we are pretty stoked about the announcements made at Google Cloud Next and are particularly eager to get some hands-on time with Anthos, Cloud Run and the new AI Platform features.