Google today officially launched its AI Platform Prediction service, which lets developers prepare, build, run, and share machine learning models in the cloud. Built on the company's Kubernetes Engine back end, it aims to deliver an architecture with high reliability, flexibility, and low latency overhead. IDC predicts that global spending on cognitive and artificial intelligence systems will reach $77.6 billion by 2022, several times last year's $24 billion.

Gartner takes a similar view: a survey of thousands of large enterprises around the world found that AI adoption has grown 270% over the past four years, and 37% in the past year alone.

With the official release of AI Platform Prediction, Google adds another managed AI service to its portfolio, one it hopes will help it stay ahead of competitors such as Amazon, Microsoft, and IBM in this field.

On the surface, AI Platform Prediction makes it easier to train and deploy models built with frameworks such as XGBoost and scikit-learn, thanks in part to automatic selection of compatible cloud hardware (such as AI accelerator chips).

On supported virtual machines, the platform can also chart performance data over time, including GPU, CPU, RAM, and network utilization, along with model replica metrics. On the security side, AI Platform Prediction ships with user-defined parameters and model deployment tools that restrict access to resources and services within a defined network perimeter.

In addition, the platform provides explanation and visualization tools for model predictions, which help clarify how a given prediction was produced. Live models can be continuously evaluated against the ground-truth labels of the requests sent to them, creating opportunities to improve performance through retraining.
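To make the scikit-learn workflow concrete, here is a minimal sketch of the export step such a service typically requires before a model can be hosted: a trained estimator is serialized to a single artifact file, which is then uploaded to cloud storage. The iris dataset, the random-forest model, and the `model.joblib` filename are illustrative assumptions, not details from the announcement.

```python
# Hypothetical sketch: train a small scikit-learn model and serialize it
# as a single joblib artifact, the kind of file a managed prediction
# service would load to serve online requests.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
import joblib

# Toy training data, standing in for a real workload.
X, y = load_iris(return_X_y=True)

model = RandomForestClassifier(n_estimators=10, random_state=0)
model.fit(X, y)

# Serialize the fitted model; in practice this file would next be
# copied to a cloud storage bucket and registered with the service.
joblib.dump(model, "model.joblib")

# Sanity check: the reloaded artifact must serve predictions.
reloaded = joblib.load("model.joblib")
print(reloaded.predict(X[:3]))
```

The key point the paragraph above makes is that the developer's job ends at producing this artifact; hardware selection and serving infrastructure are handled by the platform.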
All of AI Platform Prediction's features are reportedly available in a fully managed, cluster-free environment with dedicated enterprise support. If customers send too much traffic, Google also provides quota management of compute resources to prevent model overload.

Interestingly, Google's Waze is already using the service to support its carpooling features. Philippe Adjiman, a senior data scientist at Waze, says the team can now deploy a model to a production environment in just a few weeks.