Since the Pixel 2, Google has included its own chipset in its smartphones to improve photography and other features. However, Google seems to have ended that trend, confirming that it has dropped the Neural Core from its latest flagship smartphone, the Pixel 5.

The Pixel Neural Core is the successor to the Pixel Visual Core; both chips were designed by Google to improve photography. On the Pixel 4 in particular, Google also used the Neural Core to speed up face unlock, the Google Assistant, and other new features. However, Google's Pixel "a" series largely proved that the extra chip was unnecessary: the Pixel 3a and 4a are nearly identical to the Pixel 3 and Pixel 4 in shooting quality and processing speed. Even so, it is surprising to see the Pixel 5's specification sheet missing a homemade chipset.

Does this mean the Pixel Neural Core is gone forever? Maybe not. Google has said it will bring Soli back in future hardware, so the Neural Core may well return eventually. It is worth noting that both new Pixels still include the Titan M chip for security.

There is a reason the Pixel 3, Google's previous generation, was hailed as the best camera phone. Google processes every shot with software algorithms from its HDR+ package, and when those are combined with a bit of machine learning, some of the most spectacular photos can come from a phone with standard hardware.

To help run these algorithms, Google used a dedicated processor called the Pixel Visual Core, a chip we first saw on the Pixel 2 in 2017. With the Pixel 4, Google replaced the Pixel Visual Core with something called the Pixel Neural Core.

According to user Chenjie Luo, the Pixel 1's HDR+ ran on the Hexagon HVX accelerator, and because HVX was not designed for image processing, it was very slow. To preserve the feel of fast continuous shooting, the Google Camera app kept an image cache: every frame from the camera sensor was stored in memory and queued for HDR processing, so users of that generation never noticed the HDR processing time. This approach had a drawback, though: third-party photo-sharing apps such as Instagram need what-you-see-is-what-you-get capture, and users cannot wait several seconds for HDR to finish before sharing. As a result, on the first generation, HDR+ was exclusive to the Google Camera app and unavailable to third-party apps. For that reason, and with enthusiasm from the top, the Pixel Visual Core was born to accelerate HDR+, letting third-party apps get fully processed HDR+ photos almost instantly. The measured results were impressive, especially in high-contrast light: foreground and background both stay clear, and faces no longer come out dark. The chip's advantage is that its eight IPU cores are programmable rather than fixed-function ASICs, which opens up many other applications. From the start, the chip was meant to be an all-round image processing chip, with HDR+ as its first showcase.

The original Pixel Visual Core was designed to speed up the algorithms behind Google's HDR+ image processing, which makes photos taken on the Pixel 2 and Pixel 3 look great. It uses machine learning and so-called computational photography to intelligently fill in the less-than-perfect parts of a photo. And it works really well: it lets phones with off-the-shelf camera sensors take better pictures.
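The core trick behind HDR+ described above, capture a burst, merge the frames, then brighten the result, can be sketched in a few lines. Below is a minimal, hypothetical Python illustration; the real pipeline does robust tile-based alignment and merging on raw sensor data, and every name and value here is invented for the example.

```python
import numpy as np

def merge_burst(frames):
    """Toy stand-in for HDR+ burst merging.

    Real HDR+ aligns tiles of raw frames and merges them robustly;
    here we simply average a stack of already-aligned frames, which
    still shows the key idea: averaging N noisy exposures cuts the
    noise standard deviation by roughly sqrt(N).
    """
    stack = np.stack([f.astype(np.float32) for f in frames])
    return stack.mean(axis=0)

def tone_map(image, gamma=2.2):
    """Simple gamma curve to lift the shadows after merging."""
    norm = np.clip(image / 255.0, 0.0, 1.0)
    return (norm ** (1.0 / gamma) * 255.0).astype(np.uint8)

# Simulate a burst of eight noisy, underexposed shots of the same scene.
rng = np.random.default_rng(0)
scene = np.full((480, 640), 40.0)  # dark scene, mean pixel value 40
burst = [scene + rng.normal(0.0, 10.0, scene.shape) for _ in range(8)]

merged = merge_burst(burst)  # noise drops by roughly sqrt(8)
result = tone_map(merged)    # shadows lifted toward visibility
print(result.mean(), result.std())
```

This toy version also hints at why a dedicated accelerator matters: merging and tone-mapping a full-resolution burst is heavy, repetitive arithmetic, exactly the kind of work a programmable multi-core IPU can parallelize.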
If the Pixel Neural Core delivers, the Pixel 4 will once again compete for the top spot in smartphone photography.

It appears that Google used a chip modeled on neural-network techniques to improve the image processing of its Pixel phones in 2019. "Neural network" is a term you may have heard more than once or twice, but the concept is rarely explained; it can sound like some sort of Google-grade computer magic. It is not, and the idea behind a neural network is actually fairly easy to wrap your mind around.

A neural network is an algorithm modeled on the human brain. What it "imitates" is not the brain's appearance or even its inner workings, but the way the brain processes information. A neural network takes in sensory data through so-called machine perception (data collected and passed along by external sensors) and recognizes it.

That data consists of numbers called vectors. All external data from the "real" world, including images, sounds, and text, is converted into vectors, then labeled and sorted into datasets. You can think of a neural network as an extra layer on top of the things stored on a computer or phone, a layer holding data about what each thing means: what it looks like, what it sounds like, what it says, and when it happened. Once that catalog is established, new data can be classified and compared against it (a toy sketch of this appears at the end of the article).

A real example makes all this clearer. NVIDIA's processors are very good at running neural networks. The company spent a long time scanning photos of cats into a network, and once that was done, a cluster of computers running the network could identify a cat in any picture that contained one. Small cats, big cats, white cats, calico cats, even mountain lions and tigers all registered as cats, mainly because the network had so much data about what a cat is.

With that example in mind, it is not hard to see why Google would want this capability inside the phone. A neural core with access to a large body of data could recognize what the camera lens sees and then decide what to do. Perhaps the data about what it sees and what it expects is passed on to the image processing algorithm. Or the same data could feed the Assistant so it can identify a sweater or an apple. Or written text might be translated faster and more accurately than Google manages now.

It is easy to imagine Google designing a small chip that can handle both neural networks and image processing in a phone, and it is easy to understand why. We are not sure exactly what the Pixel Neural Core is or what it might be used for, but we will certainly learn more about the phone and its actual details once it is officially released.
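As promised above, here is a minimal sketch of the "vectorize, catalog, compare" flow, written in Python as a single-neuron network, the smallest possible neural network. The two features, their values, and the cat/not-cat labels are all invented for illustration; a real network like the one in the NVIDIA example learns millions of weights from raw pixels, but the overall flow is the same.

```python
import numpy as np

# Each animal is reduced to a hand-made 2-D feature vector
# (hypothetical features: whisker-likeness, ear-pointiness).
# Labels: 1 = cat, 0 = not a cat.
X = np.array([[0.9, 0.8], [0.8, 0.9], [0.7, 0.7],   # cats
              [0.1, 0.2], [0.2, 0.1], [0.3, 0.3]])  # not cats
y = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])

rng = np.random.default_rng(42)
w = rng.normal(size=2)  # the network's two learnable weights
b = 0.0                 # and its bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Build the catalog": train on the labeled vectors with gradient
# descent, nudging the weights until predictions match the labels.
for _ in range(2000):
    p = sigmoid(X @ w + b)               # predicted probability of "cat"
    w -= 0.5 * (X.T @ (p - y)) / len(y)  # gradient step on the weights
    b -= 0.5 * np.mean(p - y)            # gradient step on the bias

# Classify new, unseen data by comparing it against what was learned.
new_animal = np.array([0.85, 0.75])  # whiskery and pointy-eared
print(f"P(cat) = {sigmoid(new_animal @ w + b):.3f}")  # close to 1.0
```

Swap the invented features for real pixel data and the single neuron for many stacked layers, and you have, in outline, the kind of workload a chip like the Pixel Neural Core is built to accelerate.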