David Immel/Android Authority
It's finally official: Google's Pixel 6 will use the company's first custom SoC. Although Google has dabbled in custom hardware before with the Pixel Visual Core and Titan M security add-ons, this is the first time it has specified the entire inner workings of the chip itself (even though it has licensed many of the SoC's building blocks). The tensor processing unit (TPU), however, is built in-house, and Google places it at the heart of the Tensor SoC.
As we expected, the Google Tensor processor focuses on enhanced imaging and machine learning (ML) capabilities rather than game-changing raw performance. Even so, it leaves us excited, but with some reservations.
See also: Realistic expectations for the Google Pixel 6 SoC
Why the Google Tensor SoC is important…
First and foremost, Tensor is a chip custom-designed to handle Google's most important priorities efficiently. That means it should deliver faster and more capable image processing, speech processing, and other machine-learning-driven features, at the very least faster than the previous-generation Pixel 5.
With a powerful in-house TPU at the heart of the chip, Google is touting on-device real-time caption translation, text-to-speech that works without an internet connection, simultaneous keyboard and voice input, and superior camera capabilities. We can imagine Google Lens and other ML-powered features improving as well. These are mostly refinements of things Google already does on existing hardware, but we hope to see some genuinely new features too.
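To give a concrete sense of what "on-device" means here, below is a minimal sketch using Google's existing ML Kit on-device translation API, which runs locally once a language model has been downloaded. Whether and how the Tensor TPU accelerates this particular library is our assumption, not something Google has detailed; the snippet simply illustrates the class of offline workload Google is describing.

```kotlin
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions

// Build an English-to-Spanish translator that runs entirely on-device
// once its model has been downloaded.
val translator = Translation.getClient(
    TranslatorOptions.Builder()
        .setSourceLanguage(TranslateLanguage.ENGLISH)
        .setTargetLanguage(TranslateLanguage.SPANISH)
        .build()
)

// Download the language model once, then translate locally; no network
// round trip is needed for the translation itself.
translator.downloadModelIfNeeded()
    .addOnSuccessListener {
        translator.translate("Live captions, translated on the phone")
            .addOnSuccessListener { result -> println(result) }
            .addOnFailureListener { e -> e.printStackTrace() }
    }
    .addOnFailureListener { e -> e.printStackTrace() }
```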
Google Tensor takes what we already liked about the Pixel 5 and should make it even better.
Artificial intelligence and machine learning are central to what Google does, and arguably it does them better than anyone else; that's why they're the focus of Google's chip. As we've noted with many recently released SoCs, raw performance is no longer the most important aspect of a mobile SoC. Heterogeneous computing and workload efficiency are just as important, if not more so, for enabling powerful new software features and product differentiation.
By stepping outside the Qualcomm ecosystem and picking its own components, Google gains tighter control over how and where it spends precious silicon area to realize its smartphone vision. Qualcomm has to cater to a broad range of partners, while Google clearly has more specific ideas. If Google believes the Pixel 6 experience benefits more from enhanced AI than from Facebook opening 5% faster than last year, it's hard to argue. Much like Apple with its custom silicon, Google is turning to bespoke hardware to help build bespoke experiences.
In addition, by moving to a bespoke or co-developed processor, Google may be able to deliver updates faster and for longer than ever before. Partners depend on Qualcomm's support roadmap to roll out long-term updates; Samsung manages three years of OS and four years of security updates on Qualcomm silicon, and Google promises the same for the Pixel 5 and earlier. It will be interesting to see whether Google goes further now that it's closer to the chip design process.
…And why it might not be

If you're hoping for disruptive performance, I think you'll be disappointed here. Google hasn't shared any benchmarks or details about the inner workings of its CPU, GPU, or other components. However, without an architecture license of its own, Google is almost certainly licensing off-the-shelf Arm parts, such as the Cortex-A78. We also still don't know anything about the phone's 5G capabilities. In fact, Google won't even say who manufactures its chipset, although rumors point to Samsung. Google hardware chief Rick Osterloh says Tensor will be "very competitive" on CPU and GPU performance. Make of that what you will.
Nor is Google necessarily doing anything completely groundbreaking in its imaging and machine learning pipeline; after all, Google's development cycle doesn't operate in a vacuum. Cutting-edge hardware has moved on a long way since the Pixel 4 series, Google's previous high-end phones.
So far, Google's demonstrations have shown its advanced image processing being applied across multiple cameras and to video. That's possible because Google's machine learning smarts are now integrated into the image signal processing (ISP) pipeline rather than sitting further away from it.
However, this isn't a new idea even for 2020 smartphones, let alone late 2021. In fact, the Qualcomm Snapdragon 855, which powered the Google Pixel 4 back in 2019, introduced computer vision elements into the ISP chain. Since then, the Snapdragon 865 and 888 have built on those capabilities, allowing partners to pull data from multiple cameras simultaneously and apply effects such as HDR and real-time bokeh to 4K 60fps video. Google isn't the first to come up with these ideas, although that doesn't mean it can't implement them better.
See also: Qualcomm explains how the Snapdragon 888 is changing the camera game
Similarly, other SoC manufacturers have their own low-power sensor hubs for always-on speech recognition, ambient display, and other sensing features. Security enclaves like the Titan M aren't new either; in fact, they're essential for today's biometric-heavy devices, and you'll find comparable features in mobile SoCs from Apple, Huawei, Qualcomm, and Samsung, albeit with differences in exactly what they do.
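For developers, this class of security hardware is already exposed through Android's standard keystore APIs. Here is a minimal sketch, assuming Android 9+ and a device with a discrete security chip such as the Titan M, that asks for a signing key to be generated inside StrongBox-backed hardware; the alias and helper name are illustrative, not from any Google documentation for the Pixel 6.

```kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyPair
import java.security.KeyPairGenerator

// Generate an EC signing key that lives inside the discrete security chip
// (StrongBox); throws StrongBoxUnavailableException on unsupported devices.
fun generateStrongBoxKey(alias: String = "demo_signing_key"): KeyPair =
    KeyPairGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore"
    ).apply {
        initialize(
            KeyGenParameterSpec.Builder(
                alias,
                KeyProperties.PURPOSE_SIGN or KeyProperties.PURPOSE_VERIFY
            )
                .setDigests(KeyProperties.DIGEST_SHA256)
                .setIsStrongBoxBacked(true) // route key storage to the security chip
                .build()
        )
    }.generateKeyPair()
```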
Google's Tensor SoC: A break from the status quo?

Robert Triggs/Android Authority
Google CEO Sundar Pichai noted that the Tensor chip has been four years in the making, which is an interesting time frame. Google started the project when mobile AI and ML capabilities were still relatively new. The company has long been at the forefront of the ML space and has often seemed frustrated with the limitations of partner chips, as its Pixel Visual Core and Neural Core experiments showed.
The Tensor SoC is Google staking out its own vision, not just for machine learning silicon but for how hardware design shapes product differentiation and software features. It will be interesting to see whether all of this comes together in a Pixel 6 that can claim some impressive industry firsts.
However, Qualcomm and the rest haven't been sitting idle for those four years. Machine learning, computational imaging, and heterogeneous compute are central to every major mobile SoC vendor's lineup, and not just in their high-end products. It remains to be seen whether Google is simply reinventing the wheel for its own sake, or whether its TPU technology really puts the Tensor SoC out in front.