Early this morning, Google officially launched the new Google Pixel 6 and Pixel 6 Pro smartphones. The Pixel 6 series is equipped with Google's self-developed Tensor chip, which improves CPU performance by 80% over the Pixel 5 and delivers a major boost in AI performance as well.
At its product launch on October 19, Apple also officially announced new Apple Silicon, the M1 Pro and M1 Max, the latter billed as the most powerful chip Apple has ever built. Google, meanwhile, calls its self-developed Tensor chip a "milestone for machine learning." Designed by Google Research, the chip lets the company translate advances in artificial intelligence and machine learning directly into consumer products. So what is behind Google's self-developed Tensor chip?
Built by Google Research
A few years ago, Google's research team brought together hardware, software, and machine learning (ML) expertise to build the best mobile ML computer, ultimately realizing Google's vision for the Pixel smartphone.
With Tensor, Google is unlocking new experiences, including Motion Mode, more accurate face detection, and features like real-time text translation. Tensor helps Google push the limits of the smartphone, turning it from a general-purpose piece of hardware into a genuinely intelligent device.
Google Tensor is an advanced system on a chip (SoC) that has everything you would expect from a mobile SoC, and more.
The core experience domains of the new phones (voice, language, images, and video) are heterogeneous in nature, which means they require multiple resources across the chip. Google Tensor is carefully designed to provide the right balance of computing performance, efficiency, and security. With Android 12, Google set out to build an operating system that lays the foundation for a future where hardware and software work together.
Tensor is what's called a system on a chip: essentially a single large processor that combines the CPU (the brain that powers the device) with other functions.
As rumored, the Tensor chip uses an unusual combination of CPU cores: two high-power Cortex-X1 cores, two mid-range cores (reportedly the older Cortex-A76), and four low-power efficiency cores, alongside custom TPUs (Tensor Processing Units) for artificial intelligence. Graphics are handled by a 20-core GPU, in addition to a hub that powers always-on ambient experiences, a private computing core, and a new Titan M2 chip for security. There is also a dedicated image processing core behind the Pixel's signature photography.
Samsung assisted Google in developing the Tensor chip. It is built on a 5nm process and integrates a Tensor Processing Unit for machine learning workloads, an ISP image processing unit, a Tensor security core, and low-power cores. In addition, the processor has 8MB of system cache and 4MB of L3 cache.
The Tensor Security Core is a CPU-based subsystem that is independent of the main application processor, allowing sensitive tasks and controls to run in an isolated, secure environment. Alongside Tensor there is a co-processor, the Titan M2, Google's next-generation dedicated security chip. Because Google created Tensor itself, it is extending security support for Pixel owners to five years, allowing people to use their devices for longer. Phil Carmack, vice president and general manager of Google's chip business, said Google designed the CPU to provide the best responsiveness and power efficiency for intensive use cases such as computational photography.
On paper, Tensor has twice as many Cortex-X1 performance cores as the Snapdragon 888 or Exynos 2100, the most powerful Arm designs, both of which use one Cortex-X1, three Cortex-A78 cores, and four Cortex-A55 cores. But Google also swapped out two higher-end cores for older mid-range ones, a choice that could improve battery life and sustained performance, but could also make the overall device weaker. Anyone who gets to test the Pixel 6 and Tensor will know the answer very quickly.
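The reported core layouts above can be summarized side by side. The sketch below only restates the configurations described in this article and public reports; the counts are claims from those reports, not measured data.

```python
# Reported CPU core layouts (illustrative only; counts are from
# public reports, not measurements).
TENSOR = {
    "Cortex-X1 (high performance)": 2,
    "Cortex-A76 (mid-range, reported)": 2,
    "efficiency cores": 4,
}

SNAPDRAGON_888 = {
    "Cortex-X1 (high performance)": 1,
    "Cortex-A78 (mid-range)": 3,
    "Cortex-A55 (efficiency)": 4,
}

def total_cores(layout):
    """Sum the core counts in a layout."""
    return sum(layout.values())

for name, layout in [("Google Tensor", TENSOR),
                     ("Snapdragon 888", SNAPDRAGON_888)]:
    print(f"{name}: {total_cores(layout)} cores total")
    for core, count in layout.items():
        print(f"  {count}x {core}")
```

Both are 8-core designs; the difference is entirely in how the performance tiers are split, which is why the X1 count matters more than the total.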
Although the performance improvement appears to fall short of Apple's self-developed chips, compared to its predecessors the Pixel 6 series is still the fastest Pixel lineup yet: CPU performance is up 80% and GPU performance is up 370%. Notably, though, the chip benchmarks slower than the top processors in other premium Android phones.
Functions implemented by Tensor
Still Photography and Motion Mode
One of the areas where the Pixel 6 improves is photography. Because the chip’s subsystems work better together, Tensor can process photography tasks faster, enabling smarter camera functions.
A new feature called Face Unblur keeps the face of a moving subject in focus while the rest of the body retains motion blur. For example, you can take a photo of a child jumping on a trampoline with a sharp, clear face and limbs blurred in motion.
Another photography feature, Motion Mode, adds motion blur to still images. Google says it brings a professional look and feel to city shots, nights out, and even nature scenes.
Google has always wanted Pixel video to match the quality of Pixel photos, and Tensor makes that possible: by embedding key processing on-chip, Pixel features can be delivered directly and more efficiently.
Google developed an algorithm called HDRnet. By embedding some of HDRnet's functions directly into the Tensor image signal processor, processing is greatly accelerated while power consumption drops significantly. HDRnet allows the Pixel 6 to capture video at still-image quality, and it works in all video modes, even 4K at 60 frames per second, delivering more accurate and vivid colors.
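To make the idea concrete, here is a heavily simplified conceptual sketch, not Google's implementation: HDRnet-style processing computes cheap enhancement decisions on a coarse, low-resolution view of the frame, then upsamples and applies them at full resolution. The toy gain rule and grid size below are invented for illustration; real HDRnet predicts local affine color transforms with a neural network.

```python
import numpy as np

def hdrnet_style_enhance(image, grid=8):
    """Conceptual illustration only (not Google's code): estimate a
    per-region brightness gain on a coarse grid, then upsample the
    gain map and apply it at full resolution. The cheap low-res
    analysis + full-res apply structure is the key idea that makes
    this kind of processing fast enough for 4K60 video."""
    h, w = image.shape
    gains = np.empty((grid, grid))
    for i in range(grid):
        for j in range(grid):
            cell = image[i * h // grid:(i + 1) * h // grid,
                         j * w // grid:(j + 1) * w // grid]
            # Toy rule: brighten dark cells toward mid-grey, cap the gain.
            gains[i, j] = np.clip(0.5 / max(cell.mean(), 1e-3), 1.0, 4.0)
    # Nearest-neighbour upsample of the coarse gain map (real pipelines
    # use smooth, edge-aware upsampling, e.g. slicing a bilateral grid).
    rows = np.arange(h) * grid // h
    cols = np.arange(w) * grid // w
    full_gains = gains[rows][:, cols]
    return np.clip(image * full_gains, 0.0, 1.0)
```

Because the expensive analysis runs only on the coarse grid, the per-pixel work at full resolution is a simple multiply, which is exactly the kind of operation an ISP can do in fixed-function hardware at low power.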
Speech and Transcription
The Tensor chip brings the "most accurate automatic speech recognition (ASR)" Google has ever offered to Google Assistant's quick queries and audio tasks. ASR can even run in long-lived applications, such as Recorder or Live Caption, without consuming much power.
Improved speech recognition will help features such as Voice Access, letting users control their phones by speaking instead of typing commands, and will also let the Google Assistant answer incoming calls, talk to callers, and provide a transcript of the call.
The new real-time translation feature on Tensor-equipped Pixel 6 phones helps users communicate across languages. It works inside any Android chat app, so users can translate directly in the conversation with no more copy-pasting text into Google Translate. Tensor can also run on-device speech and translation models to translate media such as videos in real time.
Compared with a Pixel 4, the Neural Machine Translation (NMT) model consumes less than half the power when running translation on a Tensor-equipped phone. Without Tensor, these kinds of tasks would quickly drain the phone's battery.
For Google, Tensor and the Pixel's new AI capabilities aren't the end; they're just the beginning.