Category Archives: Electronic Products

ArcSoft's newly upgraded VisDrive 6.0 pairs visual AI with vehicle chips to unlock a new mode of future driving

As intelligence deepens, the automobile is increasingly becoming a technology-driven product. Driven jointly by Moore's Law and Koomey's Law, automotive chips keep innovating: computing power keeps breaking through while cost and power consumption keep falling. What kind of iteration will this bring to the auto industry as a whole? Addressing this trend at the recent China Automotive Semiconductor Conference, Dr. Chen Feng, deputy general manager of ArcSoft's Vision Vehicle Business Group, demonstrated the newly upgraded ArcSoft one-stop vehicle vision solution, VisDrive 6.0, and detailed, from the perspective of computer-vision technology empowering cars, how visual AI cooperates with on-board chips to create a safe and intelligent driving experience.


From mobile phones to cars, “algorithm + chip” is still the key variable to ignite intelligence

Regarding the direction of automobile intelligence, Dr. Chen Feng believes it will follow the same path smartphones took: interaction change, architecture upgrade, and ecosystem evolution.

As an epoch-making intelligent product, the smartphone has completed a remarkable turn from communication device to universal "scene tool". According to Dr. Chen Feng, a key step in this change was the camera, which let phones innovate around visual interaction and thereby subvert their traditional functions. He explained: "Starting with the Sharp J-SH04, the world's first mobile phone equipped with a rear camera, phones moved from single camera to dual camera to intelligent multi-camera, with more and more lenses applied. Along the way came video calls, beauty selfies, background blur, face unlock, AR ranging and more, and phones' intelligent capabilities became ever more powerful."

Behind the mass production and deployment of so many complex visual AI algorithms on smartphones lies the evolution of mobile phone chips from the early single-core architecture to today's multi-processor architectures combining DSP, CPU, GPU and NPU. Rapidly increasing computing power and memory capacity have made the smartphone a powerful terminal computing platform.

Following the same path, similar changes can be seen taking place in smart cars. Dr. Chen Feng pointed out that the automobile's traditional role as a means of travel is gradually being broken, transforming it into a "mobile third space" integrating entertainment, office and consumption. At the same time, the on-board camera is already one of the standard sensors on mainstream smart cars. Five camera types (front view, surround view, rear view, side view and in-cabin) provide drivers with all-round safety and intelligence, enabling functions such as lane-departure warning, panoramic parking, blind-spot detection, parking assistance, driver status detection and pedestrian collision warning.

Dr. Chen Feng, Deputy General Manager of ArcSoft Vision Vehicle Business Group, delivered a keynote speech

Compared with a phone's single window, automotive applications are more complicated: most run in multi-function, multi-task concurrent mode and demand higher safety. As vehicles become more electronic, especially with the addition of automated driving and active safety, the number of ECUs per vehicle is rising rapidly, averaging 50-70 or more. So many ECUs not only complicate wiring-harness design but also lead to tangled control logic. This is forcing vehicle manufacturers to convert the automotive E/E architecture from distributed to centralized, integrating large numbers of ECUs under a domain controller for unified management and scheduling. The centralized E/E architecture separates sensing from processing, which favors the standardization and generalization of underlying information resources, further reduces the coupling between software and hardware, and lays a solid foundation for automobiles to enter the era in which software and hardware jointly define the vehicle.

This, in turn, puts higher demands on the hardware's performance, power consumption and stability, and powerful on-board chips have emerged to meet them. Precisely because of these hardware changes, Dr. Chen believes the requirements on in-vehicle software have also risen sharply. Since the hardware grants each application only limited resources, an algorithm processing sensory information must be fast and accurate under low-resource conditions while making maximum use of the hardware at low power. The long honing of ArcSoft's algorithms across various general-purpose smartphone chips fits these requirements exactly.

ArcSoft VisDrive 6.0 is newly upgraded for all scenarios of smart cockpit

As a core algorithm provider for the global smartphone imaging industry, ArcSoft has made remarkable achievements through the smartphone's iterations. Among the top five mobile phone brands by global shipments, with the exception of Apple, which uses entirely self-developed algorithms, the main mid-to-high-end models of the other brands ship with ArcSoft's computational photography and computer-vision AI technology.

Dr. Chen Feng said that it is precisely because of this deep accumulation in smartphones that ArcSoft has maintained long-term cooperation with top companies in the industry chain such as Qualcomm, MediaTek and Arm. ArcSoft also has extensive hands-on experience optimizing its algorithms in depth for the characteristics of different hardware platforms.

So, extending from smartphones to smart cars, what can ArcSoft bring to the industry? At the conference, Dr. Chen Feng presented the newly upgraded ArcSoft one-stop vehicle vision solution, VisDrive 6.0. It spans the entire smart-cockpit scene, covering core-algorithm-based software solutions for DMS (driver monitoring system), ADAS (advanced driver assistance system), BSD (blind spot detection), OMS (occupant monitoring system), Interact (visual interaction), Authenticate (biometric authentication), AVM (3D surround-view monitoring) and AR HUD (AR head-up display).

Among them, the driver monitoring system of VisDrive 6.0, in addition to commonly used driver behavior recognition, state recognition, emotion recognition, and seat belt detection, also has functions such as fatigue behavior reminders, steering wheel hands-off behavior reminders, and mobile phone behavior reminders. In the visual interaction system, VisDrive 6.0 has a wealth of functions such as gesture recognition, gaze tracking, lip motion wake-up, and gaze wake-up.

In the occupant monitoring system, it offers personalized functions such as passenger-count detection, expression recognition, left-behind-occupant reminders, child-crying detection, in-cabin photo beautification and video selfies, and a 3D virtual driving assistant. For biometric authentication, beyond basic face and fingerprint recognition, VisDrive 6.0 provides smart services such as keyless vehicle start and secure, anti-spoofing, convenient payment of parking fees. In the 3D surround-view monitoring system, VisDrive 6.0 is equipped with surround-view stitching, reversing assistance, 3D viewing angles and transparent-vehicle-body technologies.

At present, VisDrive 6.0 has been deeply optimized for and adapted to chip platforms from Qualcomm, Texas Instruments (TI), MediaTek (MTK), NXP, Renesas, Ambarella and Huawei. Beyond long-term, stable cooperation with chip manufacturers, ArcSoft has also built close relationships with major sensor and camera-module makers such as Sony, Samsung Semiconductor, GalaxyCore, Sunny Optical and ON Semiconductor.

Right now, the process of cars becoming intelligent has only just begun. Whether it is sensing hardware such as chips, camera modules and lidar, or visual algorithm software, every link is crucial; only deep integration of software and hardware can keep accelerating the industry. ArcSoft believes that, working with many excellent manufacturers, it will bring users a safer, more comfortable and smarter mode of "future driving".

Source: Gasgoo


Google signs 10-year cloud contract with CME and invests $1 billion in CME

Beijing time on the evening of November 4th, according to reports, Google and the Chicago Mercantile Exchange Group (CME Group) announced today that the two sides have reached a 10-year cooperation agreement. Under the agreement, Google will help CME move all of its operations to the cloud.

As part of the agreement, Google also made a $1 billion investment in CME’s convertible preferred stock (non-voting), the companies said in a statement.

“This partnership will enable CME to bring new products and services to market faster,” CME Chief Executive Officer (CEO) Terry Duffy said in a statement.

CME also said it would move its technical infrastructure to Google Cloud. Starting next year, data and clearing services will be the first to move; eventually, CME will move all of its operations to the cloud.

The companies also said in the statement that they would explore other ways to collaborate to innovate for CME Group’s customers.


PCB Layout Tips for Low-Side Gate Drivers with Overcurrent Protection

Infineon's 1ED44173/5/6 are new low-side gate driver ICs with integrated overcurrent protection (OCP), fault status output and enable functions. This highly integrated driver is well suited to digitally controlled power factor correction (PFC) applications with a boost topology and ground reference.

Author: Wchu1, Translator: Chen Ziying


In PFC applications, shunts are used to sample the power-switch current or the DC bus current. The location of the shunt varies with the chosen control method. For example, in Example 1 of Figure 1, a shunt is placed between the IGBT emitter and system ground to sample the power-switch current when the controller implements peak-current control, or current-balance control in an interleaved PFC application.

In contrast, Figure 1, Example 2 shows a shunt between system ground and the negative side of the DC bus to sense the DC bus current. This configuration is often used for average current mode control, where the digital controller can calculate the input power based on the average current and DC bus voltage feedback.

Figure 1: Two different types of low-side gate drivers with OCP: the 1ED44176N01F (Example 1) has positive current sensing to suit the first shunt position, while the 1ED44173/5N01B (Example 2) has negative current sensing to suit the second shunt position

Application in Home Air Conditioning

In today's residential air conditioning (RAC) applications with digitally controlled PFC, the controller uses the power feedback signal to achieve adaptive DC bus voltage control. This reduces losses at light load by using a lower DC bus voltage, switching back to the full DC bus voltage when full load is required.

Due to the different shunt configurations, Infineon has designed two different types of low-side gate drivers with OCP: 1ED44176N01F (Fig. 1, example 1), and 1ED44173N01B and 1ED44175N01B (Fig. 1, example 2). The former has positive current sensing to satisfy the shunt configuration of Example 1, while the latter two have negative current sensing to satisfy the shunt configuration of Example 2. The 1ED44175N01B is aimed at driving IGBTs, while the 1ED44173N01B is aimed at driving MOSFETs.

Figure 2: Differences in functionality of 1ED44173/5/6

In high-current, high-speed switching circuits like PFCs, PCB layout is always a challenge. A good PCB layout ensures device operating conditions and design stability. Improper components or layout can cause switching instability, excessive voltage ringing, or circuit latch-up.

Best PCB Layout Tips for Gate Driver ICs

1. When using an RC filter circuit between the microcontroller and the gate driver, keep the wiring at the input as short as possible (less than 2-3 cm).
2. The EN/FLT output is an open-drain output, so it needs to be pulled up to a 5V or 3.3V logic power supply with a pull-up resistor. When designing, place the RC filter close to the gate driver.
3. To prevent false triggering in overcurrent protection, the RC filter wiring between OCP and ground should be as short as possible.
4. Install each capacitor as close to the gate driver pins as possible.
5. Connect the ground wire of the microcontroller directly to the COM pin (1ED44173/5N01B).
6. Connect the gate output return to COM and connect the microcontroller’s ground pin to the VSS logic ground pin (1ED44176N01F); this prevents noise on the logic input pins from coupling into the driver's output return.

Let’s take a look at what the right layout can do. The example below shows the circuit (Fig. 3) and layout implementation (Fig. 4) of a 1ED44175N01B and a TO-247 IGBT such as the IKW40N65WR5. With this design, the loop area and inductance of the PCB can be reduced.

Figure 3: Circuit diagram of 1ED44175N01B

Figure 4: PCB layout of the above circuit

How to reduce the PCB trace loop area to reduce parasitic inductance

・Place the 1ED44175N01B close to the IGBT gate and emitter
・Place the decoupling capacitor (C3) directly on the VCC and COM pins
・Place the filter capacitors (C1 and C4) and the fault-clear-time programming capacitor (C2) close to the pins
・Place a ground plane directly above or below the 1ED44175N01B to reduce trace inductance

Additionally, the ground plane connected to COM helps act as a radiated noise shield and provides a thermal path for the device to dissipate power. Following these layout tips can eliminate common noise coupling problems and save you development time.


Wireless Design Challenges and Solutions Based on M2M Market

Growth in machine-to-machine (M2M) connections is now far outpacing new connections between people, and soon far more machines than people will be connected via cellular networks, as the GSM Association predicts in Figure 1. These machines include security systems, meters, robots, vending stations, asset trackers and emergency call systems. The variety keeps growing, with millions of machines exchanging data in silent conversations 24 hours a day, 7 days a week, without human intervention.

Authors: Thomas Nigg, Stefano Moioli

When choosing a wireless modem, there are a number of features to consider. We cover these here.


Figure 1: Growth of M2M communications.

At the same time, connecting to the internet has become cheaper and easier, and even mass-produced computing devices can collect and process ever-increasing amounts of data. The more than 4 billion addresses of IP version 4 (IPv4), most of them already allocated, were a potential bottleneck for large-scale M2M connectivity; the introduction of IP version 6 (IPv6) removes it. IPv6 supports 2^128 addresses, enough for every grain of sand on the planet to have its own. So it is perhaps no surprise that the fourth generation of mobile networks (4G), LTE, is designed to provide services such as data, voice and video over IPv6.
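The scale behind the "every grain of sand" claim is simple arithmetic; a minimal Python check, purely for illustration:

```python
# IPv4 offers 2^32 addresses; IPv6 offers 2^128.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(ipv4_addresses)                    # 4294967296 -- "more than 4 billion"
print(ipv6_addresses // ipv4_addresses)  # 2^96: every IPv4 address could map
                                         # to about 7.9e28 IPv6 addresses
```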

To join the M2M networking revolution, simply embed a small, economical (wireless) modem in the machine. Machines also need a GPS or GNSS (Global Navigation Satellite System) receiver where position, speed or navigation information needs to be established. Both components with antennas can easily fit in devices smaller than a cell phone. GNSS is the standard generic term for satellite navigation systems that provide autonomous geospatial positioning with global coverage. It includes GPS (USA), GLONASS (Russia), Galileo (Europe), Beidou (China) and other regional systems.

When considering how to equip a machine with communication capabilities, the first thing to consider is the needs of the application. Product longevity, geographic network coverage, or future-proofing considerations for future wireless network upgrades are all important considerations. Here are some product features to consider when choosing a wireless modem.

Battery life matters

The time between battery charges or replacements is critical to the success of some products. For example, a tracking device mounted on a container may be in transit for days if shipped by air or road, or weeks if shipped by sea. Battery life must be sufficient to support these timescales.

Phones typically run for two to three days on a charge. As a result, consumer expectations for the longevity of health and fitness equipment will be similar. Active and standby current consumption and power saving features are important when comparing modem and GNSS receiver specifications in these applications. The latter may include automatic wake-up capabilities and smart power-saving modes, such as the ability to autonomously log data without waking up the host processor. Ideally, components should only wake up when needed.

Mobility requires multi-standard compliance

The global mobility of people and goods is increasing, so it’s important to consider where modems need to work today and where they may need to work in the future. GSM is supported by four major frequency bands worldwide, UMTS is supported by six frequency bands and LTE is supported by more than 30 frequency bands. Electricity meters are usually static, while resource management systems may need to work in all regions of the world and should include quad- or dual-band GSM modems (depending on location) or six-band UMTS modems.

Certified modems to expedite product approvals

Any cellular network equipment, whether for GSM, UMTS or LTE, requires regulatory, industry and operator certification. If the modem embedded in the device is certified, it significantly simplifies and speeds up the certification process.

What you need today may be different tomorrow

While GSM/GPRS networks are fully capable of handling the small amounts of data transmitted in remote metering applications, GSM frequency bands have been considered for reallocation for 3G and 4G services. To save future-proofing costs, it is a good idea to design with future technical standards in mind. Today, that means designing with UMTS/HSPA or LTE modems, or at least using future-proof hardware to simplify upgrades.

Nested Design Simplifies Technology Upgrades

Cellular M2M technology is constantly evolving, so when designing new devices with cellular connectivity, it is important that they can be upgraded to newer technologies to optimize design costs. Here, footprint compatibility across the entire range of cellular modems (GSM, UMTS, CDMA and LTE) helps: one PCB layout can serve all end-product variants, ensuring easy migration between cellular technologies and module generations, aided also by AT-command compatibility across the different modules.

Bandwidth requirements are rarely reduced

Tracking an application’s bandwidth needs only goes in one direction — upward — so it’s important to consider the lifetime cost of a connection. Choose a modem based on what it’s likely to do in three to five years, or at least one that’s easy to upgrade.

Special needs for cars

In in-vehicle systems, temperature, humidity and vibration can be extreme. AEC-Q100 qualified equipment manufactured in an ISO/TS 16949 certified facility will ensure reliable, long-life operation. The qualification testing of each component should comply with ISO 16750, Road vehicles – Environmental conditions and testing for electrical and electronic equipment. This applies to on-board and industrial equipment operating in harsh environments, such as ships or rail cars.

Emergency call systems are growing in popularity

More and more cars are equipped with systems that automatically report accidents or aid in recovery after theft. The United States, Europe, Russia, and Brazil already have plans to support such systems, and government mandates will increasingly require them. For these applications, an “in-band modem” is usually required. It sends data over a modem voice channel in the same way a fax machine sends data over a telephone line. This is necessary because carriers prioritize voice over data in mobile networks. In the event of an accident, the voice channel becomes a critical link in transmitting data to emergency services. Check if the suggested solution supports in-band modems on 2G and 3G networks.

Assisted localization in urban environments

In urban environments where satellites may be blocked by tall buildings, a lost position fix can be recovered by querying a remote A-GPS server. This simple process uses the wireless modem to download a few bytes of satellite orbit data from the internet. With this assistance data, a receiver needs only a few seconds of satellite visibility to calculate a position, rather than the full 30 seconds required to receive an entire 1,500-bit satellite frame.
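The 30-second figure follows from the GPS broadcast rate: the navigation message is transmitted at 50 bits per second, so one complete 1,500-bit frame takes 30 seconds over the air. A quick sketch of that arithmetic:

```python
# GPS navigation message: 5 subframes of 300 bits, broadcast at 50 bps.
frame_bits = 5 * 300
bit_rate_bps = 50

seconds_per_frame = frame_bits / bit_rate_bps
print(seconds_per_frame)  # 30.0 -- the unassisted wait that A-GPS avoids
```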

Check whether the GPS receiver vendor offers an assistance service with guaranteed availability that covers the geographic area of interest. Client software should support the service transparently, and both the positioning receiver and the wireless modem should have an interface for it. It is also increasingly important that the service covers both GPS and GLONASS.

Dead reckoning supports inferring positioning data from sensors

Satellite signals can be supplemented with dead reckoning support to infer position and velocity from data from vehicle sensors, as shown in Figure 4. This method helps determine the location of the vehicle in tunnels or other locations where satellite reception is temporarily unavailable. It is useful in vehicle-based telematics, including insurance tracking systems, which accurately record position, heading, and speed.

Figure 4: Dead reckoning infers position data from vehicle sensors, including gyroscopes and wheel-tick sensors.

Check that the positioning receiver is automotive grade, supports dead reckoning, and can connect to the vehicle's CAN bus to obtain data. Also ensure it can interface directly with vehicle sensors such as gyroscopes and odometers, and that the supplier provides an evaluation environment to accelerate product development.
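As a rough illustration of what "inferring position from vehicle sensors" means, here is a minimal 2D dead-reckoning sketch. The function name and the sample values are hypothetical, not any vendor's API:

```python
import math

def dead_reckon(x, y, heading, samples):
    """Integrate (distance, yaw_rate, dt) samples from a wheel-tick
    odometer and a gyroscope into a 2D position estimate."""
    for distance, yaw_rate, dt in samples:
        heading += yaw_rate * dt           # gyro gives turn rate (rad/s)
        x += distance * math.cos(heading)  # odometer gives distance (m)
        y += distance * math.sin(heading)
    return x, y, heading

# Illustrative track: drive straight ~10 m, then arc gently left.
track = [(1.0, 0.0, 0.1)] * 10 + [(1.0, 0.05, 0.1)] * 10
x, y, h = dead_reckon(0.0, 0.0, 0.0, track)
print(round(x, 1), round(y, 2))
```

A real implementation would fuse these estimates with satellite fixes whenever reception returns, correcting the drift that pure integration accumulates.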

Indoor positioning is possible by combining satellite and cellular data

Combining a satellite receiver with a wireless modem can overcome satellite signals being blocked by walls or other obstacles when an approximate indoor location must be determined. This hybrid solution exploits the visibility of 2G or 3G cells, since GSM or UMTS signals easily penetrate walls. Knowing the boundaries of the visible mobile cells, the approximate location can be calculated from where those cells overlap. The approach requires a wireless connection to an external service, similar to assisted positioning. Check whether the positioning-receiver and wireless-modem supplier can provide such a solution, verified and available online, and confirm the accuracy of the system.
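One simple way to turn "where the cells overlap" into a coordinate is a signal-strength-weighted centroid of the visible base stations; the coordinates and weights below are purely illustrative (real systems query a cell-location database):

```python
def approximate_position(cells):
    """Rough indoor fix: centroid of the base stations of all visible
    cells, weighted by received signal strength."""
    total_w = sum(w for _, _, w in cells)
    x = sum(cx * w for cx, _, w in cells) / total_w
    y = sum(cy * w for _, cy, w in cells) / total_w
    return x, y

# Three visible cells: (x, y) base-station position in metres, plus weight.
visible = [(0.0, 0.0, 1.0), (400.0, 0.0, 2.0), (200.0, 300.0, 1.0)]
print(approximate_position(visible))  # a point inside the overlap region
```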

Positioning System Compatibility

Until recently, GPS was the only system designers had to consider. Now there are GLONASS in Russia, QZSS in Japan, Beidou in China and Galileo in Europe. Compatibility with GPS and at least one other satellite system will be needed to improve reliability and accuracy, and to satisfy government mandates for compatibility with local systems. Parallel operation of two systems at once may be part of the specification; an example is Russia's new ERA-GLONASS vehicle emergency call system, which must be GLONASS-compatible. Look for GPS/GNSS receivers with multi-GNSS support offering parallel GPS/GLONASS or GPS/Beidou reception.

These are just some of the considerations when adding wireless connectivity to M2M products. Remember that many new standards such as wireless and positioning are transitioning. It is important to consider how the product will operate during its life cycle and which markets it will serve. Also, consider whether it is important to include design support for next-generation performance and network coverage, or to choose designs that allow for easy product upgrades in the field.


Ericsson: Global 5G subscribers will increase to 580 million this year, to exceed 3.5 billion in 2026

According to foreign media reports, since South Korea's three major operators launched commercial 5G services on April 3, 2019, more than two years of global 5G commercialization have passed. More and more countries have joined the 5G commercial ranks, the number of commercial 5G networks keeps growing, and 5G subscribers keep increasing.

Telecom equipment supplier Ericsson said on Tuesday that global 5G subscribers will increase to 580 million by the end of this year, up 360 million from 220 million at the end of last year and a 163.6 percent increase from a year earlier.
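Ericsson's figures are internally consistent; a quick check of the quoted increase and growth rate:

```python
# 5G subscriptions: Ericsson's end-of-last-year figure and this year's forecast.
end_last_year = 220_000_000
end_this_year = 580_000_000

increase = end_this_year - end_last_year
growth_pct = increase / end_last_year * 100

print(increase // 1_000_000)  # 360 (million new subscriptions)
print(round(growth_pct, 1))   # 163.6 (percent year-on-year growth)
```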

Ericsson also predicts that in 2026, five years later, there will be more than 3.5 billion 5G users worldwide, and the proportion of global mobile communication users will increase significantly.

By region, Ericsson stated that Northeast Asia currently has the world's highest 5G penetration rate, followed by North America. However, Ericsson expects that by 2026, 5G users will account for 84% of North America's mobile subscribers, making it the region with the highest 5G penetration rate in the world.

Ericsson also said there were 6 billion smartphone users worldwide at the end of last year, and they expect that to rise to 7.7 billion by 2026.

The agency pointed out that the next 3-5 years will still be a period in which the 5G construction dividend is released. In the short term, with major domestic and foreign equipment manufacturers completing 5G base-station bidding tests, the third phase of domestic 5G bidding and centralized procurement is about to start. With the fourth largest operator joining the construction of a 700MHz 5G wireless network this year, the number of new 5G base stations will exceed 800,000, an increase of 38% from 580,000 in 2020, which will effectively boost the performance of 5G equipment manufacturers, optical module and optical device makers, and others.

Among the relevant listed companies, Zhongtong Guomai is a professional construction enterprise holding a first-class general-contracting qualification for communications construction and a professional domestic communications-technology service provider; its customers include the major domestic communications operators and equipment manufacturers. T&C's main business is the R&D, manufacturing and sale of optical communication devices and their integrated functional modules, with many years of technical accumulation in high-density optical connectors, optical splitters, planar optical waveguide chips and optical-fiber sensing.

So far, more than 160 service providers worldwide have launched 5G, and more than 300 brands of 5G phones can use 5G technology. Earlier, China's Ministry of Industry and Information Technology issued the "Dual Gigabit Network Coordinated Development Action Plan (2021-2023)", which calls for spending three years to basically build "dual gigabit" network infrastructure fully covering urban areas and qualified townships, giving fixed and mobile networks general "gigabit to the home" capability. Under the plan, gigabit optical-network and 5G user growth will accelerate and the user experience will keep improving; high-bandwidth applications such as augmented reality/virtual reality (AR/VR) and ultra-high-definition video will be further integrated into production and life, with typical gigabit industry applications serving as demonstrations; and core R&D and industrial competitiveness in gigabit optical networks and 5G will remain internationally advanced, with the modernization of the industry and supply chains steadily improving.

According to earlier news from the Ministry of Industry and Information Technology, more than 480,000 5G base stations have been built nationwide, and the number of connected 5G terminals has exceeded 100 million. According to incomplete statistics, as of the end of July 2020, 99 network operators in 46 countries/regions worldwide had begun providing 5G services.


The best in 5G chips: Qualcomm Snapdragon 865 vs. Huawei Kirin 990, which is the strongest?

Qualcomm Snapdragon 865

In terms of performance and power consumption, the Snapdragon 865 is built on a 7nm process and adopts the latest semi-custom version of the Cortex-A77 architecture. The CPU retains a large/medium/small core design: the large core runs at 2.84GHz, the medium cores at 2.42GHz and the small cores at 1.8GHz, while the GPU is upgraded to the Adreno 650.

The Qualcomm Snapdragon 865 mobile platform also supports the latest LPDDR5 memory at frequencies up to 2750MHz, so it is very well configured. A notable feature is that the Snapdragon 865 is the first mobile platform to support GPU driver updates through an app store. It supports bands up to 6GHz as well as mmWave. Qualcomm's new-generation 5G platform also supports dynamic spectrum sharing (DSS), allowing operators to deploy 5G directly on 4G spectrum and greatly reducing network deployment costs, along with non-standalone (NSA) and standalone (SA) networking modes, global 5G roaming and multiple SIM cards. Beyond 5G connectivity, the Snapdragon 865 redefines Wi-Fi 6 performance and the Bluetooth audio experience with the Qualcomm FastConnect 6800 mobile connectivity subsystem. Numerous Wi-Fi 6 innovations help users take full advantage of high speed (nearly 1.8Gbps) and low latency.

The Snapdragon 865's ISP processes up to 2 gigapixels per second and supports new shooting features and functions: users can record 4K HDR video with a billion colors, shoot 8K video, or capture photos at up to 200 megapixels. With unlimited high-definition slow-motion capture at 960 fps, users can also exploit the gigapixel processing speed to shoot slow-motion video that captures every millisecond of detail.

The Snapdragon 865 is the most advanced 5G mobile platform to date and the world's first commercial modem-to-antenna complete 5G solution, delivering peak downlink rates of up to 7.5 Gbps and supporting 5G PowerSave, Smart Transmit, wideband envelope tracking, and Signal Boost signal enhancement technologies.

Huawei Kirin 990

The Kirin 990 5G adopts a 7nm+ EUV process and is the first SoC to integrate the 5G modem on-die, with a board-level footprint 36% smaller than other solutions in the industry. It integrates 10.3 billion transistors on a chip the size of a fingernail, at the time the most complete and complex 5G SoC with the highest transistor count. By integrating the 5G baseband into the chip, Huawei combined 2G/3G/4G and 5G in a single chip for the first time, achieving true all-network support. Architecturally, the Kirin 990 5G has eight cores: two high-performance cores, two medium-performance cores, and four low-power cores, with intelligent scheduling that greatly reduces power consumption (by a claimed 58%). Compared with Qualcomm's then-flagship Snapdragon 855, single-core performance is 10% higher and multi-core performance 9% higher.

The strength of the Kirin 990 5G is, of course, best reflected in its 5G capability. It was the first chip to support 5G NSA/SA dual-mode, leading the industry at launch. And while dual-mode support has since become an industry consensus, debate continued over whether to support the n79 band; here too the Kirin 990 5G stayed ahead, achieving full coverage of all frequency bands used by domestic 5G operators and continuing to serve as the benchmark for the industry to catch up to.

In the second half of 5G's first year, major phone manufacturers will certainly launch their 5G smartphones. The key lies in two questions: first, who can be the first to ship products built on an integrated 5G SoC with an excellent user experience; and second, how to bring the 5G experience down to the mid-range and even low-end market. The release of the Kirin 990 5G clearly shows Huawei's determination on the first point, and an understanding of its process and integration also suggests the possibility of the second.


2021 Apple Developer Conference: no MacBook Pro, no AirPods 3!

Is it worth staying up late to watch the Apple event?

In the early morning of June 8th, Beijing time, the 2021 Apple Worldwide Developers Conference (WWDC) was held, and it was another night that disappointed the Apple fans who stayed up late to watch.

Previously rumored hardware such as the MacBook Pro, AirPods 3, and Apple Glass did not appear, and the new software releases were only minor revisions of the existing systems, not the hoped-for leap forward. Set aside Apple's "software" theme in the first half of the year and its routine "hardware" updates in the second half: for Chinese consumers in particular, staying up late to watch the launch may no longer interest many people.

Users want more, Apple gives less

In the eyes of the outside world, the annual WWDC is a grand event. During WWDC, Apple introduces its next-generation systems and their new features to developers and users, pointing the way for future development. This year was no exception: in a nearly 110-minute keynote, Apple presented updates to its systems including iOS, iPadOS, macOS, and watchOS.

At the event, Apple repeatedly emphasized that these system upgrades are backed by powerful deep learning capabilities that bring consumers a more human experience. For ordinary consumers, however, this deeper intelligence may go unnoticed in daily use, and they may not feel much difference from the previous generation.

Since the iPhone is Apple's most successful device, its users naturally care most about the new iOS 15. At WWDC21, Apple did not keep anyone in suspense, opening the show with iOS 15.

However, unlike the major changes of the iOS 14 era, iOS 15 brings no substantial upgrades, only a handful of small feature updates, still centered on Apple's native applications and services. FaceTime, iMessage, Wallet, Maps, and nearly the whole suite of Apple apps have become more interesting and richer. Yet in China, these applications and features are rarely, if ever, used.

After all, when it comes to phone use, Chinese consumers seem to have a natural aversion to a phone's preinstalled software. Whether it is Apple Maps and FaceTime or the stock apps on Android phones, most Chinese consumers have merely glanced at them; they rarely open them, preferring third-party alternatives instead: Baidu Maps and AutoNavi for navigation, QQ and WeChat for video chat, Toutiao for news, Douyin and Kuaishou for short video. From this perspective, even if Apple polishes its native apps to perfection, it is unclear how willing consumers will be to use them, because old habits have already formed.

Of course, iOS 15 is not without bright spots. One reassuring piece of news is that it still supports the iPhone 6s, enough to convince those holdouts that the iPhone 6s in their hands can serve for another three years.

If iOS 15 is mildly awkward for Chinese users, the iPadOS upgrade is outright disappointing. The greater the hope, the greater the disappointment: Apple, which famously claimed that "your next computer is not a computer", clearly still has a long way to go.

In consumers' minds, once the M1 chip came to the iPad Pro, users naturally hoped the iPad would become a real productivity tool like the Mac. But the iPadOS 15 released this time still focuses on user-friendly conveniences, with app development a secondary theme. Its new desktop widgets, split-screen operation, and quick note features read as little more than extensions of iOS 15.

As for macOS, the main change is a redesigned, prettier Safari interface, plus AirPlay support that lets content playing on an iPhone be streamed to a Mac. And with watchOS merely adding a few fitness features, calling it a version-number bump hardly seems unfair.

The content of this conference shows that, compared with the Apple of the past, Apple now brings us fewer and fewer surprises. With no hardware release and only tinkering on the software side, it was undoubtedly a letdown for expectant Apple fans.

This article is original content from Blue Technology. It may not be reproduced by any website or platform without authorization; infringement will be pursued.


How to Build a Small and Lightweight Depth Perception System Through Stereo Vision

The advantages of using stereo vision for depth perception are numerous: it works well outdoors, provides high-resolution depth maps, and can be built from low-cost off-the-shelf components. When you need to develop a customized embedded stereo perception system, following the instructions provided here makes it a relatively straightforward task.

There are various 3D sensor options for implementing depth perception systems, including stereo vision cameras, lidar, and time-of-flight (ToF) cameras. Each has its pros and cons; among them, embedded stereo depth systems are low-cost, rugged, suitable for outdoor use, and capable of providing high-resolution color point clouds.

There are various off-the-shelf stereo perception systems on the market today. Sometimes a system engineer needs to build a custom system to meet specific application needs based on factors such as accuracy, baseline (distance between two cameras), field of view, and resolution.

In this article, we first introduce the main components of a stereo vision system and provide instructions for building a custom stereo camera from hardware components and open-source software. Because this setup targets embedded systems, it computes the depth map of any scene in real time without a host computer. A follow-up article will discuss how to build a custom stereo vision system for use with a host computer.

Overview of Stereo Vision

Stereoscopic vision is the extraction of 3D information from digital images by comparing information in a scene from two viewpoints. The relative position of the object in the two image planes provides information about the depth of the object from the camera.

An overview of the stereo vision system is shown in Figure 1 and includes the following key steps:

1. Calibration: Camera calibration comprises intrinsic and extrinsic calibration. Intrinsic calibration determines the image center, focal length, and distortion parameters, while extrinsic calibration determines the 3D positions of the cameras. This is a crucial step in many computer vision applications, especially when metric information about the scene, such as depth, is required. We discuss the calibration steps in detail below.

2. Rectification: Stereo rectification reprojects the two image planes onto a common plane parallel to the line between the camera centers. After rectification, corresponding points lie on the same image row, which greatly reduces the cost and ambiguity of matching. This step is handled by the code provided for building your own system.

3. Stereo matching: This refers to the process of matching pixels between the left and right images to produce a disparity image. The code provided uses the semi-global matching (SGM) algorithm.

4. Triangulation: Triangulation determines a point's position in 3D space given its projections onto the two images. The disparity image is converted into a 3D point cloud.
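The last step can be sketched numerically. For a rectified stereo pair under the usual pinhole model, depth follows from disparity as Z = f · B / d, where f is the focal length in pixels and B is the baseline; the rig numbers below are hypothetical, chosen only for illustration:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate depth for a rectified stereo pair: Z = f * B / d."""
    d = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        # Zero disparity means the point is at infinity (no depth).
        return np.where(d > 0, focal_px * baseline_m / d, np.inf)

# Hypothetical rig: 1000 px focal length, 12 cm baseline.
f_px, B_m = 1000.0, 0.12
print(depth_from_disparity([40.0, 60.0, 0.0], f_px, B_m))
# a 40 px disparity maps to 3 m, 60 px to 2 m; zero disparity gives inf
```

Note that depth resolution degrades with distance: equal disparity steps correspond to ever-larger depth steps far from the camera.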

Figure 1: Overview of the Stereo Vision System

Design example

Let's look at an example stereo system design, with application requirements for a mobile robot in a dynamic environment with fast-moving objects: the relevant scene size is 2 m, the distance from camera to scene is 3 m, and the required accuracy at 3 m is 1 cm.

See this article for more details on stereo accuracy. The depth error is given by ΔZ = Z² / (B · f) · Δd, which depends on the following factors:

• Z is the range
• B is the baseline
• f is the focal length in pixels, related to the camera's field of view and image resolution
• Δd is the disparity (matching) error in pixels

There are various design options to meet these requirements. Based on the above scene size and distance requirements, we can determine the lens focal length for a particular sensor. Combined with the baseline, we can use the above formula to calculate the expected depth error at 3 m to verify that it meets the accuracy requirements.
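Plugging the example requirements into the error formula gives a quick feasibility check. A minimal sketch, where the 12 cm baseline, 2400 px focal length, and 0.3 px disparity error are hypothetical design values:

```python
def depth_error_m(Z_m, baseline_m, focal_px, disparity_err_px):
    """Stereo depth error: dZ = Z^2 / (B * f) * dd."""
    return Z_m ** 2 / (baseline_m * focal_px) * disparity_err_px

# Requirement from the text: 1 cm accuracy at Z = 3 m.
# Hypothetical design: 12 cm baseline, 2400 px focal length,
# 0.3 px disparity (matching) error.
err = depth_error_m(3.0, 0.12, 2400.0, 0.3)
print(f"{err * 100:.2f} cm")  # ~0.94 cm, within the 1 cm budget
```

Because the error grows with Z², a design that just meets the budget at 3 m will miss it badly at longer range.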

Figure 2 shows two options: a low-resolution camera with a long baseline, or a high-resolution camera with a short baseline. The first option is a larger rig with lower compute requirements; the second is more compact but demands more compute. For this application we chose the second option, since the compact size suits a mobile robot better, and we can use the Quartet embedded solution for TX2, whose powerful on-board GPU covers the processing needs.

Figure 2: Stereo system design options for example application

Hardware requirements

In this example, we mounted two 1.6-megapixel Blackfly S board-level cameras, each using the Sony IMX273 Pregius global-shutter sensor, on a 3D-printed bar with a 12 cm baseline. Both cameras have similar 6 mm S-mount lenses. The cameras connect to the "Quartet Embedded Solutions for TX2" custom carrier board via two FPC cables, and a homemade sync cable joins the two cameras so that the left and right images are captured at the same instant. Figure 3 shows the front and rear views of our custom embedded stereo system.

Figure 3: Front and rear views of a custom embedded stereo system

The hardware components are as follows:

• Quartet carrier board with 8 GB TX2 module
• 2 × Blackfly S board-level camera (1.6 MP, 226 FPS, Sony IMX273, color)
• S-mount and IR filter for BFS color board-level cameras
• 2 × 6 mm S-mount lens
• 2 × 15 cm FPC cable for board-level Blackfly S
• NVIDIA® Jetson™ TX2/TX2 4GB/TX2i active cooler
• NVIDIA® Jetson™ TX2/TX2 4GB/TX2i active heat sink
• Sync cable (homemade)
• Mounting rod (homemade)
Adjust both lenses to focus the cameras at the desired working distance for your application, then tighten the screws on each lens (circled in red in Figure 4) to hold focus.

Figure 4: Stereo system side view showing lens screws

Software requirements

a. Spinnaker

The Teledyne FLIR Spinnaker SDK comes pre-installed on the Quartet Embedded Solution for TX2; Spinnaker is required to communicate with the cameras.

b. OpenCV 4.5.2 with CUDA support

SGM, the stereo matching algorithm we use, requires OpenCV 4.5.1 or higher. Download the zip file containing the code for this article and extract it to the StereoDepth folder. An OpenCV install script is included; type the following commands in the terminal:

cd ~/StereoDepth
chmod +x

The installer will ask for an administrator password and then begin installing OpenCV 4.5.2. Downloading and building OpenCV can take several hours.


Code for grabbing and calibrating stereo images is in the "Calibration" folder. Use the SpinView GUI to identify the serial numbers of the left and right cameras. In our setup, the right camera is the master and the left camera is the slave. Copy the master and slave serial numbers into lines 60 and 61 of grabStereoImages.cpp, then build the executable with the following commands in the terminal:

cd ~/StereoDepth/Calibration
mkdir build
mkdir -p images/{left,right}
cd build

Print out a checkerboard pattern from this link and stick it to a flat surface to use as the calibration target. For best results, set Exposure Auto to Off in SpinView and adjust the exposure so that the checkerboard pattern is clear and the white squares are not overexposed, as shown in Figure 5. Gain and exposure can be returned to automatic in SpinView after the calibration images have been collected.

Figure 5: SpinView GUI Settings

To start collecting images, type


The code collects images at about one frame per second. Left images are stored in the images/left folder and right images in the images/right folder. Move the target so that it appears in every corner of the image, and rotate it and vary its distance from the cameras. By default the program captures 100 image pairs, but this can be changed with a command-line argument:

./grabStereoImages 20

This will only collect 20 pairs of images. Note that this will overwrite any images previously written to the folder. Some example calibration images are shown in Figure 6.

Figure 6: Example Calibration Image

After collecting the images, run the calibration Python code by typing:

cd ~/StereoDepth/Calibration

This generates two files, "intrinsics.yml" and "extrinsics.yml", containing the intrinsic and extrinsic parameters of the stereo system. The code assumes 30 mm checkerboard squares by default, which can be edited as needed. At the end of calibration it reports the RMS error, which indicates how good the calibration is; a good calibration typically has an RMS error below 0.5 pixels.
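The reported RMS figure is just the root-mean-square reprojection residual over all detected corner points. A small sketch of how such a number is computed, using made-up corner coordinates:

```python
import numpy as np

def rms_reprojection_error(detected_px, reprojected_px):
    """RMS distance (in pixels) between detected and reprojected corners."""
    diff = np.asarray(detected_px, float) - np.asarray(reprojected_px, float)
    return np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))

# Made-up corner coordinates for illustration only.
detected    = np.array([[100.0, 50.0], [200.0, 50.0], [300.0, 50.0]])
reprojected = np.array([[100.3, 50.1], [199.8, 50.2], [300.1, 49.9]])
err = rms_reprojection_error(detected, reprojected)
print(f"RMS error: {err:.3f} px")  # well under the 0.5 px rule of thumb
```

If the RMS comes out much above 0.5 px, recollect images with better exposure and target coverage rather than accepting the calibration.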

Real-time depth map

The code to compute disparity in real time is in the "Depth" folder. Copy the camera serial numbers into lines 230 and 231 of live_disparity.cpp, then build the executable with the following commands in the terminal:

cd ~/StereoDepth/Depth
mkdir build
cd build

Copy the "intrinsics.yml" and "extrinsics.yml" files produced in the calibration step to this folder. To run the live depth map demo, type


It shows the left camera image (the original, unrectified image) and the depth map (our final output); some example output is shown in Figure 7. Distance from the camera is color-coded according to the legend to the right of the depth map, and black areas indicate regions where no disparity data was found. Thanks to the NVIDIA Jetson TX2 GPU, it runs at up to 5 fps at 1440 × 1080 and up to 13 fps at 720 × 540.

To see the depth of a specific point, click on the point in the depth map and the depth will be displayed, as shown in the last example in Figure 7.

Figure 7: Sampling the left camera image and corresponding depth map. The bottom depth map also shows the depth at a specific point.


Clinical-grade wearables face a power crisis? Learn about a sensor IC with a new architecture

[Introduction] On the feature lists of health and fitness wearables, heart rate (HR) and blood oxygen saturation (SpO2) are rapidly moving from nice-to-have to expected. The shift has come at a cost, however: in the rush to meet market demand for these features, some sensor manufacturers have relaxed the quality of the readings their products deliver, raising questions about accuracy.

While reading accuracy may be less critical in everyday wearables, in clinical-grade wearables the quality and integrity of the measurements must be beyond question. Designers face a key challenge: how to take high-quality HR and SpO2 measurements without excessively draining the device battery. In this design solution, we first show why traditional optical readout approaches waste power, then introduce a sensor IC with a novel architecture that performs clinical-grade measurements at significantly reduced power consumption.

Photoplethysmography (PPG)

HR and SpO2 are measured using an optoelectronic technique called photoplethysmography, or PPG (Figure 1). The skin is illuminated by a light-emitting diode (LED), and a photodiode (PD) generates a current proportional to the amount of light it receives; the PPG signal is obtained by detecting changes in the intensity of light reflected from blood vessels below the skin surface (Figure 2).

Figure 1. Measurement of HR and SpO2 using a wrist-worn device.

Figure 2. PPG measurement using LED and PD.

The current signal is conditioned by the PPG analog front end (AFE) and then converted by an ADC for processing by an optical algorithm running on the system microcontroller. In principle, a single LED-PD pair suffices for PPG measurements, and this configuration is common in clinical devices (Figure 3).

Figure 3. Measurement of SpO2 and HR in a clinical setting.

However, these devices operate in a completely different environment from everyday life. The patient remains still, measurements are taken by a sensor clipped to the fingertip, lighting conditions are relatively stable (which simplifies light detection for the PD), and the devices are generally mains-powered, so power consumption is not a concern.

In contrast, wearables are generally worn on the wrist, so the quality of skin contact depends on personal preference (how tight the band is) and the wearer's movements. Lighting conditions vary widely with location and time of day, and the devices are battery powered, so keeping the sensor's current consumption as low as possible is imperative. Skin tone adds a further challenge: darker skin has a lower perfusion index than lighter skin, which means greater illumination intensity, and hence more sensor power, is needed to make the measurement. Next we examine the merits of different AFE architectures for making PPG measurements.

PPG AFE with single ADC channel

An intuitive way to increase the intensity of skin illumination is to raise the LED current or use two LEDs (Figure 4), which enlarges the illuminated area of skin. However, this approach is power-hungry: LED current accounts for at least 50% of a PPG system's total power consumption, which may average 1 mW depending on the wearer's skin perfusion index. Overall, this approach is inefficient and bad for battery life.

Figure 4. Using two LEDs to increase skin illumination intensity.

PPG AFE with two ADC channels

A better way to improve the measurement is to pair one LED with two PDs, which detect more of the reflected light (Figure 5).

Figure 5. Using one LED with two PDs to improve light detection.

The advantage is that, compared with a single PD, the standard 20 mA LED current can be cut to 10 mA while yielding the same total PD current. Under challenging operating conditions (low skin perfusion and/or wearer motion), the system's algorithm can instead command the higher LED current, and sensitivity rises proportionally: with the same LED current as before, twice the PD current is produced, giving higher overall sensitivity at an increased power cost.

PPG AFE with Four ADC Channels

Using four PDs (which requires a four-channel ADC) to detect the reflected light saves even more power (Figure 6), because the LED can operate at lower current (Table 1).

Figure 6. PPG measurement using one LED and four PDs.

Table 1. Typical Power Consumption Comparison for 1-, 2-, and 4-Channel ADC Architectures

Table 1 summarizes the relative power consumption of the architectures considered above, assuming a typical supply voltage of 1.6 V.
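The trade-off behind Table 1 can be sketched arithmetically. Assuming, hypothetically, that adding PD/ADC channels lets the LED current drop in proportion while total photocurrent stays constant, and considering only the dominant LED term (instantaneous power, ignoring duty cycling):

```python
# Illustrative PPG power budget. All numbers are hypothetical, chosen only
# to show why more PDs (ADC channels) let the LED run at lower current.
V_LED = 1.6          # supply voltage in volts (typical value from the text)
I_LED_1CH_MA = 20.0  # standard LED drive current with a single PD

for channels in (1, 2, 4):
    # More PDs collect more reflected light, so the LED current can be
    # scaled down by the channel count for the same total PD current.
    i_led = I_LED_1CH_MA / channels
    p_led_mw = V_LED * i_led
    print(f"{channels}-channel ADC: LED current {i_led:4.1f} mA, "
          f"LED power {p_led_mw:4.1f} mW")
```

Under these assumptions the four-channel configuration runs the LED at a quarter of the single-channel current, which is the mechanism behind the power savings the table reports.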

This configuration also provides higher-quality readings: because blood vessels and bone are distributed asymmetrically in the wrist, the four PDs help cancel the effects of motion and of band tightness, and four receivers increase the probability of catching light reflected from the illuminated vessels. The graph in Figure 7 compares HR measured using four photodiodes (configured as two independent pairs, LEDC1 and LEDC2) against a reference measurement (a Polar monitor). Wearable devices must maintain good skin contact during measurement. Initially the wearer rested, then began exercising after 5 minutes (300 seconds), causing HR to rise. Clearly, the signals on LEDC1 and LEDC2 deviate from the reference by different amounts, so capturing the signal with two pairs of PDs and combining them to account for these deviations is beneficial.

Figure 7. HR readings obtained when using two independent pairs of PDs.

Practical Quad-Channel ADC Solution

The MAX86177 (Figure 8) is an ultra-low-power quad-channel optical data-acquisition system with transmit and receive channels, ideal for clinical-grade (and general-purpose) portable and wearable devices. The transmitter integrates two high-current 8-bit programmable LED drivers supporting up to six LEDs. The receiver integrates four low-noise charge-integration front ends, each with an independent 20-bit ADC that can multiplex input signals from eight PDs (configured as four independent pairs). It achieves a dynamic range of 118 dB and provides up to 90 dB of ambient light cancellation (ALC) at 120 Hz. The main supply voltage is 1.8 V, and the LED driver supply voltage is 3.1 V to 5.5 V. The device supports I2C- and SPI-compatible interfaces. The MAX86177 comes in a 2.83 mm × 1.89 mm, 28-pin (7 × 4) wafer-level package (WLP) and operates over the -40°C to +85°C temperature range. A laboratory test sample of this AFE showed a total root-mean-square error of 3.12% for hypoxia measurements, within the 3.5% limit set by the FDA for clinical-grade monitors.

Figure 8. Block diagram of the MAX86177 quad optical AFE.

In conclusion

A major challenge facing designers of clinical-grade wearables is how to perform optical PPG measurements for HR and SpO2 without significantly draining the device's battery. As this design solution shows, a four-channel ADC architecture can save up to 60% of the power of the basic architecture using a single LED and PD. The MAX86177's quad-channel architecture and small package make it ideal for finger-, wrist-, and ear-worn wearables performing clinical-grade HR and SpO2 measurements. It can also be used to measure body water content, muscle and tissue oxygen saturation (SmO2 and StO2), and maximum oxygen consumption (VO2 max).


Strategy Analytics: 2020 User Experience Trends – Advances in Artificial Intelligence, 5G and Android Automotive Operating System Will Bring Digital Transformation

A series of new technologies will drive digital transformation in 2020. The latest insights report from Strategy Analytics' UX Innovation research team identifies the most noteworthy UX trends for 2020: artificial intelligence (AI) will begin to deliver tangible user benefits in consumer technology; new and unique use cases will emerge that take full advantage of ultra-high-capacity 5G networks; foldable devices will accelerate form-factor innovation; progress will be made on voice assistant integration; and the Android Automotive operating system will have a huge impact on the infotainment user experience.

Key trends to watch in 2020 include:

· In AI, more meaningful use cases will emerge for technologies based on natural language processing, machine learning, and emotion sensing.

· As 5G emerges in more markets, new and unique use cases will emerge, bringing a new wave of user experience innovation.

· Foldable devices will open up opportunities for UX designers and provide new experiences and value propositions.

· Voice assistant platforms will finally address the user experience pain points associated with fragmentation.

· The Android Automotive operating system has the potential to eliminate smartphone mirroring by accessing relevant applications directly in the car.

Chris Schreiner, Research Director for User Experience Innovation at Strategy Analytics and report author, commented: "In 2019, consumer expectations of continuous improvement went unmet, and useful new use cases were slow to emerge. AI remained stuck in edge use cases, eroding consumer confidence; consumers struggled to see any benefit of 5G beyond faster speeds; and unless prices drop significantly, foldable-screen devices will not be widely adopted."

Schreiner continued: "But in 2020 these areas should progress: improvements in AI will drive advances in voice assistants; foldable screens open up new possibilities for designers beyond the smartphone world; and the Android Automotive operating system may revolutionize the in-vehicle infotainment industry."

Kevin Nolan, Research Vice President, User Experience Innovation at Strategy Analytics, added: "Consumers want greater accuracy, but they also want deeper functionality and integration across devices, from smart speakers to phones to cars. While this is challenging to implement, we are identifying successful strategies that OEMs can adopt."
