Aceinna launches high-precision GNSS localization solution

The OpenRTK330L is designed for developers creating guidance and navigation systems for autonomous vehicles, robots, and drones.

Aceinna has developed a new low-cost, high-performance triple-band RTK/GNSS (Real-Time Kinematic/Global Navigation Satellite System) receiver with built-in triple-redundant inertial sensors. Designed to replace the RTK/INS systems used in today’s autonomous systems, the compact OpenRTK330L navigation solution was created to meet the performance, reliability, and cost requirements of robotic, drone, construction, and agricultural systems.

“The combination of a triple-band GNSS receiver and a high-precision IMU has enabled us to make a remarkably accurate, small, reliable, and cost-effective GNSS/INS solution,” said Mike Horton, CTO of Aceinna. “The OpenRTK Precise Positioning Engine optimizes satellite tracking and high RTK fix rates while integrating seamlessly with Aceinna’s open-source, developer-friendly Open Navigation Platform.”

The new offering includes a triple-band RTK/GNSS receiver coupled with redundant inertial sensor arrays to provide accuracy, reliability, and performance during GNSS outages. It integrates a precise 2-degree/hour IMU (inertial measurement unit) to offer 10 to 30 s of high-accuracy localization during full GNSS denial. This aims to enable autonomous-system developers to deliver accurate localization and positioning capabilities in their vehicles at prices that meet their budgets. The receiver’s embedded Ethernet interface allows easy and direct connection to GNSS correction networks around the world, and its CAN bus interface allows integration into existing vehicle architectures.
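
The principle behind that short GNSS-denied window is inertial dead reckoning: integrating gyro and accelerometer measurements forward from the last good fix. The Python sketch below is a minimal 2D illustration of the idea only; the variable names, the simple strapdown model, and the noise figures are assumptions, not Aceinna’s positioning engine.

```python
import numpy as np

def dead_reckon_2d(pos, vel, heading, imu_samples, dt):
    """Propagate position from the last known GNSS fix using IMU data only.

    pos: np.array([x, y]) in m; vel: np.array([vx, vy]) in m/s; heading: rad;
    imu_samples: iterable of (forward_accel_mps2, yaw_rate_radps); dt: s.
    """
    for accel, yaw_rate in imu_samples:
        heading += yaw_rate * dt                       # integrate gyro to heading
        a_world = accel * np.array([np.cos(heading),   # rotate body acceleration
                                    np.sin(heading)])  # into the world frame
        vel = vel + a_world * dt                       # integrate to velocity
        pos = pos + vel * dt                           # integrate to position
    return pos, vel, heading

# Example: coast for 10 s at 100 Hz with small accelerometer/gyro noise.
# A low-drift gyro (on the order of 2 deg/h bias) barely rotates the heading
# over such a window, which is why position error stays bounded for tens of
# seconds without GNSS before drift takes over.
rng = np.random.default_rng(0)
samples = [(rng.normal(0.0, 0.02), rng.normal(0.0, 1e-4)) for _ in range(1000)]
print(dead_reckon_2d(np.zeros(2), np.array([10.0, 0.0]), 0.0, samples, 0.01))
```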

Aceinna says that the multi-band GNSS receiver can monitor the major satellite constellations and augmentation systems—GPS, GLONASS, BeiDou, Galileo, QZSS, NAVIC, and SBAS—and simultaneously track up to 80 channels. The module has RF and baseband support for the L1, L2, and L5 GPS bands and their international constellation signal equivalents. The inertial measurement and dead-reckoning function contains a total of nine accelerometer and nine rate-gyro channels, based on the company’s triple-redundant six-axis IMU array, so it can recognize and use only valid sensor data, ensuring high-accuracy protection limits and certifiability under the ISO 26262 standard.

The OpenRTK330L is supported by Aceinna’s Open Navigation Platform, which allows custom embedded application development on top of its positioning engine and dead-reckoning algorithms. Autonomous-solution developers have full access to all resources on the module, including the GNSS receiver measurement data, IMU measurement data, and all interfaces.

 

Bosch completes AV sensor portfolio with LiDAR

The company’s three technologies of radar, camera, and now LiDAR are designed to complement each other and deliver reliable information.

According to Bosch, and in alignment with nearly all in the industry, a third sensor technology is needed in addition to camera and radar before safe automated driving can become a reality. So, the company is looking to fill a gap in its portfolio by readying long-range LiDAR sensors for production automotive use. It believes that the laser-based distance-measurement technology is important for driving functions at SAE Levels 3 to 5.

Details are very limited, but the new Bosch sensor will cover both long and short ranges on highways and in the city. By exploiting economies of scale, Bosch wants to reduce the price for the technology and render it suitable for the mass market.

“By filling the sensor gap, Bosch is making automated driving a viable possibility in the first place,” said Bosch Management Board Member Harald Kroeger.

Bosch says it is taking a holistic approach to technology for all automated driving situations. Its parallel deployment of three sensor technologies aims to ensure that automated driving will offer maximum safety when it is rolled out.

The need for all three has reportedly been confirmed by Bosch analysis, in which developers investigated all use cases of automated driving functions, from highway assist to fully automated driving in cities. For example, if a motorcycle approaches an automated vehicle at high speed at a junction, LiDAR is needed in addition to camera and radar to ensure reliable sensing of the two-wheeler. In this instance, radar can struggle to detect the bike’s narrow silhouette and plastic fairings, and a camera can be dazzled by harsh light falling on it. As such, there is a need for radar, camera, and LiDAR, with the three technologies complementing each other and delivering reliable information in every driving situation.

Bosch describes the laser as a third eye. In LiDAR systems, the sensor emits laser pulses and captures the laser light that is scattered back. The system then calculates distances based on the measured time it takes for the light to bounce back. LiDAR offers very high resolution, with a long range and a wide field of vision. As a result, the laser-based distance measurement tool can reliably detect even non-metallic objects at a great distance, such as rocks on the road. This means there is plenty of time to initiate driving maneuvers, such as braking or swerving. At the same time, using LiDAR in vehicles exposes the LiDAR system’s components, such as the detector and the laser, to many stresses—above all, with regard to temperature resistance and reliability over the vehicle’s entire lifetime. Bosch says it can draw on its sensor expertise and systems know-how in the fields of radar and camera technology when developing the LiDAR to ensure that all three sensor technologies dovetail with each other.
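
The distance calculation itself is a simple time-of-flight relation: range is half the round-trip travel time multiplied by the speed of light. As a generic worked example (illustrative numbers, not Bosch figures), a rock 200 m down the road returns a pulse in roughly 1.3 µs:

```latex
d = \frac{c\,\Delta t}{2},
\qquad
\Delta t = \frac{2d}{c} = \frac{2 \times 200\ \text{m}}{3 \times 10^{8}\ \text{m/s}} \approx 1.3\ \mu\text{s}
```

Centimeter-level range resolution therefore hinges on timing the return to within a fraction of a nanosecond, which is part of why the detector and laser must stay stable over temperature and the vehicle’s lifetime.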

According to the company, Bosch’s long-range LiDAR will not only fulfill all safety requirements for automated driving, it will also enable automakers to efficiently integrate the technology into a very wide range of vehicle types in the future.

 

Rohde & Schwarz introduces test chamber for next-gen radar

The high-precision solution enabled the calibration and verification of Uhnder’s new, fully integrated 4D digitally modulated automotive radar-on-chip.

Rohde & Schwarz is introducing a new solution for testing state-of-the-art, next-generation automotive radar sensors. The test system consists of the new, compact ATS1500C automotive radar test chamber for far-field testing in combination with the AREG100A automotive radar echo generator for precise radar target simulation at various distances. Together, Uhnder says they form an indirect far-field testing solution for reliable and reproducible verification of radar sensors throughout the R&D and validation phase in a compact lab setup. The solution enabled the calibration and verification of Uhnder’s new, fully integrated 4D digitally modulated automotive radar-on-chip (RoC).

According to Uhnder, the chamber features a compact antenna test range (CATR) reflector, generating a 30-cm-diameter quiet zone for testing in the 77 to 81 GHz frequency range. Its high-precision 3D tilt-tilt positioner permits testing of premium automotive radars, and a “carefully designed” absorber layout eliminates ghost targets during simulation.

Uhnder, a startup, says it is launching an automotive RoC that introduces new levels of performance and integration, with a mission to redefine key technologies for safer ADAS driving. The technology behind Uhnder’s 4D digitally modulated radar chip achieves its performance by integrating 192 virtual channels; the higher number of detections per frame makes it possible to track and classify objects with a processing power of more than 20 TeraOPS while consuming less than 8 W.

Uhnder says that its RoC also pioneers high contrast resolution (HCR) technology, which provides significantly improved range and angular resolution and makes it possible to separate small radar reflectors from large reflectors in proximity. According to the company, this permits more accurate and safer reactions than current radar-chip technology and paves the way for advanced ADAS functions for today’s vehicles and future driverless vehicles.

 

DJI-backed Livox introduces LiDAR sensors for L3/L4 AVs

The Horizon and Tele-15 sensors are designed to offer affordability and performance.

DJI, a global leader in civilian drones and aerial imaging technology, showed its full drone lineup at CES 2020. More intriguing, however, was DJI-backed Livox Technology Co., which introduced two high-performance, mass-produced LiDAR sensors. The company claims the Horizon and Tele-15 feature a scanning method that offers improved sensing performance at a fraction of the cost of traditional LiDAR units—and it provided data to back up its claims.

“The growth potential of the LiDAR industry has been hindered for too long by ultra-high costs and slow manufacturing rates,” said Henri Deng, Global Marketing Director at Livox. “Livox seeks to change this by providing access to high quality LiDAR systems that are easily integrated into a wide array of different use applications. Through our technology, we hope to be the catalyst for the rapid adoption of LiDAR in the quickly growing industries of autonomous driving, mobile robotics, mapping, surveying, and more.” 

The environment scanned by a Livox sensor increases with longer integration time as the laser explores new spaces within its field of view (FOV). The company says that the Mid-40 or Mid-100 sensor generates a unique flower-like scanning pattern to create a 3D image of the surrounding environment. Image fidelity increases over time. In comparison, conventional LiDAR sensors use horizontal linear scanning methods that run the risk of blind spots, causing some objects in their FOV to remain undetected regardless of how long the scan lasts, according to the company. The non-repetitive scanning method of the Livox sensors reportedly enables nearly 100% FOV coverage with longer integration time.
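
To visualize why coverage grows with time, the toy Python sketch below traces a generic rosette-style (“flower”) pattern produced by two rotating deflections whose rates are not an integer multiple of each other, so the beam never retraces the same path. The rates and radii are illustrative assumptions, not Livox’s actual prism parameters.

```python
import numpy as np
import matplotlib.pyplot as plt

def rosette(t, f1=60.0, f2=-41.3, r1=0.6, r2=0.4):
    """Toy non-repetitive scan: two superimposed rotations with a non-integer
    rate ratio (illustrative values, not Livox prism specs)."""
    x = r1 * np.cos(2 * np.pi * f1 * t) + r2 * np.cos(2 * np.pi * f2 * t)
    y = r1 * np.sin(2 * np.pi * f1 * t) + r2 * np.sin(2 * np.pi * f2 * t)
    return x, y

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
for ax, T in zip(axes, (0.1, 1.0)):          # short vs. long integration time
    t = np.arange(0.0, T, 1e-4)
    ax.plot(*rosette(t), lw=0.3)             # coverage visibly fills in as T grows
    ax.set_title(f"{T:.1f} s integration")
    ax.set_aspect("equal")
plt.show()
```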

The Horizon and Tele-15 are high-performance LiDAR sensors designed for Level 3 and 4 autonomous driving applications. The Horizon has a detection range of up to 260 m, measured at 25°C with the laser aimed directly at an object of 80% reflectivity. With a horizontal FOV (HFOV) of 81.7°, it can cover four road lanes at a distance of 10 m, and its FOV coverage ratio is comparable with a 64-line mechanical LiDAR at an integration time of 0.1 s. Using five Horizon units enables full 360° coverage at only 5% of the cost of a 64-line mechanical LiDAR, which can be beneficial if customers want to scale up their robotaxi fleets to hundreds or thousands of cars, or if OEMs and Tier 1s want to consider LiDAR for future models with L3/L4 autonomous driving functions.

The Tele-15 is made for advanced long-distance detection and offers compact size, high precision, and durability while vastly extending the real-time mapping range. It can scan 99.8% of the area within its 15° circular FOV in 0.1 s, which the company says outperforms 128-line mechanical LiDAR sensors currently on the market. The sensor can detect an object up to 500 m away, measured at 25°C with the laser aimed directly at an object of 80% reflectivity, which the company says is hardly achievable by human eyes or other sensors at this cost. As a result, the Tele-15 allows autonomous driving systems to detect remote objects well in advance.

Livox says it has optimized the hardware and mechanical design so that the compact bodies of the 77 × 115 × 84 mm Horizon and 112 × 122 × 95 mm Tele-15 sensors enable customers to easily embed units into existing and future vehicle designs. 

Additionally, the company says that its mass-production capability enables it to conduct intensive reliability tests. More than 2900 Livox LiDAR units have been individually and thoroughly tested for use in various conditions. Each unit has a false-alarm rate of less than 1/10,000, even in 100-klx sunlight, though results may vary under different test conditions.

The Horizon and Tele-15 can operate in temperatures from -40 to +185°F (-40 to +85°C), and both sensors are IP67-rated.

The company says that traditional mechanical LiDARs rely on a number of rotating electronic components to achieve 360° coverage, which can increase the failure rate. Livox’s design has no moving electronic parts; only its optical prism(s) rotate, which the company says increases reliability and extends working life.

Each sensor’s laser meets the requirements for a 905-nm Class 1 laser product under IEC 60825-1:2014 and is safe for human eyes; it has been rigorously tested and certified by third-party agencies such as TÜV.

 

Velodyne introduces compact, $100 LiDAR sensor

The Velabit is the company’s smallest sensor and is designed to be embedded almost anywhere within vehicles, robots, and unmanned aerial vehicles.

Velodyne Lidar, Inc. introduced the Velabit, its smallest and lowest-cost LiDAR sensor, which leverages the company’s technology and manufacturing partnerships to enable cost optimization and high-volume production. The company claims that the new sensor delivers the same technology and performance found on Velodyne’s full suite of state-of-the-art sensors. This compact unit is designed to be embedded almost anywhere within vehicles, robots, unmanned aerial vehicles (UAVs), infrastructure, and more, and to be easy to manufacture at mass-production levels.

“The Velabit democratizes LiDAR with its ultra-small form factor and its sensor pricing targeted at $100 in high-volume production, making 3D LiDAR available for all safety-critical applications,” said Anand Gopalan, the newly appointed Chief Executive Officer for Velodyne LiDAR. “Its combination of performance, size, and price position the Velabit to drive a quantum leap in the number of LiDAR-powered applications. The sensor delivers what the industry has been seeking: a breakthrough innovation that can jumpstart a new era of autonomous solutions on a global scale.”

The Velabit is engineered to be an optimal automotive-grade LiDAR solution for advanced driver-assistance systems (ADAS) and autonomous vehicles. It is designed to enable perception coverage for blind-spot monitoring, cross-traffic detection, automatic emergency braking, and pedestrian and bicyclist safety. Configurable for custom applications, this mid-range sensor can be combined with other Velodyne sensors, such as the Velarray, for high-speed operation, or function as a standalone LiDAR solution in low-speed applications.

Among the Velabit’s highlights is its integrated processing in a compact size of 2.4 × 2.4 × 1.38 in (6.09 × 6.09 × 3.5 cm)—smaller than a deck of playing cards—allowing it to be easily embedded in and configured for a range of solutions and applications. Its range is as far as 100 m (328 ft), with a 60-degree horizontal and 10-degree vertical FOV (field of view). It employs Class 1 eye-safe 903-nm technology and comes with a bottom connector with cable-length options.

Multiple manufacturing sources are scheduled to be available for qualified production projects, and the Velabit will be available to high-volume customers in mid-2020.

“Before the Velabit, there was no suitable small and lightweight LiDAR for small unmanned aerial vehicles and unmanned ground vehicles performing obstacle avoidance or mapping,” said Alberto Lacaze, President, Robotic Research. “Since Robotic Research’s Pegasus Mini is a fully autonomous ground and air vehicle, it requires the Velabit’s size and versatility. In addition, the Velabit enables the most advanced GPS-denied HD mapping in the industry.”

 

Marelli Automotive Lighting and XenomatiX to jointly develop LiDAR solutions

The two companies plan to offer modular LiDAR system solutions for advanced driver-assistance systems and autonomous driving applications.

Marelli and XenomatiX announced at CES that they will enter into a technical and commercial development agreement for autonomous driving tech. XenomatiX will provide Marelli’s Automotive Lighting division with its “true solid-state LiDAR” modules for advanced driver-assistance systems (ADAS) and autonomous driving (AD) applications. The two entities will combine competencies and technologies to offer modular LiDAR system solutions to meet future global automotive needs, also leveraging the artificial intelligence (AI) perception technology derived from Smart Me Up, the French startup acquired by Marelli in 2018.

XenomatiX’s LiDAR is based on semiconductor technology said to be designed for mass production and known for high resolution, reliability, and durability. Unlike many other LiDAR offerings, the company says its XenoTrack and XenoLidar product lines use a non-scanning approach that has reportedly impressed many Tier 1 suppliers and OEMs. The companies expect the solid-state, multi-beam LiDAR technology they develop together will initially provide high reliability and long-range coverage to enable a variety of ADAS functions.

“Our objective is to support our customers in enabling a crucial set of functions in the ADAS and AD field thanks to the true solid-state LiDAR technology of XenomatiX,” said Sylvain Dubois, CEO of Marelli’s Automotive Lighting division.

Marelli Automotive Lighting says its longstanding systems integration, optical, electronics, and software capabilities will complement XenomatiX LiDAR components technology—either in a standalone form or as part of larger front and/or rear modules. Leveraging its investments in perception technology, Marelli will be able to add object recognition and classification capabilities, based on AI, to the LiDAR systems built with XenomatiX components to support global OE customers “on their journey toward making mobility more convenient and safe.”

“Marelli is a leading automotive supplier with the right competencies to develop modular LiDAR solutions fulfilling different automotive OEM needs, integrating them into larger systems, based on the true solid-state LiDAR technology we designed for the automotive market,” said Filip Geuens, CEO of XenomatiX.

 

Ambarella develops automotive camera SoCs for ADAS and AV applications
 

The CV22FS and CV2FS automotive camera SoCs target forward-facing monocular and stereovision ADAS cameras, as well as computer vision ECUs for L2+ and higher levels of autonomy.

Ambarella has developed two new automotive camera systems-on-chip (SoCs) with CVflow AI processing and ASIL B compliance to enable safety-critical applications. Both chips target forward-facing monocular and stereovision advanced driver-assistance system (ADAS) cameras as well as computer-vision ECUs (electronic control units) for L2+ and higher levels of autonomy.

Featuring low power consumption, the CV22FS and CV2FS are designed to be used by Tier 1s and OEMs to surpass New Car Assessment Program performance requirements within the power consumption constraints of single-box, windshield-mounted forward ADAS cameras. Other potential applications for the processors include electronic mirrors with blind-spot detection, interior driver and cabin monitoring cameras, and around view monitors with parking assist.

The two new SoCs are the latest additions to Ambarella’s CVflow SoC family, which offers automotive OEMs, Tier 1s, and software-development partners an open platform for differentiated, high-performance automotive systems. The CVflow architecture of the CV22FS and CV2FS provides computer-vision processing at 8-megapixel or higher resolution and 30 fps for object recognition over long distances and with high accuracy. The SoCs include a dense optical-flow accelerator for simultaneous localization and mapping, as well as distance and depth estimation. Multi-channel high-speed sensor input and Ambarella’s image-signal-processing pipeline are intended to provide the necessary camera input support, even in challenging lighting conditions. The CV2FS also enables advanced stereovision applications by adding a dense disparity engine.
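
For the stereovision case, the standard pinhole relation shows why a dense, accurate disparity engine matters for long-range depth (a generic textbook formula, not an Ambarella-specific one): with focal length f in pixels, stereo baseline B in meters, and disparity d in pixels,

```latex
Z = \frac{f\,B}{d}
```

With assumed values of f = 2000 px and B = 0.3 m, a 2-px disparity corresponds to a depth of 300 m, and a half-pixel disparity error at that range moves the estimate by roughly 60 to 100 m, which is why dense, sub-pixel-accurate disparity computation matters at long range.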

According to the company, key specs include CVflow architecture with DNN support, quad-core 1-GHz Arm Cortex-A53 with Neon DSP extensions and FPU, ASIL B functional safety, multi-exposure high dynamic range processing and LED flicker mitigation, and real-time hardware-accelerated fish-eye dewarping and lens distortion correction.

Ambarella demonstrated its CVflow SoC family during CES 2020, including in Hella Aglaia’s deep-learning ADAS algorithms and Ambarella’s EVA (Embedded Vehicle Autonomy) self-driving prototype vehicle. CV22FS and CV2FS are scheduled to sample to customers in the first half of 2020.

 

Ouster expands LiDAR range with two high-resolution sensors

The company says that the expansion of its portfolio addresses every LiDAR use case across a range of industries and now includes the option of 128-channel resolution on all OS0, OS1, and OS2 series sensors.

Ouster, Inc. introduced two new high-resolution digital LiDAR sensors, the ultra-wide-field-of-view OS0-128 and the long-range OS2-128. Both sensors were on display at CES 2020 and are currently shipping to customers. The company says that the OS0 marks a new category of ultra-wide field-of-view LiDAR optimized for autonomous vehicle and robotics applications. The CES Innovation Award Honoree OS2-128 combines reported industry-leading resolution with a range of more than 240 m for high-speed driving applications.

According to the company, the new OS0 pairs Ouster’s rugged, affordable digital LiDAR technology with a 90° field-of-view. Built in partnership with leading OEMs and robotics companies, the OS0 reportedly enables a new level of high-resolution depth imaging that integrates into robotics platforms and autonomous vehicles. The OS0-128 was designed for the rigors of commercial deployment, and Ouster says it has already secured multiple design wins from leading robotaxi and autonomous trucking OEM customers. 

“High-resolution perception has always been reserved for expensive, long-range applications. That’s finally beginning to change,” said Angus Pacala, CEO and Co-Founder of Ouster. “With Ouster’s full range of 128-channel sensors, we have a complete high-resolution sensor suite for every application, and for short-range applications, the OS0-128 is in a class of its own.” 

The company says that the expansion of its digital LiDAR portfolio addresses every LiDAR use-case across a range of industries and now includes the option of 128-channel resolution on all OS0, OS1, and OS2 series digital LiDAR sensors. The company also says that the updated products feature a lower minimum range, improved range repeatability, and window blockage detection, which are key features for addressing customer edge cases in the push for commercial autonomy. 

“May Mobility wouldn’t be where we are today as a company delivering autonomous mobility as a service without incorporating ultra-wide view LiDAR sensors,” said Tom Voorheis, Director of Autonomy Engineering at May Mobility. “The Ouster OS0 will provide critical information for navigating urban environments full of tight spaces and crowded streets.”

The OS0 and OS2 series offer a range of resolution options: the OS0 is available with 32 or 128 channels, while the OS2 is available in 32-, 64-, and 128-channel configurations. The OS0-32 is priced at $6000 and the OS0-128 at $18,000. The OS2-32 is priced at $16,000, the OS2-64 at $20,000, and the OS2-128 at $24,000.

 

RoboSense conducts public road test of smart LiDAR

The RS-LiDAR-M1 incorporates sensor hardware, AI perception algorithms, and IC chipsets, transforming conventional LiDAR sensors from information collectors into a complete data analysis and comprehension system.

RoboSense announced what the company says is the world’s first public road test of a vehicle equipped with a smart LiDAR sensor. Its car, running outside the Las Vegas Convention Center during CES 2020, featured an RS-LiDAR-M1 smart LiDAR—winner of the CES Innovation Award for two consecutive years, 2019 and 2020—showcasing real-time 3D point-cloud data from the multi-LiDAR fusion system.

The company says that its RS-LiDAR-M1 Smart LiDAR is the world’s first MEMS smart LiDAR sensor to incorporate sensor hardware, AI perception algorithms, and IC chipsets, transforming conventional LiDAR sensors from an information collector to a complete data analysis and comprehension system.

“Based on extensive data optimization, RoboSense’s algorithm performance and software stability and reliability have proven to have many key advantages,” said RoboSense Co-Partner and Vice President Leilei Shinohara. “Developed by RoboSense after more than a decade of exhaustive research in perception technology, it has combined the deep-learning-based AI algorithm performance advantages with traditional algorithms to provide functional safety.”

During the real-world road tests on the streets of Las Vegas, the company aimed to demonstrate highlights of the RS-LiDAR-M1’s features, including how quickly it provides high-resolution 3D point-cloud data, how its outputs are structured into semantic-level environment information in real time, and the synchronous display of the multi-LiDAR fusion system. It also wanted to showcase the sensor’s ability to help OEMs and Tier 1 suppliers quickly deploy LiDAR into mass-produced autonomous vehicles and ADAS systems. The final serial-production version of the RS-LiDAR-M1 will include additional functions such as automatic calibration, window-fog detection, sleep mode, and automatic wake-up to further improve autonomous-driving feasibility and safety while saving time on maintenance.

During CES 2020, the company also showcased its 128-beam LiDAR RS-Ruby and the short-range blind spot LiDAR RS-BPearl.

The high-performance 128-beam RS-Ruby offers a high resolution of 0.1°, near pixel-level object detail, and a range of 200 m (660 ft) on a 10%-reflectivity target. The sensor has also achieved what the company says is a perfect balance between the consistency and distinction of reflectivity, further facilitating accurate road-sign extraction and localization.

The company calls the RS-BPearl the first mass-produced short-range LiDAR for blind-spot detection. It identifies objects around the vehicle’s body and can also detect actual height information in particular scenarios, such as bridge tunnels and culverts. RoboSense’s RS-Fusion-P5 solution uses four embedded RS-BPearls around the vehicle and one RS-Ruby on top, achieving full coverage of the sensing area with reportedly zero blind spots in the vehicle’s driving space.

 

FLIR partners with Ansys, VSI Labs on thermal camera integration

The simulation solutions and tests will help validate the benefits of thermal imaging for assisted and autonomous vehicle development.

FLIR Systems, Inc. and Ansys partnered to deliver hazard detection capabilities for assisted driving and autonomous vehicles (AVs). Through this partnership, FLIR will integrate a fully physics-based thermal sensor into Ansys’ driving simulator to model, test, and validate thermal camera designs within an ultra-realistic virtual world.

According to the company, the new solution aims to reduce OEM development time by optimizing thermal-camera placement for use with features such as automatic emergency braking (AEB) and pedestrian detection, and within future AVs. The ability to test in virtual environments complements the existing systems available to FLIR customers and partners, including its automotive development kit (ADK) featuring a Boson thermal camera, the starter thermal dataset, and the regional/city-specific thermal datasets.

The FLIR thermal-dataset programs were created for machine learning in advanced driver-assistance systems (ADAS), AEB, and AV development. According to the company, current AV and ADAS sensors face challenges in darkness or shadows, sun glare, and inclement weather such as most fog. Thermal cameras, however, can effectively detect and classify objects in these conditions. Integrating FLIR’s thermal sensor into Ansys VRXPERIENCE enables simulation of thousands of driving scenarios across millions of miles in days. Furthermore, engineers can simulate difficult-to-produce scenarios where thermal provides critical data, including detecting pedestrians in crowded, low-contrast environments.

“By adding Ansys’ industry-leading simulation solutions to the existing suite of tools for physical testing, engineers, automakers, and automotive suppliers can improve the safety of vehicles in all types of driving conditions,” said Frank Pennisi, President of the Industrial Business Unit at FLIR Systems. “The industry can also recreate uncommon corner cases that are exceedingly difficult to replicate in physical environments, paving the way for improved neural networks and the performance of safety features such as AEB.”

“FLIR Systems recognizes the limitations of relying solely on gathering machine learning datasets in the physical world to make automotive thermal cameras as safe and reliable as possible for automotive uses,” said Eric Bantegnie, Vice President and General Manager at Ansys. “Now with Ansys solutions, FLIR can further empower automakers to speed the creation and certification of assisted-driving systems with thermal cameras.”

Also in conjunction with CES 2020, FLIR announced the results of its collaboration with VSI Labs to develop a proof-of-concept automatic pedestrian detection system that fuses radar and FLIR thermal-camera data to detect and estimate the distance of a pedestrian from the front of a test vehicle. The vehicle was programmed to automatically stop if a pedestrian is in its path.

Current typical automatic emergency braking (AEB) or pedestrian detection systems rely on systems using radar and, in some cases, visible-light cameras. There are several common conditions in which these sensors can have difficulty detecting a pedestrian, and an October 2019 study by AAA tested several production AEB systems and describes many such scenarios.

Initial FLIR/VSI tests, intended to show the benefits of adding FLIR technology to aid AEB, were completed in December 2019 at the American Center for Mobility (ACM) near Detroit. The test design was based on Euro NCAP protocols, but not all testing requirements were met: the winter weather was colder than the specified testing temperature range, roadways had snowy, wet, or slick surfaces, and wind interfered with the test fixtures. Three test cases were conducted in both daylight and darkness, giving six datasets and 35 total test runs using an adult Euro NCAP Pedestrian Target (EPTa).

Test results were promising: in all runs for all test cases, the AEB system successfully brought the vehicle under test to a stop before reaching the EPTa. Additional testing is recommended and planned for spring/summer 2020, following AEB algorithm optimization and EPTa heating improvements, and when weather is within test parameters.

FLIR has provided more than 700,000 thermal sensors as part of its night-vision warning systems for a variety of carmakers, including General Motors, Audi, and Mercedes-Benz. The company recently announced that its thermal sensor has been selected by Veoneer, a Tier 1 automotive supplier, for its Level 4 AV production contract with a top global automaker, planned for 2021.

 

Cepton Technologies shows infrastructure LiDAR system

The LiDAR-based object detection, tracking, and classification solution aims to enable companies and cities to build a safer, smarter world.

Cepton Technologies, Inc.’s Helius smart LiDAR system was named a CES 2020 Innovation Awards Honoree by the Consumer Technology Association (CTA). Helius was honored in two categories. Tech for a Better World highlights product innovations aiming to make positive social and global impacts, and Smart Cities recognizes technologies and applications designed to improve urban experiences with increased intelligence. Cepton says the recognition validates its commitment to bringing intelligent LiDAR solutions to a variety of industries to help build a safer, smarter world.

Helius is designed to deliver object detection, tracking, and classification capabilities to enable a range of applications for Smart Cities, transport infrastructure, security, and more. It embodies a fusion of three technologies: 3D LiDAR sensing powered by Cepton’s patented Micro Motion Technology (MMT), edge computing for minimum data burden and maximum ease of integration, and built-in advanced perception software for real-time analytics.

The LiDAR is designed to provide centimeter-accurate 3D sensing of the dimensions, location, and velocity of objects, regardless of lighting conditions, and can collect and process data from multiple sensors for seamless object tracking across sensor coverage zones. Because it does not capture, show, or store any biometric or otherwise identifying data, it aims to maximize protection of people’s privacy when installed as part of various Smart City and security systems. Cepton highlighted a few of Helius’ many use cases.

Security and public safety: The LiDAR enables intrusion detection, access control, and behavior tracking to protect people and assets in public venues, critical infrastructure, construction zones, ports, airports, and manufacturing facilities. It also enables platform safety at train and metro stations. Its anonymized surveillance allows it to operate in spaces subject to HIPAA (Health Insurance Portability and Accountability Act) and GDPR requirements, such as schools and hospitals.

Smart intersections and traffic management: The company says that its LiDAR can help ensure safety and efficiency at road intersections and railway crossings by monitoring pedestrians and vehicles. It can also provide information on traffic density and patterns to automate traffic lights and enable route optimization.

Transport infrastructure: Helius can track vehicles to provide accurate and real-time information to optimize parking management in cities. It can profile vehicles and survey the road surface to evaluate potential road surface damage and to help automate tolling by classifying different types of vehicles passing at highway speeds.

Crowd analytics for large private and public venues: Helius can provide valuable anonymized information about how consumers navigate retail stores, parks, stadiums, and other venues and how they engage with advertisements and products on display. It can be used to track the occupancy of streets and buildings to automate lights, HVAC, and other appliances to help preserve energy. The company states that urban planners can rely on Helius to better understand how people move around and interact with public amenities and identify the impact of construction projects.

 

Mobileye advances camera-sensing goals with Asia expansion

The company says that the two deals show how it’s executing on its multiprong strategy toward full autonomy, which includes mapping, ADAS, MaaS, and consumer AVs.

Mobileye at CES 2020 reported sales close to $1 billion in 2019 and expects that figure to rise by double digits in 2020, as the company announced two agreements surrounding the use of its camera-sensing technology in advanced driver-assistance systems (ADAS) and autonomous mobility-as-a-service (MaaS). SAIC plans to use Mobileye’s REM (Road Experience Management) technology to map China for L2+ ADAS deployment while paving the way for autonomous vehicles (AVs) in the country. In addition, the leaders of Daegu Metropolitan City, South Korea, agreed to establish a long-term cooperation to deploy MaaS based on Mobileye’s self-driving system.

The agreements build on other recent announcements, including an agreement with RATP in partnership with the city of Paris to bring robotaxis to France; a collaboration with Nio to manufacture Mobileye’s self-driving system and sell consumer AVs based on that system and to supply robotaxis exclusively to Mobileye for China and other markets; a joint venture with UniGroup in China for use of map data; and a joint venture with Volkswagen and Champion Motors to operate an autonomous ride-hailing fleet in Israel.

SAIC will use Mobileye’s REM mapping technology on vehicles via its licensed map subsidiary called Heading. The vehicles will contribute to Mobileye’s RoadBook by gathering information on China’s roadways, creating a high-definition map of the country that can be used by vehicles with Level 2 and higher levels of autonomy. The deployment of the mapping solution in China presents opportunities for additional OEM partners to enter the Chinese market with map-related features, says the company.

Mobileye and Daegu City will collaborate to test and deploy robotaxi-based mobility solutions powered by Mobileye’s autonomous vehicle technology. Mobileye will integrate its self-driving system into vehicles to enable a driverless MaaS operation, while Daegu Metropolitan City partners will ensure the regulatory framework supports the establishment of robotaxi fleet operations.

Because it relies on crowdsourcing and low-bandwidth uploads, Mobileye says its REM technology is a fast and cost-effective way to create high-definition maps that can be used for enhanced ADAS such as L2+, as well as for higher levels of autonomy in future self-driving cars. The company says that REM map data can bring insights to businesses in new market segments such as Smart Cities.

Mobileye’s strategy for deploying robotaxis covers the specification, development, and integration of all five value layers of the robotaxi market: self-driving systems, self-driving vehicles, fleet operations, mobility intelligence, and rider experience and services. Mobileye says that this approach is cost-effective, allowing the company to scale global operations more quickly than competitors and thereby capture a greater share of the $160 billion global robotaxi opportunity. Mobileye says its approach of scaling globally with a more economical solution, coupled with its technology, enables it to lead MaaS and consumer-AV development at scale ahead of the market.

 

Insight shows high-resolution Digital Coherent LiDAR

The sensor combines a number of technologies to deliver a claimed low-cost, chip-scale LiDAR that has the sensitivity to range low-reflectivity objects at more than 200 m (660 ft) and delivers industry-leading resolution, putting 10 to 20 times more pixels on objects.

Insight LiDAR demonstrated what it calls Digital Coherent LiDAR, an ultra-high resolution, long-range sensor targeted at the autonomous vehicle (AV) market. The company claims that the sensor combines a number of technologies to deliver a low-cost, chip-scale LiDAR that has the sensitivity to range low-reflectivity objects at more than 200 m (660 ft) and delivers industry-leading resolution, putting 10 to 20 times more pixels on objects.

“Perception for autonomous vehicles is a really difficult problem,” said Michael Minneman, CEO of Insight LiDAR. “In the case of LiDAR, you really can’t pick and choose which system specifications you want to meet. To deliver safe, effective point clouds for the perception team, there are 20 or more really critical specs that all need to be met simultaneously. And they have to be met in an architecture that can be scaled up for low-cost production in the millions of units.”

That is what he says his team has been able to do over 12 years of technology development.

“Perception teams all ask for more pixels on objects,” added Dr. Chris Wood, Head of Development and Technology at Insight LiDAR. “More pixels allow algorithms to identify objects and make critical decisions faster. Our technology enables us to drastically increase the pixel count while simultaneously providing direct velocity information. Together, this data addresses most of the really difficult edge cases perception teams face.”

The company says that the ultra-high resolution, along with direct Doppler velocity, allows perception teams to identify and classify objects faster than before while providing the critical information necessary to solve difficult AV edge cases. The Doppler technology enables velocity measurement of every pixel, reducing system latency by 5 to 8 times. The true solid-state fast-axis scanner is said to have no moving parts in the fast axis, enabling high reliability and low cost. Software-defined foveation enables flexible pixel patterning. A chip-scale architecture, with all photonics on PICs (photonic integrated circuits) and all electronics on ASICs (application-specific integrated circuits), enables a low-cost, scalable semiconductor cost structure. On the immunity front, the company claims that its product is unaffected by sunlight, other LiDAR, or photonic hacking.
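
The per-pixel velocity in a coherent LiDAR comes from the Doppler shift of the returned light, which the receiver measures directly. As a generic worked relation (the 1550-nm wavelength is a typical coherent-LiDAR value assumed here, not a published Insight specification):

```latex
f_d = \frac{2 v_r}{\lambda},
\qquad
v_r = 1\ \text{m/s},\ \lambda = 1550\ \text{nm}
\;\Rightarrow\;
f_d = \frac{2 \times 1\ \text{m/s}}{1.55 \times 10^{-6}\ \text{m}} \approx 1.3\ \text{MHz}
```

Because every return carries this shift, radial velocity is available per pixel without comparing successive frames, which underlies the claimed latency reduction.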

 

Echodyne imaging radar enables two-way data flow

The breakthrough in high-resolution imaging radar for AVs is built on scanning array technology for significantly enhanced machine perception.

Echodyne says it has developed a breakthrough in high-resolution imaging radar for autonomous vehicles. Called EchoDrive, it is built on MESA (Metamaterial Electronically Scanning Array) technology and is said to offer a new type of sensor functionality that significantly enhances machine perception.

The radar technology delivers real-time control over the radar’s interrogation of the drive scene, reportedly enabling a richer form of machine perception. Its dynamic control API (application programming interface) leverages knowledge in the AV stack, such as HD maps, V2X, and other sensor data, to optimize real-time measurement through changing environments, conditions, and scenarios. For example, it can smoothly transition from a normal drive scene to heightened work-zone awareness, increase frame rate to secure an unprotected left turn, or zoom in on a tunnel approach. This dynamic tasking of a high-performance analog beam-steering radar is said to enhance safety by elevating the cognitive functions of artificial intelligence and machine learning in the AV stack.
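
To make the idea of a two-way, dynamically tasked sensor concrete, the Python sketch below shows what such a control loop could look like. It is purely illustrative: the class, field names, and planning logic are our assumptions, not Echodyne’s actual EchoDrive API.

```python
from dataclasses import dataclass

@dataclass
class ScanTask:
    """One radar tasking request (hypothetical fields, not the EchoDrive API)."""
    az_deg: tuple       # azimuth sector of interest, degrees
    el_deg: tuple       # elevation sector of interest, degrees
    frame_hz: float     # revisit rate requested for this sector
    priority: int

def plan_tasks(map_hint, tracked_objects):
    """Hypothetical planner: AV-stack context flows back to the radar."""
    tasks = [ScanTask(az_deg=(-60, 60), el_deg=(-10, 10), frame_hz=10, priority=1)]
    if map_hint == "unprotected_left_turn":
        # Raise the revisit rate on the oncoming-traffic sector before the turn.
        tasks.append(ScanTask(az_deg=(20, 60), el_deg=(-5, 5), frame_hz=30, priority=3))
    for obj in tracked_objects:
        if obj["class"] == "unknown":
            # Re-interrogate ambiguous detections with a narrow, high-priority beam.
            tasks.append(ScanTask(az_deg=(obj["az"] - 2, obj["az"] + 2),
                                  el_deg=(-2, 2), frame_hz=20, priority=2))
    return tasks

print(plan_tasks("unprotected_left_turn", [{"class": "unknown", "az": 35.0}]))
```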

“Today, AV stacks have a one-way flow of data where sensors deliver information and the AV system processes it and takes action,” said Eben Frankenberg, CEO of Echodyne. “What is missing is a dynamic, interactive flow of data that makes cognitive functions possible in the AV stack. With EchoDrive, the AV system can direct the radar to interrogate specific objects to gain clarity on the driving scene for more confident AV decision making.”

With the new product, Echodyne is extending its radar platform technology, which it has been delivering for a range of defense, government, aeronautic, and commercial applications, to help automotive manufacturers build and deliver safer autonomous trucks, buses, and shared mobility passenger vehicles. Among its advanced AV imaging-radar features, the company says that EchoDrive offers high resolution imaging in both azimuth and elevation, road following through active beam-steering, seamless adaptation to changing drive scenes, rich raw unfiltered data, and dynamic control API delivering “industry-first” cognitive sensor functionality.

“EchoDrive is a huge leap forward in imaging radar for AV,” said Tom Driscoll, CTO of Echodyne. “We’re delivering a sensor that, for the first time, brings cognitive sensor capabilities to the AV industry. Built on a combination of Texas Instruments millimeter wave sensors, our own proprietary MESA technology, and a powerful software framework, our adaptive radar sensor improves all aspects of the AV architecture.”