CN117848377A - Vehicle-mounted augmented reality navigation method, device, chip and intelligent automobile


Info

Publication number
CN117848377A
Authority
CN
China
Prior art keywords: information, vehicle, augmented reality, data stream, reality navigation
Prior art date
Legal status
Pending
Application number
CN202410136696.4A
Other languages
Chinese (zh)
Inventor
杨辉
邬栋海
李成文
Current Assignee
Chongqing Chang'an Technology Co., Ltd.
Original Assignee
Chongqing Chang'an Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Chongqing Chang'an Technology Co., Ltd.
Priority to CN202410136696.4A
Publication of CN117848377A


Abstract

The invention relates to a vehicle-mounted augmented reality navigation method, device, chip and intelligent automobile for a vehicle equipped with an advanced driving assistance system. The method comprises the following steps: acquiring data streams collected by vehicle-mounted sensors, the data streams comprising a first data stream collected by a front-view camera and a second data stream collected by other vehicle-mounted sensors; obtaining driving assistance information according to the first data stream and the second data stream; obtaining real-scene information according to the first data stream; fusing the driving assistance information, the real-scene information and map information of the advanced driving assistance system to obtain augmented reality navigation information; and displaying the augmented reality navigation information. No hardware such as an AR navigation camera or driving recorder is needed, reducing the cost of vehicle-mounted augmented reality navigation; and because driving assistance information derived from the data streams of multiple sensors is fused in, vehicle-mounted augmented reality navigation displays richer information with a better display effect.

Description

Vehicle-mounted augmented reality navigation method, device, chip and intelligent automobile
Technical Field
The invention relates to the technical field of intelligent automobiles, and in particular to a vehicle-mounted augmented reality navigation method, device, chip and intelligent automobile.
Background
At present, vehicle-mounted augmented reality navigation is mainly realized by using an AR (Augmented Reality) navigation camera or a driving recorder to collect AR navigation information, which is then displayed on a map. The AR navigation camera or driving recorder must be additionally fitted, which increases hardware cost, and the information displayed by such AR navigation is rudimentary.
Disclosure of Invention
A first object of the invention is to provide a vehicle-mounted augmented reality navigation method to solve the problems of high cost and insufficient display information in existing vehicle-mounted augmented reality navigation; a second object is to provide a vehicle-mounted augmented reality navigation device; a third, to provide a chip; and a fourth, to provide an intelligent automobile.
To achieve the above objects, the invention adopts the following technical solution:
an in-vehicle augmented reality navigation method for a vehicle having an advanced driving assistance system, the method comprising:
acquiring data streams acquired by vehicle-mounted sensors, wherein the data streams comprise a first data stream acquired by a front-view camera and a second data stream acquired by other vehicle-mounted sensors;
obtaining driving assistance information according to the first data stream and the second data stream;
obtaining real-scene information according to the first data stream;
fusing the driving assistance information, the real-scene information and the map information of the advanced driving assistance system to obtain augmented reality navigation information;
and displaying the augmented reality navigation information.
According to this technical means, the real-scene information for augmented reality navigation is generated from the data stream collected by the front-view camera, multiplexing the front-view camera so that no additional AR navigation camera or driving recorder is required; and driving assistance information obtained by fusing the data streams collected by multiple vehicle-mounted sensors can be displayed, so the information displayed by augmented reality navigation is richer and the display effect of AR navigation is improved.
Further, the fusing the driving assistance information, the real-scene information and the map information of the advanced driving assistance system to obtain augmented reality navigation information includes:
carrying out map registration on the map information and the real-scene information to generate an actual road image;
and superposing the driving assistance information on the actual road image by augmented reality to obtain the augmented reality navigation information.
According to this technical means, the driving assistance information is superimposed after the map information and the real-scene information have been registered, so that the map information, the real-scene information and the driving assistance information are effectively fused and the information displayed by augmented reality navigation is richer.
Further, the obtaining real-scene information according to the first data stream includes:
correcting the first data stream to remove distorted areas, obtaining the real-scene information.
According to this technical means, the video stream collected by the front-view camera is corrected, so the image quality is higher and registration and fusion with the map information are more accurate.
Further, the obtaining driving assistance information according to the first data stream and the second data stream includes:
performing perceptual fusion processing on the video streams in the first data stream and the second data stream to obtain first environment information;
clustering point cloud data acquired by millimeter wave radar in the second data stream to obtain second environment information;
performing target identification on the video stream in the second data stream by adopting a target detection algorithm to obtain distance information;
and fusing the first environment information, the second environment information and the distance information to obtain the driving assistance information.
According to this technical means, various kinds of driving assistance information can be obtained by performing target recognition, clustering, fusion and similar processing on the video streams and point cloud data in the first data stream and the second data stream, enriching the content of the information displayed by AR navigation.
Further, after acquiring the data stream acquired by the vehicle-mounted sensor, the method further comprises:
when the data stream acquired by the vehicle-mounted sensor is a video stream, optimizing the data stream according to the performance of the image signal processor of the vehicle-mounted sensor.
According to this technical means, the image quality of the video streams collected by the vehicle-mounted sensors can be improved, improving the image precision and display effect of AR navigation.
Further, a first task and a second task are set for parallel computation, the first task obtaining driving assistance information according to the first data stream and the second data stream, and the second task obtaining real-scene information according to the first data stream.
According to this technical means, computing the driving assistance information and the real-scene information in parallel improves the efficiency of generating the AR navigation information and keeps the two better synchronized in time.
Further, the front-view camera is a camera that collects short-range information in front of the vehicle.
According to this technical means, using a camera that collects short-range information in front of the vehicle as the front-view camera gives a better short-range AR navigation effect, better suited to vehicle-mounted augmented reality navigation in complex urban scenes.
An in-vehicle augmented reality navigation device, comprising:
the data stream acquisition module is used for acquiring data streams acquired by the vehicle-mounted sensors, wherein the data streams comprise first data streams acquired by the front-view cameras and second data streams acquired by other vehicle-mounted sensors;
the driving assistance information module is used for obtaining driving assistance information according to the first data stream and the second data stream;
the real-scene information module is used for obtaining real-scene information according to the first data stream;
the augmented reality navigation information module is used for fusing the driving assistance information, the real scene information and the map information of the advanced driving assistance system to obtain augmented reality navigation information;
and the display module is used for displaying the augmented reality navigation information.
According to this technical means, the cost of vehicle-mounted augmented reality navigation can be reduced, the information displayed by vehicle-mounted augmented reality navigation enriched, and the display effect improved.
Further, the driving assistance information module and the real-scene information module are operated in a parallel computing mode.
According to this technical means, parallel computation improves the efficiency of generating the AR navigation information and keeps the driving assistance information and the real-scene information better synchronized in time.
Further, the second data stream includes point cloud data acquired by millimeter wave radar.
According to this technical means, adopting the millimeter wave radar yields more driving assistance information, making AR navigation safer.
A chip stores a vehicle-mounted augmented reality navigation program which, when executed by a processor, implements the steps of any one of the above vehicle-mounted augmented reality navigation methods.
An intelligent automobile comprises a memory, a processor, and a vehicle-mounted augmented reality navigation program stored in the memory and executable on the processor; when executed by the processor, the program implements the steps of any one of the above vehicle-mounted augmented reality navigation methods.
The invention has the following beneficial effects: no hardware such as an AR navigation camera or driving recorder is needed, reducing the cost of vehicle-mounted augmented reality navigation; and driving assistance information derived from the data streams of multiple sensors is fused in, so the information displayed by augmented reality navigation is richer and the display effect is better.
Drawings
FIG. 1 is a schematic diagram of the display effect of conventional vehicle-mounted augmented reality navigation;
FIG. 2 is a schematic diagram of the display effect of the vehicle-mounted augmented reality navigation of the present invention;
FIG. 3 is a schematic flow chart of implementing vehicle-mounted augmented reality navigation according to an embodiment of the present invention;
FIG. 4 is a functional framework diagram of the embodiment of FIG. 2;
FIG. 5 is a schematic diagram of the device connections of the embodiment of FIG. 2;
FIG. 6 is a flowchart of obtaining driving assistance information according to an embodiment of the present invention;
FIG. 7 is a flowchart of obtaining augmented reality navigation information according to an embodiment of the present invention;
FIG. 8 is a flow chart of an implementation scenario of the embodiment of FIG. 2;
FIG. 9 is a schematic structural diagram of a vehicle-mounted augmented reality navigation device according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of an intelligent automobile according to an embodiment of the present invention.
Detailed Description
Further advantages and effects of the present invention will become readily apparent to those skilled in the art from the disclosure herein, with reference to the accompanying drawings and the preferred embodiments. The invention may also be practiced or carried out in other, different embodiments, and the details in this description may be modified or varied in various respects without departing from the spirit and scope of the present invention. It should be understood that the preferred embodiments are presented by way of illustration only and not by way of limitation.
It should be noted that the illustrations provided in the following embodiments merely illustrate the basic concept of the invention schematically. The drawings show only the components related to the invention rather than the number, shape and size of the components in an actual implementation, where the form, quantity and proportions of the components may vary arbitrarily and the component layout may be more complex.
Intelligent-vehicle technology in the automobile industry is developing rapidly, and many sensors are being added to vehicles, such as front-view long-range cameras, front-view short-range cameras, surround-view cameras, rear-view cameras, panoramic cameras, millimeter wave radar and lidar. Advanced driving assistance systems (Advanced Driver Assistance System: ADAS) fuse multi-sensor data to ensure reliable perception results. Current systems for vehicle-mounted augmented reality navigation (AR navigation for short) are independent of the advanced driving assistance system: they merely capture the surrounding environment with an AR (augmented reality) camera or driving recorder and superimpose virtual information on a screen to realize the navigation function, so the information obtained by multi-sensor data fusion cannot be effectively utilized. As shown in fig. 1, the displayed information is not rich enough, and an AR navigation camera or driving recorder must be additionally fitted, increasing hardware cost.
The invention multiplexes the data stream collected by the front-view camera, using it in place of the data stream of an AR navigation camera, so no additional hardware such as an AR navigation camera or driving recorder is needed and the cost of vehicle-mounted augmented reality navigation is reduced. As shown in fig. 2, the AR navigation display further fuses driving assistance information obtained from the data streams of multiple sensors, so the information displayed in vehicle-mounted augmented reality navigation is richer.
The embodiment of the invention displays AR navigation images in real time on a vehicle equipped with an advanced driving assistance system. The vehicle is provided with a high-compute central computing unit, a front-view long-range camera, a front-view short-range camera, a forward 4D millimeter wave radar, a surround-view camera, and a central control screen for the augmented reality navigation display.
The front-view short-range camera is mounted at the front of the vehicle to capture road conditions, traffic signs, pedestrians, other vehicles and similar information ahead of the vehicle, supporting functions such as automatic emergency braking, lane keeping assistance and traffic sign recognition in the vehicle's advanced driving assistance system. Its physical parameters are similar to those of a conventional AR camera: a horizontal field of view HFOV (Horizontal Field of View) of about 120°, a vertical field of view VFOV (Vertical Field of View) of about 65°, 8-megapixel resolution, and a frame rate of at least 30 fps (frames per second), providing an 8-megapixel video stream covering the field of view up to 60 m ahead for recognizing close-range objects such as those at intersections. The device parameters of the front-view long-range camera are: a field of view FOV (Field of View) of 30°, 8-megapixel resolution, and a frame rate of 30 fps, providing an 8-megapixel video stream covering the field of view up to 235 m ahead for recognizing distant target vehicles and stationary objects. The front-view short-range camera and the front-view long-range camera are collectively referred to as the front-view camera.
The device parameters of the forward 4D millimeter wave radar are: a field of view FOV (Field of View) of at least ±60°, or at least ±20° at 300 m range; a horizontal angular resolution of 1°; and a horizontal angular accuracy of 0.1°.
Device parameters of the surround-view camera: a field of view FOV (Field of View) of 100°, 3-megapixel resolution, and a frame rate of 30 fps, providing a 3-megapixel video stream covering a lateral field of view up to 136 m for recognizing target vehicles and traffic-light scenes.
The central computing unit runs the vehicle-mounted augmented reality navigation method and the advanced driving assistance system. Its device parameters are: an AI (artificial intelligence) compute capability of 254 AI TOPS (Tera Operations Per Second) to 508 AI TOPS. The central computing unit processes the raw data of the cameras (the front-view cameras and the surround-view camera), adjusts the cameras' ISP (Image Signal Processor) settings, feeds the results to the AI algorithm models, and processes the raw point cloud data supplied by the forward 4D millimeter wave radar.
The above device parameters merely describe the vehicle of this embodiment and do not limit the device parameters of vehicles using the method of the invention.
As shown in fig. 3, the specific steps for implementing the vehicle-mounted augmented reality navigation include:
step S100: acquiring data streams acquired by a vehicle-mounted sensor, wherein the data streams comprise a first data stream acquired by a front-view camera and a second data stream acquired by other vehicle-mounted sensors;
The vehicle-mounted sensors serve the advanced driving assistance system, which computes over the data streams they collect to obtain environment perception data, driving behavior data, risk assessment data and so on. The environment perception data include the road conditions around the vehicle, traffic signs, traffic signal states, lane line information, and the positions and speeds of obstacles and other vehicles; the driving behavior data include steering wheel rotation, accelerator and brake pedal usage, driver fatigue and so on; the risk assessment data include relative speed and distance to other vehicles, collision risk, and the like. The front-view long-range camera, front-view short-range camera, forward 4D millimeter wave radar and surround-view camera in this embodiment are all vehicle-mounted sensors for the advanced driving assistance system, but the vehicle-mounted sensors are not limited to these; for example, they may also include lidar, rear-view cameras, and so on.
The form of a data stream depends on the vehicle-mounted sensor: for example, the data streams collected by the front-view long-range camera, front-view short-range camera and surround-view camera are video streams, while the data streams collected by the forward 4D millimeter wave radar and the lidar are point cloud data.
Referring to fig. 4, the data stream collected by the front-view camera (the front-view camera data stream in fig. 4) is multiplexed: it is used both for the operation of the advanced driving assistance system (e.g., the preprocessing module and the fusion module in fig. 4) and for generating the AR navigation image (e.g., the image restoration module and the layer-stacking module in fig. 4). The data streams collected by other vehicle-mounted sensors are those of all vehicle-mounted sensors excluding the front-view camera, such as the data streams of the surround-view camera and the millimeter wave radar in fig. 4.
The manner of acquiring the data streams collected by the vehicle-mounted sensors is not limited: the central computing unit may actively read the collected data streams at set intervals, or the vehicle-mounted sensors may actively send their collected data streams to the central computing unit. In this embodiment the central computing unit is the SOC (System on Chip) chip of the intelligent-driving domain controller; referring to fig. 5, the camera sensors arranged around the vehicle feed their raw video streams to the central computing unit over LVDS (Low Voltage Differential Signaling) lines, and the forward millimeter wave radar feeds its raw data to the central computing unit over CAN FD.
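Purely as an illustration of this input path, a minimal receive loop for the radar's raw CAN FD frames might look like the sketch below, using the python-can library; the channel name and the frame handling are assumptions, and the system described here actually runs on the domain controller SOC rather than in Python.

```python
# Hedged sketch: reading raw forward-radar frames over CAN FD with python-can.
# The channel name and payload handling are illustrative assumptions only.
import can

def radar_frames(channel: str = "can0"):
    """Yield (arbitration_id, payload) tuples from the forward radar bus."""
    with can.interface.Bus(channel=channel, interface="socketcan", fd=True) as bus:
        while True:
            msg = bus.recv(timeout=0.5)  # block briefly, then poll again
            if msg is not None:
                yield msg.arbitration_id, bytes(msg.data)
```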
In one embodiment, after the data streams collected by the vehicle-mounted sensors are obtained, each video stream is optimized according to the ISP (Image Signal Processor) capabilities of the sensor that collected it (such as the front-view long-range camera, the front-view short-range camera and the surround-view camera): white balance adjustment, exposure, contrast, saturation, sharpening, noise suppression, autofocus and auto-exposure, color correction, dynamic range enhancement and so on, making the captured images clearer and more faithful, with more accurate color, and improving image quality and performance.
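For illustration only, a software analogue of a few of the listed corrections is sketched below; the operations and parameters are assumptions standing in for camera-side ISP tuning, not the patent's actual implementation.

```python
# Hedged sketch: illustrative post-processing of one BGR video frame.
# Real ISP tuning happens in the camera pipeline; parameters here are assumed.
import cv2
import numpy as np

def enhance_frame(frame: np.ndarray) -> np.ndarray:
    # Gray-world white balance: scale each channel toward the global mean.
    means = frame.reshape(-1, 3).mean(axis=0)
    balanced = np.clip(frame * (means.mean() / means), 0, 255).astype(np.uint8)
    # Noise suppression with an edge-preserving bilateral filter.
    denoised = cv2.bilateralFilter(balanced, d=5, sigmaColor=50, sigmaSpace=50)
    # Mild unsharp masking for sharpening.
    blurred = cv2.GaussianBlur(denoised, (0, 0), sigmaX=2.0)
    return cv2.addWeighted(denoised, 1.5, blurred, -0.5, 0)
```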
Because this embodiment mainly targets vehicle-mounted augmented reality navigation in complex urban scenes, where the vehicle travels at low speed, the front-view short-range camera, which focuses on close-range shooting, captures the near field ahead of the vehicle with a wider field of view than the front-view long-range camera. Therefore, in this embodiment the first data stream includes only the data stream collected by the front-view short-range camera, and the second data stream includes the data streams collected by the millimeter wave radar, the surround-view camera and the front-view long-range camera. Substituting the data stream of the front-view short-range camera for that of an AR navigation camera improves the picture quality of the augmented reality navigation image. However, as needed, the data stream collected by the front-view long-range camera may serve as the first data stream, or the data streams of both front-view cameras may together serve as the first data stream.
Step S200: obtaining driving assistance information according to the first data stream and the second data stream;
At present, vision can reliably identify targets to about 230 m, a conventional millimeter wave radar tracks targets stably to about 120 m, and a 4D millimeter wave radar reaches 200 m. Fusing the 4D millimeter wave radar with vision therefore ensures reliable detection of distant targets. Because a conventional millimeter wave radar returns few detection points, it performs poorly on bicycles, electric bikes and motorcycles and cannot distinguish stationary targets, whereas a 4D millimeter wave radar recognizes them well and copes better with complex urban scenes. Moreover, since current vision algorithms are weak in longitudinal distance, speed and longitudinal precision, combining the forward 4D millimeter wave radar with the cameras safeguards the safety of AR navigation. Accordingly, the second data stream also includes the point cloud data collected by the forward 4D millimeter wave radar.
After the first data stream and the second data stream are acquired, together they amount to the data streams collected by all the vehicle-mounted sensors serving the advanced driving assistance system. Then, according to the form of each data stream, methods such as target recognition, target detection and clustering are applied to the first data stream, the second data stream, or their fused combination, to obtain data on environment perception, driving behavior, risk assessment and so on. The driving assistance information needed for AR navigation is then selected from these data. The driving assistance information may include, but is not limited to: road conditions, traffic signs, traffic light status, lane line information, obstacles, positions and speeds of other vehicles, relative speed and distance to other vehicles, collision risk, and so on.
In this embodiment, as shown in fig. 6, the specific steps for obtaining the driving assistance information include:
step S210: performing perceptual fusion processing on video streams in the first data stream and the second data stream to obtain first environment information;
step S220: clustering point cloud data acquired by a forward 4D millimeter wave radar in a second data stream to obtain second environmental information;
step S230: performing target identification on the video stream in the second data stream by adopting a target detection algorithm to obtain distance information;
referring to fig. 4, after a preprocessing module is deployed in the central computing unit and the first data stream and the second data stream are input into the central computing unit, the first data stream and the second data stream are input into the SoC through a deserializer in the central computing unit, and the preprocessing module performs perceptual fusion processing on video stream data with optimized ISP performance by adopting a target detection algorithm and a fusion algorithm, so as to obtain first environmental information, such as: target information, distance information, freespace, and the like.
The point cloud data collected by the forward 4D millimeter wave radar in the second data stream are clustered and target-recognized by a clustering algorithm in the preprocessing module, yielding second environment information containing target information, freespace and the like.
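The patent does not name the clustering algorithm; as one hedged example, DBSCAN over the radar detections could be sketched as follows, with the point layout (x, y, z, radial velocity) and the parameters assumed.

```python
# Hedged sketch: grouping 4D radar detections into object-level targets.
# DBSCAN is one common choice for sparse radar point clouds, used here
# purely as an example; eps/min_samples are illustrative values.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_radar_points(points: np.ndarray) -> list:
    """points: (N, 4) array of [x, y, z, radial_velocity] detections."""
    labels = DBSCAN(eps=1.5, min_samples=3).fit_predict(points[:, :3])
    targets = []
    for label in set(labels) - {-1}:  # label -1 marks noise points
        cluster = points[labels == label]
        targets.append({
            "center": cluster[:, :3].mean(axis=0),            # target position
            "velocity": float(cluster[:, 3].mean()),          # mean radial speed
            "extent": cluster[:, :3].max(axis=0) - cluster[:, :3].min(axis=0),
        })
    return targets
```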
An OD (Object Detection) recognition algorithm in the preprocessing module analyzes the data labels of the video streams in the second data stream, recognizes corresponding targets of types such as people, vehicles and objects, and obtains distance information.
Step S240: fusing the first environment information, the second environment information and the distance information to obtain the driving assistance information.
Referring to fig. 4, a fusion module is also deployed in the central computing unit. The fusion module applies a fusion algorithm to the obtained first environment information, second environment information and distance information, producing driving assistance information that includes the final target information and warning prompts.
The target detection algorithm, the fusion algorithm and the clustering algorithm are standard algorithms in advanced driving assistance systems and are not described further here.
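Since the fusion algorithm itself is left to standard practice, the following is only a minimal late-fusion sketch of step S240; the Target fields and the nearest-neighbour association rule are assumptions made for illustration.

```python
# Hedged sketch: associating camera and radar targets by proximity and
# merging them into one driving-assistance target list (step S240).
from dataclasses import dataclass

@dataclass
class Target:
    position: tuple      # (x, y) in the vehicle frame, metres
    distance: float      # longitudinal distance, metres
    source: str          # which stage produced the target

def fuse(camera_targets, radar_targets, match_radius=2.0):
    fused, used = [], set()
    for cam in camera_targets:
        match = None
        for i, rad in enumerate(radar_targets):
            if i in used:
                continue
            dx = cam.position[0] - rad.position[0]
            dy = cam.position[1] - rad.position[1]
            if (dx * dx + dy * dy) ** 0.5 <= match_radius:
                match = i
                break
        if match is not None:
            used.add(match)
            # Trust radar range, keep the camera's lateral position.
            fused.append(Target(cam.position, radar_targets[match].distance, "fused"))
        else:
            fused.append(cam)
    fused.extend(r for i, r in enumerate(radar_targets) if i not in used)
    return fused
```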
The existing advanced driving assistance system already processes and perceptually fuses the raw data of each sensor through the functions deployed in the central computing unit, generating intelligent driving information. This embodiment therefore improves the functional modules of the advanced driving assistance system so that the fusion module provides a data output interface, through which the information required for AR navigation is output to obtain the driving assistance information.
Step S300: obtaining real-scene information according to the first data stream;
step S400: fusing driving assistance information, real scene information and map information of an advanced driving assistance system to obtain augmented reality navigation information;
referring to fig. 4, in this embodiment, the data flow of the forward looking short distance camera is split inside the central computing unit, the upper part is used for generating intelligent driving information, and the lower part is used for generating AR navigation information. When the first data stream is used for intelligent driving, only target detection is needed, and image correction is not needed; and when the AR navigation information is used for generating the AR navigation information, hierarchical fusion is needed in the later period, so that correction is needed to enable the fusion to be more accurate. Therefore, when AR navigation information is generated, firstly, the data flow of the front-view short-distance camera is processed through the image restoration module, the place with distortion in the original data of the front-view short-distance camera is corrected, the corrected image after distortion correction is obtained, the quality of the image is guaranteed, and then the corrected image is processed through the image processing technology, so that the real-scene information such as road conditions in front of a vehicle, traffic signs around the vehicle, lane lines, pedestrians, vehicles and the like is obtained. And then, the real scene information, the map information of the advanced driving assistance system and the driving assistance information output by the fusion module are overlapped through the map layer overlapping module, so that the augmented reality navigation information is obtained. The map information includes, but is not limited to, data such as a map of a location of the vehicle, a road network, a navigation route, and the like. The map information in the present embodiment is high-precision map data provided by the advanced driving assistance system.
Specifically, as shown in fig. 7, the step of obtaining the augmented reality navigation information according to this embodiment includes:
step S410: carrying out map registration on the map information and the real scene information to generate an actual road image;
Feature points such as intersections, buildings and signboards are extracted from the real-scene information and the map information; feature matching is performed using feature-point matching algorithms such as SIFT (Scale-Invariant Feature Transform) or SURF (Speeded-Up Robust Features); registration of the real-scene information with the map information is then achieved through a transformation matrix or another transformation model. Once registration is complete, the registered map information can be used to correct and transform the real-scene information, generating the actual road image.
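As an illustrative sketch of this step, the map layer can be treated as an image aligned to the live frame by a RANSAC homography; SIFT matching and the homography model are among the options the text allows, and the function below is an assumption, not the patent's implementation.

```python
# Hedged sketch: SIFT feature matching plus a RANSAC homography that
# warps the rendered map view into the live road image's frame.
# (Error handling for empty feature/match sets is omitted.)
import cv2
import numpy as np

def register_map_to_live(map_img: np.ndarray, live_img: np.ndarray) -> np.ndarray:
    g1 = cv2.cvtColor(map_img, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(live_img, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(g1, None)
    kp2, des2 = sift.detectAndCompute(g2, None)
    # Lowe ratio test to keep only distinctive matches.
    good = [m for m, n in cv2.BFMatcher().knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = live_img.shape[:2]
    return cv2.warpPerspective(map_img, H, (w, h))  # map aligned to the road image
```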
Step S420: superposing the driving assistance information on the actual road image by augmented reality to obtain the augmented reality navigation information.
Based on the position and attitude information of the vehicle, the driving assistance information (such as the navigation route, traffic signs, lane lines and obstacles ahead) is superimposed onto the actual road image using augmented reality techniques such as viewpoint transformation and perspective transformation, yielding the augmented reality navigation information.
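A hedged sketch of this overlay step is given below; a full implementation would derive the warp from the vehicle pose and camera extrinsics, which is abbreviated here to a single assumed ground-to-image homography H.

```python
# Hedged sketch: compositing a guidance layer (arrows, labels on black)
# onto the corrected road image with an assumed perspective warp H.
import cv2
import numpy as np

def overlay_guidance(road_img: np.ndarray, layer: np.ndarray,
                     H: np.ndarray, alpha: float = 0.6) -> np.ndarray:
    h, w = road_img.shape[:2]
    warped = cv2.warpPerspective(layer, H, (w, h))
    mask = warped.any(axis=2)  # only blend where the layer has content
    out = road_img.copy()
    out[mask] = cv2.addWeighted(road_img, 1.0 - alpha, warped, alpha, 0)[mask]
    return out
```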
Conventional AR navigation captures the surroundings with a camera and superimposes virtual information on a screen; since the camera itself offers neither ranging nor collision warning, AR navigation based on an AR camera cannot display information such as inter-vehicle distance and collision warnings. In this embodiment, the driving assistance information obtained by superimposing and fusing the sensor data streams can display inter-vehicle distance, collision warnings and similar information, so the information shown during AR navigation is richer and more convenient to use. It also suits a variety of driving scenarios. For example, in a normal driving scene the AR navigation image can show the distances to the vehicles on the left and right, and collision warnings regarding the vehicle ahead, obstacles and so on. In extreme weather, such as heavy fog, rain or snow, because multiple sensor data streams are used, the millimeter wave radar still detects normally even if the data stream collected by the camera is degraded, and can still provide rich driving assistance information for AR navigation, making driving safer.
In this embodiment, since the SOC chip of the central computing unit has ample computing power, the driving assistance information of step S200 and the real-scene information of step S300 are computed in parallel: obtaining the driving assistance information according to the first and second data streams is taken as the first task, obtaining the real-scene information according to the first data stream as the second task, and the two tasks are computed in parallel. This improves computational efficiency and keeps the data streams of the sensors better synchronized in time when the AR navigation is generated by fusion.
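Purely as an illustration of the two-task arrangement, a thread-pool version operating on inputs that share one capture timestamp might look like the sketch below; the helper functions are hypothetical placeholders, and the real system schedules these tasks on the SOC.

```python
# Hedged sketch: running step S200 and step S300 in parallel, then fusing
# the results (step S400). The three helpers are hypothetical stand-ins
# (assumptions) for the modules described above.
from concurrent.futures import ThreadPoolExecutor

def get_driving_assistance_info(frame, sensor_data):
    return {"targets": [], "warnings": []}           # step S200 placeholder

def get_real_scene_info(frame):
    return {"lanes": [], "signs": []}                # step S300 placeholder

def fuse_navigation_info(assistance, scene, map_info):
    return {**assistance, **scene, "map": map_info}  # step S400 placeholder

def build_ar_frame(first_stream_frame, second_stream_data, map_info):
    # Run the two tasks in parallel on inputs sharing one timestamp.
    with ThreadPoolExecutor(max_workers=2) as pool:
        task1 = pool.submit(get_driving_assistance_info,
                            first_stream_frame, second_stream_data)
        task2 = pool.submit(get_real_scene_info, first_stream_frame)
        assistance, scene = task1.result(), task2.result()
    return fuse_navigation_info(assistance, scene, map_info)
```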
Step S500: displaying the augmented reality navigation information;
referring to fig. 5, after the augmented reality navigation information is obtained, image rendering is performed on the SOC chip of the central computing unit according to the augmented reality navigation information, an augmented reality navigation image (AR navigation image) is generated, and then the augmented reality navigation image is transmitted to the cockpit controller through the LVDS signal line, and the cockpit controller is transmitted to the display device through the LVDS signal line for display. Note that the display device includes, but is not limited to: a central control screen, a liquid crystal instrument, an information entertainment display screen, an instrument panel, an electronic rearview mirror, an AR-HUD and the like.
In the current augmented reality navigation method, the data stream of the AR navigation camera is directly sent to a cabin controller for operation and then sent to a central control screen or a display device such as a liquid crystal instrument for display. Because the SOC chip of the cockpit area controller has little calculation power and can only process the map with low precision, the embodiment adopts the SOC chip of the cockpit area controller to perform operations such as layer superposition, layer rendering and the like, can display AR navigation information on the map with high precision, has clearer AR navigation images and improves the display effect.
The implementation process of the embodiment is as follows: as shown in fig. 8, after the user enters the vehicle, the vehicle is allowed to work normally, and the central computing unit completes power supply, initialization and self-inspection of the front-view long-distance camera and the front-view short-distance camera. After the vehicle normally works, the data stream acquired by the forward-looking short-distance camera is transmitted to the central computing unit, the data stream is not only applied to a sensing algorithm of the intelligent driving auxiliary system, but also transmitted to the image restoration module, the distortion area in the data stream acquired by the camera is corrected through the image restoration module, and then the data stream is fused with a high-precision map and other sensing information (such as distance information sensed by a millimeter wave radar) built in the central computing unit, and is rendered through the map layer superposition module, and then an augmented reality navigation image (AR navigation image) is displayed through the display device. After the AR navigation function is turned on, a driver can see scene information in 120-degree visual field of the FOV through a central control screen or a liquid crystal instrument, wherein the 60 meters of the head of the vehicle is provided by the front-view short-distance camera. In the running process of the vehicle, the central computing unit continuously fuses and processes the original data acquired by other cameras, and if the situation occurs right in front of the vehicle, the original data can appear on the central control screen or the liquid crystal instrument through characters or obvious identification information, so that a user is prompted to look ahead and keep concentration of attention, and accidents are avoided.
In summary, the data flow of the front-view short-distance camera is utilized to replace the data flow of the AR navigation camera, and the augmented reality navigation interface is displayed on the central control screen or the liquid crystal instrument, so that hardware resources such as large calculation power of the vehicle-mounted sensor and the central computing unit can be reasonably and maximally multiplexed, and the software development capability and the capability of continuous evolution of functional software of the whole vehicle enterprise are created and improved; driving auxiliary information obtained by fusion according to data streams acquired by a plurality of vehicle-mounted sensors can be displayed in augmented reality navigation, and the display effect of AR navigation is improved; moreover, the intelligent driving information is visually displayed through AR navigation, so that the trust and transparency between a driver and a vehicle can be improved.
The embodiment of the invention also provides a vehicle-mounted augmented reality navigation device, as shown in fig. 9, comprising:
the data stream obtaining module 600 is configured to obtain a data stream collected by the vehicle-mounted sensor, where the data stream includes a first data stream collected by the front-view camera and a second data stream collected by other vehicle-mounted sensors;
a driving assistance information module 610, configured to obtain driving assistance information according to the first data stream and the second data stream;
a real-scene information module 620, configured to obtain real-scene information according to the first data stream;
the augmented reality navigation information module 630 is configured to fuse the driving assistance information, the real scene information and map information of the advanced driving assistance system to obtain augmented reality navigation information;
and a display module 640, configured to display the augmented reality navigation information.
Optionally, the driving assistance information module and the real-scene information module are operated in a parallel computing mode.
Optionally, the second data stream includes point cloud data collected by a forward 4D millimeter wave radar.
In particular, for the specific functions of the vehicle-mounted augmented reality navigation device in this embodiment, reference may be made to the corresponding descriptions in the vehicle-mounted augmented reality navigation method, which are not repeated here.
Based on the above embodiments, the invention further provides an intelligent automobile. As shown in fig. 10, the intelligent automobile includes a processor and a memory connected via a system bus. The processor of the intelligent automobile provides computing and control capability. The memory of the intelligent automobile includes a nonvolatile storage medium and internal memory: the nonvolatile storage medium stores an operating system and a vehicle-mounted augmented reality navigation program, and the internal memory provides the environment in which the operating system and the vehicle-mounted augmented reality navigation program in the nonvolatile storage medium run. When executed by the processor, the vehicle-mounted augmented reality navigation program implements any one of the vehicle-mounted augmented reality navigation methods described above.
The embodiment of the invention also provides a chip, for example an SOC chip, storing a vehicle-mounted augmented reality navigation program which, when executed by a processor, implements the steps of any one of the vehicle-mounted augmented reality navigation methods provided by the embodiments of the invention.
In the description of this specification, reference to the terms "one embodiment," "some embodiments," "an example," "a specific example," or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or N embodiments or examples, and the different embodiments or examples described in this specification, and their features, may be combined by those skilled in the art provided they do not contradict each other.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "N" means at least two, for example, two, three, etc., unless specifically defined otherwise.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment, or portion of code including one or more executable instructions for implementing specific logical functions or steps of the process; and the scope of the preferred embodiments of the invention includes further implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in the reverse order, depending on the functionality involved, as should be understood by those skilled in the art of the embodiments of the invention.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device, such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from the instruction execution system, apparatus, or device. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or N wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be captured electronically, for example by optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and stored in a computer memory.
It is to be understood that portions of the invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, N steps or methods may be implemented by software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented by any of the following techniques known in the art, or a combination thereof: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGA), field programmable gate arrays (FPGA), and the like.
Those of ordinary skill in the art will appreciate that all or some of the steps of the above method embodiments may be carried out by a program instructing related hardware; the program may be stored in a computer-readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above embodiments are merely preferred embodiments for fully explaining the present invention, and the scope of the present invention is not limited thereto. Equivalent substitutions and modifications will occur to those skilled in the art based on the present invention, and are intended to be within the scope of the present invention.

Claims (12)

1. An in-vehicle augmented reality navigation method for a vehicle equipped with an advanced driving assistance system, comprising:
acquiring data streams acquired by vehicle-mounted sensors, wherein the data streams comprise a first data stream acquired by a front-view camera and a second data stream acquired by other vehicle-mounted sensors;
obtaining driving assistance information according to the first data stream and the second data stream;
obtaining real-scene information according to the first data stream;
fusing the driving assistance information, the real-scene information and the map information of the advanced driving assistance system to obtain augmented reality navigation information;
and displaying the augmented reality navigation information.
2. The vehicle-mounted augmented reality navigation method of claim 1, wherein the fusing the driving assistance information, the real-scene information, and the map information of the advanced driving assistance system to obtain the augmented reality navigation information comprises:
carrying out map registration on the map information and the real-scene information to generate an actual road image;
and superposing the driving assistance information on the actual road image by augmented reality to obtain the augmented reality navigation information.
3. The vehicle-mounted augmented reality navigation method of claim 1, wherein the obtaining real-scene information according to the first data stream comprises:
correcting the first data stream to remove distorted areas, obtaining the real-scene information.
4. The vehicle-mounted augmented reality navigation method of claim 1, wherein the obtaining driving assistance information from the first data stream and the second data stream comprises:
performing perceptual fusion processing on the video streams in the first data stream and the second data stream to obtain first environment information;
clustering point cloud data acquired by millimeter wave radar in the second data stream to obtain second environment information;
performing target identification on the video stream in the second data stream by adopting a target detection algorithm to obtain distance information;
and fusing the first environment information, the second environment information and the distance information to obtain the driving assistance information.
5. The vehicle-mounted augmented reality navigation method according to claim 1, further comprising, after acquiring the data stream acquired by the vehicle-mounted sensor:
when the data stream acquired by the vehicle-mounted sensor is a video stream, optimizing the data stream according to the performance of the image signal processor of the vehicle-mounted sensor.
6. The vehicle-mounted augmented reality navigation method according to claim 1, wherein a first task and a second task are set for parallel computation, the first task obtaining driving assistance information according to the first data stream and the second data stream, and the second task obtaining real-scene information according to the first data stream.
7. The vehicle-mounted augmented reality navigation method according to claim 1, wherein the front-view camera is a camera that collects short-range information in front of the vehicle.
8. Vehicle-mounted augmented reality navigation device, characterized in that it comprises:
the data stream acquisition module is used for acquiring data streams acquired by the vehicle-mounted sensors, wherein the data streams comprise first data streams acquired by the front-view cameras and second data streams acquired by other vehicle-mounted sensors;
the driving assistance information module is used for obtaining driving assistance information according to the first data stream and the second data stream;
the real-scene information module is used for obtaining real-scene information according to the first data stream;
the augmented reality navigation information module is used for fusing the driving assistance information, the real scene information and the map information of the advanced driving assistance system to obtain augmented reality navigation information;
and the display module is used for displaying the augmented reality navigation information.
9. The vehicle-mounted augmented reality navigation device of claim 8, wherein the driving assistance information module and the real-scene information module are operated in a parallel computing mode.
10. The vehicle-mounted augmented reality navigation device of claim 8, wherein the second data stream comprises point cloud data acquired by millimeter wave radar.
11. A chip, wherein a vehicle-mounted augmented reality navigation program is stored on the chip, and when executed by a processor, the vehicle-mounted augmented reality navigation program implements the steps of the vehicle-mounted augmented reality navigation method according to any one of claims 1 to 7.
12. An intelligent automobile comprising a memory, a processor, and a vehicle-mounted augmented reality navigation program stored on the memory and executable on the processor, the vehicle-mounted augmented reality navigation program, when executed by the processor, implementing the steps of the vehicle-mounted augmented reality navigation method according to any one of claims 1 to 7.
Application CN202410136696.4A, filed 2024-01-31 (priority date 2024-01-31), status Pending, published as CN117848377A: Vehicle-mounted augmented reality navigation method, device, chip and intelligent automobile

Priority Applications (1)

Application Number: CN202410136696.4A; Priority Date: 2024-01-31; Filing Date: 2024-01-31; Title: Vehicle-mounted augmented reality navigation method, device, chip and intelligent automobile


Publications (1)

Publication Number: CN117848377A; Publication Date: 2024-04-09

Family

ID=90532568

Family Applications (1)

Application Number: CN202410136696.4A (Pending); Priority Date: 2024-01-31; Filing Date: 2024-01-31; Title: Vehicle-mounted augmented reality navigation method, device, chip and intelligent automobile

Country Status (1)

Country: CN; Publication: CN117848377A (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination