CN109895697B - Driving assistance prompting system and method


Info

Publication number: CN109895697B
Application number: CN201910207174.8A
Authority: CN (China)
Prior art keywords: module, information, acquisition module, projection module, driver
Prior art date: 2019-03-19
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN109895697A
Inventor: 郑艺
Current assignee: Wuhan University (WHU)
Original assignee: Wuhan University (WHU)
Filing date: 2019-03-19
Application filed by Wuhan University (WHU); priority to CN201910207174.8A
Publication of CN109895697A: 2019-06-18
Application granted; publication of CN109895697B: 2020-06-09

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The invention provides a driving assistance prompting system and method. The system comprises a plurality of acquisition modules arranged at the front end of a vehicle to acquire obstacle information; the acquisition modules are electrically connected with a main control module, which in turn is electrically connected with a projection module. In use, the method adopts the following steps: establishing a virtual model; setting the spatial position of the driver; acquiring information through the acquisition modules; screening the information; mapping the screened information onto the virtual model; and projecting the screened information onto the windshield as contour images via the projection module. The invention acquires information that greatly threatens driving safety and projects a highlighted graphic of it onto the front windshield, so that important information receives attention without interfering with the driver, and traffic accidents are avoided.

Description

Driving assistance prompting system and method
Technical Field
The invention relates to the field of automotive driver assistance, and in particular to a driving assistance prompting system and method.
Background
During driving, a driver's limited vision, blind spots, lapses in attention, and weather conditions can easily cause important driving information to be missed, such as pedestrians or animals crossing at night or in fog, or obstacles such as fallen rocks on the road or holes left by damaged manhole covers. Some prior-art driving assistance systems help drivers perceive such obstacle and pedestrian information.
Chinese patent document CN105825711A describes a vehicle obstacle early-warning method and module that detects obstacles and warns the driver. In that patent, the warning information is sent to the central control display screen, but the driver must look down to observe it, which easily distracts attention and poses a serious safety hazard.
Chinese patent document CN103123687A describes a fast obstacle detection method that detects obstacles by operating on a video stream. The method involves a large number of image pixel operations and demands substantial hardware resources, resulting in insufficient real-time performance.
Disclosure of Invention
The technical problem addressed by the invention is to provide a driving assistance prompting system and method that convey information about pedestrians, animals, and obstacles threatening driving safety to the driver without dispersing the driver's attention, thereby improving driving safety. Preferably, the volume of real-time computation is greatly reduced and the timeliness of the warning information improved. The warning information can be superposed on the pedestrians, animals, and obstacles actually observed from the driver's position, helping the driver judge and react efficiently.
To solve these technical problems, the invention adopts the following technical scheme: a driving assistance prompting system comprising a plurality of acquisition modules arranged at the front end of a vehicle for acquiring obstacle information, wherein the acquisition modules are electrically connected with a main control module, and the main control module is electrically connected with a projection module.
In a preferred scheme, the acquisition modules comprise a first acquisition module and a second acquisition module arranged on the left and right sides;
the first acquisition module and the second acquisition module each comprise an ultrasonic acquisition module, an infrared acquisition module, and a video acquisition module;
the infrared images are used to identify moving pedestrians and animals, the video images to identify obstacles, and the ultrasonic images to determine the spatial positions of the pedestrians, animals, and obstacles;
the projection modules comprise a first projection module and a second projection module arranged on the left and right sides below the front windshield;
the projection modules are used to project information onto the front windshield;
a steering wheel angle sensor is also provided and electrically connected with the main control module.
A method adopting the driving assistance prompting system comprises the following steps:
S1, establishing a virtual model;
S2, setting the spatial position of the driver;
S3, acquiring information through the acquisition modules;
S4, screening the information;
S5, mapping the screened information onto the virtual model;
and S6, projecting the screened information onto the windshield as contour images via the projection modules.
In a preferred embodiment, in step S1, the virtual model created includes the spatial position of the eyes of the driver, the spatial position of the windshield, the spatial position of the acquisition module, and the spatial position of the projection module.
Preferably, in step S2, the spatial position of the driver is set by setting the relative spatial position between the eyes of the driver and the windshield.
In a preferred embodiment, in step S3, the information collected by the acquisition modules includes infrared images, video images, and ultrasonic images from the left and right positions, wherein the infrared images identify moving pedestrians and animals, the video images identify obstacles, and the ultrasonic images determine the spatial positions of the pedestrians, animals, and obstacles;
the collected information also includes steering wheel angle information acquired by the steering wheel angle sensor, which is used to generate the vehicle trajectory area.
In a preferred scheme, in step S4, the infrared image is overlapped with the video image, and a first contour graphic is generated, according to an inter-pixel color difference threshold, from the video image covered by the highlight region of the infrared image;
the trajectory area is overlapped with the video image, the video image outside the trajectory area is deleted, regions of abrupt pixel color and brightness change are searched in the remaining video image, and a second contour graphic is generated according to an inter-pixel color difference threshold;
the spatial positions of the first and second contour graphics are located from the ultrasonic images.
In a preferred scheme, during positioning, the midpoint between the acquisition modules is taken as the origin of coordinates;
three-dimensional spatial coordinates are determined for the spatial position of the driver's eyes, the front windshield, the acquisition modules, and the projection modules according to their positions relative to the coordinate origin.
In a preferred scheme, the spatial positions of the first and second contour graphics are introduced into the virtual model to obtain their positions on the front windshield: line segments are led from the virtual spatial position of the driver's eyes to the first and second contour graphics, and the points where these segments intersect the windshield in the virtual model are the mapping of the first and second contour graphics onto the virtual model;
the first projection module and the second projection module project the mapped first and second contour graphics onto the windshield.
In an optional scheme, the first projection module and the second projection module emit polarized light waves with mutually perpendicular polarization angles, the polarized light waves of the two modules correspond to the driver's two eyes respectively, and the driver obtains a stereoscopic image through polarized glasses.
The driving assistance prompting system and method provided by the invention acquire information that greatly threatens driving safety and project a highlighted graphic of it onto the front windshield, so that important information receives attention without interfering with the driver, and traffic accidents are avoided. By combining an ultrasonic acquisition module, an infrared acquisition module, and a video acquisition module, each module collects the information it is specialized for, and the information is combined in the computation. This reduces the recognition operations on the video image, greatly increases the speed of the assistance prompt, and reduces dependence on the image recognition algorithm. Further preferably, by selecting only the image regions that may threaten driving safety, the image computation is further reduced and prompting efficiency improved. Through the established virtual model, the information is mapped onto the model so that the highlighted contour graphic coincides with the obstacle, pedestrian, or animal as observed from the driver's position, greatly improving the effect of the assistance prompt. The system and method obtain effective safe-driving assistance prompts with low hardware requirements and little computation, and can greatly improve driving safety. The prompt is especially effective for obstacles ahead in the driving trajectory that pose a great safety threat, and for pedestrians or animals on both sides of the street who may enter the vehicle's path. The high hit rate of the assistance prompts increases the driver's confidence in them, ensures the prompts receive full attention, and also warns the driver of the vehicle ahead to avoid rear-end collisions.
Drawings
The invention is further illustrated by the following examples in conjunction with the accompanying drawings:
FIG. 1: schematic diagram of the assistance prompts obtained in the driver's field of view;
FIG. 2: schematic front view of the invention;
FIG. 3: schematic top view of the invention;
FIG. 4: schematic flow chart of the method of the invention;
FIG. 5: connection block diagram of the system of the invention;
FIG. 6: flow chart of image identification, screening, and projection;
FIG. 7: top-view X-Y coordinate space diagram during pedestrian distance identification;
FIG. 8: front-view Y-Z coordinate space diagram during pedestrian recognition.
In the figures: front windshield 1, obstacle 2, first projection module 3, driving trajectory 4, pedestrian 5, ultrasonic acquisition module 6, infrared acquisition module 7, video acquisition module 8, second projection module 9, animal 10, first acquisition module 100, second acquisition module 101, main control module 102, steering wheel angle sensor 103.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in FIGS. 1 to 5, a driving assistance prompting system includes a plurality of acquisition modules disposed at the front end of a vehicle for acquiring obstacle information, wherein the acquisition modules are electrically connected with a main control module, and the main control module is electrically connected with a projection module;
the main control module preferably adopts the combination of an Integrator GPU chip, an Intel CPU main chip and a high-speed solid state disk memory;
preferably, the acquisition modules comprise a first acquisition module 100 and a second acquisition module 101 which are arranged on the left side and the right side;
the first acquisition module 100 and the second acquisition module 101 respectively comprise an ultrasonic acquisition module 6, an infrared acquisition module 7 and a video acquisition module 8; according to the unmanned driving experiment of Google, only video images are adopted for identification to assist automatic driving, so that abundant computing resources are needed, 6 GTX 1080-level video cards are adopted in one experiment, and the generated data volume is very large. According to the invention, the special acquisition module is adopted to respectively acquire the infrared image, the ultrasonic image and the video image, and the video image is identified and selected by using the assistance of the infrared image and the ultrasonic image, so that the calculation amount of graphic operation is greatly reduced, and the efficiency and the real-time performance of auxiliary prompt are improved. With the further optimization of the algorithm, the requirement on hardware can be further reduced. In the embodiment, the ultrasonic acquisition module preferably adopts an HG-C40U ultrasonic sensor of Hagisonic company, and can detect the obstacles with the height of more than 50M on the premise of improving the voltage and reducing the precision requirement. The infrared acquisition module 7 is preferably a thermal imager of the RNO company. The video acquisition module 8 adopts a commercially available camera for vehicles with the camera size of above 1080P.
The projection modules comprise a first projection module 3 and a second projection module 9 arranged on the left and right sides below the front windshield 1. The projection module in this example preferably adopts a DMD-chip projection device, which projects an image onto the windshield 1 using the micromirrors on the DMD chip. Since the driving assistance prompt does not demand high image quality, a lower-resolution product, for example 800 × 600, may be adopted to reduce cost; as chip costs fall further, a higher-specification chip can be adopted. Further preferably, the light source of the projection module is a laser or LED light source.
A steering wheel angle sensor 103 is also provided and electrically connected with the main control module. The trajectory of the vehicle is obtained through the steering wheel angle sensor 103, in a manner similar to that used to generate reversing trajectory guide lines.
As shown in FIGS. 4 to 6, a method using the driving assistance prompting system includes the following steps:
s1, establishing a virtual model;
preferably, in step S1, the virtual model comprises the spatial position of the driver's eyes, the spatial position of the windshield 1, the spatial position of the acquisition modules, and the spatial position of the projection modules;
in the modeling process, the midpoint between the acquisition modules, i.e., between the first acquisition module 100 and the second acquisition module 101, is preferably used as the coordinate origin for spatial positioning;
three-dimensional spatial coordinates are determined for the spatial position of the driver's eyes, the front windshield 1, the acquisition modules, and the projection modules according to their positions relative to the coordinate origin. Preferably, the midpoint between the driver's eyes is used as a mass point, the windshield 1 is a fitted curved surface of the inner glass layer, the acquisition module uses the CCD position of the video acquisition module 8 as its mass point, and the projection module uses the position of its DMD chip as its mass point, greatly reducing the amount of calculation.
The virtual model in this example is an X, Y, Z three-axis coordinate system with the midpoint between the first acquisition module 100 and the second acquisition module 101 as the origin of coordinates. The spatial positions of the front windshield 1, the acquisition modules, and the projection modules differ for each vehicle model and are calibrated by the manufacturer before leaving the factory.
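For illustration only, the following minimal Python sketch (not from the patent) shows one way the virtual model described above could be represented; all names and types are assumptions of this sketch, and the real values would come from factory calibration and from steps S1 and S2.

from dataclasses import dataclass

Point3D = tuple[float, float, float]  # (x, y, z) in metres; origin between the modules

@dataclass
class VirtualModel:
    eye_position: Point3D              # midpoint between the driver's eyes (set in S2)
    windshield_surface: list[Point3D]  # fitted inner surface of the front windshield 1
    left_module: Point3D               # CCD of the video acquisition module 8, left side
    right_module: Point3D              # CCD of the video acquisition module 8, right side
    projector_left: Point3D            # DMD chip of the first projection module 3
    projector_right: Point3D           # DMD chip of the second projection module 9

def origin_from_modules(left: Point3D, right: Point3D) -> Point3D:
    # The coordinate origin is the midpoint between the two acquisition modules.
    x, y, z = ((a + b) / 2.0 for a, b in zip(left, right))
    return (x, y, z)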
S2, setting the spatial position of the driver;
preferably, setting the spatial position of the driver means setting the relative spatial position between the driver's eyes and the windshield 1. The spatial position of the driver's eyes is set when the driver uses the system for the first time. The setting is made manually and can be adjusted for the driver's height and sitting posture; since every individual is different, a secondary adjustment is required. More specifically, the verification step is that the projection module projects the video image collected in real time onto the windshield 1, and the driver judges from the degree of image overlap whether the spatial position of the eyes has been adjusted into place.
Although this setting carries an error, the error can be ignored once projected onto the windshield 1; even a slight error does not affect the driver's experience. The data from steps S1 and S2 are stored in the memory of the main control device, such as the high-speed solid-state disk memory. Data used at high frequency are stored in DRAM.
S3, information acquisition is carried out through an acquisition module;
preferably, in step S3, the information acquired by the acquisition modules includes infrared images, video images, and ultrasonic images from the left and right positions; the collected information is stored in the memory of the main control device.
The infrared images from the left and right positions identify the moving pedestrian 5 and animal 10, the video images identify the obstacle 2, and the ultrasonic images determine the spatial positions of the pedestrian 5, the animal 10, and the obstacle 2;
the spatial position coordinates are obtained from the left and right ultrasonic images through triangulation, with the floating-point precision chosen according to the computing capacity of the hardware. Note that the operating condition of the present application is high-speed motion, for example 60 to 100 km/h; in this state, identifying all data in real time would generate a large amount of useless data. A smaller floating-point precision is therefore selected, for example only 4 to 6 digits after the decimal point, to improve computation speed.
The acquired information further comprises steering wheel angle information from the steering wheel angle sensor 103, which is used to generate the vehicle trajectory area, stored in the memory of the main control device. Specifically, the wheel deflection angle is calculated from the steering wheel angle, and the path the wheels roll along at that deflection gives the vehicle's driving trajectory. The trajectory is expanded by the vehicle width to obtain the vehicle trajectory area, whose coordinates are converted into the preset coordinate system; the video image is mapped onto the virtual model, e.g., the Y-Z plane shown in FIG. 8. The area of the video image covered by the trajectory area from the driver's viewing angle is the overlap region of the trajectory area and the video image. The driver's viewing angle refers to the image acquired by a virtual camera placed at the position of the driver's eyes in the virtual model; in this example it is a contour image of the vehicle trajectory area. With a single acquisition module, only one set of overlap regions is generated; with dual acquisition modules, two sets are generated and processed separately, the subsequent image processing steps being the same. Further preferably, during high-speed motion with dual acquisition modules, only one set of overlap regions may be retained to increase computation speed.
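As a hedged illustration of this step, the sketch below generates a trajectory area from the steering wheel angle with a simple bicycle model; the steering ratio, wheelbase, vehicle width, and the kinematics themselves are assumptions of this sketch, since the patent does not specify them.

import math

def trajectory_area(steer_deg: float, wheelbase: float = 2.7, width: float = 1.8,
                    steering_ratio: float = 16.0, steps: int = 50, length: float = 50.0):
    """Return the left/right boundary polylines (x, y) of the predicted trajectory area."""
    wheel_angle = math.radians(steer_deg / steering_ratio)  # wheel deflection angle
    left, right = [], []
    if abs(wheel_angle) < 1e-6:  # driving straight ahead
        for i in range(steps + 1):
            y = i * length / steps
            left.append((-width / 2, y))
            right.append((width / 2, y))
        return left, right
    radius = wheelbase / math.tan(wheel_angle)  # signed turning radius
    for i in range(steps + 1):
        theta = (i * length / steps) / radius   # arc angle travelled so far
        x = radius * (1 - math.cos(theta))      # centre line of the trajectory
        y = radius * math.sin(theta)
        nx, ny = math.cos(theta), math.sin(theta)
        left.append((x - nx * width / 2, y + ny * width / 2))   # expand by half width
        right.append((x + nx * width / 2, y - ny * width / 2))
    return left, right

The resulting boundary polylines would then be converted into the preset coordinate system and overlapped with the video image as described above.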
S4, information screening is carried out;
preferably, the infrared image and the video image are overlapped, and a first contour graphic is generated, according to an inter-pixel color difference threshold, from the video pixels covered by the highlight region of the infrared image. This contour graphic alerts the driver to a pedestrian 5 or animal 10 in the field of view, and is particularly valuable for alerting the driver to a small child in a blind area of the driver's view. As shown in FIG. 1, children's lack of safety awareness leads to many traffic accidents caused by children playing in the driver's blind areas; the scheme of the invention can largely prevent such tragedies. With a single acquisition module, only one set of overlapped graphics is generated; with dual acquisition modules, two sets are generated and processed separately, the subsequent image processing steps being the same.
The highlight region is the region where heat actively radiated by an object is collected by the infrared acquisition module 7; its outline is obtained through a set threshold, chosen according to the specific conditions. Taking the RGB color gamut as an example, the single-color range produced by the infrared acquisition module 7 is 0 to 255; an intermediate value, for example 180, is taken as the threshold, and color values exceeding 180 are judged to be the highlight region.
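A hedged OpenCV/NumPy sketch of this first-contour step follows; the function name is illustrative, the 180 threshold is the example value given above, and a morphological gradient is used here as one plausible stand-in for the inter-pixel color difference test, which the patent does not spell out.

import cv2
import numpy as np

def first_contours(ir_gray: np.ndarray, video_bgr: np.ndarray,
                   hot_threshold: int = 180, diff_threshold: int = 30):
    """ir_gray: 8-bit infrared image; video_bgr: aligned video frame."""
    # 1. Highlight region: infrared pixels whose value exceeds the threshold.
    _, hot_mask = cv2.threshold(ir_gray, hot_threshold, 255, cv2.THRESH_BINARY)
    # 2. Keep only the video pixels covered by the highlight region.
    masked = cv2.bitwise_and(video_bgr, video_bgr, mask=hot_mask)
    # 3. Mark pixels whose local color difference exceeds the threshold.
    gray = cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY)
    grad = cv2.morphologyEx(gray, cv2.MORPH_GRADIENT, np.ones((3, 3), np.uint8))
    _, edges = cv2.threshold(grad, diff_threshold, 255, cv2.THRESH_BINARY)
    # 4. Extract the outline polygons, i.e. the "first contour graphics".
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours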
The trajectory area is overlapped with the video image, and the video image outside the trajectory area is deleted by inverse selection of the pixels outside the vehicle trajectory area. Regions of abrupt pixel change are then searched in the remaining video image, and a second contour graphic is generated according to an inter-pixel color difference threshold. The second contour graphic helps to identify the tail of a vehicle ahead, a road curb, or a collapsed manhole, since a pixel discontinuity usually arises between the edges of these objects and the road.
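A companion sketch for the second contour, equally illustrative rather than the patent's implementation: the inverse selection is realized here by masking with the trajectory area rasterized as a polygon in image coordinates, and all names and thresholds are assumptions.

import cv2
import numpy as np

def second_contours(video_bgr: np.ndarray, trajectory_polygon: np.ndarray,
                    diff_threshold: int = 30):
    """trajectory_polygon: N x 2 array of the trajectory area in image coordinates."""
    # Rasterise the trajectory area, then delete (zero) everything outside it.
    mask = np.zeros(video_bgr.shape[:2], np.uint8)
    cv2.fillPoly(mask, [trajectory_polygon.astype(np.int32)], 255)
    inside = cv2.bitwise_and(video_bgr, video_bgr, mask=mask)
    # Abrupt changes in color and brightness mark the edges of obstacles.
    gray = cv2.cvtColor(inside, cv2.COLOR_BGR2GRAY)
    grad = cv2.morphologyEx(gray, cv2.MORPH_GRADIENT, np.ones((3, 3), np.uint8))
    _, edges = cv2.threshold(grad, diff_threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours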
This configuration greatly reduces the amount of computation for image recognition, since recognition is unnecessary outside the trajectory 4. The marked second contour graphics are all obstacles 2 that greatly threaten safe driving, so the driving assistance prompt is valued by the driver and is not ignored amid excessive invalid information. Obstacles 2 that greatly threaten safe driving include falling rocks on the road surface, a manhole with a damaged cover, or the tail of a vehicle ahead.
The spatial positions of the first and second contour graphics are located from the ultrasonic images. The preferred ultrasonic image here is the distance-matrix parameter map of the obstacle, pedestrian 5, and animal 10 returned by the Hagisonic HG-C40U ultrasonic sensor; generally, the higher the precision, the larger the matrix parameter data, with an exponential relationship. The distance-matrix parameter map is overlapped with the first and second contour graphics. Because the ultrasonic sensor and the video sensor are mounted very close together, their parallax can be ignored during overlapping; the depth-of-field error between them must be calibrated at the factory, with a different scaling coefficient set for each sensor, to ensure accurate overlap of the graphics. The spatial positions of the first and second contour graphics are then obtained by calculation: the distance-matrix parameters are assigned to the first and second contour graphics and converted into spatial coordinates about the unified coordinate origin. Referring to FIGS. 7 and 8: in a pure video-recognition scheme, accurate spatial coordinates generally require parallax calculation combined with three-dimensional coordinate formulas, at great computational cost; in the present application, the combination of the ultrasonic acquisition module 6, the infrared acquisition module 7, and the video acquisition module 8 greatly reduces this cost. For example, in FIG. 7, the coordinates of point E on the pedestrian 5 are needed. The coordinates of the first acquisition module 100 and the second acquisition module 101 are already set, and the coordinate of point E on the Y axis is obtained from the projection of the first contour graphic onto the Y axis. Since the video image was mapped onto the virtual model as the Y-Z plane shown in FIG. 8 in the previous step, the coordinates of point E on the Y and Z axes are both known, and only the coordinate on the X axis remains to be found. As shown in FIG. 7, once EY is known, the distances O2EY and O1EY are known; the distances O2E and O1E are assigned from the ultrasonic acquisition module 6, and EX is obtained from the right-triangle relation.
Namely, EX² = O2E² - O2EY²; and
EX² = O1E² - O1EY².
Due to the different positions of the left and right acquisition devices, the y-axis coordinate of point E carries an error, which can be reduced by an averaging formula while keeping the calculation amount small: OEY = (OEY1 + OEY2) / 2;
where OEY is the averaged distance from the origin O to EY, OEY1 is the distance from the origin O to the y-axis coordinate of point E of the first contour graphic from the first acquisition module 100, and OEY2 is the corresponding distance from the second acquisition module 101.
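Numerically, the relations above reduce to the sketch below, under the assumption (consistent with FIG. 7) that the modules O1 and O2 sit symmetrically on the Y axis about the origin O; the baseline parameter and function name are illustrative.

import math

def locate_point_e(o1e: float, o2e: float, oey1: float, oey2: float,
                   baseline: float) -> tuple[float, float]:
    """o1e, o2e: ultrasonic distances from modules O1/O2 to point E;
    oey1, oey2: y-coordinate of E as projected from each module's first contour;
    baseline: distance between O1 and O2 (coordinate origin at their midpoint)."""
    oey = (oey1 + oey2) / 2.0            # averaged y-coordinate: OEY = (OEY1 + OEY2) / 2
    o1ey = abs(oey + baseline / 2.0)     # leg of the right triangle at module O1
    o2ey = abs(oey - baseline / 2.0)     # leg of the right triangle at module O2
    ex1 = math.sqrt(max(o1e ** 2 - o1ey ** 2, 0.0))  # EX from EX² = O1E² - O1EY²
    ex2 = math.sqrt(max(o2e ** 2 - o2ey ** 2, 0.0))  # EX from EX² = O2E² - O2EY²
    return (ex1 + ex2) / 2.0, oey        # averaged (EX, EY)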
Preferably, during positioning, the midpoint between the acquisition modules is taken as the coordinate origin for spatial positioning, which reduces the amount of computation.
S5, mapping the screened information to a virtual model;
preferably, the spatial positions of the first and second contour graphics are introduced into the virtual model to obtain their positions on the front windshield 1;
the specific mapping process is to lead several line segments from the virtual spatial position of the driver's eyes to the first and second contour graphics; the point where each segment intersects the windshield 1 in the virtual model is the mapping of the first and second contour graphics onto the virtual model. Preferably, a line segment is led from each sharp vertex of the first and second contour graphics, and at least three segments from each smooth curve, at its two end points and midpoint. The discrete intersection points are connected by straight lines or curves and fitted into the first and second contour graphics, giving their projection coordinates on the front windshield 1. The fitting accuracy is chosen according to hardware resources. The position and specific shape of the first and second contour graphics on the windshield 1 are thus obtained by mapping in the virtual model.
S6, the projection modules project the screened information onto the windshield 1 as contour images.
The first projection module 3 and the second projection module 9 project the first and second contour graphics onto the windshield 1, each being responsible for half of the windshield 1. In this step, a driving assistance prompt image is obtained that, seen from the driver, overlaps the real object: from the driver's viewing angle, the obstacles 2, pedestrians 5, and animals 10 most threatening to driving safety are outlined and highlighted by the first and second contour graphics, reminding the driver to take evasive action and greatly improving driving safety. The screened driving assistance prompts earn the driver's full trust, and the combination of acquisition modules greatly reduces the computation amount and improves processing efficiency.
In another optional scheme, the first projection module 3 and the second projection module 9 emit polarized light waves with mutually perpendicular polarization angles; the polarized light waves of the two modules correspond to the driver's two eyes respectively, and the driver obtains a stereoscopic image through polarized glasses. In this case each of the first projection module 3 and the second projection module 9 covers the whole windshield 1, and both adopt laser projection modules. The polarized light waves are modulated as follows: a polarizing lens is arranged in the projection light path of each of the first projection module 3 and the second projection module 9, and the polarization angles of the light passing through the two lenses are perpendicular to each other.
The above-described embodiments are merely preferred embodiments of the present invention and should not be construed as limiting it; the protection scope of the present invention is defined by the claims, including equivalents of the technical features described therein, i.e., equivalent alterations and modifications within this scope also fall within the protection scope of the invention. Owing to limited space, it is difficult to describe every combination of the technical features in these examples; where no conflict arises, the technical features may be combined with one another to obtain further combinations.

Claims (3)

1. A method adopting a driving assistance prompting system, the system comprising a plurality of acquisition modules arranged at the front end of a vehicle for acquiring obstacle information, wherein the acquisition modules are electrically connected with a main control module, and the main control module is electrically connected with a projection module;
the acquisition modules comprise a first acquisition module (100) and a second acquisition module (101) arranged on the left and right sides;
the first acquisition module (100) and the second acquisition module (101) each comprise an ultrasonic acquisition module (6), an infrared acquisition module (7), and a video acquisition module (8);
the projection module comprises a first projection module (3) and a second projection module (9) arranged on the left and right sides below the front windshield (1);
the projection module is used to project information onto the windshield (1);
a steering wheel angle sensor (103) is also provided, the steering wheel angle sensor (103) being electrically connected with the main control module;
the method is characterized by comprising the following steps:
s1, establishing a virtual model;
the established virtual model comprises the spatial position of the driver's eyes, the spatial position of the front windshield (1), the spatial position of the acquisition modules, and the spatial position of the projection modules;
S2, setting the spatial position of the driver, namely setting the relative spatial position between the driver's eyes and the front windshield (1);
S3, acquiring information through the acquisition modules;
the information acquired by the acquisition modules comprises infrared images, video images, and ultrasonic images from the left and right positions, wherein the infrared images are used to identify moving pedestrians (5) and animals (10), the video images to identify obstacles (2), and the ultrasonic images to determine the spatial positions of the pedestrians (5), animals (10), and obstacles (2);
the acquired information further comprises steering wheel angle information acquired by the steering wheel angle sensor (103), which is used to generate a vehicle trajectory area;
s4, information screening is carried out;
overlapping the infrared image and the video image, and generating a first contour graph according to a color difference threshold value between pixels of the video image covered by the highlight area of the infrared image;
overlapping the running track area with the video image, deleting the video image outside the running track area, searching pixel colors and brightness mutation areas in the residual video image, and generating a second contour graph according to a pixel color difference threshold value;
positioning the spatial positions of the first outline graph and the second outline graph according to the ultrasonic image;
the ultrasonic acquisition module (6) assigns the distance matrix parameters to the first contour graph and the second contour graph, and then converts the distance matrix parameters into space coordinates of a uniform coordinate origin;
s5, mapping the screened information to a virtual model;
introducing the spatial positions of the first outline graph and the second outline graph into the virtual model to obtain the positions of the first outline graph and the second outline graph on the front windshield (1); leading out line segments from the virtual space positions of the eyes of the driver to the first contour graph and the second contour graph, wherein the point of intersection of the line segments and a windshield (1) in the virtual model is the mapping of the first contour graph and the second contour graph on the virtual model;
the first projection module (3) and the second projection module (9) project the mapped first contour graph and second contour graph to the windshield (1);
and S6, the projection module projects the screened information to the windshield (1) in a contour image.
2. The method for using the driving assistance prompting system according to claim 1, wherein: in the positioning process, the midpoint between the acquisition modules is taken as the origin of coordinates;
three-dimensional spatial coordinates are determined for the spatial position of the driver's eyes, the spatial position of the front windshield (1), the spatial position of the acquisition modules, and the spatial position of the projection modules according to their positions relative to the origin of coordinates.
3. The method for using the driving assistance prompting system according to claim 1, wherein: the first projection module (3) and the second projection module (9) emit polarized light waves with mutually perpendicular polarization angles, the polarized light waves of the first projection module (3) and the second projection module (9) correspond to the driver's two eyes respectively, and the driver obtains a stereoscopic image through polarized glasses.
CN201910207174.8A (priority date 2019-03-19; filing date 2019-03-19): Driving assistance prompting system and method, Active, granted as CN109895697B

Priority Applications (1)

CN201910207174.8A (priority date 2019-03-19; filing date 2019-03-19): Driving assistance prompting system and method

Applications Claiming Priority (1)

CN201910207174.8A (priority date 2019-03-19; filing date 2019-03-19): Driving assistance prompting system and method

Publications (2)

CN109895697A, published 2019-06-18
CN109895697B, published 2020-06-09

Family

ID: 66953603

Family Applications (1)

CN201910207174.8A (priority date 2019-03-19; filing date 2019-03-19): Driving assistance prompting system and method, Active

Country Status (1)

CN: CN109895697B

Families Citing this family (3)

* Cited by examiner, † Cited by third party

CN112987002B* (priority 2021-02-22, published 2024-04-05, 广州大学): Obstacle risk identification method, system and device
CN112966668A (priority 2021-04-06, published 2021-06-15, 中交三公局第一工程有限公司): Intelligent fire-fighting early warning system
CN115273552B* (priority 2022-09-19, published 2022-12-20, 南通立信自动化有限公司): HMI control system of automobile instrument

Family Cites Families (7)

* Cited by examiner, † Cited by third party

JP4475308B2* (priority 2007-09-18, published 2010-06-09, 株式会社デンソー): Display device
CN202716870U* (priority 2012-07-23, published 2013-02-06, 北京智华驭新汽车电子技术开发有限公司): Automobile forward track auxiliary device
JP6361492B2* (priority 2014-12-19, published 2018-07-25, アイシン・エィ・ダブリュ株式会社): Virtual image display device
CN105774679B* (priority 2014-12-25, published 2019-01-29, 比亚迪股份有限公司): Automobile, vehicle-mounted head-up display system, and projected image height adjusting method
CN105835777A* (priority 2016-03-30, published 2016-08-10, 乐视控股(北京)有限公司): Display method and device and vehicle
CN206584040U* (priority 2017-02-21, published 2017-10-24, 孙聪): Vehicle obstacle-avoidance system and vehicle
CN207059958U* (priority 2017-07-28, published 2018-03-02, 合肥芯福传感器技术有限公司): AR optical projection systems for vehicle safe driving

Also Published As

CN109895697A, published 2019-06-18


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant