CN113837027A - Driving assistance sensing method, device, equipment and storage medium - Google Patents
- Publication number
- CN113837027A (application number CN202111034889.1A)
- Authority
- CN
- China
- Prior art keywords
- eyeball
- driver
- determining
- vehicle glass
- preset
- Prior art date
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Traffic Control Systems (AREA)
Abstract
The invention belongs to the technical field of automobiles and discloses a driving assistance sensing method, device, equipment and storage medium. The method comprises the following steps: acquiring eyeball data collected by an eyeball camera; determining corresponding sight-line feature information based on a preset driver eyeball model according to the eyeball data; acquiring distance information between the driver and the vehicle glass; determining the position of the driver's sight line on the vehicle glass according to the sight-line feature information and the distance information; determining the target area corresponding to the position information; and determining the auxiliary-camera perception area corresponding to the target area according to a preset area mapping relation, so as to realize synchronous human-machine perception. In this way, the perception area of the auxiliary camera is determined from the sight-line feature information of the driver's eyes, the auxiliary camera is fused with the driver's eyes, and the problems in human-machine co-driving that human and machine are mutually independent, do not interact perceptually, and have low compatibility are solved.
Description
Technical Field
The invention relates to the technical field of automobiles, in particular to a driving assistance sensing method, a driving assistance sensing device, driving assistance equipment and a storage medium.
Background
At present, human and machine operate independently of each other in human-machine co-driving: the human does not participate while the vehicle drives unmanned and is prompted to take over only when the machine cannot drive autonomously, or the human is mainly responsible for driving and the machine merely assists. In particular, in vehicle camera perception scenarios, the machine and the human do not interact, and compatibility is low.
Most current production vehicle models have extensive on-board camera coverage — driving-assistance ADAS cameras, panoramic surround-view cameras, driver fatigue detection cameras, and so on — and all of these functions rely on camera perception, with no redundant sensor as a safety backup. When a camera is obscured by foreign matter, dirt, rainwater or the like, it cannot perceive information effectively, the function fails or exits, and camera fault tolerance is low. The Chinese patent application "Night target detection and tracking method based on millimeter wave radar and vision fusion" (publication number CN111967498A) uses raw camera images to obtain richer dark-region information, applies a deep-learning image-brightening algorithm to restore dark-region detail, enhances the night vision capability of the unmanned vehicle, and keeps the perception system working normally when one sensor fails. That approach has the following drawbacks: fault tolerance is improved by stacking sensors, but different sensors serve different functions and are complementary rather than interchangeable; when one sensor fails, another cannot fully replace it, and guarding against failure by fitting two identical sensors to one vehicle doubles the cost.
The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.
Disclosure of Invention
The invention mainly aims to provide a driving assistance sensing method, device, equipment and storage medium, so as to solve the technical problems in existing human-machine co-driving that human and machine are mutually independent, do not interact perceptually, and have low compatibility.
In order to achieve the above object, the present invention provides a driving assistance perception method, including the steps of:
acquiring eyeball data collected by an eyeball camera;
determining corresponding sight line characteristic information based on a preset driver eyeball model according to the eyeball data;
acquiring distance information between a driver and vehicle glass;
determining the position information of the sight of the driver on the vehicle glass according to the sight characteristic information and the distance information;
determining a target area corresponding to the position information;
and determining an auxiliary camera perception area corresponding to the target area according to a preset area mapping relation so as to realize man-machine synchronous perception.
Optionally, after determining the auxiliary camera sensing region corresponding to the target region according to the preset region mapping relationship, the method further includes:
and when detecting that the number of times that the driver looks at the target region in a preset time period is greater than a preset threshold value, simplifying the image data acquired by the auxiliary camera according to the perception region of the auxiliary camera.
Optionally, after determining the auxiliary camera sensing region corresponding to the target region according to the preset region mapping relationship, the method further includes:
when it is detected that the current image data acquired by the auxiliary camera is shielded, determining a shielded area according to the current image data;
determining a current area of the vehicle glass corresponding to the shielded area according to the preset area mapping relation;
and prompting the driver to pay attention to the current area of the vehicle glass.
Optionally, after the current area of the vehicle glass corresponding to the blocked area is determined according to the preset area mapping relationship, the method further includes:
judging whether the current area of the vehicle glass is consistent with the target area;
and when the current area of the vehicle glass is inconsistent with the target area, prompting a driver to pay attention to the current area of the vehicle glass.
Optionally, the determining, according to the eyeball data, corresponding sight line characteristic information based on a preset driver eyeball model includes:
determining transverse position information and longitudinal position information of the pupil relative to the center of the eyeball based on a preset eyeball model of the driver according to the relative position information of the pupil and the inner canthus;
the determining the position information of the sight line of the driver on the vehicle glass according to the sight line characteristic information and the distance information comprises the following steps:
determining the transverse coordinate of the sight of the driver on the vehicle glass according to the transverse position information, the preset eyeball radius and the distance information;
and determining the longitudinal coordinate of the sight of the driver on the vehicle glass according to the longitudinal position information, the preset eyeball radius and the distance information.
Optionally, before determining the position information of the driver's sight line on the vehicle glass according to the sight line feature information and the distance information, the method further comprises:
determining a vehicle glass reference center corresponding to the eyeball center according to the installation position of the eyeball camera and the distance information;
establishing a two-dimensional coordinate system by taking the vehicle glass reference center as a coordinate center;
the determining the position information of the sight line of the driver on the vehicle glass according to the sight line characteristic information and the distance information comprises the following steps:
determining the transverse coordinate of the sight of the driver on the vehicle glass according to the transverse position information, the preset eyeball radius, the distance information and the two-dimensional coordinate system;
and determining the longitudinal coordinate of the sight of the driver on the vehicle glass according to the longitudinal position information, the preset eyeball radius, the distance information and the two-dimensional coordinate system.
Optionally, before acquiring eyeball data collected by an eyeball camera, the method further includes:
converting the eyeball camera and the auxiliary camera into a world coordinate system;
acquiring eyeball data based on the converted eyeball camera;
and acquiring image data based on the converted auxiliary camera.
In order to achieve the above object, the present invention also provides a driving support sensing apparatus, including:
the acquisition module is used for acquiring eyeball data acquired by the eyeball camera;
the determining module is used for determining corresponding sight characteristic information based on a preset driver eyeball model according to the eyeball data;
the acquisition module is also used for acquiring distance information between a driver and the vehicle glass;
the sight line conversion module is used for determining the position information of the sight line of the driver on the vehicle glass according to the sight line characteristic information and the distance information;
the determining module is further configured to determine a target area corresponding to the location information;
and the mapping module is used for determining an auxiliary camera perception area corresponding to the target area according to a preset area mapping relation so as to realize man-machine synchronous perception.
Further, to achieve the above object, the present invention also proposes a driving assistance sensing apparatus, including: a memory, a processor, and a driving assistance perception program stored on the memory and executable on the processor, the driving assistance perception program being configured to implement the driving assistance perception method described above.
In addition, in order to achieve the above object, the present invention further provides a storage medium having a driving assistance perception program stored thereon, which when executed by a processor implements the driving assistance perception method as described above.
In the present invention, eyeball data collected by an eyeball camera is acquired; corresponding sight-line feature information is determined based on a preset driver eyeball model according to the eyeball data; distance information between the driver and the vehicle glass is acquired; the position of the driver's sight line on the vehicle glass is determined according to the sight-line feature information and the distance information; the target area corresponding to the position information is determined; and the auxiliary-camera perception area corresponding to the target area is determined according to the preset area mapping relation, so as to realize synchronous human-machine perception. In this way, the driver's eyes are gaze-tracked, the perception area of the auxiliary camera is determined from the sight-line feature information of the driver's eyes, and the auxiliary camera is fused with the driver's eyes, which improves the permeability of human-machine interaction and solves the problems in human-machine co-driving that human and machine are mutually independent, do not interact perceptually, and have low compatibility.
Drawings
FIG. 1 is a schematic structural diagram of a sensing device for assisting driving of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a first embodiment of the driving assistance sensing method according to the present invention;
FIG. 3 is a schematic diagram of a three-dimensional spatial model of an eyeball according to an embodiment of the driving assistance perception method of the invention;
FIG. 4 is a flowchart illustrating a driving assistance sensing method according to a second embodiment of the present invention;
FIG. 5 is a flowchart illustrating a driving assistance sensing method according to a third embodiment of the present invention;
fig. 6 is a block diagram showing the configuration of the first embodiment of the driving assistance sensing apparatus of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a sensing device for assisting driving in a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the driving-assistance sensing apparatus may include: a processor 1001, such as a Central Processing Unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. Wherein a communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a Display screen (Display), an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a Wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The Memory 1005 may be a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as a disk Memory. The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration shown in fig. 1 does not constitute a limitation of the driving-assistance sensing apparatus, and may include more or fewer components than those shown, or some components in combination, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a storage medium, may include therein an operating system, a network communication module, a user interface module, and a driving assistance perception program.
In the driving assistance perception device shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server, and the user interface 1003 is mainly used for data interaction with a user. The apparatus calls the driving assistance perception program stored in the memory 1005 through the processor 1001 and performs the driving assistance sensing method according to the embodiment of the present invention.
An embodiment of the present invention provides a driving assistance sensing method, and referring to fig. 2, fig. 2 is a schematic flow diagram of a first embodiment of the driving assistance sensing method according to the present invention.
In this embodiment, the driving assistance sensing method includes the steps of:
step S10: eyeball data collected by the eyeball camera is obtained.
It can be understood that the execution subject of this embodiment is the driving assistance sensing device, which may be a vehicle-mounted computer, a controller connected to the vehicle control end, a domain controller, or another device with the same or similar functions.
It should be noted that the eyeball camera may collect eyeball data periodically according to a preset collection period, or under the control of the domain controller; specifically, when it is detected that the current image data collected by the auxiliary camera is shielded, the eyeball camera is controlled to collect the current eyeball data. The eyeball data mainly comprises eyeball image data, and the eyeball feature information it carries is determined by analyzing the eyeball image data.
Further, in order to improve the accuracy of data fusion between the human eye and the auxiliary camera, before the step S10, the method further includes: converting the eyeball camera and the auxiliary camera into a world coordinate system; acquiring eyeball data based on the converted eyeball camera; and acquiring image data based on the converted auxiliary camera.
It should be understood that the eyeball camera and the auxiliary camera are converted to the world coordinate system according to equation (1):

P_w = R · P_c + T  (1)

where a point P_c in a camera coordinate system is converted to the point P_w in the world coordinate system through a rotation matrix R (a 3 × 3 matrix) and a translation matrix T (a 3 × 1 matrix). The rotation matrix and translation matrix are adjusted according to the mounting positions of the two cameras, so that data fusion and perception interaction are carried out in a common reference coordinate system (the world coordinate system).
Step S20: and determining corresponding sight characteristic information based on a preset eyeball model of the driver according to the eyeball data.
Before step S20, the method further includes: constructing the preset driver eyeball model. Referring to fig. 3, fig. 3 is a schematic view of an eyeball three-dimensional space model according to an embodiment of the driving assistance sensing method of the present invention. The construction process is as follows: a three-dimensional space model is established with the eyeball center O as the sphere center; O₁ is the point where the pupil sits on the eyeball surface when the driver's eyes face straight ahead of the vehicle, and a three-dimensional coordinate system is established with the straight line OO₁ as the z-axis. The spherical surface arc AO₁B is the part of the eyeball exposed at the surface, C is the position to which the pupil moves when the eye looks at an object in front of the vehicle, and the angle θ is the angle by which the driver's sight line deviates from the initial sight line. The spherical model is: x² + y² + z² = r².
Specifically, because the three-dimensional model entails a large amount of computation and the vehicle has strict real-time requirements, this embodiment maps the three-dimensional spherical model into a two-dimensional space to obtain the simplified preset driver eyeball model. When the pupil observes an object, the line of sight moves from the initial point O₁(x₀, y₀, z₀) to C(x_c, y_c, z_c). Through C, a straight line perpendicular to the z-axis is drawn, intersecting the z-axis at D(0, 0, z_c). The magnitude of the angle θ is determined according to formula (2):

sin θ = |DC| / r = √(x_c² + y_c²) / r  (2)

Since θ is small, sin θ ≈ θ, and the arc distance from O₁ to C is determined according to formula (3):

l(O₁C) = r · θ ≈ √(x_c² + y_c²)  (3)

The simplified preset driver eyeball model is constructed according to formula (2) and formula (3).
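A minimal sketch of the simplified model in formulas (2) and (3), assuming the pupil offset (x_c, y_c) from the straight-ahead point O₁ has already been extracted from the eyeball image; the function name and the clamping of the asin argument are illustrative:

```python
import math

EYEBALL_RADIUS_MM = 15.0  # the preset eyeball radius r (about 15 mm, per the text)

def gaze_deviation(xc: float, yc: float, r: float = EYEBALL_RADIUS_MM):
    """Formulas (2)-(3): deviation angle theta and the arc O1 -> C."""
    offset = math.hypot(xc, yc)              # |DC| = sqrt(xc^2 + yc^2)
    theta = math.asin(min(offset / r, 1.0))  # formula (2); clamp guards numeric noise
    arc = r * theta                          # formula (3): arc distance O1 -> C
    return theta, arc
```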
It can be understood that the sight-line feature information is mainly the coordinate position of the pupil relative to the eyeball center. The relative position between the pupil and the inner canthus is determined from the acquired eyeball data, from which the position of the pupil relative to the eyeball center is determined; that is, the coordinate information of the pupil in the preset driver eyeball model is determined based on the preset driver eyeball model.
Step S30: distance information between a driver and a vehicle glass is acquired.
It should be noted that the distance information between the driver and the vehicle glass may be a fixed value stored in advance in a preset storage area and retrieved when human-machine interaction is performed. In a specific implementation, to reduce synchronization error, the domain controller of this embodiment is further connected with a laser sensor for measuring the distance between the driver and the vehicle glass; when human-machine interaction is performed, the laser sensor is turned on to obtain the distance information. Alternatively, the distance information may be obtained as follows: determine the transverse distance between the installation position of the eyeball camera and the vehicle glass, determine the transverse distance between the driver and the camera from the eyeball size captured by the eyeball camera and a preset standard eyeball size, and determine the distance between the driver and the vehicle glass from the two transverse distances.
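The apparent-size route in the last sentence can be sketched under a simple pinhole proportionality assumption; the patent does not spell out how the two transverse distances are combined, so the addition below is illustrative:

```python
def driver_to_glass_distance(eyeball_px: float, std_eyeball_px: float,
                             ref_distance_m: float, cam_to_glass_m: float) -> float:
    """Estimate driver-to-glass distance from the apparent eyeball size.

    Under a pinhole model, apparent size scales inversely with distance:
    driver_to_cam = ref_distance * (std_size / observed_size).
    Combining with the camera-to-glass distance as a simple sum is assumed.
    """
    driver_to_cam = ref_distance_m * (std_eyeball_px / eyeball_px)
    return driver_to_cam + cam_to_glass_m
```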
Step S40: and determining the position information of the sight of the driver on the vehicle glass according to the sight characteristic information and the distance information.
It can be understood that the vehicle glass refers to the vehicle windshield. The position on the vehicle glass where the sight line falls is determined according to the eyeball center O of the preset driver eyeball model, the coordinate position of the pupil relative to the eyeball center, and the distance information.
Step S50: and determining a target area corresponding to the position information.
It should be noted that, in this embodiment, the vehicle glass is divided into areas in advance. Specifically, 4 areas may be constructed with the intersection of the driver's pupil-center horizontal line (parallel to the ground) and the vehicle glass as the origin, and the target area where the gaze point is located is determined according to the position of the driver's sight line on the vehicle glass.
Step S60: and determining an auxiliary camera perception area corresponding to the target area according to a preset area mapping relation so as to realize man-machine synchronous perception.
It can be understood that, in this embodiment, the auxiliary camera is divided into perception areas in advance. Specifically, 4 areas may be constructed with the center point of the perception area as the origin, and the 4 areas of the vehicle glass are mapped one-to-one onto the 4 perception areas of the auxiliary camera to construct the preset area mapping relation, as sketched below.
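A minimal sketch of such a preset area mapping relation, assuming both the glass and the camera image are split into quadrants about their respective origins and correspond identically (the labels and the identity mapping are assumptions):

```python
# Glass area -> auxiliary-camera perception area; identity mapping assumed.
GLASS_TO_CAMERA_REGION = {1: 1, 2: 2, 3: 3, 4: 4}

def quadrant(x: float, y: float) -> int:
    """Quadrant (1..4, counterclockwise) of a point about the coordinate origin."""
    if x >= 0 and y >= 0:
        return 1
    if x < 0 and y >= 0:
        return 2
    if x < 0 and y < 0:
        return 3
    return 4

def camera_region_for_gaze(x_glass: float, y_glass: float) -> int:
    """Auxiliary-camera perception area mapped to the driver's gaze point."""
    return GLASS_TO_CAMERA_REGION[quadrant(x_glass, y_glass)]
```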
Further, in order to simplify the perception data of the auxiliary camera and improve the camera recognition effect, after step S60, the method further includes: and when detecting that the number of times that the driver looks at the target region in a preset time period is greater than a preset threshold value, simplifying the image data acquired by the auxiliary camera according to the perception region of the auxiliary camera.
It should be noted that the preset time period is a detection period set in advance, within which a counting mechanism determines the number of times the driver gazes at each area of the vehicle glass; for example, the preset time period may be set to 2 minutes. The preset threshold is a critical value set in advance for screening the gaze count. When the eyeball camera senses that the driver's sight line has stayed in area 1 multiple times within the preset time period, the driving-assistance camera no longer recognizes the perception content of area 1, so that 1/4 of the computing power can be saved and used to process the image data of the other areas.
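A minimal sketch of the counting mechanism; the threshold value is an assumption, since the patent only states that the period and the threshold are preset:

```python
import time
from collections import deque

PRESET_PERIOD_S = 120.0  # e.g. the 2-minute window mentioned above
PRESET_THRESHOLD = 3     # assumed value; the patent leaves this preset

_gaze_events = {}  # glass area -> deque of fixation timestamps

def record_gaze(area: int, now: float = None) -> bool:
    """Record a fixation on a glass area; True means the mapped camera region may be skipped."""
    now = time.monotonic() if now is None else now
    events = _gaze_events.setdefault(area, deque())
    events.append(now)
    while events and now - events[0] > PRESET_PERIOD_S:
        events.popleft()  # discard fixations older than the window
    return len(events) > PRESET_THRESHOLD
```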
According to this embodiment, eyeball data collected by the eyeball camera is acquired; corresponding sight-line feature information is determined based on the preset driver eyeball model; distance information between the driver and the vehicle glass is acquired; the position of the driver's sight line on the vehicle glass is determined according to the sight-line feature information and the distance information; the target area corresponding to the position information is determined; and the auxiliary-camera perception area corresponding to the target area is determined according to the preset area mapping relation, realizing synchronous human-machine perception. In this way, the driver's eyes are gaze-tracked, the perception area of the auxiliary camera is determined from the sight-line feature information of the driver's eyes, and the auxiliary camera is fused with the driver's eyes, which improves the permeability of human-machine interaction and solves the problems in human-machine co-driving that human and machine are mutually independent, do not interact perceptually, and have low compatibility.
Referring to fig. 4, fig. 4 is a flowchart illustrating a driving assistance sensing method according to a second embodiment of the present invention.
Based on the first embodiment, the driving assistance sensing method of the present embodiment is after step S60, and the method further includes:
step S601: when the fact that the current image data acquired by the auxiliary camera is shielded is detected, determining a shielded area according to the current image data.
It can be understood that image analysis is performed on the current image data acquired by the auxiliary camera. Specifically, whether the current image data is occluded can be determined by foreground-image connectivity analysis; if occlusion exists, the region where the occluded pixels are located is determined, as sketched below.
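One possible sketch of that connectivity analysis with OpenCV connected components; the occlusion cue (a large, dark connected blob) and the thresholds are assumptions, as the patent does not fix the criterion:

```python
import cv2
import numpy as np

def occluded_quadrant(frame_gray: np.ndarray, min_area_frac: float = 0.05):
    """Return the image quadrant (1..4) holding a large occluding blob, or None."""
    _, mask = cv2.threshold(frame_gray, 30, 255, cv2.THRESH_BINARY_INV)  # dark pixels
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    h, w = frame_gray.shape
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area_frac * h * w:
            cx, cy = centroids[i]
            right, bottom = cx >= w / 2, cy >= h / 2
            # image y grows downward, so "top" means cy < h/2
            return {(True, False): 1, (False, False): 2,
                    (False, True): 3, (True, True): 4}[(right, bottom)]
    return None
```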
Step S602: and determining the current area of the vehicle glass corresponding to the shielded area according to the preset area mapping relation.
It should be noted that the preset area mapping relation stores the one-to-one correspondence between auxiliary-camera acquisition areas and vehicle glass areas; after the shielded area is determined, the corresponding current area of the vehicle glass is determined according to the preset area mapping relation.
Further, after step S602, the method further includes: judging whether the current area of the vehicle glass is consistent with the target area; and when the current area of the vehicle glass is inconsistent with the target area, prompting the driver to pay attention to the current area of the vehicle glass.
It should be understood that if the area the driver's eyes are watching at the current moment is consistent with the shielded auxiliary-camera area, the driver does not need to be prompted; if it is inconsistent, the driver is prompted to pay attention to the shielded area.
Step S603: and prompting the driver to pay attention to the current area of the vehicle glass.
It should be noted that, in this embodiment, the domain controller is further connected with a voice playing device, or with prompting lights installed in each area, for reminding the driver to watch a given area. While the vehicle is driving, when the driving-assistance camera is blocked by dirt, rain, a high beam or the like, the blocked interference area is identified from the image data collected by the auxiliary camera. For example, if rainwater blocks part of auxiliary-camera area 2, the domain controller sends a signal prompting the driver to pay attention to the part of the vehicle glass surface corresponding to area 2; if the eyeball camera then senses that the driver's sight line stays in area 2 repeatedly within a period of time, the driver's attention is considered to be on area 2, the driving-assistance camera no longer processes the blocked area 2, and the function continues normally.
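Tying the steps together, a sketch of the prompting flow just described, reusing the helpers sketched earlier; prompt_driver is a hypothetical stand-in for the voice playing device or per-area lights:

```python
def prompt_driver(glass_area: int) -> None:
    # stand-in for the voice or per-area light prompt
    print(f"Please watch vehicle glass area {glass_area}")

def on_camera_frame(frame_gray, current_gaze_area: int) -> None:
    cam_area = occluded_quadrant(frame_gray)  # from the earlier sketch
    if cam_area is None:
        return  # auxiliary camera unobstructed; nothing to do
    # invert the glass -> camera mapping to find the glass area to watch
    glass_area = next(g for g, c in GLASS_TO_CAMERA_REGION.items() if c == cam_area)
    if glass_area != current_gaze_area:
        prompt_driver(glass_area)
    elif record_gaze(glass_area):
        # the driver's gaze has covered this area repeatedly within the window,
        # so the blocked camera region is left unprocessed and the human covers it
        pass
```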
According to this embodiment, eyeball data collected by the eyeball camera is acquired; corresponding sight-line feature information is determined based on the preset driver eyeball model; distance information between the driver and the vehicle glass is acquired; the position of the driver's sight line on the vehicle glass is determined according to the sight-line feature information and the distance information; the target area corresponding to the position information is determined; the auxiliary-camera perception area corresponding to the target area is determined according to the preset area mapping relation; when it is detected that the current image data acquired by the auxiliary camera is shielded, the shielded area is determined according to the current image data; the current area of the vehicle glass corresponding to the shielded area is determined according to the preset area mapping relation; and the driver is prompted to pay attention to the current area of the vehicle glass. In this way, the eyeball camera tracks the driver's sight line and the driving-assistance camera is mapped to the driver's sight area; when the driving-assistance camera is disturbed by adverse external factors, the driver is guided to concentrate on the shielded perception area, which greatly improves the fault tolerance of the camera.
Referring to fig. 5, fig. 5 is a flowchart illustrating a driving assistance sensing method according to a third embodiment of the present invention.
Based on the first embodiment, the step S20 of the driving assistance perception method of the present embodiment includes:
step S201: and determining transverse position information and longitudinal position information of the pupil relative to the center of the eyeball according to the relative position information of the pupil and the inner canthus based on a preset eyeball model of the driver.
It can be understood that the image information collected by the eyeball camera is analyzed to obtain the coordinates of the pupil and the inner canthus in the world coordinate system, and the relative position information is determined from these coordinates. Based on the preset driver eyeball model and the pre-calibrated standard relative position of the inner canthus to the eyeball center, the position of the pupil relative to the eyeball center, namely the arc distance l(O₁C), is determined; the transverse position information and longitudinal position information are then determined according to formula (3) in the three-dimensional coordinate system of the preset driver eyeball model, and the angle θ of the sight line from the initial sight line is determined according to formula (2).
The step S40 includes:
step S401: and determining the transverse coordinate of the sight of the driver on the vehicle glass according to the transverse position information, the preset eyeball radius and the distance information.
It should be noted that the preset eyeball radius is a standard eyeball radius set in advance and is a radius r corresponding to a three-dimensional driver eyeball model, and in a specific implementation, the horizontal and vertical coordinates of the sight line falling on the vehicle glass in the world coordinate system can be determined according to the angle θ of the sight line deviating from the initial sight line, the preset eyeball radius and distance information.
Step S402: and determining the longitudinal coordinate of the sight of the driver on the vehicle glass according to the longitudinal position information, the preset eyeball radius and the distance information.
Further, before step S40, the method further includes: determining a vehicle glass reference center corresponding to the eyeball center according to the installation position of the eyeball camera and the distance information; establishing a two-dimensional coordinate system by taking the vehicle glass reference center as a coordinate center;
the step S40 includes: determining the transverse coordinate of the sight of the driver on the vehicle glass according to the transverse position information, the preset eyeball radius, the distance information and the two-dimensional coordinate system; and determining the longitudinal coordinate of the sight of the driver on the vehicle glass according to the longitudinal position information, the preset eyeball radius, the distance information and the two-dimensional coordinate system.
It should be noted that, based on the simplified eyeball model of the driver, a sight line model is established to analyze the sight line direction, the falling point of the sight line of the driver on the vehicle glass is determined according to the relationship between the pupil center and the inner canthus and the distance between the driver and the vehicle glass, and the current attentive area of the driver is determined according to the moving distance, direction and stay time of the pupil relative to the inner canthus.
Specifically, a mapping relation between the vehicle glass coordinate system and the driver eye coordinate system is established, and the relative position of the driver's pupil is mapped to the gaze point on the glass according to the mapping relation. The sight-line feature vector is expressed as g = [x_c, y_c, g_x, g_y]^T, where (x_c, y_c) is the vector from the center of the driver's pupil to the inner canthus and (g_x, g_y) is the position coordinate of the inner canthus.
It can be understood that the vehicle glass is divided into 4 areas in advance. The pupil coordinate when the driver gazes at the center point of area 1 is (x₁, y₁), the pupil coordinate when gazing at the center point of area 2 is (x₂, y₂), and so on; the movement distance of the pupil relative to the eye corner at the center of each area is d(x₁, y₁), d(x₂, y₂), …. A mapping relation between the vehicle glass surface and the two-dimensional pupil plane of the driver is established and expressed as formula (4), in which l represents the horizontal distance between the driver and the vehicle glass.
The positional relationship between the driver's pupil and inner canthus corresponds to the 4 areas on the vehicle glass. The mapping function is expressed as formula (5), in which r represents the preset eyeball radius of about 15 mm, l represents the horizontal distance between the driver and the vehicle glass, and c represents a constant; after simplification, formula (6) is obtained:

x = (l · x_c) / r  (6)
similarly, the longitudinal coordinate is determined by equation (7):
according to the formula (6) and the formula (7), the coordinates (x) corresponding to the pupilc,yc) And obtaining the position (x, y) of the sight line of the driver on the glass surface of the vehicle, and determining the corresponding area on the glass of the vehicle according to the position.
According to this embodiment, eyeball data collected by the eyeball camera is acquired; the transverse and longitudinal position information of the pupil relative to the eyeball center is determined based on the preset driver eyeball model according to the relative position of the pupil and the inner canthus; distance information between the driver and the vehicle glass is acquired; the transverse coordinate of the driver's sight line on the vehicle glass is determined according to the transverse position information, the preset eyeball radius and the distance information; the longitudinal coordinate is determined according to the longitudinal position information, the preset eyeball radius and the distance information; the target area corresponding to the transverse and longitudinal coordinates is determined; and the auxiliary-camera perception area corresponding to the target area is determined according to the preset area mapping relation, realizing synchronous human-machine perception. In this way, the driver's eyes are tracked through the eyeball camera and the preset driver eyeball model, the transverse and longitudinal position of the pupil relative to the eyeball center is determined, the driver's sight line is mapped onto the vehicle glass based on the distance between the driver and the vehicle glass, and the auxiliary camera is fused with the driver's eyes according to the preset area mapping relation, which improves the permeability of human-machine interaction and solves the problems in human-machine co-driving that human and machine are mutually independent, do not interact perceptually, and have low compatibility.
Furthermore, an embodiment of the present invention further provides a storage medium, where a driving assistance perception program is stored, and the driving assistance perception program, when executed by a processor, implements the driving assistance perception method as described above.
Since the storage medium adopts all technical solutions of all the embodiments, at least all the beneficial effects brought by the technical solutions of the embodiments are achieved, and no further description is given here.
Referring to fig. 6, fig. 6 is a block diagram illustrating a first embodiment of a driving assistance sensing apparatus according to the present invention.
As shown in fig. 6, a driving assistance sensing apparatus according to an embodiment of the present invention includes:
the obtaining module 10 is configured to obtain eyeball data collected by an eyeball camera.
And the determining module 20 is configured to determine corresponding sight characteristic information based on a preset eyeball model of the driver according to the eyeball data.
The obtaining module 10 is further configured to obtain distance information between the driver and the vehicle glass.
And the sight line conversion module 30 is used for determining the position information of the sight line of the driver on the vehicle glass according to the sight line characteristic information and the distance information.
The determining module 20 is further configured to determine a target area corresponding to the location information.
And the mapping module 40 is configured to determine an auxiliary camera sensing area corresponding to the target area according to a preset area mapping relationship, so as to implement man-machine synchronous sensing.
It should be understood that the above is only an example, and the technical solution of the present invention is not limited in any way, and in a specific application, a person skilled in the art may set the technical solution as needed, and the present invention is not limited thereto.
According to this embodiment, eyeball data collected by the eyeball camera is acquired; corresponding sight-line feature information is determined based on the preset driver eyeball model; distance information between the driver and the vehicle glass is acquired; the position of the driver's sight line on the vehicle glass is determined according to the sight-line feature information and the distance information; the target area corresponding to the position information is determined; and the auxiliary-camera perception area corresponding to the target area is determined according to the preset area mapping relation, realizing synchronous human-machine perception. In this way, the driver's eyes are gaze-tracked, the perception area of the auxiliary camera is determined from the sight-line feature information, and the auxiliary camera is fused with the driver's eyes, which improves the permeability of human-machine interaction and solves the problems in human-machine co-driving that human and machine are mutually independent, do not interact perceptually, and have low compatibility.
It should be noted that the above-described work flows are only exemplary, and do not limit the scope of the present invention, and in practical applications, a person skilled in the art may select some or all of them to achieve the purpose of the solution of the embodiment according to actual needs, and the present invention is not limited herein.
In addition, the technical details that are not described in detail in this embodiment may refer to the driving assistance sensing method provided in any embodiment of the present invention, and are not described herein again.
In one embodiment, the driving assistance sensing apparatus further includes: simplifying the module;
and the simplifying module is used for simplifying the image data acquired by the auxiliary camera according to the perception area of the auxiliary camera when it is detected that the number of times the driver gazes at the target area within a preset time period exceeds a preset threshold.
In one embodiment, the driving assistance sensing apparatus further includes: a compatible module;
the compatible module is used for determining an occluded area according to the current image data when the current image data acquired by the auxiliary camera is detected to be occluded; determining a current area of the vehicle glass corresponding to the shielded area according to the preset area mapping relation; and prompting the driver to pay attention to the current area of the vehicle glass.
In one embodiment, the compatible module is further configured to determine whether the current area of the vehicle glass is consistent with the target area; and when the current area of the vehicle glass is inconsistent with the target area, prompting a driver to pay attention to the current area of the vehicle glass.
In an embodiment, the determining module 20 is further configured to determine, based on a preset driver eyeball model, lateral position information and longitudinal position information of the pupil relative to an eyeball center according to the relative position information of the pupil and the inner corner of the eye;
the sight line conversion module 30 is further configured to determine a transverse coordinate of the driver's sight line on the vehicle glass according to the transverse position information, a preset eyeball radius and the distance information; and determining the longitudinal coordinate of the sight of the driver on the vehicle glass according to the longitudinal position information, the preset eyeball radius and the distance information.
In one embodiment, the driving assistance sensing apparatus further includes: building a module;
the building module is used for determining a vehicle glass reference center corresponding to the eyeball center according to the installation position of the eyeball camera and the distance information; establishing a two-dimensional coordinate system by taking the vehicle glass reference center as a coordinate center;
the sight line conversion module 30 is further configured to determine a horizontal coordinate of the driver's sight line on the vehicle glass according to the horizontal position information, a preset eyeball radius, the distance information, and the two-dimensional coordinate system; and determining the longitudinal coordinate of the sight of the driver on the vehicle glass according to the longitudinal position information, the preset eyeball radius, the distance information and the two-dimensional coordinate system.
In one embodiment, the driving assistance sensing apparatus further includes: a coordinate conversion module;
the coordinate conversion module is used for converting the eyeball camera and the auxiliary camera into a world coordinate system; acquiring eyeball data based on the converted eyeball camera; and acquiring image data based on the converted auxiliary camera.
Further, it is to be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention or portions thereof that contribute to the prior art may be embodied in the form of a software product, where the computer software product is stored in a storage medium (e.g. Read Only Memory (ROM)/RAM, magnetic disk, optical disk), and includes several instructions for enabling a terminal device (e.g. a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (10)
1. A driving assistance perception method, characterized by comprising:
acquiring eyeball data collected by an eyeball camera;
determining corresponding sight line characteristic information based on a preset driver eyeball model according to the eyeball data;
acquiring distance information between a driver and vehicle glass;
determining the position information of the sight of the driver on the vehicle glass according to the sight characteristic information and the distance information;
determining a target area corresponding to the position information;
and determining an auxiliary camera perception area corresponding to the target area according to a preset area mapping relation so as to realize man-machine synchronous perception.
2. The driving assistance perception method according to claim 1, wherein after determining the auxiliary camera perception area corresponding to the target area according to a preset area mapping relationship, the method further includes:
and when detecting that the number of times that the driver looks at the target region in a preset time period is greater than a preset threshold value, simplifying the image data acquired by the auxiliary camera according to the perception region of the auxiliary camera.
3. The driving assistance perception method according to claim 1, wherein after determining the auxiliary camera perception area corresponding to the target area according to a preset area mapping relationship, the method further includes:
when it is detected that the current image data acquired by the auxiliary camera is shielded, determining a shielded area according to the current image data;
determining a current area of the vehicle glass corresponding to the shielded area according to the preset area mapping relation;
and prompting the driver to pay attention to the current area of the vehicle glass.
4. The driving-assisted perception method according to claim 3, wherein after determining the current area of the vehicle glass corresponding to the occluded area according to the preset area mapping relationship, the method further includes:
judging whether the current area of the vehicle glass is consistent with the target area;
and when the current area of the vehicle glass is inconsistent with the target area, prompting a driver to pay attention to the current area of the vehicle glass.
5. The driving assistance perception method according to claim 1, wherein the determining of the corresponding sight line characteristic information based on a preset driver eyeball model according to the eyeball data includes:
determining transverse position information and longitudinal position information of the pupil relative to the center of the eyeball based on a preset eyeball model of the driver according to the relative position information of the pupil and the inner canthus;
the determining the position information of the sight line of the driver on the vehicle glass according to the sight line characteristic information and the distance information comprises the following steps:
determining the transverse coordinate of the sight of the driver on the vehicle glass according to the transverse position information, the preset eyeball radius and the distance information;
and determining the longitudinal coordinate of the sight of the driver on the vehicle glass according to the longitudinal position information, the preset eyeball radius and the distance information.
6. The driving-assistance perception method according to claim 5, wherein before determining the position information of the driver's sight line on the vehicle glass based on the sight line feature information and the distance information, the method further includes:
determining a vehicle glass reference center corresponding to the eyeball center according to the installation position of the eyeball camera and the distance information;
establishing a two-dimensional coordinate system by taking the vehicle glass reference center as a coordinate center;
the determining the position information of the sight line of the driver on the vehicle glass according to the sight line characteristic information and the distance information comprises the following steps:
determining the transverse coordinate of the sight of the driver on the vehicle glass according to the transverse position information, the preset eyeball radius, the distance information and the two-dimensional coordinate system;
and determining the longitudinal coordinate of the sight of the driver on the vehicle glass according to the longitudinal position information, the preset eyeball radius, the distance information and the two-dimensional coordinate system.
7. The driving-assistance perception method according to any one of claims 1-6, wherein before the obtaining of eyeball data collected by an eyeball camera, the method further includes:
converting the eyeball camera and the auxiliary camera into a world coordinate system;
acquiring eyeball data based on the converted eyeball camera;
and acquiring image data based on the converted auxiliary camera.
8. A driving assistance sensing apparatus, characterized by comprising:
the acquisition module is used for acquiring eyeball data collected by the eyeball camera;
the determining module is used for determining corresponding sight line characteristic information based on a preset driver eyeball model according to the eyeball data;
the acquisition module is further used for acquiring distance information between the driver and the vehicle glass;
the sight line conversion module is used for determining position information of the driver's sight line on the vehicle glass according to the sight line characteristic information and the distance information;
the determining module is further used for determining a target area corresponding to the position information;
and the mapping module is used for determining an auxiliary camera perception area corresponding to the target area according to a preset area mapping relation, so as to realize man-machine synchronous perception.
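The mapping module of claim 8 can be read as a lookup over the preset area mapping relation: each glass area keys the auxiliary-camera perception region that should be surfaced to the driver. A sketch with invented identifiers; the patent only requires that such a preset relation exists, not this representation:

```python
# Illustrative preset area mapping relation (all labels invented).
AREA_TO_PERCEPTION_REGION = {
    "glass_left": "aux_cam_region_left",
    "glass_center": "aux_cam_region_center",
    "glass_right": "aux_cam_region_right",
}

def perception_region_for(target_area: str) -> str:
    """Return the auxiliary camera perception area matching the glass area
    the driver is looking at (man-machine synchronous perception)."""
    return AREA_TO_PERCEPTION_REGION[target_area]
```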
9. A driving assistance perception apparatus, characterized in that the apparatus comprises: a memory, a processor, and a driving assistance perception program stored on the memory and executable on the processor, the driving assistance perception program being configured to implement the driving assistance perception method according to any one of claims 1 to 7.
10. A storage medium having a driving assistance perception program stored thereon, wherein the driving assistance perception program, when executed by a processor, implements the driving assistance perception method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111034889.1A | 2021-09-03 | 2021-09-03 | Driving assistance sensing method, device, equipment and storage medium
Publications (2)
Publication Number | Publication Date |
---|---|
CN113837027A (en) | 2021-12-24 |
CN113837027B (en) | 2024-06-25 |
Family
ID=78962284
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111034889.1A (Active, granted as CN113837027B) | Driving assistance sensing method, device, equipment and storage medium | 2021-09-03 | 2021-09-03
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113837027B (en) |
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008280026A (en) * | 2007-04-11 | 2008-11-20 | Denso Corp | Driving assistance device |
JP2010033106A (en) * | 2008-07-24 | 2010-02-12 | Fujitsu Ten Ltd | Driver support device, driver support method, and driver support processing program |
JP2016170688A (en) * | 2015-03-13 | 2016-09-23 | 株式会社東海理化電機製作所 | Driving support device and driving support system |
US20160313562A1 (en) * | 2015-04-24 | 2016-10-27 | Kenichiroh Saisho | Information provision device, information provision method, and recording medium |
EP3263406A1 (en) * | 2016-06-29 | 2018-01-03 | ALSTOM Transport Technologies | Driving assistance system for a vehicle, related railway vehicle and use method |
JP2018185654A (en) * | 2017-04-26 | 2018-11-22 | 日本精機株式会社 | Head-up display device |
WO2020029444A1 (en) * | 2018-08-10 | 2020-02-13 | 初速度(苏州)科技有限公司 | Method and system for detecting attention of driver while driving |
US20200148112A1 (en) * | 2018-11-13 | 2020-05-14 | Toyota Jidosha Kabushiki Kaisha | Driver-assistance device, driver-assistance system, method of assisting driver, and computer readable recording medium |
JP2021077134A (en) * | 2019-11-11 | 2021-05-20 | マツダ株式会社 | Vehicle control device and driver state determination method |
CN110962746A (en) * | 2019-12-12 | 2020-04-07 | 上海擎感智能科技有限公司 | Driving assisting method, system and medium based on sight line detection |
CN112965502A (en) * | 2020-05-15 | 2021-06-15 | 东风柳州汽车有限公司 | Visual tracking confirmation method, device, equipment and storage medium |
CN112380935A (en) * | 2020-11-03 | 2021-02-19 | 深圳技术大学 | Man-machine cooperative perception method and system for automatic driving |
CN112758099A (en) * | 2020-12-31 | 2021-05-07 | 福瑞泰克智能系统有限公司 | Driving assistance method and device, computer equipment and readable storage medium |
Non-Patent Citations (4)
Title |
---|
HUEI-YUNG LIN et al.: "A Vision-Based Driver Assistance System with Forward Collision and Overtaking Detection", Intelligent Sensing Systems for Vehicle, pages 1-19 *
RIZWAN ALI NAQVI et al.: "Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor", Sensors Signal Processing and Visual Computing, pages 1-34 *
CAI Xiaojie: "Research on Driver Gaze Region Estimation Algorithms", China Masters' Theses Full-text Database, Information Science and Technology, no. 09, pages 138-450 *
ZOU Wuhe: "Research on Key Technologies of a Vision-Based Driver Gaze Point Extraction System", China Doctoral Dissertations Full-text Database, Information Science and Technology, no. 01, pages 138-129 *
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115468580A (en) * | 2022-09-22 | 2022-12-13 | 星河智联汽车科技有限公司 | Vehicle navigation route indication method, device and system |
Also Published As
Publication number | Publication date |
---|---|
CN113837027B (en) | 2024-06-25 |
Similar Documents
Publication | Title |
---|---|
CN112965504B (en) | Remote confirmation method, device and equipment based on automatic driving and storage medium |
KR101842811B1 (en) | Driver assistance system for displaying surroundings of a vehicle | |
EP2860664B1 (en) | Face detection apparatus | |
KR101534742B1 (en) | System and method for gesture recognition of vehicle | |
KR20130112550A (en) | Apparatus for setting parking position based on around view image and method thereof | |
CN111366168A (en) | AR navigation system and method based on multi-source information fusion | |
CN107665508B (en) | Method and system for realizing augmented reality | |
KR20240130682A (en) | Alignment method and alignment device for display device, vehicle-mounted display system | |
EP3575929B1 (en) | Eye tracking application in virtual reality and augmented reality | |
CN112525147B (en) | Distance measurement method for automatic driving equipment and related device | |
WO2012140782A1 (en) | Eyelid-detection device, eyelid-detection method, and program | |
WO2015093130A1 (en) | Information processing device, information processing method, and program | |
WO2012144020A1 (en) | Eyelid detection device, eyelid detection method, and program | |
JP4374850B2 (en) | Moving object periphery monitoring device | |
CN104954747A (en) | Video monitoring method and device | |
CN102508548A (en) | Operation method and system for electronic information equipment | |
JP2018181338A (en) | Method for operating a self-travelling vehicle | |
CN113837027A (en) | Driving assistance sensing method, device, equipment and storage medium | |
CN115525152A (en) | Image processing method, system, device, electronic equipment and storage medium | |
CN115493614B (en) | Method and device for displaying flight path line, storage medium and electronic equipment | |
CN113673493B (en) | Pedestrian perception and positioning method and system based on industrial vehicle vision | |
WO2021243693A1 (en) | Method and apparatus for collecting image of driver | |
JP2014174880A (en) | Information processor and information program | |
JP2014174091A (en) | Information providing device and information providing program | |
JP6990874B2 (en) | Parking aids and vehicles |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |