CN113837027B - Driving assistance sensing method, device, equipment and storage medium - Google Patents
- Publication number
- CN113837027B (application CN202111034889.1A)
- Authority
- CN
- China
- Prior art keywords
- driver
- determining
- eyeball
- area
- preset
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 69
- 238000003860 storage Methods 0.000 title claims abstract description 16
- 210000005252 bulbus oculi Anatomy 0.000 claims abstract description 138
- 239000011521 glass Substances 0.000 claims abstract description 106
- 210000001508 eye Anatomy 0.000 claims abstract description 37
- 238000013507 mapping Methods 0.000 claims abstract description 35
- 230000001360 synchronised effect Effects 0.000 claims abstract description 11
- 210000001747 pupil Anatomy 0.000 claims description 29
- 238000006243 chemical reaction Methods 0.000 claims description 7
- 230000001953 sensory effect Effects 0.000 claims 2
- 230000008447 perception Effects 0.000 abstract description 20
- 230000003993 interaction Effects 0.000 abstract description 14
- 241000282414 Homo sapiens Species 0.000 abstract description 11
- 230000008569 process Effects 0.000 description 10
- 230000004438 eyesight Effects 0.000 description 9
- 238000004891 communication Methods 0.000 description 6
- 239000011159 matrix material Substances 0.000 description 6
- 230000006870 function Effects 0.000 description 5
- 238000010586 diagram Methods 0.000 description 4
- 238000001514 detection method Methods 0.000 description 3
- 230000004927 fusion Effects 0.000 description 3
- 230000035515 penetration Effects 0.000 description 3
- 238000012545 processing Methods 0.000 description 3
- 238000013519 translation Methods 0.000 description 3
- 238000004458 analytical method Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000005282 brightening Methods 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 230000000295 complement effect Effects 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 239000005357 flat glass Substances 0.000 description 1
- 238000010191 image analysis Methods 0.000 description 1
- 238000009434 installation Methods 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 230000007246 mechanism Effects 0.000 description 1
- 230000004297 night vision Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000035699 permeability Effects 0.000 description 1
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Traffic Control Systems (AREA)
Abstract
The invention belongs to the technical field of automobiles and discloses a driving assistance sensing method, device, equipment and storage medium. The method comprises the following steps: acquiring eyeball data collected by an eyeball camera; determining corresponding sight-line feature information from the eyeball data based on a preset driver eyeball model; acquiring distance information between the driver and the vehicle glass; determining the position of the driver's sight line on the vehicle glass according to the sight-line feature information and the distance information; determining the target area corresponding to that position; and determining the auxiliary camera sensing area corresponding to the target area according to a preset area mapping relation, so as to realize man-machine synchronous sensing. In this way, the auxiliary camera sensing area is determined from the sight-line feature information of the driver's eyes and the auxiliary camera is fused with the driver's eyes, which solves the problems in man-machine co-driving that human and machine are mutually independent, perform no perceptual interaction, and have low compatibility.
Description
Technical Field
The present invention relates to the field of automotive technologies, and in particular, to a driving assistance sensing method, apparatus, device, and storage medium.
Background
In the prior art, the human and the machine in man-machine co-driving are mutually independent: either the machine drives and the person is prompted to take over only when the machine cannot drive autonomously, or the person is primarily responsible for driving and the machine merely assists. In neither case are human and machine deeply fused, so man-machine co-driving compatibility is low. In vehicle camera perception scenarios in particular, machine and human do not interact, and compatibility is low.
At present, most mass-production vehicle models have high on-board camera coverage: driving assistance ADAS cameras, panoramic surround-view cameras, driver fatigue detection cameras and the like all realize their functions through camera perception, with no redundant sensor as a safety backup. When a camera is blocked by foreign matter, dirt, rainwater, etc., it cannot perceive information effectively, causing the function to fail or exit, so camera fault tolerance is low. The Chinese patent application "Night target detection and tracking method based on millimeter wave radar and vision fusion" (publication number: CN111967498A) uses raw camera images to obtain richer dark-area information, applies a deep-learning image brightening algorithm to restore dark-area details, enhances the night vision capability of the unmanned vehicle, and keeps the perception system working normally when one sensor fails. That scheme has the following drawback: fault tolerance is improved by stacking sensors, but different sensors play different roles; they are complementary rather than mutually replaceable. When one sensor fails, another sensor cannot fully replace it, and if full replacement were possible, that is, two identical sensors assembled on one vehicle, the cost would double.
The foregoing is provided merely for the purpose of facilitating understanding of the technical solutions of the present invention and is not intended to represent an admission that the foregoing is prior art.
Disclosure of Invention
The invention mainly aims to provide a driving assistance sensing method, device, equipment and storage medium, so as to solve the technical problems in the prior art that human and machine are mutually independent, perform no perceptual interaction, and have low compatibility.
To achieve the above object, the present invention provides a driving assistance sensing method, the method comprising the steps of:
Acquiring eyeball data acquired by an eyeball camera;
determining corresponding sight feature information based on a preset driver eyeball model according to the eyeball data;
Acquiring distance information between a driver and vehicle glass;
determining the position information of the sight line of the driver on the vehicle glass according to the sight line characteristic information and the distance information;
determining a target area corresponding to the position information;
and determining an auxiliary camera sensing area corresponding to the target area according to a preset area mapping relation so as to realize man-machine synchronous sensing.
Optionally, after determining the auxiliary camera sensing area corresponding to the target area according to the preset area mapping relationship, the method further includes:
And when the frequency of the driver watching the target area in the preset time period is detected to be larger than a preset threshold value, simplifying the image data acquired by the auxiliary camera according to the auxiliary camera sensing area.
Optionally, after determining the auxiliary camera sensing area corresponding to the target area according to the preset area mapping relationship, the method further includes:
When the current image data acquired by the auxiliary camera is detected to be shielded, determining a shielded area according to the current image data;
determining a current vehicle glass area corresponding to the shielded area according to the preset area mapping relation;
Prompting a driver to pay attention to the current area of the vehicle glass.
Optionally, after determining the current area of the vehicle glass corresponding to the blocked area according to the preset area mapping relationship, the method further includes:
judging whether the current area of the vehicle glass is consistent with the target area;
and prompting a driver to pay attention to the current area of the vehicle glass when the current area of the vehicle glass is inconsistent with the target area.
Optionally, the determining corresponding sight feature information according to the eyeball data based on a preset driver eyeball model includes:
determining transverse position information and longitudinal position information of the pupil relative to the center of the eyeball based on a preset driver eyeball model according to the relative position information of the pupil and the inner corner of the eye;
The determining the position information of the driver's sight on the vehicle glass according to the sight feature information and the distance information includes:
Determining the transverse coordinates of the sight of the driver on the vehicle glass according to the transverse position information, the preset eyeball radius and the distance information;
and determining the longitudinal coordinates of the sight line of the driver on the vehicle glass according to the longitudinal position information, the preset eyeball radius and the distance information.
Optionally, before the determining the position information of the driver's sight line on the vehicle glass according to the sight line characteristic information and the distance information, the method further includes:
Determining a vehicle glass reference center corresponding to the eyeball center according to the eyeball camera mounting position and the distance information;
establishing a two-dimensional coordinate system by taking the vehicle glass reference center as a coordinate center;
The determining the position information of the driver's sight on the vehicle glass according to the sight feature information and the distance information includes:
determining the transverse coordinates of the sight line of the driver on the vehicle glass according to the transverse position information, the preset eyeball radius, the distance information and the two-dimensional coordinate system;
and determining the longitudinal coordinates of the sight line of the driver on the vehicle glass according to the longitudinal position information, the preset eyeball radius, the distance information and the two-dimensional coordinate system.
Optionally, before the acquiring the eyeball data acquired by the eyeball camera, the method further includes:
converting the eyeball camera and the auxiliary camera into a world coordinate system;
acquiring eyeball data based on the converted eyeball camera;
And acquiring image data based on the converted auxiliary camera.
In addition, in order to achieve the above object, the present invention also provides a driving assistance sensing device, including:
the acquisition module is used for acquiring eyeball data acquired by the eyeball camera;
The determining module is used for determining corresponding sight feature information based on a preset driver eyeball model according to the eyeball data;
the acquisition module is also used for acquiring distance information between a driver and vehicle glass;
the sight line conversion module is used for determining the position information of the sight line of the driver on the vehicle glass according to the sight line characteristic information and the distance information;
The determining module is further used for determining a target area corresponding to the position information;
And the mapping module is used for determining an auxiliary camera sensing area corresponding to the target area according to a preset area mapping relation so as to realize man-machine synchronous sensing.
In addition, to achieve the above object, the present invention also proposes a driving assistance sensing apparatus, the apparatus comprising: a memory, a processor, and a driving assistance sensing program stored on the memory and executable on the processor, wherein the driving assistance sensing program is configured to implement the driving assistance sensing method described above.
In addition, in order to achieve the above object, the present invention also proposes a storage medium having stored thereon a driving assistance sensing program which, when executed by a processor, implements the driving assistance sensing method as described above.
According to the invention, eyeball data collected by an eyeball camera are acquired; corresponding sight-line feature information is determined from the eyeball data based on a preset driver eyeball model; distance information between the driver and the vehicle glass is acquired; the position of the driver's sight line on the vehicle glass is determined according to the sight-line feature information and the distance information; the target area corresponding to that position is determined; and the auxiliary camera sensing area corresponding to the target area is determined according to the preset area mapping relation, so as to realize man-machine synchronous sensing. In this way, the driver's eyes are tracked, the auxiliary camera sensing area is determined from the sight-line feature information of the driver's eyes, and the auxiliary camera is fused with the driver's eyes, which improves the penetration of man-machine interaction and solves the problems in man-machine co-driving that human and machine are mutually independent, perform no perceptual interaction, and have low compatibility.
Drawings
FIG. 1 is a schematic diagram of a driving assistance awareness apparatus of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flow chart of a first embodiment of a driving assistance sensing method according to the present invention;
FIG. 3 is a schematic view of an eyeball three-dimensional model according to an embodiment of the driving assistance sensing method of the present invention;
FIG. 4 is a flow chart of a second embodiment of a driving assistance sensing method according to the present invention;
FIG. 5 is a flow chart of a third embodiment of a driving assistance sensing method according to the present invention;
fig. 6 is a block diagram of a first embodiment of a driving assistance sensing device according to the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a driving-assisting sensing device of a hardware running environment according to an embodiment of the present invention.
As shown in fig. 1, the driving-assisting sensing device may include: a processor 1001, such as a central processing unit (Central Processing Unit, CPU), a communication bus 1002, a user interface 1003, a network interface 1004, a memory 1005. Wherein the communication bus 1002 is used to enable connected communication between these components. The user interface 1003 may include a Display, an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may further include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a Wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The Memory 1005 may be a high-speed random access Memory (Random Access Memory, RAM) or a stable nonvolatile Memory (NVM), such as a disk Memory. The memory 1005 may also optionally be a storage device separate from the processor 1001 described above.
It will be appreciated by those skilled in the art that the structure shown in fig. 1 does not constitute a limitation of the driving assistance sensing apparatus, and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
As shown in fig. 1, the memory 1005, as one storage medium, may include an operating system, a network communication module, a user interface module, and a driving assistance sensing program.
In the driving assistance sensing device shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server, and the user interface 1003 is mainly used for data interaction with a user. The device invokes the driving assistance sensing program stored in the memory 1005 through the processor 1001 and executes the driving assistance sensing method provided by the embodiments of the present invention.
An embodiment of the invention provides a driving assistance sensing method, referring to fig. 2, fig. 2 is a schematic flow chart of a first embodiment of the driving assistance sensing method.
In this embodiment, the driving assistance sensing method includes the following steps:
step S10: and acquiring eyeball data acquired by an eyeball camera.
It may be understood that the execution subject of this embodiment is a driving assistance sensing device, which may be a vehicle-mounted computer, a controller connected with the vehicle control end, a domain controller, or another device with the same or similar functions. The domain controller is taken as the example for explanation below; it is connected with an eyeball camera for collecting the driver's eye information and with an auxiliary camera for collecting environment information in front of the vehicle.
It should be noted that the eyeball camera may collect eyeball data periodically according to a preset collection period, or under the control of the domain controller; specifically, when the current image data collected by the auxiliary camera is detected to be blocked, the eyeball camera is controlled to collect the current eyeball data. The eyeball data are mainly eyeball image data, which are analyzed to determine the eyeball feature information they carry.
Further, in order to improve the accuracy of the data fusion between the human eye and the auxiliary camera, before the step S10, the method further includes: converting the eyeball camera and the auxiliary camera into a world coordinate system; acquiring eyeball data based on the converted eyeball camera; and acquiring image data based on the converted auxiliary camera.
It should be appreciated that the eyeball camera and the auxiliary camera are converted to the world coordinate system according to equation (1):

$$P_w = R \cdot P_c + T \qquad (1)$$

where $P_c$ is a point in a camera coordinate system and $P_w$ is the corresponding point in the world coordinate system. The camera coordinate system and the real coordinate system are converted through a rotation matrix $R$ of size $3 \times 3$ and a translation matrix $T$ of size $3 \times 1$; the two matrices are adjusted according to the mounting positions of the two cameras, so that data fusion and perceptual interaction are realized in a common reference coordinate system (the world coordinate system).
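As a minimal Python sketch of this conversion, equation (1) can be applied to points from either camera; the rotation and translation values below are placeholders standing in for the calibration of each camera's mounting position:

```python
import numpy as np

def camera_to_world(points_cam: np.ndarray, R: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Apply equation (1), P_w = R * P_c + T, to an (N, 3) array of camera-frame points."""
    return points_cam @ R.T + T.reshape(1, 3)

# Placeholder extrinsics for the eyeball camera: identity rotation, assumed offset in metres.
R_eye = np.eye(3)
T_eye = np.array([0.4, -0.2, 1.1])

pupil_cam = np.array([[0.01, -0.02, 0.05]])    # a point seen in the eyeball-camera frame
pupil_world = camera_to_world(pupil_cam, R_eye, T_eye)
```

With both cameras expressed in the same world frame, a gaze point and an auxiliary-camera pixel can be compared directly.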
Step S20: and determining corresponding sight feature information based on a preset driver eyeball model according to the eyeball data.
Before step S20, the method further includes: constructing the preset driver eyeball model. Referring to fig. 3, fig. 3 is a schematic view of the eyeball three-dimensional space model according to an embodiment of the driving assistance sensing method of the present invention. The process for constructing the preset driver eyeball model is as follows: a three-dimensional space model is established with the eyeball center $O$ as the sphere center; $O_1$ is the position of the pupil when the driver's eyes face straight ahead of the vehicle, and a three-dimensional coordinate system is established with the straight line $OO_1$ as the z-axis; the spherical surface $AO_1B$ is the part of the eyeball exposed at the surface; $C$ is the position the pupil moves to when the driver looks at an object in front of the vehicle; and the angle $\theta$ is the angle by which the driver's sight line deviates from the initial sight line. The spherical model is: $x^2 + y^2 + z^2 = r^2$.
Specifically, because the three-dimensional model involves a large amount of computation while the vehicle has high real-time requirements, in this embodiment the three-dimensional spherical model is mapped to a two-dimensional space to obtain the simplified preset driver eyeball model. When the pupil views an object, the sight line moves from the initial point $O_1(x_0, y_0, z_0)$ to $C(x_c, y_c, z_c)$; a straight line drawn from point $C(x_c, y_c, z_c)$ perpendicular to the z-axis intersects the z-axis at point $D(0, 0, z_c)$, and the magnitude of the angle $\theta$ is determined according to equation (2):

$$\sin\theta = \frac{l_{DC}}{r} = \frac{\sqrt{x_c^2 + y_c^2}}{r} \qquad (2)$$

Since $\theta$ is small, $\sin\theta$ approximates $\theta$, and the length of the arc from $O_1$ to $C$ is determined according to equation (3):

$$l_{O_1C} = r\theta \approx \sqrt{x_c^2 + y_c^2} \qquad (3)$$

The simplified preset driver eyeball model is constructed according to equation (2) and equation (3).
It can be understood that the sight-line feature information is mainly the coordinate position information of the pupil relative to the eyeball center. The relative position information between the pupil and the inner eye corner is determined from the acquired eyeball data, from which the relative position of the pupil with respect to the eyeball center is determined, namely $l_{O_1C} \approx \sqrt{x_c^2 + y_c^2}$, and the coordinate information of the pupil in the preset driver eyeball model is determined on that basis.
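A short sketch of the simplified model, assuming pupil coordinates $(x_c, y_c)$ relative to the eyeball center have already been extracted from the eyeball image:

```python
import math

R_EYE = 0.015  # preset eyeball radius, about 15 mm

def gaze_deviation(x_c: float, y_c: float, r: float = R_EYE):
    """Equations (2)-(3): deviation angle theta and arc length l_O1C
    under the small-angle approximation sin(theta) ~ theta."""
    offset = math.hypot(x_c, y_c)   # sqrt(x_c^2 + y_c^2)
    theta = offset / r              # equation (2)
    arc = r * theta                 # equation (3); equals offset under the approximation
    return theta, arc
```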
Step S30: distance information between the driver and the vehicle glass is acquired.
It should be noted that the distance information between the driver and the vehicle glass may be a fixed value stored in advance in a preset storage area and retrieved from there when performing man-machine synchronization. Alternatively, the process of acquiring the distance information may be: determining the horizontal distance between the mounting position of the eyeball camera and the vehicle glass, determining the horizontal distance between the driver and the camera from the eyeball size captured by the eyeball camera and a preset standard eyeball size, and determining the distance information between the driver and the vehicle glass from these two horizontal distances.
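A sketch of the second option, assuming a pinhole-camera proportionality between the captured and standard eyeball sizes; the focal length and standard eyeball size below are illustrative values, not figures from the patent:

```python
def driver_to_glass_distance(eyeball_px: float,
                             camera_to_glass_m: float,
                             focal_px: float = 1200.0,
                             std_eyeball_m: float = 0.024) -> float:
    """Estimate the driver-to-camera distance from the apparent eyeball size
    (similar triangles), then combine it with the fixed camera-to-glass distance."""
    driver_to_camera = focal_px * std_eyeball_m / eyeball_px
    # Assumes the camera sits between the driver and the glass, near the glass.
    return driver_to_camera + camera_to_glass_m
```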
Step S40: and determining the position information of the driver sight on the vehicle glass according to the sight characteristic information and the distance information.
It can be understood that the vehicle glass refers to a front window glass of the vehicle, and the position information corresponding to the sight line falling into the vehicle glass is determined according to the eyeball center O of the eyeball model of the preset driver, the coordinate position information of the pupil relative to the eyeball center and the distance information.
Step S50: and determining a target area corresponding to the position information.
In this embodiment, the vehicle glass is divided into regions in advance. The specific process may be to take the intersection of the vehicle glass with the line through the driver's pupil center parallel to the ground as the origin, construct 4 regions around it, and determine the target region according to the position information of the driver's sight line on the vehicle glass.
Step S60: and determining an auxiliary camera sensing area corresponding to the target area according to a preset area mapping relation so as to realize man-machine synchronous sensing.
It may be understood that in this embodiment, the sensing area division is performed on the auxiliary camera in advance, and the specific process may be to construct 4 areas by taking the center point of the sensing area as the origin, map the 4 areas divided by the vehicle glass with the 4 sensing areas divided by the auxiliary camera, and construct a preset area mapping relationship.
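A minimal sketch of the quadrant division and the preset area mapping relation; the identity mapping below is an assumption, and in practice the table would be calibrated from the geometry of the two cameras:

```python
def quadrant(x: float, y: float) -> int:
    """Classify a point in a plane (origin at the region center) into one of 4 areas."""
    if x >= 0 and y >= 0:
        return 1
    if x < 0 and y >= 0:
        return 2
    if x < 0 and y < 0:
        return 3
    return 4

# Preset area mapping relation: vehicle glass region -> auxiliary camera sensing region.
AREA_MAP = {1: 1, 2: 2, 3: 3, 4: 4}

def camera_region_for_gaze(x_glass: float, y_glass: float) -> int:
    """Step S60: map the glass region the driver is looking at to a camera sensing region."""
    return AREA_MAP[quadrant(x_glass, y_glass)]
```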
Further, in order to simplify the perception data of the auxiliary camera, the camera recognition effect is improved, and after step S60, the method further includes: and when the frequency of the driver watching the target area in the preset time period is detected to be larger than a preset threshold value, simplifying the image data acquired by the auxiliary camera according to the auxiliary camera sensing area.
It should be noted that the preset time period is a detection period set in advance, within which a counting mechanism determines the number of times the driver gazes at each area of the vehicle glass; for example, the preset time period may be set to 2 minutes. The preset threshold is a preset critical value for distinguishing the number of gazes. When the eyeball camera senses that the driver's sight line stays in area 1 many times within the preset time period, the driving assistance camera no longer recognizes the sensing content of area 1, saving a quarter of the computing power, which can be used to process the image data of the other areas.
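The counting mechanism might be sketched as follows, assuming gaze samples arrive as (timestamp, region) pairs; the 2-minute window matches the example above, while the threshold value is an assumption:

```python
from collections import deque
from typing import Optional
import time

WINDOW_S = 120.0   # preset time period (2 minutes in the example)
THRESHOLD = 5      # preset threshold on the number of gazes (assumed value)

class GazeCounter:
    def __init__(self) -> None:
        self.samples: deque = deque()  # (timestamp, region) pairs

    def observe(self, region: int, now: Optional[float] = None) -> bool:
        """Record one gaze sample; return True when the region has been watched
        often enough that the auxiliary camera may skip processing it."""
        now = time.monotonic() if now is None else now
        self.samples.append((now, region))
        while self.samples and now - self.samples[0][0] > WINDOW_S:
            self.samples.popleft()  # drop samples outside the preset time period
        return sum(1 for _, r in self.samples if r == region) > THRESHOLD
```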
In this embodiment, eyeball data collected by an eyeball camera are acquired; corresponding sight-line feature information is determined from the eyeball data based on a preset driver eyeball model; distance information between the driver and the vehicle glass is acquired; the position of the driver's sight line on the vehicle glass is determined according to the sight-line feature information and the distance information; the target area corresponding to that position is determined; and the auxiliary camera sensing area corresponding to the target area is determined according to the preset area mapping relation, so as to realize man-machine synchronous sensing. In this way, the driver's eyes are tracked, the auxiliary camera sensing area is determined from the sight-line feature information of the driver's eyes, and the auxiliary camera is fused with the driver's eyes, which improves the penetration of man-machine interaction and solves the problems in man-machine co-driving that human and machine are mutually independent, perform no perceptual interaction, and have low compatibility.
Referring to fig. 4, fig. 4 is a flowchart illustrating a sensing method for driving assistance according to a second embodiment of the present invention.
Based on the first embodiment, the driving assistance sensing method of the present embodiment further includes, after the step S60:
Step S601: and when the occlusion of the current image data acquired by the auxiliary camera is detected, determining an occluded area according to the current image data.
It can be understood that image analysis is performed on the current image data acquired by the auxiliary camera; specifically, whether the current image data is blocked may be determined by foreground-image connected-region analysis, and if it is blocked, the region where the blocked pixels are located is determined.
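One plausible realization of the connected-region check, using OpenCV; the darkness threshold and minimum size ratio are assumptions chosen for illustration:

```python
import cv2
import numpy as np
from typing import List, Tuple

def blocked_regions(frame_gray: np.ndarray,
                    dark_thresh: int = 30,
                    min_ratio: float = 0.05) -> List[Tuple[int, int, int, int]]:
    """Return bounding boxes (x, y, w, h) of large dark connected components,
    treated here as candidate occlusions of the auxiliary camera."""
    _, mask = cv2.threshold(frame_gray, dark_thresh, 255, cv2.THRESH_BINARY_INV)
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    h, w = frame_gray.shape
    boxes = []
    for i in range(1, n):  # label 0 is the background
        x, y, bw, bh, area = stats[i]
        if area >= min_ratio * h * w:
            boxes.append((int(x), int(y), int(bw), int(bh)))
    return boxes
```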
Step S602: and determining the current area of the vehicle glass corresponding to the shielded area according to the preset area mapping relation.
It should be noted that the preset area mapping relation places the auxiliary camera acquisition areas and the vehicle glass areas in one-to-one correspondence; after the blocked area is determined, the corresponding current vehicle glass area is determined according to this mapping relation.
Further, after step S602, the method further includes: judging whether the current area of the vehicle glass is consistent with the target area; and prompting the driver to pay attention to the current area of the vehicle glass when the two are inconsistent.
It should be appreciated that if the area the driver's eyes are on at the current moment is consistent with the blocked auxiliary camera area, the driver need not be prompted; if it is not consistent, the driver is prompted to pay attention to the blocked area.
Step S603: prompting a driver to pay attention to the current area of the vehicle glass.
It should be noted that in this embodiment the domain controller is further connected with a voice playing device for prompting the driver by voice, or with prompt lights installed in each area. When the driving assistance camera is blocked or interfered with by dirt, rainwater, a high beam, etc. during driving, the blocked or interfered area in the image data collected by the auxiliary camera is identified. For example, if the portion of area 2 of the auxiliary camera is currently blocked by rainwater, the domain controller sends a signal prompting the driver to pay attention to the portion of area 2 of the vehicle glass surface; if the eyeball camera then senses that the driver's sight line stays in area 2 multiple times within a period of time, the driver is considered to be paying attention to area 2, the driving assistance camera no longer processes the blocked area 2, and the function continues normally.
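Steps S601-S603, together with the consistency check described above, might be wired together as follows; prompt_driver is a hypothetical hook for the voice or light device, and AREA_MAP is the mapping table from the earlier sketch:

```python
def handle_occlusion(blocked_camera_region: int,
                     driver_gaze_region: int,
                     prompt_driver) -> None:
    """Map a blocked camera sensing region back to its vehicle glass region and
    prompt the driver only if they are not already attending to it."""
    # Invert the preset area mapping: camera sensing region -> glass region.
    glass_region = {v: k for k, v in AREA_MAP.items()}[blocked_camera_region]
    if glass_region != driver_gaze_region:
        prompt_driver(glass_region)  # e.g. voice: "please watch area 2 of the windscreen"
```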
In this embodiment, eyeball data collected by an eyeball camera are acquired; corresponding sight-line feature information is determined from the eyeball data based on a preset driver eyeball model; distance information between the driver and the vehicle glass is acquired; the position of the driver's sight line on the vehicle glass is determined according to the sight-line feature information and the distance information; the target area corresponding to that position is determined; the auxiliary camera sensing area corresponding to the target area is determined according to the preset area mapping relation; when the current image data collected by the auxiliary camera is detected to be blocked, the blocked area is determined from the current image data; the current vehicle glass area corresponding to the blocked area is determined according to the preset area mapping relation; and the driver is prompted to pay attention to the current area of the vehicle glass. In this way, the eyeball camera tracks the driver's vision and the driving assistance camera is mapped to the driver's vision area; when the driving assistance camera is interfered with by adverse external factors, the driver is guided to focus attention on the blocked sensing area of the camera, which greatly improves the fault tolerance of the camera.
Referring to fig. 5, fig. 5 is a flowchart illustrating a third embodiment of a driving assistance sensing method according to the present invention.
Based on the above-described first embodiment, the step S20 of the driving assistance sensing method of the present embodiment includes:
Step S201: and determining the transverse position information and the longitudinal position information of the pupil relative to the center of the eyeball based on a preset driver eyeball model according to the relative position information of the pupil and the inner canthus.
It can be understood that the image information collected by the eyeball camera is analyzed to obtain the coordinate information of the pupil and the inner eye corner in the world coordinate system, and the relative position information is determined from these coordinates. Based on the preset driver eyeball model and the pre-calibrated standard relative position of the inner eye corner and the eyeball center, the relative position of the pupil with respect to the eyeball center, i.e. $l_{O_1C}$, is determined; the transverse and longitudinal position information is determined according to equation (3) in the three-dimensional coordinate system of the preset driver eyeball model, and the angle $\theta$ by which the sight line deviates from the initial sight line is determined according to equation (2).
The step S40 includes:
step S401: and determining the transverse coordinates of the sight of the driver on the vehicle glass according to the transverse position information, the preset eyeball radius and the distance information.
It should be noted that the preset eyeball radius is a standard eyeball radius set in advance, i.e. the radius $r$ of the three-dimensional driver eyeball model; in a specific implementation, the abscissa at which the sight line falls on the vehicle glass in the world coordinate system may be determined from the angle $\theta$ by which the sight line deviates from the initial sight line, the preset eyeball radius, and the distance information.
Step S402: and determining the longitudinal coordinates of the sight line of the driver on the vehicle glass according to the longitudinal position information, the preset eyeball radius and the distance information.
Further, before step S40, the method further includes: determining a vehicle glass reference center corresponding to the eyeball center according to the eyeball camera mounting position and the distance information; establishing a two-dimensional coordinate system by taking the vehicle glass reference center as a coordinate center;
The step S40 includes: determining the transverse coordinates of the sight line of the driver on the vehicle glass according to the transverse position information, the preset eyeball radius, the distance information and the two-dimensional coordinate system; and determining the longitudinal coordinates of the sight line of the driver on the vehicle glass according to the longitudinal position information, the preset eyeball radius, the distance information and the two-dimensional coordinate system.
It should be noted that, based on the simplified driver eyeball model, a sight-line model is established to analyze the sight-line direction; the point at which the driver's sight line falls on the vehicle glass is determined from the relationship between the pupil center and the inner eye corner together with the distance between the driver and the vehicle glass, and the driver's current area of concentration is determined from the distance, direction and dwell time of the pupil's movement relative to the inner eye corner.
Specifically, a mapping relation between the vehicle glass coordinate system and the driver eyeball coordinate system is established, and according to this relation the relative position of the driver's pupil is mapped to the falling point of the sight line on the glass. The sight-line feature vector is represented as $g = [x_c, y_c, g_x, g_y]^T$, where $(x_c, y_c)$ represents the vector from the center of the driver's pupil to the inner eye corner, and $(g_x, g_y)$ represents the position coordinates of the inner eye corner.
It can be understood that the vehicle glass is divided into 4 areas in advance. The pupil coordinate when the driver gazes at the center point of area 1 is $(x_1, y_1)$, the pupil coordinate when gazing at the center point of area 2 is $(x_2, y_2)$, and so on, and the movement distance of the pupil relative to the eye corner at the center point of each area is $d(x_1, y_1), d(x_2, y_2), \ldots$ respectively. The mapping relation between the vehicle glass surface and the two-dimensional plane of the driver's pupil is established, expressed as formula (4):

$$(x, y) = f(x_c, y_c;\, l) \qquad (4)$$

where $(x, y)$ is a point on the vehicle glass surface, $(x_c, y_c)$ is the corresponding pupil displacement, and $l$ represents the horizontal distance of the driver from the vehicle glass.
The positional relationship between the driver's pupil and the inner eye corner corresponds to the 4 regions on the vehicle glass. The mapping function is expressed as formula (5):

$$x = \frac{l \cdot x_c}{r} + c \qquad (5)$$

where $r$ represents the preset eyeball radius, about 15 mm, $l$ represents the horizontal distance between the driver and the vehicle glass, and $c$ represents a constant. After simplification, formula (6) is obtained:

$$x \approx \frac{l}{r}\, x_c \qquad (6)$$
Similarly, the longitudinal coordinate is determined by equation (7):

$$y \approx \frac{l}{r}\, y_c \qquad (7)$$
According to equations (6) and (7), the position $(x, y)$ of the driver's sight line on the vehicle glass surface is obtained from the pupil coordinate $(x_c, y_c)$, and the corresponding region on the vehicle glass is determined from this position.
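Putting equations (6) and (7) together with the region lookup, the full pupil-to-glass conversion might be sketched as follows, with $r$ and $l$ as defined above:

```python
def gaze_point_on_glass(x_c: float, y_c: float, l: float, r: float = 0.015):
    """Equations (6)-(7): project the pupil offset (x_c, y_c) onto the glass plane
    at horizontal distance l, under the small-angle approximation."""
    return (l / r) * x_c, (l / r) * y_c

# Example: pupil displaced 2 mm right and 1 mm up, driver 0.6 m from the glass.
x, y = gaze_point_on_glass(0.002, 0.001, l=0.6)
region = quadrant(x, y)  # region lookup from the earlier sketch
```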
In this embodiment, eyeball data collected by an eyeball camera are acquired; the transverse and longitudinal position information of the pupil relative to the eyeball center is determined from the relative position information of the pupil and the inner eye corner based on a preset driver eyeball model; distance information between the driver and the vehicle glass is acquired; the transverse coordinate of the driver's sight line on the vehicle glass is determined from the transverse position information, the preset eyeball radius and the distance information; the longitudinal coordinate is determined from the longitudinal position information, the preset eyeball radius and the distance information; the target area corresponding to the transverse and longitudinal coordinates is determined; and the auxiliary camera sensing area corresponding to the target area is determined according to the preset area mapping relation, so as to realize man-machine synchronous sensing. In this way, the driver's eyes are tracked through the eyeball camera and the preset driver eyeball model, the transverse and longitudinal position information of the pupil relative to the eyeball center is determined, the driver's sight line is mapped onto the vehicle glass based on the distance information between the driver and the vehicle glass, and the auxiliary camera is fused with the driver's eyes according to the preset area mapping relation, which improves the penetration of man-machine interaction and solves the problems in man-machine co-driving that human and machine are mutually independent, perform no perceptual interaction, and have low compatibility.
In addition, the embodiment of the invention also provides a storage medium, wherein the storage medium stores a driving-assisting sensing program, and the driving-assisting sensing program realizes the driving-assisting sensing method when being executed by a processor.
Because the storage medium adopts all the technical schemes of all the embodiments, the storage medium has at least all the beneficial effects brought by the technical schemes of the embodiments, and the description is omitted here.
Referring to fig. 6, fig. 6 is a block diagram illustrating a first embodiment of a driving assistance sensing device according to the present invention.
As shown in fig. 6, the driving assistance sensing device provided by the embodiment of the invention includes:
The acquisition module 10 is used for acquiring eyeball data acquired by the eyeball camera.
The determining module 20 is configured to determine corresponding gaze feature information based on a preset driver eye model according to the eye data.
The acquisition module 10 is further configured to acquire distance information between a driver and a glass of the vehicle.
And a sight line conversion module 30 for determining the position information of the driver's sight line on the vehicle glass according to the sight line characteristic information and the distance information.
The determining module 20 is further configured to determine a target area corresponding to the location information.
And the mapping module 40 is used for determining an auxiliary camera sensing area corresponding to the target area according to a preset area mapping relation so as to realize man-machine synchronous sensing.
It should be understood that the foregoing is illustrative only and not limiting; in specific applications, those skilled in the art may configure the device as needed, and the invention is not limited thereto.
In this embodiment, eyeball data collected by an eyeball camera are acquired; corresponding sight-line feature information is determined from the eyeball data based on a preset driver eyeball model; distance information between the driver and the vehicle glass is acquired; the position of the driver's sight line on the vehicle glass is determined according to the sight-line feature information and the distance information; the target area corresponding to that position is determined; and the auxiliary camera sensing area corresponding to the target area is determined according to the preset area mapping relation, so as to realize man-machine synchronous sensing. In this way, the driver's eyes are tracked, the auxiliary camera sensing area is determined from the sight-line feature information of the driver's eyes, and the auxiliary camera is fused with the driver's eyes, which improves the penetration of man-machine interaction and solves the problems in man-machine co-driving that human and machine are mutually independent, perform no perceptual interaction, and have low compatibility.
It should be noted that the above-described working procedure is merely illustrative, and does not limit the scope of the present invention, and in practical application, a person skilled in the art may select part or all of them according to actual needs to achieve the purpose of the embodiment, which is not limited herein.
In addition, technical details not described in detail in the present embodiment may refer to the sensing method for assisting driving provided in any embodiment of the present invention, which is not described herein.
In an embodiment, the driving assistance sensing device further includes: simplifying the module;
And the simplification module is used for simplifying the image data acquired by the auxiliary camera according to the auxiliary camera sensing area when the frequency of the driver watching the target area in the preset time period is detected to be larger than a preset threshold value.
In an embodiment, the driving assistance sensing device further includes: a compatible module;
The compatible module is used for determining an occluded area according to the current image data when the current image data acquired by the auxiliary camera is detected to be occluded; determining a current vehicle glass area corresponding to the shielded area according to the preset area mapping relation; prompting a driver to pay attention to the current area of the vehicle glass.
In an embodiment, the compatibility module is further configured to determine whether the current area of the vehicle glass is consistent with the target area; and prompting a driver to pay attention to the current area of the vehicle glass when the current area of the vehicle glass is inconsistent with the target area.
In an embodiment, the determining module 20 is further configured to determine, based on the preset driver eyeball model, lateral position information and longitudinal position information of the pupil relative to the center of the eyeball according to the relative position information of the pupil and the inner corner of the eye;
The sight line conversion module 30 is further configured to determine a lateral coordinate of a driver's sight line on the vehicle glass according to the lateral position information, a preset eyeball radius, and the distance information; and determining the longitudinal coordinates of the sight line of the driver on the vehicle glass according to the longitudinal position information, the preset eyeball radius and the distance information.
In an embodiment, the driving assistance sensing device further includes: constructing a module;
The construction module is used for determining a vehicle glass reference center corresponding to the eyeball center according to the eyeball camera installation position and the distance information; establishing a two-dimensional coordinate system by taking the vehicle glass reference center as a coordinate center;
the sight line conversion module 30 is further configured to determine a lateral coordinate of the driver's sight line on the vehicle glass according to the lateral position information, the preset eyeball radius, the distance information, and the two-dimensional coordinate system; and determining the longitudinal coordinates of the sight line of the driver on the vehicle glass according to the longitudinal position information, the preset eyeball radius, the distance information and the two-dimensional coordinate system.
In an embodiment, the driving assistance sensing device further includes: a coordinate conversion module;
The coordinate conversion module is used for converting the eyeball camera and the auxiliary camera into a world coordinate system; acquiring eyeball data based on the converted eyeball camera; and acquiring image data based on the converted auxiliary camera.
Furthermore, it should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware, but in many cases the former is the preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied, essentially or in the part contributing to the prior art, in the form of a software product stored in a storage medium (e.g., Read-Only Memory (ROM)/RAM, magnetic disk, optical disk) and including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.
Claims (9)
1. A driving assistance sensing method, characterized in that the driving assistance sensing method comprises:
Acquiring eyeball data acquired by an eyeball camera;
determining corresponding sight feature information based on a preset driver eyeball model according to the eyeball data;
Acquiring distance information between a driver and vehicle glass;
determining the position information of the sight line of the driver on the vehicle glass according to the sight line characteristic information and the distance information;
determining a target area corresponding to the position information;
Determining an auxiliary camera sensing area corresponding to the target area according to a preset area mapping relation so as to realize man-machine synchronous sensing;
after determining the auxiliary camera sensing area corresponding to the target area according to the preset area mapping relation, the method further comprises:
And when the frequency of the driver watching the target area in the preset time period is detected to be larger than a preset threshold value, simplifying the image data acquired by the auxiliary camera according to the auxiliary camera sensing area.
2. The driving assistance sensing method according to claim 1, wherein after determining the auxiliary camera sensing area corresponding to the target area according to the preset area mapping relationship, the method further comprises:
When the current image data acquired by the auxiliary camera is detected to be shielded, determining a shielded area according to the current image data;
determining a current vehicle glass area corresponding to the shielded area according to the preset area mapping relation;
Prompting a driver to pay attention to the current area of the vehicle glass.
3. The driving assistance sensing method according to claim 2, wherein after determining the current area of the vehicle glass corresponding to the blocked area according to the preset area mapping relationship, the method further comprises:
judging whether the current area of the vehicle glass is consistent with the target area;
and prompting a driver to pay attention to the current area of the vehicle glass when the current area of the vehicle glass is inconsistent with the target area.
4. The driving assistance sensing method according to claim 1, wherein said determining corresponding sight line characteristic information based on a preset driver eye model from said eye data comprises:
determining transverse position information and longitudinal position information of the pupil relative to the center of the eyeball based on a preset driver eyeball model according to the relative position information of the pupil and the inner corner of the eye;
The determining the position information of the driver's sight on the vehicle glass according to the sight feature information and the distance information includes:
Determining the transverse coordinates of the sight of the driver on the vehicle glass according to the transverse position information, the preset eyeball radius and the distance information;
and determining the longitudinal coordinates of the sight line of the driver on the vehicle glass according to the longitudinal position information, the preset eyeball radius and the distance information.
5. The driving assistance sensing method according to claim 4, wherein before determining the position information of the driver's sight line on the vehicle glass based on the sight line characteristic information and the distance information, the method further comprises:
Determining a vehicle glass reference center corresponding to the eyeball center according to the eyeball camera mounting position and the distance information;
establishing a two-dimensional coordinate system by taking the vehicle glass reference center as a coordinate center;
The determining the position information of the driver's sight on the vehicle glass according to the sight feature information and the distance information includes:
determining the transverse coordinates of the sight line of the driver on the vehicle glass according to the transverse position information, the preset eyeball radius, the distance information and the two-dimensional coordinate system;
and determining the longitudinal coordinates of the sight line of the driver on the vehicle glass according to the longitudinal position information, the preset eyeball radius, the distance information and the two-dimensional coordinate system.
6. The driving assistance sensing method as claimed in any one of claims 1-5, wherein prior to said acquiring eyeball data collected by said eyeball camera, said method further comprises:
converting the eyeball camera and the auxiliary camera into a world coordinate system;
acquiring eyeball data based on the converted eyeball camera;
And acquiring image data based on the converted auxiliary camera.
7. A driving assistance sensing device, characterized in that the driving assistance sensing device comprises:
the acquisition module is used for acquiring eyeball data acquired by the eyeball camera;
The determining module is used for determining corresponding sight feature information based on a preset driver eyeball model according to the eyeball data;
the acquisition module is also used for acquiring distance information between a driver and vehicle glass;
the sight line conversion module is used for determining the position information of the sight line of the driver on the vehicle glass according to the sight line characteristic information and the distance information;
The determining module is further used for determining a target area corresponding to the position information;
The mapping module is used for determining an auxiliary camera sensing area corresponding to the target area according to a preset area mapping relation so as to realize man-machine synchronous sensing;
and the mapping module is also used for simplifying the image data acquired by the auxiliary camera according to the auxiliary camera sensing area when the frequency of the driver watching the target area in the preset time period is detected to be larger than a preset threshold value.
8. A driving assistance sensing apparatus, the apparatus comprising: a memory, a processor, and a driving assistance sensing program stored on the memory and executable on the processor, the driving assistance sensing program configured to implement the driving assistance sensing method of any one of claims 1 to 6.
9. A storage medium having stored thereon a driving assistance sensing program which when executed by a processor implements the driving assistance sensing method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111034889.1A CN113837027B (en) | 2021-09-03 | 2021-09-03 | Driving assistance sensing method, device, equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111034889.1A CN113837027B (en) | 2021-09-03 | 2021-09-03 | Driving assistance sensing method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113837027A CN113837027A (en) | 2021-12-24 |
CN113837027B (en) | 2024-06-25 |
Family
ID=78962284
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111034889.1A Active CN113837027B (en) | 2021-09-03 | 2021-09-03 | Driving assistance sensing method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113837027B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115468580B (en) * | 2022-09-22 | 2024-12-13 | 星河智联汽车科技有限公司 | Vehicle navigation route indication method, device and system |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008280026A (en) * | 2007-04-11 | 2008-11-20 | Denso Corp | Driving assistance device |
JP2010033106A (en) * | 2008-07-24 | 2010-02-12 | Fujitsu Ten Ltd | Driver support device, driver support method, and driver support processing program |
JP2016170688A (en) * | 2015-03-13 | 2016-09-23 | 株式会社東海理化電機製作所 | Driving support device and driving support system |
EP3093194B1 (en) * | 2015-04-24 | 2021-09-22 | Ricoh Company, Ltd. | Information provision device |
FR3053293B1 (en) * | 2016-06-29 | 2019-06-07 | Alstom Transport Technologies | DRIVER ASSISTANCE SYSTEM FOR A VEHICLE, RAILWAY VEHICLE AND METHOD OF USE THEREOF |
JP2018185654A (en) * | 2017-04-26 | 2018-11-22 | 日本精機株式会社 | Head-up display device |
CN110826369A (en) * | 2018-08-10 | 2020-02-21 | 北京魔门塔科技有限公司 | Driver attention detection method and system during driving |
JP7163732B2 (en) * | 2018-11-13 | 2022-11-01 | トヨタ自動車株式会社 | Driving support device, driving support system, driving support method and program |
JP7342636B2 (en) * | 2019-11-11 | 2023-09-12 | マツダ株式会社 | Vehicle control device and driver condition determination method |
CN110962746B (en) * | 2019-12-12 | 2022-07-19 | 上海擎感智能科技有限公司 | Driving assisting method, system and medium based on sight line detection |
CN111580522A (en) * | 2020-05-15 | 2020-08-25 | 东风柳州汽车有限公司 | Control method for unmanned vehicle, and storage medium |
CN112380935B (en) * | 2020-11-03 | 2023-05-26 | 深圳技术大学 | Man-machine collaborative sensing method and system for automatic driving |
CN112758099B (en) * | 2020-12-31 | 2022-08-09 | 福瑞泰克智能系统有限公司 | Driving assistance method and device, computer equipment and readable storage medium |
2021-09-03: Application CN202111034889.1A filed in China; granted as CN113837027B (en), legal status Active.
Also Published As
Publication number | Publication date |
---|---|
CN113837027A (en) | 2021-12-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112965504B (en) | Remote confirmation method, device and equipment based on automatic driving and storage medium | |
KR101534742B1 (en) | System and method for gesture recognition of vehicle | |
KR20130112550A (en) | Apparatus for setting parking position based on around view image and method thereof | |
EP4137914B1 (en) | Air gesture-based control method and apparatus, and system | |
CN112298039A (en) | A-column imaging method | |
CN103625477A (en) | Method and system for operating vehicle | |
EP2293588A1 (en) | Method for using a stereovision camera arrangement | |
CN114290998B (en) | Skylight display control device, method and equipment | |
JP2018181338A (en) | Method for operating a self-travelling vehicle | |
CN114387587A (en) | A fatigue driving monitoring method | |
CN113837027B (en) | Driving assistance sensing method, device, equipment and storage medium | |
CN108376384B (en) | Method and device for correcting disparity map and storage medium | |
CN115649094A (en) | Intelligent control method for driving position, domain controller and related device | |
KR20190040797A (en) | Appartus and method for tracking eye of user | |
CN118545081A (en) | Lane departure warning method and system | |
CN112339771A (en) | Parking process display method and device and vehicle | |
CN113011212B (en) | Image recognition method and device and vehicle | |
CN115493614A (en) | Method and device for displaying flight path line, storage medium and electronic equipment | |
CN116802595A (en) | Method and apparatus for pose determination in data glasses | |
JP2009184638A (en) | Virtual image display device | |
Wang et al. | Towards wide range tracking of head scanning movement in driving | |
CN114715175B (en) | Method, device, electronic device and storage medium for determining target object | |
CN118537761B (en) | Obstacle detection method, device, equipment and readable storage medium | |
CN109050404A (en) | The big visual angle image supervisory control of automobile with mechanical arm and automatic pedestrian system | |
CN116597425B (en) | Method and device for determining sample tag data of driver and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||