CN115130045A - Method and device for calculating blind area of driver's visual field, vehicle and medium - Google Patents

Method and device for calculating blind area of driver's visual field, vehicle and medium

Info

Publication number
CN115130045A
Authority
CN
China
Prior art keywords
driver
vehicle
blind area
starting point
image
Prior art date
Legal status
Pending
Application number
CN202210774816.4A
Other languages
Chinese (zh)
Inventor
张振华
Current Assignee
Guangzhou Xiaopeng New Energy Vehicle Co Ltd
Original Assignee
Guangzhou Xiaopeng New Energy Vehicle Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Xiaopeng New Energy Vehicle Co Ltd filed Critical Guangzhou Xiaopeng New Energy Vehicle Co Ltd
Priority to CN202210774816.4A
Publication of CN115130045A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 1/00 Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R 1/20 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R 1/29 Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area inside the vehicle, e.g. for viewing passengers or cargo
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/59 Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V 20/597 Recognising the driver's state or behaviour, e.g. attention or drowsiness

Abstract

The present specification provides a method, an apparatus, a vehicle, and a medium for calculating a driver's visual field blind area. The method includes: acquiring a driver image captured by a camera module facing the driver inside the vehicle, identifying an eye-related block and an in-vehicle reference object block containing preset feature points, and determining the driver sight starting point set corresponding to the eye-related block; after obtaining the actual coordinates of the feature points in a vehicle custom coordinate system, calculating a second relative positional relationship between the feature points and the driver sight starting point set; calculating a coordinate set of the driver sight starting point set according to the actual coordinates of the feature points and the second relative positional relationship; and calculating the driver's visual field blind area according to the coordinate set and the vehicle contour parameters corresponding to the driver sight starting point set. With this technical solution, the accuracy of locating the driver's sight starting point and of determining the visual field blind area in front of the vehicle can be improved.

Description

Method and device for calculating blind area of driver's visual field, vehicle and medium
Technical Field
The specification relates to the technical field of automatic driving, in particular to a method and a device for calculating a blind area of a driver's visual field, a vehicle and a medium.
Background
At present, the automobile has become an indispensable means of daily transportation, and vehicle driving safety is a matter of wide concern. Establishing and adjusting the driver's forward field of vision during driving is critical to driving safety; when a driver misjudges the forward visual field blind area, safety accidents are easily caused.
Conventionally, the forward field of vision common to most drivers is estimated from the driving fields of vision of a large number of previous drivers, and the estimated forward visual field blind area is fed back to the driver, thereby assisting the driver in adjusting the forward field of vision.
However, because each driver differs in build, driving posture and driving habits, the sight starting point of each driver during driving is different, so the forward field of vision and the visual field blind area of each driver are also different. Moreover, when a driver changes driving posture because of fatigue or other reasons, the forward field of vision and the blind area change as well. If only the conventional method of determining the forward field of vision and the blind area is used, the driving field of vision suited to the current driver cannot be accurately inferred, so the accuracy of the driver's adjustment of the forward field of vision and of the blind area is low, creating potential safety hazards.
Disclosure of Invention
In view of this, the present specification provides a method and an apparatus for calculating a blind field of view of a driver.
Specifically, the specification is realized through the following technical scheme:
in a first aspect, the present specification provides a method of calculating a driver's blind field of view, the method comprising:
acquiring a driver image collected by a camera module facing a driver in a vehicle;
determining an eye-related block and a preset in-vehicle reference object block in the driver image based on the driver image and a preset recognition algorithm, wherein the in-vehicle reference object block comprises preset feature points;
determining a driver sight line starting point set corresponding to the eye part association block based on a preset corresponding relation;
acquiring the actual coordinates of the feature points in a vehicle custom coordinate system;
calculating a first relative position relation between a driver sight starting point set in the driver image and the feature points, and mapping the first relative position relation into a second relative position relation in the vehicle custom coordinate system;
calculating a coordinate set of the driver sight starting point set according to the actual coordinates of the feature points and the second relative position relation;
and calculating the view blind area of the driver according to the coordinate set and the vehicle contour parameters corresponding to the driver sight starting point set.
Optionally, the obtaining the actual coordinates of the feature points in the vehicle-defined coordinate system includes:
acquiring displacement information of the characteristic points detected by a reference object sensor;
and determining the actual coordinates of the characteristic points according to the initial coordinates of the characteristic points and the displacement information.
Optionally, the determining the preset in-vehicle reference object block further includes:
and determining an unobstructed in-vehicle reference object block from the plurality of identified preset in-vehicle reference object blocks according to a preset selection rule.
Optionally, the method further comprises:
and under the condition that the eye associated block of the available driver is determined not to be included in the driver image, guiding the driver to adjust the posture, and sending a collecting instruction to the camera module so as to instruct the camera module to collect the driver image again.
Optionally, the in-vehicle reference block comprises one or more of a driving seat, a vehicle steering wheel, a vehicle B-pillar.
Optionally, the method further comprises:
storing the vision blind area corresponding to the driver with known identity according to a preset algorithm;
and aiming at the driver with known identity, when the view blind area cannot be calculated in real time, acquiring the saved view blind area.
Optionally, calculating a blind field of view of the driver according to the coordinate set and vehicle contour parameters corresponding to the set of start points of driver's sight line includes:
calculating corresponding sub-field blind areas according to each coordinate in the coordinate set and corresponding vehicle contour parameters;
wherein the view blind zones are a collection of sub-view blind zones.
Optionally, the vehicle contour parameter is a set of coordinates of a number of points on the vehicle front hatch that are tangent to the line of sight.
Optionally, the method further comprises:
outputting a corresponding visual field blind area image prompt on a display screen in the vehicle according to the visual field blind area;
the visual field blind area image prompt is a visual field blind area image intercepted from an image collected by a vehicle forward camera based on the visual field blind area, or a visual field blind area schematic image constructed based on a preset rule.
In a second aspect, the present specification also provides a device for calculating a blind area of a driver's field of view, the device comprising:
the image acquisition module is used for acquiring a driver image acquired by the camera module facing the driver in the vehicle;
the position relation determining module is used for determining an eye related block and a preset in-vehicle reference object block in the driver image based on the driver image and a preset recognition algorithm, wherein the in-vehicle reference object block comprises preset feature points;
the sight line starting point determining module is used for determining a driver sight line starting point set corresponding to the eye part associated block based on a preset corresponding relation;
the characteristic point coordinate determination module is used for acquiring the actual coordinates of the characteristic points in the vehicle custom coordinate system;
the reference object position determining module is used for calculating a first relative position relation between a driver sight starting point set in the driver image and the feature points and mapping the first relative position relation into a second relative position relation in the vehicle custom coordinate system;
the sight line starting point positioning module is used for calculating a coordinate set of the driver sight line starting point set according to the actual coordinates of the feature points and the second relative position relation;
and the view blind area determining module is used for calculating the view blind area of the driver according to the coordinate set and the vehicle contour parameters corresponding to the driver sight starting point set.
By adopting the technical scheme, the position of the current driver sight starting point in the current vehicle is determined, and the vision blind area of the current driver in front of the current vehicle is determined according to the position, so that the positioning of the driver sight starting point and the accuracy of the determination of the vision blind area in front of the vehicle are improved.
Drawings
Fig. 1 is a flowchart of a method for calculating a driver's visual field blind area according to an exemplary embodiment of the present specification.
Fig. 2 is a diagram of the second relative positional relationship in a method for calculating a driver's visual field blind area according to an exemplary embodiment of the present specification.
Fig. 3 is a schematic diagram of determining the coordinate set of the driver sight starting point set in a method for calculating a driver's visual field blind area according to an exemplary embodiment of the present specification.
Fig. 4 is a schematic diagram of a method for calculating a driver's visual field blind area with the driving seat as the in-vehicle reference object, according to an exemplary embodiment of the present specification.
Fig. 5 is another schematic diagram of determining the coordinate set of the driver sight starting point set in a method for calculating a driver's visual field blind area according to an exemplary embodiment of the present specification.
Fig. 6 is a schematic diagram of a method for calculating a driver's visual field blind area according to an exemplary embodiment of the present specification.
Fig. 7 is a hardware configuration diagram of an electronic device in a vehicle according to an exemplary embodiment of the present specification.
Fig. 8 is a block diagram of a device for calculating a driver's visual field blind area according to an exemplary embodiment of the present specification.
Detailed Description
Establishing and adjusting the driver's field of vision is essential for driving safety. During driving, the driver needs to adjust his or her field of vision and accurately grasp the distance between the vehicle and surrounding objects, so that those objects can be avoided in time and accidents prevented. To assist drivers in establishing and adjusting the field of vision, a common approach is to collect a large number of driving-field-of-vision samples, calculate and estimate the forward field of vision and the visual field blind area shared by most drivers, configure the vehicle's prompt output according to this generalized blind-area data, and feed the data back to the driver to assist safe driving.
However, each driver differs in build, driving habits and driving posture, so each driver's forward field of vision is different. In addition, changes in driving conditions, such as switching to a vehicle with different parameters, also increase the difficulty of establishing the driver's forward field of vision. A generalized forward field of vision and blind area obtained only by inference cannot fit every driver, and a mismatched forward field of vision and blind area make the driver's field-of-vision establishment take too long and reduce its accuracy.
In view of the above, the present specification provides a method for calculating a blind field of view of a driver.
Referring to fig. 1, fig. 1 is a flowchart of a method for calculating a driver's visual field blind area according to an exemplary embodiment of the present specification. The method includes the following steps:
Step 102, acquiring a driver image collected by a camera module facing a driver in a vehicle;
Step 104, determining an eye associated block and a preset in-vehicle reference object block in the driver image based on the driver image and a preset recognition algorithm, wherein the in-vehicle reference object block comprises preset characteristic points;
step 106, determining a driver sight starting point set corresponding to the eye associated block based on a preset corresponding relation;
step 108, acquiring actual coordinates of the feature points in a vehicle custom coordinate system;
step 110, calculating a first relative position relationship between the driver sight line starting point set in the driver image and the feature points, and mapping the first relative position relationship to a second relative position relationship in the vehicle custom coordinate system;
step 112, calculating a coordinate set of the driver sight starting point set according to the actual coordinates of the feature points and the second relative position relation;
and step 114, calculating the view blind area of the driver according to the coordinate set and the vehicle contour parameters corresponding to the driver sight starting point set.
In summary, based on the processing of steps 102 to 114: after this series of processing the system knows the position of the driver sight starting point set, and it also knows the positions of the points on the vehicle front hatch that are tangent to the driver's line of sight (please refer to fig. 6). Based on the vehicle custom coordinate system and plane geometry, the system can then obtain the current driver's visual field blind area with relatively good accuracy through several calculations; the blind area is essentially an area represented by a series of coordinates in that coordinate system. The processing and calculation are described in detail below.
In this specification, the system obtains the driver sight starting point set through image processing, which requires a camera module facing the driver inside the vehicle. In a preferred embodiment this can be a camera with a positioning function, such as a TOF (Time of Flight) camera, which can be controlled by the cockpit controller. This specification does not limit the type of camera used: at the hardware level of current mainstream camera modules, the cameras chosen by different vehicles in the overall vehicle system design can usually meet the basic capability requirements. The basic image data output by different camera bodies differs, for example some output only two-dimensional data while others output three-dimensional data, but this only affects the developer's choice of algorithm and the final precision; it does not affect the implementation of the technical solution of the present invention.
Under normal conditions, the driver image acquired by the camera module contains a driver eye-related block and an in-vehicle reference object block. Once the system has identified the eye-related block and the in-vehicle reference object block, the positions of the driver sight starting point set and of the feature points in the driver image are determined, and the first relative positional relationship between the driver sight starting point set and the feature points can be obtained through analysis and calculation. Here it is first necessary to explain what the eye-related block and the in-vehicle reference object block are and what their processing essentially means.
A block here refers to an image region that has certain characteristics and can be identified by an algorithm; for example, a block may be the eyeball region in an image. In a preferred embodiment, the driver sight starting point set may be the set of the center points of the two eyeballs. In some scenarios, considering practical issues such as hardware capability and computational overhead, the driver sight starting point set may contain only one sight starting point, for example one eyeball center point, or the midpoint of the line connecting the two eyeball center points. However the sight starting point is chosen, it is clear from the above that there is a correspondence between the sight starting point and the eye-related block.
Further, considering the uncertainty of driver behavior, for example a driver wearing sunglasses so that the recognition algorithm cannot locate the eyes in the image, the eyebrows or the nose may be used as the eye-related block. Likewise, when the eyebrows or the nose are selected as the eye-related block, there is a clear correspondence between that block and the various driver sight starting point sets, and a relatively accurate driver sight starting point set A can be obtained through simple analysis and calculation.
In theory the system can know the actual coordinates, in the custom coordinate system, of any feature point in the vehicle, but not every feature point appears in the driver image, so the selected feature point must lie in the area the camera can capture. Preferably, the preset feature point is chosen at a position relatively close to the driver, and the image features of the block containing it should be easy to recognize; for example, the image of the shoulder area of the driving seat may be selected as the in-vehicle reference object block, with the two vertices of the shoulder area used as feature points.
However, in practical applications, since the driver's build and driving posture cannot be determined in advance, the preset in-vehicle reference object block may be blocked by the driver. One or more in-vehicle reference objects can therefore be selected, which solves the problem that the system cannot identify the preset feature points when one in-vehicle reference object block in the acquired driver image is occluded. Further, to improve the robustness of the system and adapt to more application scenarios, the preset in-vehicle reference object may be a relatively well-defined object elsewhere in the vehicle, such as the head area of the driving seat.
The advantage of selecting an area of the driving seat as the in-vehicle reference object is that it is close to the driver, but it may also be occluded, so other in-vehicle objects may be selected as in-vehicle reference objects instead, such as the vehicle steering wheel or the vehicle B-pillar. When one in-vehicle reference object is blocked by the driver, another in-vehicle object can take its place as the in-vehicle reference object so that the subsequent positioning step for the in-vehicle reference object block can proceed.
In theory, the actual coordinates of the feature points on a single in-vehicle reference object are sufficient. If practical conditions such as computing resources allow, several in-vehicle reference objects can be selected in actual development, which prevents the problem of some in-vehicle reference object blocks not being identifiable in the driver image. If multiple in-vehicle reference object blocks are identified, an unoccluded one is determined from the identified preset in-vehicle reference object blocks according to a preset selection rule; this rule may be a simple precedence rule such as shoulder area first or head area first.
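As a concrete illustration of such a precedence rule, the following minimal sketch (in Python) selects the first unoccluded reference block from a priority list; the block names, the "occluded" flag and the dictionary layout are illustrative assumptions, not structures defined by this specification.

    # Minimal sketch of a "shoulder first, then head, then steering wheel, then
    # B-pillar" precedence rule. The block names and the dictionary layout are
    # illustrative assumptions only.

    REFERENCE_PRIORITY = ["seat_shoulder", "seat_head", "steering_wheel", "b_pillar"]

    def pick_reference_block(recognized):
        """recognized maps block name -> dict with an 'occluded' flag and feature data."""
        for name in REFERENCE_PRIORITY:
            block = recognized.get(name)
            if block is not None and not block["occluded"]:
                return name, block
        return None, None  # no usable reference block in this frame

    # Example: the shoulder area is blocked by the driver, so the head area is used.
    recognized = {
        "seat_shoulder": {"occluded": True, "feature_px": (412, 318)},
        "seat_head": {"occluded": False, "feature_px": (430, 122)},
    }
    print(pick_reference_block(recognized)[0])  # -> seat_head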
At this point the system has only determined the first relative positional relationship between the driver sight starting point set and the feature point B in the driver image (please refer to fig. 2). What is needed next is the second relative positional relationship between the driver sight starting point set and the vehicle in the vehicle custom coordinate system, that is, their actual positional relationship. The system therefore performs a mapping, mainly a mapping in scale: the first relative positional relationship between each sight starting point (such as A) in the driver sight starting point set and the feature point B in the image is mapped to the second relative positional relationship in the vehicle custom coordinate system, that is, the second relative positional relationship of the driver sight starting point A with respect to B is (X_B1, Y_B1, Z_B1).
In this specification, after the second relative positional relationship and the actual coordinates of the feature points are determined, the coordinate set of the driver sight starting points can be calculated from these two pieces of data, and the driver's visual field blind area can then be calculated from the coordinate set and the vehicle contour parameters, where the vehicle contour parameters may be the coordinates of several points on the front hatch that are tangent to the line of sight. Referring to fig. 6, the system can calculate a number of corresponding points (such as E) on the boundary of the visual field blind area; connecting these points gives a curve, and the blind area can be delimited by this curve in the vehicle custom coordinate system: the region on the vehicle side of the curve is the visual field blind area, and the region on the far side is the driver's field of vision. As mentioned above, the driver sight starting point set may contain several points; the sub visual field blind areas obtained from each calculation, for example the two sub blind areas corresponding to the center points of the left and right eyeballs, are fused, and the fused set is the driver's visual field blind area.
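As a simplified numeric illustration of the plane geometry just described, the sketch below extends the sight ray from a starting point A through a hood tangent point E down to the ground plane to obtain one boundary point of the blind area, and repeats this for several tangent points. The specific coordinates, the ground-plane-at-z=0 convention and the union-based fusion comment are illustrative assumptions rather than values or rules taken from this specification.

    # Simplified sketch: project the sight ray A -> E (a point on the front hatch
    # tangent to the line of sight) onto the ground plane z = 0 to obtain one
    # boundary point of the visual field blind area. All coordinates are made-up
    # values in a coordinate system whose z axis points up from ground level.

    def ground_hit(A, E):
        """Return where the ray from A through E reaches the ground plane z = 0."""
        ax, ay, az = A
        ex, ey, ez = E
        if az <= ez:
            raise ValueError("the sight starting point must lie above the tangent point")
        t = az / (az - ez)  # ray parameter at which z reaches 0
        return (ax + t * (ex - ax), ay + t * (ey - ay), 0.0)

    A = (1.60, 0.0, 1.25)  # one sight starting point (eye), about 1.25 m above ground
    hood_tangent_points = [  # vehicle contour parameters: tangent points on the hood
        (3.10, -0.70, 0.95),
        (3.30, 0.00, 0.98),
        (3.10, 0.70, 0.95),
    ]

    boundary = [ground_hit(A, E) for E in hood_tangent_points]
    for point in boundary:
        print(tuple(round(c, 2) for c in point))

    # The blind area is the ground region between the vehicle front and this
    # boundary curve; the sub blind areas obtained for each sight starting point
    # (e.g. left and right eye) are then fused, for example by taking their union.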
Regarding the main calculation processes in the above processing flows, a more detailed embodiment is provided below with reference to the accompanying drawings.
Referring to fig. 2 and fig. 3, when constructing the vehicle custom coordinate system, a fixed component of the vehicle structure, such as the vehicle front cross beam, may be selected as the vehicle origin C; which component is used can be set according to user requirements. Taking the position of the vehicle origin C as reference, and because the various dimensional parameters inside the vehicle are known, the actual coordinates of the feature point B in the vehicle custom coordinate system can be obtained directly as (X3, Y3, Z3). From the actual coordinates of the feature point B and the second relative positional relationship (X_B1, Y_B1, Z_B1), the coordinates of the driver sight starting point A are calculated as (X6, Y6, Z6); the coordinates of each sight starting point in the driver sight starting point set are calculated in the same way, thereby obtaining the coordinate set of the driver sight starting point set.
The X-axis direction position X6 of the driver sight starting point A in the vehicle custom coordinate system is determined by the following formula (1):
X6 = X_B1 + X3    formula (1)
The Y-axis direction position Y6 of the driver sight starting point A in the vehicle custom coordinate system is determined by the following formula (2):
Y6 = Y_B1 + Y3    formula (2)
The Z-axis direction position Z6 of the driver sight starting point A in the vehicle custom coordinate system is determined by the following formula (3):
Z6 = Z_B1 + Z3    formula (3)
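To make the scale mapping and formulas (1)-(3) concrete, the sketch below estimates a metres-per-pixel factor from a reference dimension whose real size is assumed known from the vehicle structure data (for example the width of the seat shoulder area), converts the pixel offset between A and B into the second relative positional relationship, and applies formulas (1)-(3). The numeric values, the single uniform scale factor and the axis assignment are illustrative assumptions only; a TOF camera, for instance, could supply depth differences directly.

    # Illustrative sketch: the pixel offset between sight starting point A and
    # feature point B (first relative relationship) is scaled into metres
    # (second relative relationship) and added to the actual coordinates of B,
    # as in formulas (1)-(3). All numbers and the axis assignment are made up.

    def pixel_offset_to_metric(offset_px, ref_len_px, ref_len_m):
        """Scale a pixel offset using a reference length visible in the image."""
        metres_per_pixel = ref_len_m / ref_len_px
        return tuple(d * metres_per_pixel for d in offset_px)

    A_px = (352.0, 201.0)  # sight starting point A in the driver image (pixels)
    B_px = (298.0, 344.0)  # feature point B, e.g. a seat shoulder vertex (pixels)
    offset_px = (A_px[0] - B_px[0], B_px[1] - A_px[1])  # image y axis points down

    # Assumed reference dimension: the shoulder area spans 180 px and 0.45 m.
    dy_m, dz_m = pixel_offset_to_metric(offset_px, ref_len_px=180.0, ref_len_m=0.45)

    # Second relative relationship of A with respect to B; the depth component
    # X_B1 is assumed here (a TOF camera could measure it directly).
    XB1, YB1, ZB1 = 0.12, dy_m, dz_m

    # Actual coordinates of B in the vehicle custom coordinate system (assumed).
    X3, Y3, Z3 = 1.95, 0.40, 0.85

    # Formulas (1)-(3): coordinates of the driver sight starting point A.
    X6, Y6, Z6 = XB1 + X3, YB1 + Y3, ZB1 + Z3
    print(round(X6, 3), round(Y6, 3), round(Z6, 3))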
It should be noted that in practical applications the driving seat and the steering wheel of the vehicle are both adjustable in multiple directions, but this does not prevent the system from obtaining the actual coordinates of the feature points. Taking the driving seat as an example, the seat usually has an initial position (also referred to as the "origin position"), and each feature point then has corresponding initial coordinates. Once the driver adjusts the driving seat, the system can calculate the current actual coordinates of each feature point simply from the displacement parameters produced by the adjustment and the initial coordinates of the initial position.
With respect to the main calculation processes in the above processing flows, another exemplary embodiment is provided below with reference to the drawings.
Please refer to fig. 4 and fig. 5. In practical applications, the in-vehicle reference object block may be an object that moves with the driving posture of the driver, for example the shoulder area of the driving seat, with the upper-left vertex of the shoulder area taken as the feature point B; the movement may be a horizontal translation of the driving seat or a front-back rotation of the seat back. A fixed component of the driving seat, namely the seat framework (which can likewise be set according to user requirements), is selected as the seat origin D to establish a seat coordinate system, and the displacement parameters produced by the feature point B relative to the seat origin D are determined by position calculation in the seat coordinate system as (X_D2, Y_D2, Z_D2).
The horizontal movement distance of the driving seat is Xs, which can be obtained from the seat sensor, and the front-back rotation angle of the seat back is θ, which can be obtained from the lumbar support sensor of the vehicle. The X_D-axis direction position parameter X_D2 of the feature point B in the seat coordinate system is determined by the following formula (4):
X_D2 = Xs - Zs·cosθ    formula (4)
The Y_D-axis direction position parameter Y_D2 of the feature point B in the seat coordinate system is determined according to the cushion width of the seat; Y_D2 can be obtained from the structural data of the vehicle.
The distance between the feature point B and the seat origin D is Zs, which can be obtained from vehicle structure data preset in the system. The Z_D-axis direction position parameter Z_D2 of the feature point B in the seat coordinate system is determined by the following formula (5):
Z_D2 = Zs·sinθ    formula (5)
In theory, the dimensional parameters between the seat origin D and the vehicle origin C are known to the system, so based on these dimensional parameters the displacement parameters (X_D2, Y_D2, Z_D2) of the feature point B can be mapped into the vehicle custom coordinate system, and the adjusted actual coordinates of the feature point B in the vehicle custom coordinate system are determined as (X4, Y4, Z4). The coordinates (X5, Y5, Z5) of the driver sight starting point A are then calculated from the second relative positional relationship and the adjusted actual coordinates (X4, Y4, Z4) of the feature point B. The coordinates of each sight starting point in the driver sight starting point set are calculated in the same way, thereby obtaining the coordinate set of the driver sight starting point set.
The X-axis direction position X5 of the driver sight starting point A in the vehicle custom coordinate system is determined by the following formula (6):
X5 = X_B1 + X4    formula (6)
The Y-axis direction position Y5 of the driver sight starting point A in the vehicle custom coordinate system is determined by the following formula (7):
Y5 = Y_B1 + Y4    formula (7)
The Z-axis direction position Z5 of the driver sight starting point A in the vehicle custom coordinate system is determined by the following formula (8):
Z5 = Z_B1 + Z4    formula (8)
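Putting formulas (4) to (8) together, a minimal numeric sketch follows. The seat geometry values, the offset between the seat origin D and the vehicle origin C, and the reading of formula (4) as X_D2 = Xs - Zs·cosθ (the sign is hard to recover from the published text) are illustrative assumptions.

    import math

    # Illustrative sketch of formulas (4)-(8). All numeric values are made up,
    # and the sign convention in formula (4) is an assumption.

    def sight_start_from_seat(Xs, theta_deg, Zs, YD2, D_in_vehicle, second_rel):
        theta = math.radians(theta_deg)

        # Displacement of feature point B in the seat coordinate system.
        XD2 = Xs - Zs * math.cos(theta)  # formula (4), sign assumed
        ZD2 = Zs * math.sin(theta)       # formula (5)

        # Map (XD2, YD2, ZD2) into the vehicle custom coordinate system using the
        # known dimensional offset of the seat origin D from the vehicle origin C,
        # giving the adjusted actual coordinates (X4, Y4, Z4) of feature point B.
        X4 = D_in_vehicle[0] + XD2
        Y4 = D_in_vehicle[1] + YD2
        Z4 = D_in_vehicle[2] + ZD2

        # Formulas (6)-(8): coordinates (X5, Y5, Z5) of the sight starting point A.
        XB1, YB1, ZB1 = second_rel
        return (XB1 + X4, YB1 + Y4, ZB1 + Z4)

    A = sight_start_from_seat(
        Xs=0.08,          # horizontal seat travel from the seat sensor (m)
        theta_deg=70.0,   # seat-back angle from the lumbar support sensor (degrees)
        Zs=0.55,          # distance from seat origin D to feature point B (m)
        YD2=0.24,         # lateral position from the cushion width (m)
        D_in_vehicle=(1.40, 0.35, 0.30),  # seat origin D in the vehicle system (m)
        second_rel=(0.10, 0.05, 0.42),    # (X_B1, Y_B1, Z_B1) from the image mapping
    )
    print(tuple(round(c, 3) for c in A))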
Generally speaking, if the vehicle has a forward-facing camera whose coverage includes the visual field blind area, the image from that camera can be retrieved at this point, the portion corresponding to the blind area cropped out, and the result output to the display screen. The output image may be a planar image or a three-dimensional image, and it may be a side view of the vehicle, a top view of the vehicle, or a combined view with multi-view switching. If the vehicle has no external camera, a schematic image of the visual field blind area can be displayed based on a simulation model of the vehicle.
In practical applications, rare situations may still occur, such as poor lighting or a problem with the driver's posture, that prevent the system from calculating a sufficiently accurate visual field blind area in real time from the eye-related block or the in-vehicle reference object block in the driver image, and such situations may persist for some time. When this happens, it may be caused by a problem with the driver's posture, such as a closed-eye image or a head-down image of the driver. Therefore, in this specification, when it is detected that the driver image acquired by the camera module facing the driver does not contain a usable driver eye-related block, the in-vehicle audio system may be used to guide the driver to adjust the driving posture, for example reminding the driver to look ahead or to hold the steering wheel and maintain the driving posture, until the system can recognize the eye-related block.
In addition, if the in-vehicle reference object block cannot be identified, the driver's visual field blind area cannot be calculated either, and a remedial optimization can be applied to this situation. For a driver whose identity is known, for example identified through a mobile phone, a key or face recognition, the driver's typical visual field blind area can be saved whenever the system is able to calculate it. When that driver later gets into the vehicle and one of the extreme situations above occurs, the saved blind area can be retrieved and used, while the driver is prompted to keep a normal sitting posture as far as possible. The typical blind area may be determined by various algorithms, for example by averaging several frequently occurring blind areas, or by other processing methods; the requirements here are not strict, since the main purpose is to handle the rare cases in which the blind area cannot be calculated in real time.
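A minimal sketch of the identity-based fallback just described is given below; storing the blind-area boundary as a list of points and averaging corresponding points across recent samples is only one possible way to obtain a "typical" blind area, and every name here is an illustrative assumption.

    from collections import defaultdict

    # Illustrative per-driver store: keep recent blind-area boundaries for known
    # drivers and serve an averaged "typical" one when real-time calculation fails.

    class BlindAreaStore:
        def __init__(self, max_samples=20):
            self.samples = defaultdict(list)  # driver_id -> list of boundary point lists
            self.max_samples = max_samples

        def record(self, driver_id, boundary_points):
            history = self.samples[driver_id]
            history.append(boundary_points)
            if len(history) > self.max_samples:
                history.pop(0)

        def typical(self, driver_id):
            history = self.samples.get(driver_id)
            if not history:
                return None
            n_points = min(len(b) for b in history)
            # Average corresponding boundary points across the stored samples.
            return [
                tuple(sum(b[i][k] for b in history) / len(history) for k in range(3))
                for i in range(n_points)
            ]

    store = BlindAreaStore()
    store.record("driver_01", [(7.8, -2.1, 0.0), (8.4, 0.0, 0.0), (7.8, 2.1, 0.0)])
    store.record("driver_01", [(7.6, -2.0, 0.0), (8.2, 0.0, 0.0), (7.6, 2.0, 0.0)])
    print(store.typical("driver_01"))  # used when the blind area cannot be computed live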
Corresponding to the above embodiment of the method for calculating the blind field of vision of the driver, the present specification also provides an embodiment of a calculation device for the blind field of vision of the driver.
The embodiment of the device for calculating the driver's visual field blind area can be applied to electronic equipment in a vehicle. The device embodiment may be implemented by software, by hardware, or by a combination of hardware and software. Taking a software implementation as an example, as a logical device, the device is formed by the processor of the electronic equipment in which it is located reading the corresponding computer program instructions from the non-volatile memory into the memory and running them. In terms of hardware, fig. 7 shows a hardware structure diagram of the in-vehicle electronic equipment in which the device for calculating the driver's visual field blind area of this specification is located. Besides the processor, memory, network interface and non-volatile memory shown in fig. 7, the electronic equipment in which the device of this embodiment is located may also include other hardware according to its actual functions, which is not described again here.
Fig. 8 is a block diagram of a device for calculating a driver's visual field blind area according to an exemplary embodiment of the present specification.
Referring to fig. 8, the device for calculating the driver's visual field blind area may be applied to the electronic equipment shown in fig. 7, and includes:
the image acquisition module 302 is used for acquiring a driver image acquired by a camera module facing a driver in the vehicle;
a position relation determining module 304, configured to determine an eye-related block and a preset in-vehicle reference block in the driver image based on the driver image and a preset recognition algorithm, where the in-vehicle reference block includes a preset feature point;
a sight starting point determining module 306, configured to determine, based on a preset corresponding relationship, a driver sight starting point set corresponding to the eye-related block;
the feature point coordinate determination module 308 is configured to obtain an actual coordinate of the feature point in the vehicle custom coordinate system;
a reference object position determining module 310, configured to calculate a first relative position relationship between the driver sight starting point set in the driver image and the feature point, and map the first relative position relationship to a second relative position relationship in the vehicle-defined coordinate system;
a sight line starting point positioning module 312, configured to calculate a coordinate set of the driver sight line starting point set according to the actual coordinates of the feature points and the second relative position relationship;
and the view blind area determining module 314 is configured to calculate a view blind area of the driver according to the coordinate set and the vehicle contour parameter corresponding to the driver sight starting point set.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiment, since it basically corresponds to the method embodiment, reference may be made to the partial description of the method embodiment for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution in the specification. One of ordinary skill in the art can understand and implement it without inventive effort.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
In a typical configuration, a computer includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage media or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Corresponding to the aforementioned embodiments of the method for calculating a driver's blind vision zone, the present description also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring a driver image collected by a camera module facing a driver in a vehicle;
determining an eye-related block and a preset in-vehicle reference object block in the driver image based on the driver image and a preset recognition algorithm, wherein the in-vehicle reference object block comprises preset feature points;
determining a driver sight line starting point set corresponding to the eye part association block based on a preset corresponding relation;
acquiring the actual coordinates of the feature points in a vehicle custom coordinate system;
calculating a first relative position relation between a driver sight line starting point set and the feature points in the driver image, and mapping the first relative position relation into a second relative position relation in the vehicle custom coordinate system;
calculating a coordinate set of the driver sight starting point set according to the actual coordinates of the feature points and the second relative position relation;
and calculating the view blind area of the driver according to the coordinate set and the vehicle contour parameters corresponding to the driver sight starting point set.
Optionally, the obtaining actual coordinates of the feature points in the vehicle custom coordinate system includes:
acquiring displacement information of the characteristic points detected by a reference object sensor;
and determining the actual coordinates of the characteristic points according to the initial coordinates of the characteristic points and the displacement information.
Optionally, the determining the preset in-vehicle reference object block further includes:
and determining an unobstructed in-vehicle reference object block from the plurality of identified preset in-vehicle reference object blocks according to a preset selection rule.
Optionally, the method further comprises:
and under the condition that the available driver eye associated block is determined not to be included in the driver image, guiding the driver to adjust the posture, and sending a collecting instruction to the camera module to instruct the camera module to collect the driver image again.
Optionally, the in-vehicle reference block comprises one or more of a driving seat, a vehicle steering wheel, a vehicle B-pillar.
Optionally, the method further comprises:
storing the vision blind area corresponding to the driver with known identity according to a preset algorithm;
and aiming at the driver with known identity, when the view blind area cannot be calculated in real time, acquiring the saved view blind area.
Optionally, calculating a blind field of view of the driver according to the coordinate set and vehicle contour parameters corresponding to the set of start points of driver's sight line includes:
calculating corresponding sub-field blind areas according to each coordinate in the coordinate set and corresponding vehicle contour parameters;
wherein the view dead zones are a collection of sub-view dead zones.
Optionally, the vehicle contour parameter is a set of coordinates of a number of points on the vehicle front hatch that are tangent to the line of sight.
Optionally, the method further comprises:
outputting a corresponding view blind area image prompt on a display screen in the vehicle according to the view blind area;
the visual field blind area image prompt is a visual field blind area image which is captured from an image collected by a vehicle forward camera based on the visual field blind area, or a visual field blind area schematic image which is constructed based on a preset rule.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The above description is only a preferred embodiment of the present disclosure, and should not be taken as limiting the present disclosure, and any modifications, equivalents, improvements, etc. made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (12)

1. A method of calculating a driver blind spot, the method comprising:
acquiring a driver image acquired by a camera module facing a driver in a vehicle;
determining an eye-related block and a preset in-vehicle reference object block in the driver image based on the driver image and a preset recognition algorithm, wherein the in-vehicle reference object block comprises preset feature points;
determining a driver sight line starting point set corresponding to the eye part association block based on a preset corresponding relation;
acquiring the actual coordinates of the feature points in a vehicle custom coordinate system;
calculating a first relative position relation between a driver sight line starting point set and the feature points in the driver image, and mapping the first relative position relation into a second relative position relation in the vehicle custom coordinate system;
calculating a coordinate set of the driver sight starting point set according to the actual coordinates of the feature points and the second relative position relation;
and calculating the view blind area of the driver according to the coordinate set and the vehicle contour parameters corresponding to the driver sight starting point set.
2. The method of claim 1, wherein the in-vehicle reference object block is a movable object, and the obtaining actual coordinates of the feature point in the vehicle custom coordinate system comprises:
acquiring displacement information of the characteristic points detected by a reference object sensor;
and determining the actual coordinates of the characteristic points according to the initial coordinates of the characteristic points and the displacement information.
3. The method of claim 1, wherein said determining said predetermined in-vehicle reference block further comprises:
and determining an unobstructed in-vehicle reference object block from the plurality of identified preset in-vehicle reference object blocks according to a preset selection rule.
4. The method of claim 3, further comprising:
and under the condition that the eye associated block of the available driver is determined not to be included in the driver image, guiding the driver to adjust the posture, and sending a collecting instruction to the camera module so as to instruct the camera module to collect the driver image again.
5. The method of claim 1, wherein the in-vehicle reference block comprises one or more of a driver seat, a vehicle steering wheel, and a vehicle B-pillar.
6. The method of claim 1, further comprising:
storing the vision blind area corresponding to the driver with known identity according to a preset algorithm;
and aiming at the driver with known identity, when the view blind area cannot be calculated in real time, acquiring the saved view blind area.
7. The method of claim 1, wherein calculating a blind field of view for the driver based on the set of coordinates and vehicle contour parameters corresponding to the set of start driver gaze points comprises:
calculating corresponding sub-view blind areas according to each coordinate in the coordinate set and corresponding vehicle contour parameters;
wherein the view blind zones are a collection of sub-view blind zones.
8. The method of claim 1, wherein the vehicle profile parameter is a set of coordinates of a number of points on the vehicle front hatch that are tangent to the line of sight.
9. The method of claim 8, further comprising:
outputting a corresponding view blind area image prompt on a display screen in the vehicle according to the view blind area;
the visual field blind area image prompt is a visual field blind area image intercepted from an image collected by a vehicle forward camera based on the visual field blind area, or a visual field blind area schematic image constructed based on a preset rule.
10. A device for calculating a blind area of a driver's field of view, the device comprising:
the image acquisition module is used for acquiring a driver image acquired by the camera module facing the driver in the vehicle;
the position relation determining module is used for determining an eye related block and a preset in-vehicle reference object block in the driver image based on the driver image and a preset recognition algorithm, wherein the in-vehicle reference object block comprises preset feature points;
the sight line starting point determining module is used for determining a driver sight line starting point set corresponding to the eye part associated block based on a preset corresponding relation;
the characteristic point coordinate determination module is used for acquiring the actual coordinates of the characteristic points in the vehicle custom coordinate system;
the reference object position determining module is used for calculating a first relative position relation between the driver sight starting point set in the driver image and the feature points and mapping the first relative position relation into a second relative position relation in the vehicle self-defined coordinate system;
the sight line starting point positioning module is used for calculating a coordinate set of the driver sight line starting point set according to the actual coordinates of the feature points and the second relative position relation;
and the view blind area determining module is used for calculating the view blind area of the driver according to the coordinate set and the vehicle outline parameters corresponding to the driver sight starting point set.
11. A vehicle, characterized by comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of any one of claims 1-9 by executing the executable instructions.
12. A computer-readable storage medium having stored thereon computer instructions, which, when executed by a processor, carry out the steps of the method according to any one of claims 1-9.
CN202210774816.4A 2022-07-01 2022-07-01 Method and device for calculating blind area of driver's visual field, vehicle and medium Pending CN115130045A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210774816.4A CN115130045A (en) 2022-07-01 2022-07-01 Method and device for calculating blind area of driver's visual field, vehicle and medium


Publications (1)

Publication Number Publication Date
CN115130045A true CN115130045A (en) 2022-09-30

Family

ID=83381450

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210774816.4A Pending CN115130045A (en) 2022-07-01 2022-07-01 Method and device for calculating blind area of driver's visual field, vehicle and medium

Country Status (1)

Country Link
CN (1) CN115130045A (en)

Similar Documents

Publication Publication Date Title
EP2893867B1 (en) Detecting visual inattention based on eye convergence
US11068704B2 (en) Head pose and distraction estimation
JP2021533462A (en) Depth plane selection for multi-depth plane display systems by user categorization
CN104573623B (en) Face detection device and method
JP2021532405A (en) A display system and method for determining the alignment between the display and the user's eye.
EP3575926B1 (en) Method and eye tracking system for providing an approximate gaze convergence distance of a user
EP3545818B1 (en) Sight line direction estimation device, sight line direction estimation method, and sight line direction estimation program
JP6479272B1 (en) Gaze direction calibration apparatus, gaze direction calibration method, and gaze direction calibration program
JP7369184B2 (en) Driver attention state estimation
JP2020048971A (en) Eyeball information estimation device, eyeball information estimation method, and eyeball information estimation program
JP6948688B2 (en) Line-of-sight measurement device, line-of-sight measurement method, and line-of-sight measurement program
WO2012144020A1 (en) Eyelid detection device, eyelid detection method, and program
KR20200086742A (en) Method and device for determining gaze distance
CN114407903A (en) Cabin system adjusting device and method for adjusting a cabin system
JP2015085719A (en) Gazing object estimation device
JP5644414B2 (en) Awakening level determination device, awakening level determination method, and program
JP2017091013A (en) Driving support device
JP2018101212A (en) On-vehicle device and method for calculating degree of face directed to front side
JP2017191367A (en) Safe driving support device and safe driving support program
CN115130045A (en) Method and device for calculating blind area of driver's visual field, vehicle and medium
JP2018183249A (en) Visual line detection device, visual line detection program, and visual line detection method
JP5568761B2 (en) Visual field estimation apparatus, visual field estimation method, computer program, and recording medium
JP2018034716A (en) Operation system for vehicle and computer program
JP2017138645A (en) Sight-line detection device
CN113827244B (en) Method for detecting and monitoring driver's sight line direction, system and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination