CN117152411A - Sight line calibration method, control device and storage medium - Google Patents

Sight line calibration method, control device and storage medium

Info

Publication number: CN117152411A
Application number: CN202311441032.0A
Authority: CN (China)
Prior art keywords: sight, line, current, driver, vehicle
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 陈卓筠
Current Assignee: Anhui Weilai Zhijia Technology Co Ltd
Original Assignee: Anhui Weilai Zhijia Technology Co Ltd
Application filed by Anhui Weilai Zhijia Technology Co Ltd


Abstract

The invention relates to the technical field of sight line calibration, and in particular to a sight line calibration method, a control device and a storage medium, aiming to solve the technical problem that existing sight line calibration methods require additional operations from the user, resulting in poor user experience. To this end, the invention provides a method comprising: acquiring the current actual sight line area of a target user in the process of executing a predetermined activity; acquiring the detected current sight line drop point position information of the target user; and calibrating the sight line drop point position of the target user detected during the predetermined activity based on the current actual sight line area and the current sight line drop point position information. With the technical scheme provided by the invention, real-time sight line calibration can be performed without the user perceiving it, thereby greatly improving user experience and sight line detection accuracy.

Description

Sight line calibration method, control device and storage medium
Technical Field
The invention relates to the technical field of sight line calibration, and particularly provides a sight line calibration method, a control device and a storage medium.
Background
In the field of user sight line detection, in order to accurately acquire the sight line information of a user, such as which direction or which object the user is currently looking at, the sight line detection system needs to be calibrated to eliminate systematic detection errors.
Existing gaze calibration schemes are typically performed as a separate procedure: the user is required, while stationary and holding a specific pose, to deliberately gaze at a few reference points so that the gaze error can be calibrated. In practical applications, this calibration method is not only cumbersome for the user, but the calibration result may also become invalid once the user's posture or the position of related objects changes, requiring calibration to be performed again. Therefore, existing sight line calibration schemes suffer from cumbersome operation, poor user experience and inaccurate calibration results.
Based on this, there is a need in the art for a new gaze calibration scheme to address the above-described issues.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a sight line calibration method, a control device and a storage medium that can perform real-time sight line calibration without the user perceiving it, thereby greatly improving user experience and sight line detection accuracy.
In order to achieve the above purpose, the technical scheme of the invention is realized as follows:
in a first aspect, the present invention provides a gaze calibration method, the method comprising:
acquiring a current actual sight area of a target user in the process of executing a preset activity;
acquiring the detected current sight falling point position information of the target user;
and calibrating the sight-line falling point position of the target user detected in the preset activity process based on the current actual sight-line area and the current sight-line falling point position information.
In one aspect of the above gaze calibration method, the acquiring the current actual gaze area of the target user during the execution of the predetermined activity includes:
the current actual sight area of the driver in the process of driving the vehicle is obtained.
In one aspect of the above gaze calibration method, the current actual gaze area includes: the spatial position area where the target vehicle currently located right in front of the host vehicle is located; the obtaining the current actual sight line area of the driver in the process of driving the vehicle comprises the following steps:
acquiring size information, shape information and current position information of the target vehicle;
determining a plurality of corner points of the target vehicle based on the size information, the shape information, and the current position information;
and generating a first target frame used for representing a spatial position area where the target vehicle is located based on the plurality of corner points, and taking the first target frame as the current actual sight area.
In one aspect of the above gaze calibration method, before the obtaining the current actual gaze area of the driver during driving the host vehicle, the method further includes:
judging whether the current actual sight line area of the driver is the space position area or not;
the obtaining the current actual sight line area of the driver in the process of driving the vehicle comprises the following steps:
and when the current actual sight line area of the driver is the space position area, acquiring the space position area.
In one aspect of the above gaze calibration method, the determining whether the current actual gaze area of the driver is the spatial location area includes:
judging whether a vehicle exists in a preset distance right in front of the vehicle;
judging whether the steering wheel angle of the vehicle is smaller than a preset angle threshold value or not;
judging whether the vehicle continuously outputs a brake signal in a preset time period;
judging whether the head pose angle of the driver is positioned in a preset angle range;
when a vehicle exists in the preset distance right in front of the vehicle, the steering wheel angle of the vehicle is smaller than the preset angle threshold, the vehicle continuously outputs a brake signal in the preset time period, and the head gesture angle of the driver is located in the preset angle range, the current actual sight line area of the driver is determined to be the space position area.
In one aspect of the above gaze calibration method, the obtaining the detected current gaze location information of the target user includes:
acquiring a plurality of detected sight falling points of the target user in a time period when the sight of the target user is positioned in the current actual sight area;
clustering the plurality of sight falling points by adopting a K-means clustering algorithm to obtain at least one sight falling point cluster; each sight line falling point cluster is provided with a cluster center point corresponding to the sight line falling point cluster;
selecting a cluster center point closest to the current actual sight area from the cluster center points as a sight drop point mark point;
and acquiring coordinates of the sight falling point mark point as the current sight falling point position information.
In one technical scheme of the sight line calibration method, the current sight line drop point position information includes current sight line drop point coordinates; the method further comprises the steps of:
judging whether the current sight falling point coordinates are located outside the current actual sight area or not;
the calibrating the sight-line drop point position of the target user detected in the preset activity process based on the current actual sight-line area and the current sight-line drop point position information comprises the following steps:
and when the current sight-line falling point coordinate is positioned outside the current actual sight-line area, calibrating the sight-line falling point position of the target user detected in the preset activity process based on the current actual sight-line area and the current sight-line falling point coordinate.
In one technical scheme of the sight line calibration method, the current sight line drop point position information includes current sight line drop point coordinates; the calibrating the sight-line drop point position of the target user detected in the preset activity process based on the current actual sight-line area and the current sight-line drop point position information comprises the following steps:
acquiring the center point coordinates of the current actual sight area;
calculating a deviation value between the current line-of-sight falling point coordinate and the center point coordinate;
and calibrating the sight falling point position of the target user detected in the process of the preset activity based on the deviation value.
In one aspect of the above gaze calibration method, the method further comprises:
generating, during the next time period in which the driver's sight line is located in the spatial position area of the vehicle directly in front of the host vehicle, a second target frame for representing that spatial position area, and acquiring the detected next sight line falling point position information of the driver;
and/or, the calibrating the sight-line drop point position of the target user detected in the preset activity process based on the current actual sight-line area and the current sight-line drop point position information includes:
and calibrating the sight-line drop point position of the driver detected by the driver in the process of driving the vehicle based on the first target frame, the second target frame, the current sight-line drop point position information and the next sight-line drop point position information.
In one technical scheme of the sight line calibration method, the current sight line drop point position information includes current sight line drop point coordinates; the next sight falling point position information comprises next sight falling point coordinates; the calibrating the line of sight landing position of the driver detected by the driver in the driving process of the host vehicle based on the first target frame, the second target frame, the current line of sight landing position information and the next line of sight landing position information comprises the following steps:
acquiring the center point coordinates of the first target frame;
calculating a deviation value between the current line-of-sight falling point coordinate and the central point coordinate of the first target frame as a first deviation value;
acquiring the center point coordinates of the second target frame;
calculating a deviation value between the next sight falling point coordinate and the center point coordinate of the second target frame as a second deviation value;
calculating an average value of the first deviation value and the second deviation value as an average deviation value;
and calibrating the sight falling point position of the driver detected by the driver in the process of driving the vehicle based on the average deviation value.
In one aspect of the above vision calibration method, the calibrating the vision drop point position of the driver detected by the driver during driving the host vehicle based on the first target frame, the second target frame, the current vision drop point position information and the next vision drop point position information further includes:
calculating an absolute value of a difference between the first deviation value and the second deviation value;
judging whether the absolute value of the difference value is smaller than a preset deviation threshold value or not;
the calculating the average value of the first deviation value and the second deviation value as an average deviation value includes:
and when the absolute value of the difference value is smaller than the preset deviation threshold value, calculating the average value of the first deviation value and the second deviation value as an average deviation value.
In one technical scheme of the sight line calibration method, the current sight line drop point position information includes current sight line drop point coordinates; the method further comprises the steps of:
generating, for each time period in which the driver's sight line is located in the spatial position area of the vehicle directly in front of the host vehicle, a target frame corresponding to that time period and representing that spatial position area, so as to obtain a plurality of target frames, and obtaining the detected sight line falling point coordinates corresponding to each time period, so as to obtain a plurality of sight line falling point coordinates;
and calibrating the line-of-sight falling point position of the driver detected in a preset starting time period in the next driving process of the driver based on the plurality of target frames and the plurality of line-of-sight falling point coordinates.
In one aspect of the above vision calibration method, the calibrating the vision landing position of the driver detected in a preset starting time period during driving of the host vehicle next time based on the plurality of target frames and the plurality of vision landing coordinates includes:
calculating the deviation between each line of sight falling point coordinate and the corresponding target frame to obtain a plurality of third deviation values;
calculating absolute values of differences between every two of the plurality of third deviation values;
judging whether the absolute values of all the difference values are smaller than a preset deviation threshold value or not;
when the absolute values of all the difference values are smaller than the preset deviation threshold value, calculating an average value of the plurality of third deviation values;
and calibrating the line-of-sight falling point position of the driver, which is detected in a preset starting time period in the process of driving the vehicle by the driver next time, based on the average value of the plurality of third deviation values.
In a second aspect, the present invention provides a control device comprising a processor and a storage device, the storage device being adapted to store a plurality of program codes, the program codes being adapted to be loaded and run by the processor to perform the gaze calibration method according to any one of the above technical solutions.
In a third aspect, the present invention provides a computer readable storage medium having stored therein a plurality of program codes adapted to be loaded and run by a processor to perform the gaze calibration method according to any one of the above technical solutions.
According to the sight line calibration method, the control device and the storage medium provided by the invention, sight line deviation calibration can be carried out while the user performs a predetermined activity. By acquiring the current actual sight line area of the target user during the predetermined activity, acquiring the detected current sight line drop point position information of the target user, and calibrating the detected sight line drop point position based on these two, the user no longer needs to adopt a specific posture in a static state specifically for sight line calibration. In addition, because the invention calibrates the sight line deviation based on the user's current actual sight line area and the detected current sight line drop point position information, calibration can be performed in real time while the user performs the predetermined activity; during this process, even if the user's posture changes, the final sight line detection accuracy is not affected. Therefore, the technical scheme provided by the embodiment of the invention can calibrate the sight line in real time without the user perceiving it, greatly improving user experience and sight line detection accuracy.
Drawings
The present disclosure will become more readily understood with reference to the accompanying drawings. As will be readily appreciated by those skilled in the art: the drawings are only for the purpose of illustrating the invention and are not intended to limit the scope of the invention. Moreover, like numerals in the figures are used to designate like parts, wherein:
FIG. 1 is a flow chart of main steps of a sight line calibration method according to an embodiment of the present invention;
FIG. 2 is a schematic view of a front vehicle target frame and the driver's sight line drop points in an embodiment of the present invention;
FIG. 3 is a flow chart of another method for calibrating a line of sight according to an embodiment of the present invention;
fig. 4 is a main structural block diagram of a sight line calibration apparatus according to an embodiment of the present invention.
List of reference numerals
11: a first acquisition unit; 12: a second acquisition unit; 13: a calibration unit;
1: corner points; 2: a target frame; 3: a line-of-sight landing point; 4: a line-of-sight landing cluster; 5: the line of sight drop point marks the point.
Detailed Description
Some embodiments of the invention are described below with reference to the accompanying drawings. It should be understood by those skilled in the art that these embodiments are merely for explaining the technical principles of the present invention, and are not intended to limit the scope of the present invention.
In the description of the present invention, a "module" or "processor" may include hardware, software, or a combination of both. A module may comprise hardware circuitry, various suitable sensors, communication ports, memory, or software components, such as program code, or a combination of software and hardware. The processor may be a central processor, a microprocessor, an image processor, a digital signal processor, or any other suitable processor. The processor has data and/or signal processing functions. The processor may be implemented in software, hardware, or a combination of both. Non-transitory computer readable storage media include any suitable medium that can store program code, such as magnetic disks, hard disks, optical disks, flash memory, read-only memory, random access memory, and the like. The term "A and/or B" means all possible combinations of A and B, such as A alone, B alone or A and B. The term "at least one A or B" or "at least one of A and B" has a meaning similar to "A and/or B" and may include A alone, B alone or A and B. The singular forms "a", "an" and "the" include plural referents.
In the field of user sight line detection, in order to accurately acquire the sight line information of a user, such as which direction or which object the user is currently looking at, the sight line detection system needs to be calibrated to eliminate systematic detection errors.
For example, in the field of automatic driving, accurate detection of the driver's line of sight is the basis of driving assistance, and with the development of driving safety technologies, detection of the driver's sight line drop point plays an increasingly important role. By acquiring the driver's sight line drop point, the system can determine how well the driver knows and grasps the road conditions ahead, providing a reference for driving safety reminders and alarms. The driver's sight line drop point therefore needs to be determined accurately enough to avoid false negatives. The current mainstream sight line calibration schemes are static, i.e., the driver completes calibration by fixating on several designated stationary points. This method requires additional actions from the driver, affecting the driving experience, and the calibration result is only valid for the current body posture and vehicle configuration; once the driver adjusts their posture or the position of the seat or steering wheel, the calibration result is likely to deviate.
In order to solve the above technical problems, the present invention provides a line-of-sight calibration method, as shown in fig. 1, which mainly includes the following steps S101 to S103.
Step S101, acquiring the current actual sight line area of a target user in the process of executing a predetermined activity.
In this embodiment, the target user is a user whose line of sight needs to be detected by the system, for example, a driver of a vehicle, an operator of a device, and the like. The predetermined activity performed by the target user may be any activity requiring line-of-sight detection of the target user, for example, a process in which the driver drives the vehicle, a process in which the operator operates the device, or the like.
It should be noted that, the target user and the predetermined activity described in this embodiment are not limited to the above listed items, and specific target users and specific predetermined activities may be determined according to actual requirements, which is not particularly limited in this embodiment.
Taking a driver driving a vehicle as an example, the method for acquiring a current actual sight line area of a target user in a process of executing a predetermined activity according to the embodiment includes: the current actual sight area of the driver in the process of driving the vehicle is obtained.
In order to accurately obtain a current actual sight line area of a driver in the process of driving a host vehicle, the current actual sight line area according to the embodiment includes: the spatial location area where the target vehicle currently located directly in front of the host vehicle is located. Under the premise, the method for acquiring the current actual sight line area of the driver in the driving process of the vehicle according to the embodiment includes: acquiring size information, shape information and current position information of the target vehicle; determining a plurality of corner points of the target vehicle based on the size information, the shape information, and the current position information; and generating a first target frame used for representing a spatial position area where the target vehicle is located based on the plurality of corner points, and taking the first target frame as the current actual sight area.
Specifically, an image and point cloud information of the target vehicle in front of the host vehicle are acquired through the host vehicle's exterior sensing system, and the size information, shape information and current position information of the target vehicle can be obtained from this data. These are then all converted into the body coordinate system of the host vehicle, so that the system can perceive the target vehicle from the driver's viewing angle. Based on the size information, shape information and current position information, a plurality of corner points 1 of the target vehicle relative to the host vehicle can be determined; in this embodiment, 8 corner points are used, that is, the coordinates of the 8 corner points of the target vehicle in the host vehicle's body coordinate system are determined, and a target frame 2 of the target vehicle is generated from them, as shown in fig. 2. The target frame 2 represents the spatial position area where the target vehicle is located.
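For illustration, the following is a minimal sketch of this corner-point and target-frame construction, assuming an axis-aligned box in the host vehicle's body coordinate system; the function name, the min/max box representation and the example dimensions are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def build_target_frame(center_xyz, length, width, height):
    """Return the 8 corner points and an axis-aligned bounding box
    (min/max per axis) for the lead vehicle, given its center position
    and size in the host vehicle's body coordinate system."""
    half = np.array([length / 2.0, width / 2.0, height / 2.0])
    # 8 corners: every combination of +/- half-extents around the center.
    signs = np.array([[sx, sy, sz] for sx in (-1, 1)
                                   for sy in (-1, 1)
                                   for sz in (-1, 1)])
    corners = np.asarray(center_xyz) + signs * half
    box_min, box_max = corners.min(axis=0), corners.max(axis=0)
    return corners, (box_min, box_max)

# Hypothetical lead vehicle 25 m ahead, roughly sedan-sized.
corners, target_frame = build_target_frame((25.0, 0.0, 0.75), 4.6, 1.9, 1.5)
```

An axis-aligned min/max representation is chosen here because it makes the later inside/outside test on the sight line drop point cheap.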
Further, in one embodiment, in order to more accurately acquire the above spatial location area, before the step of acquiring the current actual line-of-sight area of the driver during driving the host vehicle, the line-of-sight calibration method further includes: and judging whether the current actual sight line area of the driver is the space position area or not. Under the premise, the method for acquiring the current actual sight line area of the driver in the driving process of the vehicle according to the embodiment includes: and when the current actual sight line area of the driver is the space position area, acquiring the space position area.
Further, in one embodiment, in order to accurately determine that the current actual sight-line area of the driver is the spatial location area where the target vehicle is located, determining whether the current actual sight-line area of the driver is the spatial location area according to the embodiment includes: judging whether a vehicle exists in a preset distance right in front of the vehicle; judging whether the steering wheel angle of the vehicle is smaller than a preset angle threshold value or not; judging whether the vehicle continuously outputs a brake signal in a preset time period; judging whether the head pose angle of the driver is positioned in a preset angle range; when a vehicle exists in the preset distance right in front of the vehicle, the steering wheel angle of the vehicle is smaller than the preset angle threshold, the vehicle continuously outputs a brake signal in the preset time period, and the head gesture angle of the driver is located in the preset angle range, the current actual sight line area of the driver is determined to be the space position area.
Specifically, this embodiment realizes a gaze calibration operation that is not perceived by the driver by using the periods of time in which the driver is paying attention to the preceding vehicle. Therefore, it is necessary to recognize, from the input information, the scenes in which the driver focuses on the preceding vehicle. This input information includes: the size, shape and position information of the preceding vehicle output by the exterior sensing system; the steering wheel angle of the host vehicle; whether the brake pedal of the host vehicle is depressed; and the head and face information of the driver acquired by an in-cabin camera. The in-cabin camera is a DMS (Driver Monitoring System) camera; using the images it acquires, the system extracts eye, face and head features based on a pre-trained head pose and sight line recognition model and outputs the driver's head pose and sight line information. The head pose information includes the driver's head pose angle, and the sight line information includes the driver's sight line direction and sight line drop point position, from which the current actual sight line area and the current sight line drop point position information described in this embodiment can be obtained. The input information may also include the speed of the host vehicle, which the system can use to judge the driver's state; for example, if the vehicle speed is too slow, the probability that the driver keeps looking at the vehicle ahead decreases. When the head pose information and sight line information are used, they are likewise converted into the body coordinate system of the host vehicle so that all data share a unified reference frame.
Based on some of the above input information, this embodiment can automatically detect whether the driver is focusing on the target vehicle ahead. In this embodiment, the driver is considered to be focusing on the target vehicle ahead when all of the following conditions are satisfied:
(1) A vehicle exists within the predetermined distance directly in front of the host vehicle;
(2) The steering wheel angle of the host vehicle is smaller than the preset angle threshold;
(3) The host vehicle continuously outputs a brake signal within the predetermined time period;
(4) The head pose angle of the driver lies within the preset angle range.
In one embodiment, the predetermined distance may be set to 30 m, so as to ensure that the target frame formed by the preceding vehicle is large enough to facilitate sight line calibration; the preset angle threshold may be set to 5°, so as to exclude turning scenes of the host vehicle; the predetermined time period may be set to 1 s; and the preset angle range may be set to -5° to 5°, with the driver's straight-ahead viewing direction taken as the 0° heading angle. That is, when the system detects that a vehicle exists within 30 m directly in front of the host vehicle, the steering wheel angle of the host vehicle is smaller than 5°, the host vehicle continuously outputs a brake signal over the predetermined time period (i.e., the driver makes a continuous braking response), and the heading angle of the driver's head pose lies between -5° and 5°, it is determined that the driver is focusing on the vehicle ahead, i.e., the driver's current actual sight line area is determined to be the spatial position area of the target vehicle.
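As a sketch, the four conditions above can be checked as follows; the state container and its field names are assumptions, and the default thresholds are the example values from this embodiment.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SceneState:
    lead_vehicle_distance_m: Optional[float]  # None if no vehicle ahead
    steering_angle_deg: float                 # host vehicle steering wheel angle
    brake_signal_duration_s: float            # continuous braking time so far
    head_yaw_deg: float                       # driver head pose heading angle

def driver_is_watching_lead_vehicle(s: SceneState,
                                    max_distance_m: float = 30.0,
                                    max_steering_deg: float = 5.0,
                                    min_brake_s: float = 1.0,
                                    head_range_deg: float = 5.0) -> bool:
    """True only when all four conditions (1)-(4) above hold."""
    return (s.lead_vehicle_distance_m is not None
            and s.lead_vehicle_distance_m <= max_distance_m           # (1)
            and abs(s.steering_angle_deg) < max_steering_deg          # (2)
            and s.brake_signal_duration_s >= min_brake_s              # (3)
            and -head_range_deg <= s.head_yaw_deg <= head_range_deg)  # (4)
```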
Step S102, obtaining the detected current sight line drop point position information of the target user.
In this embodiment, the current sight line drop point position information is automatically detected by the sight line detection system. That is, the current actual sight line area according to this embodiment is the user's actual sight line landing area, while the current sight line drop point position information is the user's sight line drop point position as detected by the sight line detection system. Based on this true value and this detected value, the detection error of the sight line detection system can be calibrated, and the finally detected sight line position can be corrected.
In order to accurately acquire the current line of sight landing position information, the acquiring the detected current line of sight landing position information of the target user according to the embodiment includes: acquiring a plurality of detected sight falling points of the target user in a time period when the sight of the target user is positioned in the current actual sight area; clustering the plurality of sight falling points by adopting a K-means clustering algorithm to obtain at least one sight falling point cluster; each sight line falling point cluster is provided with a cluster center point corresponding to the sight line falling point cluster; selecting a cluster center point closest to the current actual sight area from the cluster center points as a sight drop point mark point; and acquiring coordinates of the sight falling point mark point as the current sight falling point position information.
Specifically, taking the activity of driving the vehicle by the driver as an example, as shown in fig. 2, during the period of time when the driver pays attention to the preceding vehicle, a plurality of detected sight-line drop points 3 of the driver are acquired, and the sight-line drop points 3 are clustered by adopting a K-means clustering algorithm to obtain at least one sight-line drop point cluster 4, two sight-line drop point clusters being shown in fig. 2. And then, acquiring a cluster center point of each line-of-sight falling point cluster 4, selecting the cluster center point closest to the target frame 2 from the cluster center points as a line-of-sight falling point mark point 5 when the driver pays attention to the front vehicle, and acquiring coordinates of the line-of-sight falling point mark point 5 as the current line-of-sight falling point position information in the embodiment.
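A minimal sketch of this clustering step, assuming scikit-learn's KMeans and approximating "closest to the current actual sight area" by the distance to the target frame's center point; k = 2 matches the two clusters shown in fig. 2 but is otherwise an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def gaze_marker_point(gaze_points: np.ndarray, frame_center: np.ndarray,
                      k: int = 2) -> np.ndarray:
    """gaze_points: (N, d) detected sight line drop points collected while
    the driver watches the lead vehicle; returns the cluster center point
    nearest the target frame as the drop point marker point."""
    k = min(k, len(gaze_points))              # guard against few samples
    km = KMeans(n_clusters=k, n_init=10).fit(gaze_points)
    centers = km.cluster_centers_
    dists = np.linalg.norm(centers - frame_center, axis=1)
    return centers[np.argmin(dists)]          # sight line drop point marker point
```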
Step S103, calibrating the sight line drop point position of the target user detected during the predetermined activity based on the current actual sight line area and the current sight line drop point position information.
In this embodiment, the current sight line drop point position information includes the current sight line drop point coordinates. In order to obtain a specific deviation value of the sight line drop point, the sight line calibration method according to this embodiment further includes: judging whether the current sight line drop point coordinates are located outside the current actual sight line area. On this premise, the calibrating of the sight line drop point position of the target user detected during the predetermined activity based on the current actual sight line area and the current sight line drop point position information includes: when the current sight line drop point coordinates are located outside the current actual sight line area, calibrating the sight line drop point position of the target user detected during the predetermined activity based on the current actual sight line area and the current sight line drop point coordinates.
That is, in this embodiment, the sight line deviation is calibrated only when the current sight line drop point coordinates fall outside the current actual sight line area. In particular, during the driver's driving activity, the relative positions of the sight line drop point marker point 5 and the target frame 2 need to be compared. As shown in fig. 2, only when the driver's sight line drop point marker point 5 is located outside the target frame 2 is the currently detected sight line drop point considered inaccurate and a sight line calibration operation required; when the marker point 5 is located inside the target frame 2, the currently detected sight line drop point is considered accurate, and no calibration is needed.
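A sketch of this gating step, reusing the axis-aligned min/max box representation assumed in the earlier target-frame sketch:

```python
import numpy as np

def needs_calibration(marker_point: np.ndarray, box_min: np.ndarray,
                      box_max: np.ndarray) -> bool:
    """True when the marker point lies outside the target frame,
    i.e. the detected drop point is off and should be calibrated."""
    inside = bool(np.all(marker_point >= box_min)
                  and np.all(marker_point <= box_max))
    return not inside
```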
As described above, the current gaze location information includes current gaze location coordinates. In order to calibrate the line of sight landing position of the target user more accurately, the calibrating the line of sight landing position of the target user detected during the predetermined activity based on the current actual line of sight area and the current line of sight landing position information according to the embodiment includes: acquiring the center point coordinates of the current actual sight area; calculating a deviation value between the current line-of-sight falling point coordinate and the center point coordinate; and calibrating the sight falling point position of the target user detected in the process of the preset activity based on the deviation value.
That is, in this embodiment, the current line of sight drop point coordinate detected by the system is calibrated to the center point coordinate of the current actual line of sight area of the target user, so that the line of sight drop point position of the target user detected by the system is located in the actual line of sight drop point area of the target user.
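A sketch of the deviation computation and compensation, with illustrative names; treating the "deviation value" as a per-axis vector difference is one plausible reading of the description, not a confirmed detail.

```python
import numpy as np

def gaze_deviation(marker_point: np.ndarray,
                   frame_center: np.ndarray) -> np.ndarray:
    """Deviation of the detected marker point from the center point
    of the current actual sight line area."""
    return marker_point - frame_center

def calibrate(detected_point: np.ndarray,
              deviation: np.ndarray) -> np.ndarray:
    # Shift the detected drop point back by the measured deviation so it
    # lands inside the user's actual sight line area.
    return detected_point - deviation
```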
Further, in one embodiment, in order to calibrate the sight line drop point position of the target user more accurately, the sight line calibration method in this embodiment further includes: generating, during the next time period in which the driver's sight line is located in the spatial position area of the vehicle directly in front of the host vehicle, a second target frame for representing that spatial position area, and acquiring the detected next sight line drop point position information of the driver. On this premise, the calibrating of the sight line drop point position of the target user detected during the predetermined activity based on the current actual sight line area and the current sight line drop point position information according to this embodiment includes: calibrating the sight line drop point position of the driver detected while driving the host vehicle based on the first target frame, the second target frame, the current sight line drop point position information and the next sight line drop point position information.
In practical applications, taking a driver driving a vehicle as an example, the current actual sight line area refers to the real-time position area of the vehicle directly in front of the host vehicle. The detected sight line drop point position information of the user is then acquired again during the next time period in which the target user's sight line is located in the position area of the vehicle ahead, i.e., the sight line drop point position information is acquired the next time the target user pays attention to the vehicle in front of the host vehicle.
In this embodiment, the current line of sight drop point location information includes a current line of sight drop point coordinate; the next line of sight landing position information comprises next line of sight landing coordinates. In order to further accurately calibrate the line of sight landing position of the target user, the calibration of the line of sight landing position of the driver detected by the driver in driving the host vehicle based on the first target frame, the second target frame, the current line of sight landing position information and the next line of sight landing position information according to the embodiment includes: acquiring the center point coordinates of the first target frame; calculating a deviation value between the current line-of-sight falling point coordinate and the central point coordinate of the first target frame as a first deviation value; acquiring the center point coordinates of the second target frame; calculating a deviation value between the next sight falling point coordinate and the center point coordinate of the second target frame as a second deviation value; calculating an average value of the first deviation value and the second deviation value as an average deviation value; and calibrating the sight falling point position of the driver detected by the driver in the process of driving the vehicle based on the average deviation value.
Further, in one embodiment, to calibrate the line of sight landing position of the target user more accurately, the calibrating the line of sight landing position of the driver detected by the driver during driving the host vehicle based on the first target frame, the second target frame, the current line of sight landing position information and the next line of sight landing position information further includes: calculating an absolute value of a difference between the first deviation value and the second deviation value; and judging whether the absolute value of the difference value is smaller than a preset deviation threshold value. Under the foregoing circumstances, the calculating, as an average deviation value, the average value of the first deviation value and the second deviation value according to the present embodiment includes: and when the absolute value of the difference value is smaller than the preset deviation threshold value, calculating the average value of the first deviation value and the second deviation value as an average deviation value.
That is, in this embodiment, the sight line deviation is calculated from two sight line drop point marker points obtained over two periods in which the driver pays attention to the target vehicle ahead during driving, and sight line calibration is then performed. Of course, the sight line deviation may also be calculated using a plurality of sight line drop point marker points obtained over 3, 4 or more such periods.
Specifically, the above method, in which the sight line deviation is calculated from a plurality of sight line drop point marker points obtained over multiple periods in which the driver focuses on the target vehicle ahead and sight line calibration is then performed, is called an incremental-learning sight line calibration method. Taking the driver driving the host vehicle as an example, the system may detect in real time whether the driver is focusing on the vehicle ahead; when this is detected, according to the above technical solution of this embodiment, a sight line drop point marker point P1 and a front vehicle target frame K1 as shown in fig. 2 can be obtained. When the driver next begins to pay attention to the vehicle ahead, the system can again obtain a sight line drop point marker point P2 and a front vehicle target frame K2 for that period. The deviation between the marker point P1 and the center point coordinates of the target frame K1 is calculated as the first deviation value, the deviation between the marker point P2 and the center point coordinates of the target frame K2 is calculated as the second deviation value, and the absolute value of the difference between the two deviation values is calculated. If this absolute value is smaller than the preset deviation threshold, the first and second deviation values are close, indicating that the driver's sight line drop point deviation exists stably; in this case the average of the first and second deviation values is used to calibrate the driver's subsequently detected sight line drop point positions, i.e., the average value is compensated into the detected sight line drop point position to obtain the calibrated sight line drop point position.
If the absolute value of the difference is greater than or equal to the preset deviation threshold, the first and second deviation values differ considerably, and the driver's sight line drop point deviation is considered random; in this case no calibration is performed, the sight line drop point marker points P1 and P2 are cleared, and the sight line drop point marker point is acquired anew the next time the driver pays attention to the vehicle ahead.
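The two-observation incremental step can be sketched as follows; the threshold value is illustrative, and returning None to signal "clear P1 and P2 and start over" is an assumed convention, not the patent's API.

```python
import numpy as np

def incremental_calibration(dev1: np.ndarray, dev2: np.ndarray,
                            threshold: float = 0.05):
    """Return the average deviation if the two deviations agree within
    the preset threshold (stable, systematic offset); otherwise return
    None, meaning the offset looks random and P1/P2 should be cleared."""
    if np.all(np.abs(dev1 - dev2) < threshold):
        return (dev1 + dev2) / 2.0   # average deviation value
    return None                      # no calibration this round
```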
In this embodiment, the current sight line drop point position information includes the current sight line drop point coordinates. Further, in one embodiment, in order to calibrate the target user's sight line quickly and accurately, the sight line calibration method further includes: generating, for each time period in which the driver's sight line is located in the spatial position area of the vehicle directly in front of the host vehicle, a target frame corresponding to that time period and representing that spatial position area, so as to obtain a plurality of target frames, and obtaining the detected sight line drop point coordinates corresponding to each time period, so as to obtain a plurality of sight line drop point coordinates; and calibrating the driver's sight line drop point position detected in a preset starting time period of the driver's next drive based on the plurality of target frames and the plurality of sight line drop point coordinates.
Further, in one embodiment, to calibrate the line of sight landing position of the target user more accurately, the calibrating the line of sight landing position of the driver detected by the driver in a preset starting time period during driving the host vehicle next time based on the plurality of target frames and the plurality of line of sight landing coordinates includes: calculating the deviation between each line of sight falling point coordinate and the corresponding target frame to obtain a plurality of third deviation values; calculating absolute values of differences between every two of the plurality of third deviation values; judging whether the absolute values of all the difference values are smaller than a preset deviation threshold value or not; when the absolute values of all the difference values are smaller than the preset deviation threshold value, calculating an average value of the plurality of third deviation values; and calibrating the line-of-sight falling point position of the driver detected in a preset starting time period in the next driving process of the driver based on the average value of the third deviation values.
In this embodiment, calculating the deviation between each sight line drop point coordinate and its corresponding target frame may mean calculating the deviation between each sight line drop point coordinate and the center point coordinates of its corresponding target frame, yielding a plurality of third deviation values. If these third deviation values do not vary greatly, their average value can be used to calibrate subsequent sight line drop point positions.
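A sketch of this multi-observation variant, assuming a per-axis comparison of every pair of third deviation values against the same preset deviation threshold:

```python
from itertools import combinations
import numpy as np

def stable_average_deviation(deviations, threshold: float = 0.05):
    """deviations: list of third deviation values (np.ndarray). Return
    their average if every pair agrees within the threshold, else None."""
    for a, b in combinations(deviations, 2):
        if np.any(np.abs(a - b) >= threshold):
            return None                    # deviations disagree: no calibration
    return np.mean(deviations, axis=0)     # average of the third deviation values
```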
Specifically, still taking the driver driving the host vehicle as an example, each vehicle and each driver can be distinguished through vehicle and face recognition. For a certain vehicle or driver, if the driver's sight line drop point deviation (i.e., the above third deviation value) persists over a long time with a stable value, the deviation can be regarded as a systematic sight line drop point error inherent to that vehicle or driver. This historical systematic error is stored for the vehicle and the driver. The next time the vehicle is started or the driver uses the vehicle, this systematic error can be used to calibrate the driver's sight line from the initial moments of the drive, saving the time of the first sight line self-calibration and improving self-calibration efficiency. Moreover, a systematic error accumulated over a long time is more accurate than one determined by a single calibration, so the accuracy of sight line self-calibration can also be improved.
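A sketch of how such a per-vehicle, per-driver systematic error might be persisted and reloaded; the JSON file format and the ID-based keying are assumptions beyond the patent text, which only specifies storing a historical error per vehicle and driver.

```python
import json
import numpy as np

def save_systematic_error(path: str, vehicle_id: str, driver_id: str,
                          deviation: np.ndarray) -> None:
    """Persist the long-term sight line drop point error for this
    (vehicle, driver) pair."""
    try:
        with open(path) as f:
            store = json.load(f)
    except FileNotFoundError:
        store = {}
    store[f"{vehicle_id}:{driver_id}"] = deviation.tolist()
    with open(path, "w") as f:
        json.dump(store, f)

def load_systematic_error(path: str, vehicle_id: str, driver_id: str):
    """Return the stored error, or None to fall back to in-trip
    self-calibration when no history exists."""
    try:
        with open(path) as f:
            return np.array(json.load(f)[f"{vehicle_id}:{driver_id}"])
    except (FileNotFoundError, KeyError):
        return None
```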
In this embodiment, the preset deviation threshold may be specifically determined according to different vehicle types, and different values may be set in practical application, which is not specifically limited in this embodiment.
With the above scheme, incremental-learning self-calibration of the driver's sight line based on objects outside the cabin can be realized. As shown in fig. 3, the DMS camera is used to obtain the user's head pose and sight line information, and the vehicle-mounted sensing system is used to obtain the position and state information of the host vehicle and the vehicle ahead. The periods in which the driver pays attention to the vehicle ahead are identified from the driver's head pose and sight line, the driving behavior, and the states of the host vehicle and the vehicle ahead, and the driver's sight line is calibrated with an incremental-learning method based on the position of the vehicle ahead. In addition, through vehicle and face recognition, the long-term sight line system errors of different drivers and different vehicles are stored, recorded and calculated, enabling personalized storage and optimization of each user's sight line calibration. Compared with the current mainstream fixed-point sight line calibration schemes, this real-time calibration avoids the problem that a static calibration result deviates once the posture or positions are adjusted; it is imperceptible to the driver, requires no additional static calibration operation, and improves the accuracy and efficiency of sight line calibration.
Based on this scheme, the distribution of the driver's forward sight line drop points can be acquired in real time, and sight line drop point calibration can be realized without the driver perceiving it; the calibration can also be personalized and optimized for each user. Compared with the current mainstream fixed-point sight line calibration, this approach has the advantages of being real-time, accurate and comfortable to use, laying a foundation for the development of subsequent driving safety functions.
Based on steps S101-S103, the above scheme can solve the technical problems that existing sight line calibration methods require additional user operations in a static state, resulting in poor user experience, and that sight line calibration must be performed again whenever the user's posture changes.
According to the technical scheme provided by the embodiment of the invention, the current actual sight line area of the target user during the predetermined activity is obtained, the detected current sight line drop point position information of the target user is obtained, and the sight line drop point position detected during the predetermined activity is calibrated based on these two, so that sight line deviation calibration can be performed while the user carries out the predetermined activity, avoiding the situation where the user must adopt a specific posture in a static state to perform a dedicated calibration operation. In addition, because the sight line deviation is calibrated based on the user's current actual sight line area and the detected current sight line drop point position information, calibration can be performed in real time while the user performs the predetermined activity, and even if the user's posture changes during this process, the final sight line detection accuracy is not affected. Therefore, the technical scheme provided by the embodiment of the invention can calibrate the sight line in real time without the user perceiving it, greatly improving user experience and sight line detection accuracy.
It should be noted that, although the foregoing embodiments describe the steps in a specific order, it will be understood by those skilled in the art that, in order to achieve the effects of the present invention, the steps are not necessarily performed in such an order, and may be performed simultaneously (in parallel) or in other orders, and these variations are within the scope of the present invention.
The user information (including but not limited to user equipment information, user personal information, object information corresponding to vehicle usage data, etc.) and data (including but not limited to data for analysis, stored data, displayed data, vehicle usage data, etc.) according to the present embodiment are both information and data authorized by the user or sufficiently authorized by each party. The data acquisition, collection and other actions involved in the embodiment are all executed after the authorization of the user and the object or after the full authorization of all the parties.
Furthermore, the invention also provides a sight line calibration device.
Referring to fig. 4, fig. 4 is a main block diagram of a sight line calibration apparatus according to an embodiment of the present invention. As shown in fig. 4, the sight line calibration apparatus in the embodiment of the present invention mainly includes a first acquisition unit 11, a second acquisition unit 12, and a calibration unit 13, wherein:
a first acquisition unit 11, for acquiring a current actual line-of-sight area of a target user in performing a predetermined activity;
a second acquiring unit 12, configured to acquire detected current line of sight landing position information of the target user;
and a calibration unit 13, configured to calibrate the line of sight landing position of the target user detected during the predetermined activity based on the current actual line of sight area and the current line of sight landing position information.
In this embodiment, the first obtaining unit 11 is configured to obtain a current actual sight line area of the driver during driving the host vehicle.
In this embodiment, the current actual sight line area includes: the spatial position area where the target vehicle currently located right in front of the host vehicle is located; the first acquisition unit 11 includes:
a target vehicle information acquisition unit configured to acquire size information, shape information, and current position information of the target vehicle;
a corner point determining unit configured to determine a plurality of corner points of the target vehicle based on the size information, the shape information, and the current position information;
and the target frame generating unit is used for generating a first target frame used for representing the spatial position area where the target vehicle is located based on the plurality of corner points, and taking the first target frame as the current actual sight area.
Further, the apparatus described in this embodiment further includes:
the first judging unit is used for judging whether the current actual sight line area of the driver is the space position area before the current actual sight line area of the driver in the process of driving the vehicle is obtained;
the first obtaining unit 11 is further configured to obtain the spatial location area when the current actual line of sight area of the driver is the spatial location area.
In this embodiment, the first judging unit judges whether the current actual line-of-sight area of the driver is the spatial position area in the following manner:
judging whether a vehicle exists in a preset distance right in front of the vehicle;
judging whether the steering wheel angle of the vehicle is smaller than a preset angle threshold value or not;
judging whether the vehicle continuously outputs a brake signal in a preset time period;
judging whether the head pose angle of the driver is positioned in a preset angle range;
when a vehicle exists in the preset distance right in front of the vehicle, the steering wheel angle of the vehicle is smaller than the preset angle threshold, the vehicle continuously outputs a brake signal in the preset time period, and the head gesture angle of the driver is located in the preset angle range, the current actual sight line area of the driver is determined to be the space position area.
In this embodiment, the second obtaining unit 12 obtains the detected current line of sight landing position information of the target user in the following manner:
acquiring a plurality of detected sight falling points of the target user in a time period when the sight of the target user is positioned in the current actual sight area;
clustering the plurality of sight falling points by adopting a K-means clustering algorithm to obtain at least one sight falling point cluster; each sight line falling point cluster is provided with a cluster center point corresponding to the sight line falling point cluster;
selecting a cluster center point closest to the current actual sight area from the cluster center points as a sight drop point mark point;
and acquiring coordinates of the sight falling point mark point as the current sight falling point position information.
In this embodiment, the current line of sight drop point location information includes current line of sight drop point coordinates. Further, the apparatus described in this embodiment further includes:
the second judging unit is used for judging whether the current line-of-sight falling point coordinates are located outside the current actual line-of-sight area or not;
the calibration unit 13 is further configured to calibrate, when the current line of sight landing coordinate is located outside the current actual line of sight area, a line of sight landing position of the target user detected during the predetermined activity based on the current actual line of sight area and the current line of sight landing coordinate.
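Continuing the hypothetical frame structure sketched earlier, this out-of-area judgment reduces to a point-in-box test:

```python
def outside_frame(point, frame):
    """True when the gaze coordinate falls outside the target frame sketched above."""
    x, y = point
    return not (frame["x_min"] <= x <= frame["x_max"]
                and frame["y_min"] <= y <= frame["y_max"])
```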
In this embodiment, the current line of sight drop point location information includes a current line of sight drop point coordinate; the calibration unit 13 calibrates the gaze drop position of the target user detected during the predetermined activity in the following manner:
acquiring the center point coordinates of the current actual sight area;
calculating a deviation value between the current line-of-sight falling point coordinate and the center point coordinate;
and calibrating the sight falling point position of the target user detected in the process of the preset activity based on the deviation value.
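A sketch of this single-observation calibration follows. Subtracting the measured offset from subsequently detected points is one plausible reading of "calibrating based on the deviation value", which the embodiment otherwise leaves open; the helper names are hypothetical.

```python
import numpy as np

def frame_center(frame):
    """Centre point of the hypothetical target frame sketched earlier."""
    return np.array([(frame["x_min"] + frame["x_max"]) / 2.0,
                     (frame["y_min"] + frame["y_max"]) / 2.0])

def deviation_value(gaze_point, frame):
    """Offset of the detected gaze drop point from the frame's centre point."""
    return np.asarray(gaze_point, dtype=float) - frame_center(frame)

def apply_calibration(raw_gaze_point, offset):
    """Correct a later detection by subtracting the measured offset (assumption)."""
    return np.asarray(raw_gaze_point, dtype=float) - np.asarray(offset, dtype=float)
```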
Further, the second obtaining unit 12 is further configured to generate, during a next time period in which the driver's line of sight is located in the spatial position area occupied by the vehicle directly in front of the host vehicle, a second target frame representing that spatial position area, and to obtain the detected next line of sight landing position information of the driver;
the calibration unit 13 also calibrates the gaze drop point position of the target user detected during the predetermined activity in the following manner:
and calibrating, based on the first target frame, the second target frame, the current sight-line drop point position information and the next sight-line drop point position information, the sight-line drop point position of the driver detected during the driver's driving of the host vehicle.
In this embodiment, the current line of sight drop point location information includes a current line of sight drop point coordinate; the next sight falling point position information comprises next sight falling point coordinates; the calibration unit 13 comprises:
the first coordinate acquisition unit is used for acquiring the center point coordinate of the first target frame;
a calculating unit, configured to calculate, as a first deviation value, a deviation value between the current line of sight falling point coordinate and a center point coordinate of the first target frame;
the second coordinate acquisition unit is used for acquiring the center point coordinate of the second target frame;
the calculating unit is further used for calculating a deviation value between the next line of sight falling point coordinate and the center point coordinate of the second target frame as a second deviation value;
the calculating unit is further configured to calculate an average value of the first deviation value and the second deviation value as an average deviation value;
and a calibration subunit, configured to calibrate, based on the average deviation value, the sight falling point position of the driver detected during the driver's driving of the host vehicle.
Further, the calculating unit is further configured to calculate an absolute value of a difference value between the first deviation value and the second deviation value;
The calibration unit 13 further comprises:
a third judging unit, configured to judge whether an absolute value of the difference value is smaller than a preset deviation threshold;
the calculating unit is further configured to calculate, when the absolute value of the difference value is smaller than the preset deviation threshold, an average value of the first deviation value and the second deviation value as an average deviation value.
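Putting the two-observation logic together, the following sketch assumes the deviation values are two-dimensional vectors (as produced by the earlier `deviation_value` helper) and uses an illustrative stand-in for the preset deviation threshold:

```python
import numpy as np

def averaged_offset(dev_first, dev_second, deviation_threshold=0.05):
    """Average the two deviation values only when they are mutually consistent."""
    d1 = np.asarray(dev_first, dtype=float)
    d2 = np.asarray(dev_second, dtype=float)
    if np.all(np.abs(d1 - d2) < deviation_threshold):
        return (d1 + d2) / 2.0   # average deviation value used for calibration
    return None                  # observations disagree; skip calibration this round
```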
In this embodiment, the current line of sight drop point location information includes a current line of sight drop point coordinate; the second obtaining unit 12 is further configured to generate, for each time period in which the driver's line of sight is located in the spatial position area occupied by the vehicle directly in front of the host vehicle, a target frame corresponding to that time period and representing that spatial position area, so as to obtain a plurality of target frames, and to obtain the detected line of sight drop point coordinates corresponding to each time period, so as to obtain a plurality of line of sight drop point coordinates;
the calibration unit 13 is further configured to calibrate, based on the plurality of target frames and the plurality of line of sight drop point coordinates, the line of sight landing position of the driver detected within a preset starting time period during the driver's next drive of the host vehicle.
In this embodiment, the calibration unit 13 calibrates the line of sight landing position of the driver detected within the preset starting time period during the driver's next drive of the host vehicle in the following manner:
calculating the deviation between each line of sight falling point coordinate and the corresponding target frame to obtain a plurality of third deviation values;
calculating absolute values of differences between every two of the plurality of third deviation values;
judging whether the absolute values of all the difference values are smaller than a preset deviation threshold value or not;
when the absolute values of all the difference values are smaller than the preset deviation threshold value, calculating an average value of the plurality of third deviation values;
and calibrating, based on the average value of the plurality of third deviation values, the line-of-sight falling point position of the driver detected within the preset starting time period during the driver's next drive of the host vehicle.
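A sketch of this multi-period variant follows, pairing each time period's target-frame centre (obtainable with the earlier `frame_center` helper) with its detected gaze coordinate; the pairwise consistency test and the illustrative threshold mirror the assumptions above.

```python
import itertools
import numpy as np

def startup_offset(frame_centers, gaze_coords, deviation_threshold=0.05):
    """Average per-period deviations when every pair is mutually consistent."""
    devs = [np.asarray(g, dtype=float) - np.asarray(c, dtype=float)
            for g, c in zip(gaze_coords, frame_centers)]
    for d1, d2 in itertools.combinations(devs, 2):
        if np.any(np.abs(d1 - d2) >= deviation_threshold):
            return None  # unstable across periods; do not carry into the next drive
    return np.mean(devs, axis=0)   # offset applied in the next drive's start period
```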
In some embodiments, one or more of the first obtaining unit 11, the second obtaining unit 12, and the calibration unit 13 may be combined into one module. In one embodiment, for a description of their specific implemented functions, reference may be made to steps S101-S103.
The above sight line calibration apparatus is used to execute the sight line calibration method embodiment shown in fig. 1; the two are similar in technical principle, in the technical problems solved and in the technical effects produced. Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working process of the sight line calibration apparatus and the related description may refer to the description of the sight line calibration method embodiment, and are not repeated here.
The apparatus in the embodiment of the invention may be a control device formed of various electronic devices. In some possible implementations, the apparatus may include a plurality of storage devices and a plurality of processors. The program for executing the sight line calibration method of the above method embodiment may be divided into a plurality of subprograms, each of which may be loaded and executed by a processor to perform different steps of the method. Specifically, each subprogram may be stored in a different storage device, and each processor may be configured to execute the programs in one or more storage devices, so that the processors jointly implement the sight line calibration method of the above method embodiment.
The plurality of processors may be disposed on the same device; for example, the apparatus may be a high-performance device, and the plurality of processors may be processors configured on that high-performance device. Alternatively, the plurality of processors may be disposed on different devices; for example, the apparatus may be a server cluster, and the plurality of processors may be processors on different servers in the cluster.
It will be appreciated by those skilled in the art that the present invention may implement all or part of the methods of the above embodiments by means of a computer program instructing relevant hardware. The computer program may be stored in a computer readable storage medium, and when executed by a processor, the computer program may implement the steps of the above method embodiments. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable storage medium may include any entity or device capable of carrying the computer program code, such as a medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory, a random access memory, an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content included in the computer readable storage medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, the computer readable storage medium does not include electrical carrier signals and telecommunications signals.
Further, the invention also provides a control device. In one control device embodiment according to the present invention, the control device includes a processor and a storage device; the storage device may be configured to store a program for performing the sight line calibration method of the above method embodiment, and the processor may be configured to execute the program in the storage device, including but not limited to the program for performing the sight line calibration method of the above method embodiment. For convenience of explanation, only those portions relevant to the embodiments of the present invention are shown; for specific technical details that are not disclosed, please refer to the method portions of the embodiments of the present invention. The control device may be a control device formed of various electronic devices.
Further, the invention also provides a computer readable storage medium. In one computer readable storage medium embodiment according to the present invention, the computer readable storage medium may be configured to store a program that performs the sight line calibration method of the above method embodiment, and the program may be loaded and executed by a processor to implement the sight line calibration method described above. For convenience of explanation, only those portions relevant to the embodiments of the present invention are shown; for specific technical details that are not disclosed, please refer to the method portions of the embodiments of the present invention. The computer readable storage medium may be a storage device formed of various electronic devices; optionally, the computer readable storage medium in the embodiments of the present invention is a non-transitory computer readable storage medium.
Further, it should be understood that, since the respective modules are provided merely to illustrate the functional units of the apparatus of the present invention, the physical devices corresponding to these modules may be the processor itself, or a part of software, a part of hardware, or a part of a combination of software and hardware in the processor. Accordingly, the number of individual modules in the figures is merely illustrative.
Those skilled in the art will appreciate that the various modules in the apparatus may be adaptively split or combined. Such splitting or combining of specific modules does not cause the technical solution to deviate from the principle of the present invention, and therefore, the technical solution after splitting or combining falls within the protection scope of the present invention.
Thus far, the technical solution of the present invention has been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of protection of the present invention is not limited to these specific embodiments. Equivalent modifications and substitutions for related technical features may be made by those skilled in the art without departing from the principles of the present invention, and such modifications and substitutions will fall within the scope of the present invention.

Claims (15)

1. A line-of-sight calibration method, the method comprising:
acquiring a current actual sight area of a target user in the process of executing a preset activity;
acquiring the detected current sight falling point position information of the target user;
and calibrating the sight-line falling point position of the target user detected in the preset activity process based on the current actual sight-line area and the current sight-line falling point position information.
2. The gaze calibration method of claim 1, wherein the obtaining a current actual gaze area of the target user during performance of the predetermined activity comprises:
acquiring a current actual sight line area of a driver during driving of a host vehicle.
3. The gaze calibration method of claim 2, wherein the current actual gaze area comprises: a spatial position area occupied by a target vehicle currently located directly in front of the host vehicle; and the obtaining the current actual sight line area of the driver during driving of the host vehicle comprises:
acquiring size information, shape information and current position information of the target vehicle;
determining a plurality of corner points of the target vehicle based on the size information, the shape information, and the current position information;
and generating, based on the plurality of corner points, a first target frame representing the spatial position area where the target vehicle is located, and taking the first target frame as the current actual sight line area.
4. A gaze calibration method according to claim 3, wherein prior to said obtaining a current actual gaze area of the driver during driving of the host vehicle, the method further comprises:
judging whether the current actual sight line area of the driver is the spatial position area;
the obtaining the current actual sight line area of the driver during driving of the host vehicle comprises:
and when the current actual sight line area of the driver is the spatial position area, acquiring the spatial position area.
5. The sight line calibration method according to claim 4, wherein the determining whether the current actual sight line area of the driver is the spatial position area includes:
judging whether a vehicle exists within a preset distance directly in front of the host vehicle;
judging whether the steering wheel angle of the host vehicle is smaller than a preset angle threshold;
judging whether the host vehicle continuously outputs a brake signal within a preset time period;
judging whether the head pose angle of the driver is within a preset angle range;
and determining that the current actual sight line area of the driver is the spatial position area when a vehicle exists within the preset distance directly in front of the host vehicle, the steering wheel angle of the host vehicle is smaller than the preset angle threshold, the host vehicle continuously outputs a brake signal within the preset time period, and the head pose angle of the driver is within the preset angle range.
6. The gaze calibration method of claim 1, wherein the obtaining detected current gaze point location information of the target user comprises:
acquiring a plurality of detected sight falling points of the target user in a time period when the sight of the target user is positioned in the current actual sight area;
clustering the plurality of sight falling points by adopting a K-means clustering algorithm to obtain at least one sight falling point cluster; each sight line falling point cluster is provided with a cluster center point corresponding to the sight line falling point cluster;
selecting, from the cluster center points, the cluster center point closest to the current actual sight line area as a sight falling point mark point;
and acquiring coordinates of the sight falling point mark point as the current sight falling point position information.
7. The gaze calibration method of claim 1, wherein the current gaze point location information comprises current gaze point coordinates; the method further comprises the steps of:
judging whether the current sight falling point coordinates are located outside the current actual sight line area;
the calibrating the sight-line drop point position of the target user detected in the preset activity process based on the current actual sight-line area and the current sight-line drop point position information comprises the following steps:
and when the current sight-line falling point coordinate is positioned outside the current actual sight-line area, calibrating the sight-line falling point position of the target user detected in the preset activity process based on the current actual sight-line area and the current sight-line falling point coordinate.
8. The gaze calibration method of claim 1, wherein the current gaze point location information comprises current gaze point coordinates; the calibrating the sight-line drop point position of the target user detected in the preset activity process based on the current actual sight-line area and the current sight-line drop point position information comprises the following steps:
acquiring the center point coordinates of the current actual sight area;
calculating a deviation value between the current line-of-sight falling point coordinate and the center point coordinate;
and calibrating the sight falling point position of the target user detected in the process of the preset activity based on the deviation value.
9. A gaze calibration method according to claim 3, wherein the method further comprises:
generating, during a next time period in which the driver's sight line is located in the spatial position area occupied by the vehicle directly in front of the host vehicle, a second target frame representing that spatial position area, and acquiring the detected next sight line falling point position information of the driver;
and/or, the calibrating the sight-line drop point position of the target user detected in the preset activity process based on the current actual sight-line area and the current sight-line drop point position information includes:
and calibrating, based on the first target frame, the second target frame, the current sight-line drop point position information and the next sight-line drop point position information, the sight-line drop point position of the driver detected during the driver's driving of the host vehicle.
10. The gaze calibration method of claim 9, wherein the current gaze point location information comprises current gaze point coordinates; the next sight falling point position information comprises next sight falling point coordinates; and the calibrating, based on the first target frame, the second target frame, the current sight-line drop point position information and the next sight-line drop point position information, the sight-line drop point position of the driver detected during the driver's driving of the host vehicle comprises:
acquiring the center point coordinates of the first target frame;
calculating a deviation value between the current line-of-sight falling point coordinate and the central point coordinate of the first target frame as a first deviation value;
acquiring the center point coordinates of the second target frame;
calculating a deviation value between the next sight falling point coordinate and the center point coordinate of the second target frame as a second deviation value;
calculating an average value of the first deviation value and the second deviation value as an average deviation value;
and calibrating, based on the average deviation value, the sight line landing position of the driver detected during the driver's driving of the host vehicle.
11. The sight line calibration method according to claim 10, wherein the calibrating, based on the first target frame, the second target frame, the current sight line landing position information, and the next sight line landing position information, the sight line landing position of the driver detected during the driver's driving of the host vehicle further comprises:
calculating an absolute value of a difference between the first deviation value and the second deviation value;
judging whether the absolute value of the difference value is smaller than a preset deviation threshold value or not;
The calculating the average value of the first deviation value and the second deviation value as an average deviation value includes:
and when the absolute value of the difference value is smaller than the preset deviation threshold value, calculating the average value of the first deviation value and the second deviation value as an average deviation value.
12. A gaze calibration method according to claim 3, wherein the current gaze point location information comprises current gaze point coordinates; the method further comprises:
generating, for each time period in which the driver's sight line is located in the spatial position area occupied by the vehicle directly in front of the host vehicle, a target frame corresponding to that time period and representing that spatial position area, so as to obtain a plurality of target frames, and obtaining the detected line-of-sight falling point coordinates corresponding to each time period, so as to obtain a plurality of line-of-sight falling point coordinates;
and calibrating, based on the plurality of target frames and the plurality of line-of-sight falling point coordinates, the line-of-sight falling point position of the driver detected within a preset starting time period during the driver's next drive of the host vehicle.
13. The sight line calibration method according to claim 12, wherein the calibrating, based on the plurality of target frames and the plurality of line-of-sight falling point coordinates, the line-of-sight falling point position of the driver detected within the preset starting time period during the driver's next drive of the host vehicle comprises:
calculating the deviation between each line of sight falling point coordinate and the corresponding target frame to obtain a plurality of third deviation values;
calculating absolute values of differences between every two of the plurality of third deviation values;
judging whether the absolute values of all the difference values are smaller than a preset deviation threshold value or not;
when the absolute values of all the difference values are smaller than the preset deviation threshold value, calculating an average value of the plurality of third deviation values;
and calibrating, based on the average value of the plurality of third deviation values, the line-of-sight falling point position of the driver detected within the preset starting time period during the driver's next drive of the host vehicle.
14. A control device comprising a processor and a storage device, the storage device being adapted to store a plurality of program codes, characterized in that the program codes are adapted to be loaded and executed by the processor to perform the gaze calibration method of any one of claims 1 to 13.
15. A computer readable storage medium having stored therein a plurality of program codes, characterized in that the program codes are adapted to be loaded and executed by a processor to perform the gaze calibration method of any one of claims 1 to 13.
CN202311441032.0A 2023-11-01 2023-11-01 Sight line calibration method, control device and storage medium Pending CN117152411A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311441032.0A CN117152411A (en) 2023-11-01 2023-11-01 Sight line calibration method, control device and storage medium

Publications (1)

Publication Number Publication Date
CN117152411A true CN117152411A (en) 2023-12-01

Family

ID=88899300

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009015533A (en) * 2007-07-03 2009-01-22 Toyota Motor Corp Gaze direction detecting device
CN107407977A (en) * 2015-03-05 2017-11-28 索尼公司 Message processing device, control method and program
CN112541632A (en) * 2020-12-15 2021-03-23 江苏大学 Driving behavior safety evaluation method based on multi-attribute decision
CN115188048A (en) * 2022-07-07 2022-10-14 Oppo广东移动通信有限公司 Sight line correction method and device, electronic equipment and storage medium
CN116185199A (en) * 2023-02-27 2023-05-30 杭州海康汽车软件有限公司 Method, device and system for determining gaze point and intelligent vehicle
CN116486386A (en) * 2023-04-24 2023-07-25 上海临港绝影智能科技有限公司 Sight line distraction range determination method and device
CN116797652A (en) * 2023-06-14 2023-09-22 杭州海康威视数字技术股份有限公司 Sight line calibration method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination