CN116185199A - Method, device and system for determining gaze point and intelligent vehicle - Google Patents


Info

Publication number
CN116185199A
Authority
CN
China
Prior art keywords
cabin
gaze point
personnel
gaze
wake
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310193173.9A
Other languages
Chinese (zh)
Inventor
何航平
邝宏武
李兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Haikang Auto Software Co ltd
Original Assignee
Hangzhou Haikang Auto Software Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Haikang Auto Software Co ltd filed Critical Hangzhou Haikang Auto Software Co ltd
Priority to CN202310193173.9A priority Critical patent/CN116185199A/en
Publication of CN116185199A publication Critical patent/CN116185199A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Multimedia (AREA)
  • Navigation (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

Embodiments of the present application provide a method, a device and a system for determining a gaze point, and an intelligent vehicle. The method includes: when a gaze point calibration condition is met, acquiring, as a first position, the gaze point position of the current in-cabin person gazing at the intelligent cockpit; calculating the difference between the first position and a pre-acquired second position to obtain a gaze point deviation, where the second position is a cluster position determined in advance based on the gaze point positions of in-cabin test persons gazing at the intelligent cockpit; and, when a third position of the current in-cabin person gazing at the wake-up area is acquired, determining the calibrated gaze point position based on the third position and the gaze point deviation. With this method, the gaze point deviation in the gaze wake-up area can be compensated and individual line-of-sight differences corrected, thereby reducing inaccurate interactions, such as screen wake-up, caused by gaze point deviation.

Description

Method, device and system for determining gaze point and intelligent vehicle
Technical Field
The present application relates to the technical field of line-of-sight calibration, and in particular to a method, a device and a system for determining a gaze point, and an intelligent vehicle.
Background
Line-of-sight interaction technology performs human-machine interaction by capturing the position of the human eye's gaze point, and is widely applied. For example, when a driver cannot touch the car navigation screen while driving, the driver may instead gaze at the screen to perform an operation such as waking it up.
However, individual differences in line of sight between people introduce gaze point deviation: the screen may fail to wake in time when the user intends to wake it, or may wake when the user's gaze falls on it unintentionally. Current gaze tracking therefore suffers from inaccurate interactions, such as screen wake-up, caused by gaze point deviation.
Disclosure of Invention
An object of the embodiments of the present application is to provide a method, an apparatus, a system, and an intelligent vehicle for determining a gaze point, so as to reduce inaccurate interactions, such as screen wake-up, caused by gaze point deviation. The specific technical solution is as follows:
in a first aspect, an embodiment of the present application provides a method for determining a gaze point, where the method includes:
when the gaze point calibration condition is met, acquiring, as a first position, the gaze point position of the current in-cabin person gazing at the intelligent cockpit;
calculating the difference between the first position and a pre-acquired second position to obtain a gaze point deviation, wherein the second position is a cluster position determined in advance based on the gaze point positions of a plurality of in-cabin test persons gazing at the intelligent cockpit;
and, when a third position of the current in-cabin person gazing at the wake-up area is acquired, determining the calibrated gaze point position based on the third position and the gaze point deviation.
Optionally, before the step of acquiring the gaze point position of the current in-cabin person gazing at the intelligent cockpit, the method further comprises:
determining that the gaze point calibration condition is met if at least one of the following conditions is satisfied:
the speed of the vehicle is not less than a preset speed, and the steering wheel angle is not greater than a preset angle;
the gaze point corresponding to the vision of the personnel in the current cabin is concentrated in a preset range;
the vehicle is in a safe state.
Optionally, before the step of acquiring the gaze point position of the current in-cabin person gazing at the intelligent cockpit, the method further comprises:
acquiring, as test positions, the gaze point positions of a plurality of in-cabin test persons gazing at the intelligent cockpit;
and clustering the test positions to obtain a cluster center position as the second position.
Optionally, the step of acquiring, as the first position, the gaze point position of the current in-cabin person gazing at the intelligent cockpit comprises:
acquiring the gaze point positions of the current in-cabin person when that person has gazed at the intelligent cockpit for a preset duration;
and clustering the gaze point positions of the current in-cabin person to obtain a cluster center position as the first position.
Optionally, the wake-up area is a preset bounded plane in the coordinate system of a camera; the method further comprises:
capturing the eye gaze of the current in-cabin person through the camera;
and, when the eye gaze line intersects the preset bounded plane, determining the intersection point of the gaze line and the plane as the third position at which the current in-cabin person gazes at the wake-up area;
the step of determining the calibrated gaze point position based on the third position and the gaze point deviation comprises:
calculating the sum of the third position and the gaze point deviation to determine the calibrated gaze point position.
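The gaze-line/plane intersection described in this optional step can be sketched as follows. This is an illustrative implementation, not the patent's own code: the function names, the plane representation (a point plus a normal in the camera coordinate system), and all coordinates are assumptions.

```python
# Illustrative sketch: find where the eye-gaze ray meets the wake-up plane.
# The patent only specifies that the third position is the intersection of
# the gaze line with a preset bounded plane; everything else is hypothetical.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Intersection of the gaze ray with the plane, or None if the gaze is
    parallel to the plane or the plane lies behind the eye."""
    denom = dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None  # gaze line parallel to the plane: no third position
    diff = tuple(p - o for p, o in zip(plane_point, origin))
    t = dot(diff, plane_normal) / denom
    if t < 0:
        return None  # plane is behind the eye
    return tuple(o + t * d for o, d in zip(origin, direction))

# Eye at the camera origin, gazing along +z toward a plane 2 m away.
eye = (0.0, 0.0, 0.0)
gaze = (0.0, 0.0, 1.0)
hit = ray_plane_intersection(eye, gaze, (0.0, 0.0, 2.0), (0.0, 0.0, 1.0))
print(hit)  # (0.0, 0.0, 2.0)
```

Because the plane is bounded, a production system would additionally check that the intersection point lies within the wake-up rectangle before treating it as the third position.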
Optionally, the method further comprises:
and executing the wake-up action of the wake-up area when the calibrated gaze point position falls within the wake-up area.
Optionally, the step of performing the wake-up action of the wake-up area includes:
and waking up a device corresponding to the wake-up area, wherein the device comprises at least one of a vehicle-mounted screen, an audio device, an air conditioner, a windscreen wiper and an electric heating device.
In a second aspect, an embodiment of the present application provides a gaze point determining apparatus, the apparatus including a processor, wherein:
the processor is configured to: when the gaze point calibration condition is met, acquire, as a first position, the gaze point position of the current in-cabin person gazing at the intelligent cockpit; calculate the difference between the first position and a pre-acquired second position to obtain a gaze point deviation, wherein the second position is a cluster position determined in advance based on the gaze point positions of a plurality of in-cabin test persons gazing at the intelligent cockpit; and, when a third position of the current in-cabin person gazing at the wake-up area is acquired, determine the calibrated gaze point position based on the third position and the gaze point deviation.
In a third aspect, embodiments of the present application provide a gaze point determination system, the system including a processor and a camera, wherein:
the camera is configured to capture the eye gaze of an in-cabin person gazing at the intelligent cockpit;
the processor is configured to: when the gaze point calibration condition is met, determine, based on the eye gaze and as a first position, the gaze point position of the in-cabin person gazing at the intelligent cockpit; calculate the difference between the first position and a pre-acquired second position to obtain a gaze point deviation, wherein the second position is a cluster position determined in advance based on the gaze point positions of a plurality of in-cabin test persons gazing at the intelligent cockpit; and, when a third position of the current in-cabin person gazing at the wake-up area is acquired, determine the calibrated gaze point position based on the third position and the gaze point deviation.
In a fourth aspect, embodiments of the present application provide an intelligent vehicle comprising an intelligent cabin comprising a processor, wherein:
the processor is configured to implement the method according to any one of the first aspect when executing the computer program.
The beneficial effects of the embodiment of the application are that:
In the solution provided by the embodiments of the present application, when the gaze point calibration condition is met, the electronic device can acquire, as a first position, the gaze point position of the current in-cabin person gazing at the intelligent cockpit, and calculate the difference between the first position and a pre-acquired second position to obtain the gaze point deviation, where the second position is a cluster position determined in advance based on the gaze point positions of in-cabin test persons gazing at the intelligent cockpit; when a third position of the current in-cabin person gazing at the wake-up area is acquired, the calibrated gaze point position is determined based on the third position and the gaze point deviation. Because the second position is a cluster of the gaze point positions of many test persons gazing at the intelligent cockpit, the difference between it and the current person's gaze point position can be calculated to obtain the gaze point deviation, and the third position at which the current person gazes at the wake-up area can then be calibrated against that deviation. This compensates the gaze point deviation in the gaze wake-up area, corrects individual line-of-sight differences, and thereby reduces inaccurate interactions, such as screen wake-up, caused by gaze point deviation. Of course, not all of the above advantages need be achieved simultaneously by any one product or method practicing the present application.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the present application, and that those skilled in the art may derive other embodiments from them.
Fig. 1 is a flowchart of a method for determining a gaze point according to an embodiment of the present application;
FIG. 2 is a schematic view of a gaze point location distribution based on the embodiment shown in FIG. 1;
FIG. 3 is a flow chart for acquiring a second location based on the embodiment shown in FIG. 1;
FIG. 4 is a specific flowchart of step S101 in the embodiment shown in FIG. 1;
FIG. 5 is a flow chart of determining a third position based on the embodiment of FIG. 1;
FIG. 6 is a schematic diagram of gaze wake-up provided in an embodiment of the present application;
FIG. 7 is a flow chart of determining gaze point calibration conditions based on the embodiment of FIG. 1;
fig. 8 is a schematic structural diagram of a gaze point determining device according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of another gaze point determining device provided in an embodiment of the present application;
Fig. 10 is a schematic structural diagram of a fixation point determining system according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of an intelligent vehicle with a gaze point according to an embodiment of the present application.
Detailed Description
The embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. It is evident that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein fall within the scope of protection of the present application.
In order to reduce the problem of inaccurate interaction operations such as screen awakening caused by gaze point deviation, the embodiment of the application provides a gaze point determining method, device and system, an intelligent vehicle, a computer readable storage medium and a computer program product. The following first describes a method for determining a gaze point provided in an embodiment of the present application.
The method for determining the gaze point provided in the embodiment of the present application may be applied to any electronic device that needs to determine the gaze point, for example, a processor of an intelligent cabin in an intelligent vehicle, etc., which is not specifically limited herein, and is hereinafter referred to as an electronic device for clarity of description.
As shown in fig. 1, the method for determining a gaze point comprises:
S101, when the gaze point calibration condition is met, acquiring, as a first position, the gaze point position of the current in-cabin person gazing at the intelligent cockpit;
S102, calculating the difference between the first position and a pre-acquired second position to obtain a gaze point deviation;
wherein the second position is a cluster position determined in advance based on the gaze point positions of a plurality of in-cabin test persons gazing at the intelligent cockpit;
S103, when a third position of the current in-cabin person gazing at the wake-up area is acquired, determining the calibrated gaze point position based on the third position and the gaze point deviation.
It can be seen that, in the solution provided in the embodiments of the present application, the electronic device may acquire, as a first position, the gaze point position of the current in-cabin person gazing at the intelligent cockpit, and calculate the difference between the first position and a pre-acquired second position to obtain the gaze point deviation, where the second position is a cluster position determined in advance based on the gaze point positions of a plurality of in-cabin test persons gazing at the intelligent cockpit; when a third position of the current in-cabin person gazing at the wake-up area is acquired, the calibrated gaze point position is determined based on the third position and the gaze point deviation. Because the second position is a cluster of the gaze point positions of many test persons, the gaze point deviation of the current person can be computed, and the third position at which that person gazes at the wake-up area can be calibrated against it. This compensates the gaze point deviation in the gaze wake-up area, corrects individual line-of-sight differences, and thereby reduces inaccurate interactions, such as screen wake-up, caused by gaze point deviation.
Smart devices have become an important part of users' lives, and waking a smart device by gazing at its screen offers much convenience. For example, an intelligent cockpit is installed in an intelligent vehicle, and when an in-cabin person cannot touch the vehicle-mounted screen while driving, that person can wake the screen by gazing at its wake-up area.
However, line-of-sight deviation differs between users: some users' screens may fail to wake in time even though they are gazing at the wake-up area, while other users' line of sight may fall on the wake-up area unintentionally and wake the screen repeatedly. To reduce such inaccurate interactions, such as screen wake-up, caused by gaze point deviation, the present application provides a gaze point determining method.
While driving, an in-cabin person's line of sight may fall straight ahead of the intelligent cockpit or at similar positions, whereas when waking the vehicle-mounted screen by gaze it falls in the screen's wake-up area. The physical deviation of the gaze point position is the same whether the line of sight falls ahead of the cockpit or in the wake-up area. Therefore, the gaze point position when the in-cabin person gazes at the wake-up area can be calibrated based on the gaze point deviation measured while that person gazes at the intelligent cockpit during driving.
When an in-cabin person concentrates on driving or focuses on a certain area of the intelligent cockpit, the gaze landing points are concentrated. A gaze point calibration condition can therefore be preset around such focused states, and when that condition is met the electronic device can acquire, as the first position, the gaze point position of the current in-cabin person gazing at the intelligent cockpit, i.e., execute step S101.
When the gaze point calibration condition is met, the intelligent vehicle can start adaptive calibration of the gaze point position of the in-cabin person.
In one embodiment, if the in-cabin person drives slowly, concentration may drop and the person's line of sight moves more frequently, making it difficult to determine the gaze point position when gazing at the intelligent cockpit, and hence to determine the gaze point deviation accurately. Conversely, when the vehicle speed and the steering wheel angle satisfy a first preset condition, the gaze landing points are concentrated and the gaze point position can be calibrated adaptively, so the electronic device can acquire the gaze point position of the current in-cabin person gazing at the intelligent cockpit. For example, the first preset condition may be that the vehicle speed is not less than a preset speed and the steering wheel angle is not greater than a preset angle.
For example, suppose the preset gaze point calibration condition is a vehicle speed of at least 80 km/h and a steering wheel angle of at most 15 degrees. If the in-cabin person is driving at 90 km/h with a steering wheel angle of 10 degrees, the calibration condition is met, i.e., the condition for the intelligent vehicle to start adaptive calibration is satisfied, and the electronic device can acquire, as the first position, the gaze point position of the current in-cabin person gazing at the intelligent cockpit.
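The speed-and-steering-angle gate in the example above can be sketched as follows; the function name and threshold constants are illustrative, not taken from the patent (the thresholds simply reuse the 80 km/h and 15-degree values of the example).

```python
# Hypothetical sketch of the gaze-calibration gate: adaptive calibration is
# started only when the car is moving fast enough, and steering little
# enough, that the driver's gaze is likely concentrated straight ahead.

MIN_SPEED_KMH = 80.0     # "preset speed" from the example above
MAX_STEERING_DEG = 15.0  # "preset angle" from the example above

def calibration_condition_met(speed_kmh: float, steering_deg: float) -> bool:
    return speed_kmh >= MIN_SPEED_KMH and abs(steering_deg) <= MAX_STEERING_DEG

print(calibration_condition_met(90.0, 10.0))  # True: the example in the text
print(calibration_condition_met(50.0, 10.0))  # False: too slow to calibrate
```

In practice this check would run on each sample of the vehicle's speed and steering signals, and calibration would start only once the condition has held continuously for some time.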
In another embodiment, if the vehicle is in a safe state, for example waiting at a red light, the in-cabin person can gaze at a certain position of the intelligent cockpit to calibrate the gaze point without much effect on driving safety. That is, the electronic device may acquire the gaze point position of the current in-cabin person gazing at the intelligent cockpit while the vehicle is in a safe state.
In another embodiment, when the gaze points corresponding to the current in-cabin person's line of sight are concentrated within a preset range, the gaze point position when gazing at the intelligent cockpit can be determined accurately, and hence the gaze point deviation as well. It can therefore be determined that the gaze point calibration condition is met when the current person's gaze points are concentrated within the preset range.
For example, sampling and analysis of actual driving data shows that, across different road types, an in-cabin person's line of sight falls straight ahead of the intelligent cockpit most of the time. The preset range can therefore be a region of preset size straight ahead of the cockpit: the electronic device acquires the gaze point positions of the current in-cabin person gazing straight ahead, determines that the calibration condition is met when those positions are concentrated within the preset range, and then calibrates the person's gaze point position in the wake-up area based on the gaze point deviation measured while gazing straight ahead.
Illustratively, as shown in fig. 2, the virtual plane of the vehicle cabin contains a straight-ahead region and a gaze wake-up region. The solid oval region 201 represents the straight-ahead theoretical region, and the dotted oval region 202 the straight-ahead actual region, which may be the preset range mentioned above. The solid rectangular region 203 represents the gaze wake-up theoretical region, and the dotted rectangular region 204 the gaze wake-up actual region. The two theoretical regions are fixed and represent true, i.e., reference, values. The straight-ahead actual region is the distribution of the current in-cabin person's gaze points when gazing at the region ahead of the cabin, and the gaze wake-up actual region is the distribution of that person's gaze points when gazing at the wake-up area; both differ between in-cabin persons, reflecting individual line-of-sight differences.
For example, a plane rectangular coordinate system is established in the virtual plane of the vehicle cabin, with x the abscissa and y the ordinate. A theoretical region center point 205 exists in the straight-ahead theoretical region, with coordinates (x0, y0). The electronic device determines that the gaze point position of the current in-cabin person gazing straight ahead of the cabin lies at the actual region center point 206 in the straight-ahead actual region, with coordinates (x1, y1). The difference between the actual region center point 206 and the theoretical region center point 205 is then the line-of-sight deviation of the current in-cabin person, i.e., the gaze point deviation. The straight-ahead actual region can then be translated onto the straight-ahead theoretical region based on the gaze point deviation, correcting the individual line-of-sight difference.
Likewise, a theoretical region center point 207 exists in the gaze wake-up theoretical region, with coordinates (x0, y0). If the current in-cabin person gazes at the wake-up area at a position with coordinates (x1, y1), a gaze point deviation can likewise be obtained. Since the gaze point deviation of the current in-cabin person in the gaze wake-up area coincides with the deviation in the region straight ahead of the vehicle cabin, the gaze point position in the wake-up area can be calibrated based on the deviation measured when gazing straight ahead of the cabin.
Meanwhile, practical tests show that correcting individual line-of-sight differences using the gaze intersection deviation of the straight-ahead region yields essentially the same gaze wake-up performance as using the deviation of the wake-up region itself. The physical deviation of the two regions is consistent across the whole space, the only variable being the person, so the correction values solved from the two regions also agree, which satisfies the system completeness design rule.
Next, in step S102, the electronic device may calculate the difference between the first position and the pre-acquired second position to obtain the gaze point deviation.
The second position is a cluster position determined in advance based on the gaze point positions of a plurality of in-cabin test persons gazing at the intelligent cockpit. When these positions are collected, the gaze point positions of the in-cabin persons gazing at the intelligent cockpit are sampled many times across different road types, such as mountain roads, urban expressways and motorways, so that the resulting second position is more accurate.
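As a sketch of how such a cluster position might be derived, the simplest choice is the centroid of the collected samples. The patent does not specify a clustering algorithm, so the function below is an assumption; a real system might use k-means or an outlier-robust method instead.

```python
# Illustrative: reduce many (x, y) gaze samples to one cluster-centre position.

def cluster_center(points):
    """Centroid of a list of (x, y) gaze-point samples."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

# Hypothetical samples from test persons gazing at the cockpit.
samples = [(2.0, 4.0), (4.0, 2.0), (3.0, 3.0)]
print(cluster_center(samples))  # (3.0, 3.0)
```

The same reduction applies to step S101: the current person's repeated gaze samples are clustered into a single first position before the deviation is computed.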
In one embodiment, the difference between the first position and the pre-acquired second position may be calculated directly from the coordinates, yielding (Δx, Δy) as the gaze point deviation.
In another embodiment, the distance between the first position and the second position may be calculated, i.e.

d = sqrt((x1 - x0)^2 + (y1 - y0)^2),

and the direction of the first position relative to the second position then calculated, yielding the gaze point deviation.
For example, as shown in fig. 2, the first position is the gaze point position of an in-cabin person gazing straight ahead of the vehicle cabin, at position 209 in the straight-ahead actual region with coordinates (x2, y2); the second position is the theoretical region center point 205 in the straight-ahead theoretical region, with coordinates (x0, y0). The electronic device can then calculate the difference (x2 - x0, y2 - y0) to obtain (Δx, Δy), the gaze point deviation.
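Step S102 under this coordinate example can be sketched as below; the helper names and sample coordinates are illustrative, not from the patent.

```python
import math

# Illustrative sketch of step S102: the gaze point deviation is the
# componentwise difference between the first position (the current person's
# cluster centre) and the second position (the reference cluster centre).

def gaze_point_deviation(first, second):
    """(dx, dy) = first - second, i.e. actual minus reference."""
    return (first[0] - second[0], first[1] - second[1])

def deviation_distance(first, second):
    """Euclidean magnitude of the deviation."""
    dx, dy = gaze_point_deviation(first, second)
    return math.hypot(dx, dy)

first = (1.5, 0.5)   # (x2, y2): centre of the straight-ahead actual region
second = (1.0, 1.0)  # (x0, y0): centre of the straight-ahead theoretical region
print(gaze_point_deviation(first, second))  # (0.5, -0.5)
```

Whether this deviation is later added to or subtracted from the third position depends on the sign convention: with first − second as here, the correction of step S103 would subtract it (equivalently, add second − first).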
After obtaining the gaze point deviation, in step S103 the electronic device may, when the third position at which the current in-cabin person gazes at the wake-up area is acquired, determine the calibrated gaze point position based on the third position and the gaze point deviation. The third position is the gaze point position of the current in-cabin person gazing at the wake-up area.
Because the gaze point position of the in-cabin person gazing at the intelligent cockpit is a position in the actual region, the gaze point deviation can be obtained from that position and the corresponding theoretical-region position; and because this deviation coincides with the deviation in the gaze wake-up area, the calibrated gaze point position can be determined from the third position at which the person gazes at the wake-up area together with the deviation measured when gazing at the intelligent cockpit.
For example, as shown in fig. 2, the third position at which the current in-cabin person gazes at the wake-up area is the gaze point position 210 located in the actual wake-up area, with coordinates (x₃, y₃), and the gaze point deviation obtained when gazing at the intelligent cockpit is (Δx, Δy). The electronic device may then determine the calibrated gaze point position based on the third position (x₃, y₃) and the gaze point deviation (Δx, Δy).
After the gaze point position has been calibrated once during the current drive of the in-cabin person, the calibration process need not be started again; of course, several calibration processes may be started depending on the actual situation, which is not limited herein. A drive here refers to the period from unlocking of the vehicle to locking of the vehicle.
In the solution of this embodiment, since the second position is the cluster position of the gaze point positions obtained when the test in-cabin personnel gazed at the intelligent cockpit, the difference between it and the gaze point position of the current in-cabin person gazing at the intelligent cockpit can be calculated to obtain the gaze point deviation. The third position, at which the current in-cabin person gazes at the wake-up area, can then be calibrated based on this deviation to compensate for the gaze point deviation in the wake-up area. In this way, individual differences in line of sight can be corrected, and the problem of inaccurate interaction operations such as screen wake-up caused by gaze point deviation can be reduced.
In addition, the calibration technologies currently on the market are either based on a ground-truth system or on multi-camera calibration schemes; these schemes are complex to implement and give a poor user experience. The solution of the present application, based on the actual line-of-sight deviation of in-cabin personnel, simplifies the whole line-of-sight calibration process, makes the deviation calibration imperceptible to the user, solves the line-of-sight deviation calibration problem, and meets customers' application needs.
As an implementation manner of the embodiment of the present application, as shown in fig. 3, before the step of obtaining the gaze point position when the current personnel in the cabin gaze at the intelligent shelter, the method may further include:
s301, obtaining the gaze point positions of a plurality of personnel in the test cabins when the personnel gaze the intelligent cabins as test positions;
Individual differences exist among different in-cabin personnel. Therefore, in order to make the obtained gaze point positions of personnel gazing at the intelligent cockpit more accurate, the gaze point positions of a plurality of test in-cabin personnel gazing at the intelligent cockpit can be obtained and used as the test positions.
In one embodiment, the line-of-sight acquisition device may collect the gaze point positions of a preset number of different in-cabin personnel gazing at the intelligent cockpit, and the electronic device may then obtain these gaze point positions as the test positions. The preset number may be 100 or 200, which is not specifically limited herein. The height, sex, vision degree and the like of the in-cabin personnel may differ.
For example, the electronic device acquires the gaze point positions of 100 in-cabin personnel gazing directly in front of the intelligent cockpit. The heights of these 100 personnel may be distributed between 150 cm and 190 cm, both females and males may be included, and their vision degrees may differ, so individual variability can be corrected. These 100 gaze point positions can therefore be used as the test positions.
S302, clustering the test positions to obtain a clustering center position serving as a second position.
After the electronic device acquires the gaze point position of each test in-cabin person gazing at the intelligent cockpit, it can statistically analyze these gaze point positions.
In one embodiment, the electronic device may cluster the acquired test positions to obtain a cluster center position as the second position. The clustering method used may be k-means clustering, DBSCAN (Density-Based Spatial Clustering of Applications with Noise), agglomerative hierarchical clustering, and the like, which is not specifically limited herein.
In another embodiment, the electronic device may calculate an average value of the acquired test positions, and take the obtained average value as the second position.
For example, after the electronic device obtains the gaze point positions of 100 in-cabin personnel gazing directly in front of the intelligent cockpit, it may cluster these 100 gaze point positions with the k-means algorithm to obtain a cluster center position as the second position, or it may calculate the average of the 100 gaze point positions and use that as the second position.
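The averaging embodiment above can be sketched as follows (the function name and list-of-tuples layout are illustrative assumptions); note that the mean coincides with the k-means cluster center for k = 1, so it also stands in for the clustering embodiment in the single-cluster case:

```python
def second_position(test_positions):
    # Mean of the test gaze point positions collected from many test
    # in-cabin personnel; used as the pre-acquired second position.
    n = len(test_positions)
    x = sum(p[0] for p in test_positions) / n
    y = sum(p[1] for p in test_positions) / n
    return (x, y)

# Example with four test positions
center = second_position([(1.0, 2.0), (3.0, 2.0), (1.0, 4.0), (3.0, 4.0)])  # (2.0, 3.0)
```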
It can be seen that, in this embodiment, the electronic device may obtain the gaze point positions of a plurality of test in-cabin personnel gazing at the intelligent cockpit as the test positions and cluster them to obtain a cluster center position as the second position. Because the second position is derived from the gaze point positions of many test personnel, individual variability can be corrected, so the gaze point deviation subsequently calculated from the first and second positions is more accurate.
As an implementation manner of the embodiment of the present application, as shown in fig. 4, the step of obtaining, as the first position, the gaze point position when the current in-cabin person gazes at the intelligent cockpit may include:
s401, under the condition that a current person in the cabin looks at the intelligent cabin for a preset time, acquiring the position of a point of regard of the current person in the cabin;
S402, clustering the gaze point positions of the personnel in the current cabin to obtain a clustering center position serving as a first position.
When the gaze point calibration conditions are met, the electronic device can start calibrating the gaze point position of the in-cabin person, and the line-of-sight acquisition device can collect the gaze point position of the in-cabin person gazing at the intelligent cockpit. To make the obtained gaze point position more accurate and to reduce errors, the line-of-sight acquisition device may collect the in-cabin person's gaze point positions over a certain period of time, after which the electronic device statistically analyzes the collected positions to obtain an accurate gaze point position.
In one embodiment, the electronic device may obtain the gaze point positions of the current in-cabin person while that person gazes at the intelligent cockpit for a preset duration. The preset duration may be 10 s, 20 s, or 30 s, which is not specifically limited herein.
Further, after the gaze point position of the current personnel in the cabin is obtained when the personnel in the cabin gaze the intelligent cabin within the preset time, the electronic device can perform statistical analysis on the gaze point position of the personnel in the cabin.
In one embodiment, the electronic device may cluster the acquired gaze point positions of the current in-cabin person to obtain a cluster center position, which is then used as the first position. The clustering method used may be k-means clustering, the Mean-Shift clustering algorithm, and the like, which is not specifically limited herein.
In another embodiment, the electronic device may calculate an average value of the obtained gaze point positions of the current personnel in the cabin, and take the obtained average value as the first position.
For example, with a preset duration of 30 s: once the current in-cabin person has gazed at the intelligent cockpit for 30 s, the electronic device can acquire the gaze point positions collected during those 30 s, then cluster them with the k-means algorithm to obtain a cluster center position, which is used as the first position; alternatively, the average of the gaze point positions can be calculated and used as the first position.
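One possible statistical analysis of the samples collected over the preset duration is sketched below; the outlier threshold and names are illustrative assumptions, not the patent's algorithm. It averages the samples after discarding those far from the mean, which approximates taking the dominant cluster center:

```python
import math
import statistics

def first_position(samples, k=2.0):
    # Robust center of the gaze samples: compute the mean, drop samples
    # whose distance from it exceeds k standard deviations of all the
    # distances, then re-average the remaining samples.
    mx = statistics.mean(s[0] for s in samples)
    my = statistics.mean(s[1] for s in samples)
    dists = [math.hypot(s[0] - mx, s[1] - my) for s in samples]
    sigma = statistics.pstdev(dists)
    kept = [s for s, d in zip(samples, dists) if sigma == 0 or d <= k * sigma]
    return (statistics.mean(s[0] for s in kept),
            statistics.mean(s[1] for s in kept))

# Three stable straight-ahead samples plus one glance away; the outlier is discarded.
pos = first_position([(0.0, 0.0), (0.0, 0.0), (0.0, 0.0), (10.0, 10.0)])  # (0.0, 0.0)
```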
It can be seen that, in this embodiment, when the current in-cabin person gazes at the intelligent cockpit for a preset duration, the electronic device may acquire that person's gaze point positions and cluster them to obtain a cluster center position as the first position. Because the first position is the cluster center of the gaze point positions collected over the preset duration, the gaze point deviation subsequently calculated from the first and second positions is more accurate.
As shown in fig. 5, the wake-up area is a preset bounded plane in the coordinate system of the camera;
the step of determining the calibrated gaze point position based on the third position and the gaze point deviation may include:
S501, capturing the human eye line of sight of the current in-cabin person through the camera;
The process in which the current in-cabin person wakes up the screen of the intelligent device by gazing at the wake-up area uses line-of-sight tracking technology and mainly calculates the landing position of the line of sight. This can be converted into a mathematical model and is essentially the problem of finding the intersection of a straight line and a plane. The straight line is the mathematical expression of the line-of-sight direction: a line equation can be constructed from the position of the person's eyes in the camera's three-dimensional coordinate system and the direction vector of the line of sight. The gazing area can be defined as a bounded plane in the camera coordinate system; if the intersection of the line of sight with this plane falls within the preset bounded plane, the gaze wake-up succeeds.
When the current in-cabin person gazes at the wake-up area, the electronic device can capture that person's line of sight through the camera.
For example, as shown in fig. 6, the three-dimensional coordinate system 601 is the camera's three-dimensional coordinate system, the eye-axis direction 603 of the human eye 602 is shown by the solid arrow, the line-of-sight direction 604 by the dashed arrow, and the bounded plane 605 is the preset bounded plane in the camera coordinate system. From the position of the human eye 602 in the three-dimensional coordinate system 601 and the direction vector of the line of sight, a line equation of the line of sight can be constructed. Where the line of sight 604 intersects the bounded plane 605, the intersection position, i.e. the landing position of the line of sight, is obtained. To determine the landing position of the current in-cabin person's line of sight on the wake-up area, the electronic device can acquire the person's line of sight captured by the camera.
S502, determining the intersection point position of the human eye sight line and the preset bounded plane as a third position of the current in-cabin personnel gazing at the wake-up area under the condition that the human eye sight line intersects with the preset bounded plane.
After capturing the eye line of sight of the current in-cabin person, the electronic device can calculate the line-of-sight landing position to determine whether the line of sight intersects the preset bounded plane in the camera coordinate system. If it does, the intersection position of the line of sight with the preset bounded plane can be determined as the gaze point position of the current in-cabin person gazing at the wake-up area, i.e. the third position.
For example, in the example of step S501, the electronic device may capture the eye line of sight of the current personnel in the cabin through the camera, and the line of sight direction 604 intersects the bounded plane 605 of the camera, so that the intersection point 606 may be obtained, and then the intersection point 606 may be determined as the gaze point position of the current personnel in the cabin looking at the wake area, that is, as the third position.
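The line–plane intersection described above can be sketched as follows, under the simplifying assumption that the bounded plane is axis-aligned at depth z = plane_z in the camera coordinate system (the text does not fix the plane's orientation; an arbitrary plane would use a point-normal form instead):

```python
def gaze_landing_point(eye, direction, plane_z, x_range, y_range):
    # Line of sight: p(t) = eye + t * direction, t > 0, camera coordinates.
    # Solve eye_z + t * dir_z = plane_z, then check the bounded region.
    ex, ey, ez = eye
    dx, dy, dz = direction
    if dz == 0:
        return None  # line of sight parallel to the plane: no intersection
    t = (plane_z - ez) / dz
    if t <= 0:
        return None  # the plane lies behind the eye
    x, y = ex + t * dx, ey + t * dy
    if x_range[0] <= x <= x_range[1] and y_range[0] <= y <= y_range[1]:
        return (x, y)  # third position: gaze landing point on the bounded plane
    return None

# Eye at the origin looking mostly along +z toward a plane 2 m away
hit = gaze_landing_point((0.0, 0.0, 0.0), (0.1, 0.2, 1.0), 2.0,
                         (-1.0, 1.0), (-1.0, 1.0))  # ≈ (0.2, 0.4)
```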
The step of determining the corrected gaze point position based on the third position and the gaze point deviation may include:
and calculating the sum of the third position and the gaze point deviation, and determining the corrected gaze point position.
Because the gaze point deviation when the in-cabin person gazes at the wake-up area is consistent with the gaze point deviation when gazing at the intelligent cockpit, the calibrated gaze point position can be determined based on the third position, at which the in-cabin person gazes at the wake-up area, and the gaze point deviation obtained when gazing at the intelligent cockpit.
In one embodiment, the electronic device may sum the third position of the current personnel in the cabin looking at the wake-up area and the gaze point deviation, so that the gaze point deviation of the personnel in the cabin looking at the wake-up area may be compensated, and the calibrated gaze point position may be determined.
For example, as shown in fig. 2, the third position at which the current in-cabin person gazes at the wake-up area is the gaze point position 210 located in the actual wake-up area, with coordinates (x₃, y₃), and the obtained gaze point deviation is (Δx, Δy). The electronic device may then sum the third position and the gaze point deviation to determine the corrected gaze point position (x₃ + Δx, y₃ + Δy). If the calibrated gaze point position falls within the wake-up area, the current in-cabin person's gaze wake-up succeeds.
It can be seen that, in this embodiment, the camera captures the eye line of sight of the current in-cabin person; where the line of sight intersects the preset bounded plane, the intersection position is determined as the third position, at which the current in-cabin person gazes at the wake-up area, and the sum of the third position and the gaze point deviation then gives the corrected gaze point position. Because the third position can be calibrated based on the gaze point deviation whenever the line of sight intersects the preset bounded plane, individual differences in line of sight can be corrected, and the problem of inaccurate interaction operations such as screen wake-up caused by gaze point deviation can be reduced.
As shown in fig. 7, for the case where the gaze point calibration condition is that the speed of the vehicle is not less than a preset speed and the steering wheel angle is not greater than a preset angle, before the step of obtaining the gaze point position when the current in-cabin person gazes at the intelligent cockpit, the method may further include:
s701, acquiring the speed of the vehicle and the steering wheel angle;
s702, determining that the fixation point calibration condition is met under the condition that the speed is not smaller than a preset speed and the steering wheel rotation angle is not larger than a preset angle.
While the in-cabin person is driving the vehicle, the electronic device can calibrate the person's gaze point deviation only when certain gaze point calibration conditions are met. That is, certain adaptation conditions must be satisfied before the electronic device can start the adaptive calibration of the gaze point deviation.
In order to make the conditions for starting the calibration more reasonable, multiple tests and analyses can be performed on the calibration conditions according to actual conditions. For example, driving data of driving vehicles by a preset number of in-cabin personnel may be collected, and each in-cabin personnel may drive vehicles on various roads such as mountain roads, urban expressways, and expressways.
Further, after the driving data of the preset number of in-cabin personnel are obtained, a centralized analysis of their line-of-sight landing position data shows that, for most personnel and most of the time, the landing positions lie directly in front of the intelligent cockpit and exhibit a Gaussian distribution; the faster the vehicle travels on roads that allow acceleration (such as expressways and urban expressways), the more concentrated the distribution of landing positions. Therefore, the landing positions directly in front of the intelligent cockpit can be clustered to obtain a cluster center position, the gaze point deviation can be obtained from that cluster center position and the current in-cabin person's gaze point position, and the gaze point deviation in the wake-up area can thereby be calibrated.
In one embodiment, based on the above data characteristics, the condition for starting calibration may be set so that the speed of the vehicle is not less than a preset speed and the steering wheel angle is not greater than a preset angle; that is, the gaze point calibration condition is satisfied when both requirements hold.
While the in-cabin person drives the vehicle, the electronic device can acquire the speed of the vehicle and the steering wheel angle; when the speed is not less than the preset speed and the steering wheel angle is not greater than the preset angle, it determines that the gaze point calibration condition is met, after which the gaze point position of the current in-cabin person gazing at the intelligent cockpit can be obtained.
For example, the gaze point calibration condition is set as a vehicle speed of at least 80 km/h and a steering wheel angle of at most 15 degrees. If the vehicle's speed is 100 km/h and the steering wheel angle is 12 degrees, the speed exceeds the preset speed and the angle is below the preset angle, so the gaze point calibration condition is determined to be satisfied, and the gaze point position of the current in-cabin person gazing at the intelligent cockpit can then be obtained.
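Using the example thresholds of 80 km/h and 15 degrees (the constant and function names are illustrative), the calibration condition reduces to a simple predicate:

```python
PRESET_SPEED_KMH = 80.0   # example preset speed from the text
PRESET_ANGLE_DEG = 15.0   # example preset steering wheel angle

def calibration_condition_met(speed_kmh, steering_angle_deg):
    # Calibration may start only when the vehicle is fast enough and
    # essentially driving straight, so the driver's gaze is concentrated
    # directly ahead of the cockpit.
    return (speed_kmh >= PRESET_SPEED_KMH
            and abs(steering_angle_deg) <= PRESET_ANGLE_DEG)

calibration_condition_met(100.0, 12.0)  # True: matches the example above
```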
It can be seen that, in this embodiment, the electronic device may acquire the speed of the vehicle and the steering wheel angle and determine that the gaze point calibration condition is satisfied when the speed is not less than the preset speed and the steering wheel angle is not greater than the preset angle. The gaze point position is thus determined only when the calibration condition is met, so individual differences in line of sight can be corrected and the problem of inaccurate interaction operations such as screen wake-up caused by gaze point deviation can be reduced.
As an implementation manner of the embodiment of the present application, the method may further include:
and executing the wake-up action of the wake-up area under the condition that the calibrated gaze point position is located in the wake-up area.
After calibrating the gaze point position of the current in-cabin person based on the gaze point deviation obtained when gazing at the intelligent cockpit, the electronic device can determine whether the calibrated gaze point position is located in the wake-up area. If it is, the electronic device may execute the wake-up action of the wake-up area; otherwise, it does not.
The wake-up area may be a circular area with a preset length as a radius, a rectangular area with a preset width and a preset height, or a preset shape area based on actual setting, which is not particularly limited herein.
For example, as shown in fig. 2, the wake-up area is a preset bounded plane within the vehicle screen, i.e. the bounded plane represented by the solid rectangular area 203. The gaze point position 210, at which the current in-cabin person gazes at the wake-up area, is not located in the wake-up area; it can be calibrated based on the gaze point deviation (Δx, Δy) to obtain the calibrated gaze point position. If the calibrated gaze point position is located in the wake-up area, the electronic device may execute the wake-up action of the wake-up area, i.e. wake the vehicle-mounted screen.
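The compensation-and-check step can be sketched as follows, assuming a rectangular wake-up area (one of the shapes named above; function names and the example coordinates are illustrative):

```python
def calibrated_gaze_point(third_pos, deviation):
    # Sum the third position with the gaze point deviation (dx, dy).
    return (third_pos[0] + deviation[0], third_pos[1] + deviation[1])

def in_wake_area(point, x_range, y_range):
    # Containment test for a rectangular wake-up area (bounds inclusive).
    return (x_range[0] <= point[0] <= x_range[1]
            and y_range[0] <= point[1] <= y_range[1])

# Raw gaze point 210 misses the area; the calibrated point falls inside it.
raw = (2.5, 1.0)
cal = calibrated_gaze_point(raw, (-0.5, 0.0))            # (2.0, 1.0)
should_wake = in_wake_area(cal, (0.0, 2.0), (0.0, 2.0))  # True
```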
It can be seen that, in this embodiment, the electronic device may execute the wake-up action of the wake-up area when the calibrated gaze point position is located in the wake-up area. Because the electronic device calibrates the gaze point position of the current in-cabin person gazing at the wake-up area based on the gaze point deviation, compensating for that deviation, individual differences in line of sight can be corrected and the problem of inaccurate interaction operations such as screen wake-up caused by gaze point deviation can be reduced. Waking the vehicle-mounted screen with the calibrated gaze point position also improves the user experience.
As an implementation manner of the embodiment of the present application, the step of performing the wake-up action of the wake-up area may include:
and waking up a device corresponding to the wake-up area, wherein the device comprises at least one of a vehicle-mounted screen, a sound device, an air conditioner, a windscreen wiper and an electric heating device.
If the calibrated gaze point position of the current in-cabin person is located in the wake-up area, the electronic device may execute the wake-up action of that area and wake up the device corresponding to the wake-up area.
In one embodiment, the intelligent cabin comprises a vehicle-mounted screen, the vehicle-mounted screen corresponds to a plurality of wake-up areas, and when the gaze point position calibrated by the personnel in the cabin is located in the corresponding wake-up area, the device corresponding to the wake-up area can be woken up.
For example, the intelligent cockpit includes a vehicle-mounted screen with wake-up areas A, B, C, D and E, where the device corresponding to wake-up area A is the vehicle-mounted screen, the device corresponding to wake-up area B is the sound device, the device corresponding to wake-up area C is the air conditioner, the device corresponding to wake-up area D is the wiper, and the device corresponding to wake-up area E is the electric heating device.
If the calibrated gaze point position is located in the wake area a, the electronic device may execute a wake action of the wake area a to wake the on-board screen. If the calibrated gaze point position is located in the wake area C, the electronic device may execute a wake action of the wake area C to wake the air conditioner.
In another embodiment, the intelligent cabin comprises a plurality of screens, each screen is provided with a wake-up area, and when the gaze point position calibrated by the personnel in the cabin is located in any wake-up area, the device corresponding to the wake-up area can be woken up.
For example, the intelligent cabin comprises a plurality of screens, wherein the screen A has a wake-up area a, the screen B has a wake-up area B, and the screen C has a wake-up area C, wherein the device corresponding to the wake-up area a is an on-vehicle screen, the device corresponding to the wake-up area B is sound, and the device corresponding to the wake-up area C is an air conditioner. If the calibrated gaze point location is within wake-up area a, the electronic device may wake up the on-board screen. If the calibrated gaze point location is within the wake up area b, the electronic device may wake up the sound.
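The area-to-device binding in these examples can be sketched as a simple lookup; the area identifiers and device names merely mirror the hypothetical examples above:

```python
# Hypothetical mapping from wake-up area to device, as in the examples.
WAKE_AREA_DEVICES = {
    "A": "vehicle-mounted screen",
    "B": "sound device",
    "C": "air conditioner",
    "D": "wiper",
    "E": "electric heating device",
}

def device_for(wake_area_id):
    # Returns the device bound to the wake-up area containing the
    # calibrated gaze point, or None if no device is bound to it.
    return WAKE_AREA_DEVICES.get(wake_area_id)

device_for("C")  # "air conditioner"
```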
It can be seen that, in this embodiment, the electronic device may wake up the device corresponding to the wake-up area, where the device includes at least one of a vehicle-mounted screen, a sound device, an air conditioner, a wiper, and an electric heating device. Because the third position, at which the in-cabin person gazes at the wake-up area, is calibrated based on the gaze point deviation before the corresponding device is woken, individual differences in line of sight can be corrected, the problem of inaccurate interaction operations such as screen wake-up caused by gaze point deviation can be reduced, and the user experience is improved.
Corresponding to the method for determining the gaze point, the embodiment of the application also provides a device for determining the gaze point. The following describes a gaze point determining device provided in an embodiment of the present application.
As shown in fig. 8, a gaze point determining apparatus, the apparatus comprising a processor 801, wherein:
the processor 801 is configured to obtain, as a first position, the gaze point position of the current in-cabin person gazing at the intelligent cockpit when the gaze point calibration condition is satisfied; calculate the difference between the first position and a pre-acquired second position to obtain a gaze point deviation, where the second position is a cluster position determined in advance from the gaze point positions of a plurality of test in-cabin personnel gazing at the intelligent cockpit; and, when a third position at which the current in-cabin person gazes at the wake-up area is acquired, determine the calibrated gaze point position based on the third position and the gaze point deviation.
In the solution provided in this embodiment of the present application, when the gaze point calibration condition is satisfied, the processor may obtain, as the first position, the gaze point position of the current in-cabin person gazing at the intelligent cockpit, and calculate the difference between the first position and a pre-acquired second position to obtain the gaze point deviation, where the second position is a cluster position determined in advance from the gaze point positions of a plurality of test in-cabin personnel gazing at the intelligent cockpit; when a third position at which the current in-cabin person gazes at the wake-up area is obtained, the processor determines the calibrated gaze point position based on the third position and the gaze point deviation. Because the second position is a cluster position of many test personnel's gaze points, the deviation between it and the current person's gaze point can be calculated, and the third position can then be calibrated with this deviation to compensate for the gaze point deviation in the wake-up area. Individual differences in line of sight can thus be corrected, and the problem of inaccurate interaction operations such as screen wake-up caused by gaze point deviation can be reduced.
As an implementation manner of the embodiment of the present application, the processor 801 may be further configured to, before the step of obtaining the gaze point position when the current in-cabin person gazes at the intelligent cockpit, determine that the gaze point calibration condition is satisfied under at least one of the following conditions:
the speed of the vehicle is not less than a preset speed and the steering wheel angle is not greater than a preset angle; the gaze points corresponding to the current in-cabin person's line of sight are concentrated within a preset range; the vehicle is in a safe state.
As an implementation manner of the embodiment of the present application, the processor 801 may be further configured to, before the step of obtaining the gaze point position when the current in-cabin person gazes at the intelligent cockpit, obtain the gaze point positions of a plurality of test in-cabin personnel gazing at the intelligent cockpit as test positions, and cluster the test positions to obtain a cluster center position as the second position.
As an implementation manner of the embodiment of the present application, the processor 801 may be specifically configured to obtain a gaze point position of a person in a current cabin when the person in the current cabin gazes at the intelligent cabin for a preset period of time; and clustering the gaze point positions of the current personnel in the cabin to obtain a clustering center position serving as a first position.
As an implementation manner of the embodiment of the present application, the wake-up area is a preset bounded plane in a coordinate system of the camera;
the processor 801 may also be configured to capture, via the camera, a line of sight of a human eye of the person currently in the cabin; under the condition that the human eye sight line intersects with the preset bounded plane, determining the intersection point position of the human eye sight line and the preset bounded plane as a third position of the current in-cabin personnel gazing at the wake-up area; and calculating the sum of the third position and the gaze point deviation, and determining the corrected gaze point position.
As an implementation of this embodiment of the present application, the processor 801 may be further configured to perform a wake-up action of the wake-up area when the calibrated gaze point position is located in the wake-up area.
As an implementation of this embodiment of the present application, as shown in fig. 9, the apparatus may further include a controller 802, where:
the controller 802 may be specifically configured to wake up a device corresponding to the wake-up area, where the device includes at least one of a vehicle-mounted screen, a sound device, an air conditioner, a windscreen wiper, and an electric heating device.
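The wake-up decision can be sketched as a point-in-bounds test followed by a device dispatch. The rectangular bounds and area-to-device mapping below are hypothetical, since the embodiment only lists device categories and does not fix the geometry of each wake-up area:

```python
def in_wake_area(point, x_bounds, y_bounds):
    """True when a calibrated gaze point lies inside a rectangular
    wake-up area (assumed axis-aligned bounds in plane coordinates)."""
    x, y = point
    return x_bounds[0] <= x <= x_bounds[1] and y_bounds[0] <= y <= y_bounds[1]

# hypothetical wake-up areas mapped to devices of the kinds listed above
WAKE_AREAS = {
    "vehicle_screen": ((0.0, 0.4), (0.0, 0.3)),
    "air_conditioner": ((0.5, 0.9), (0.0, 0.3)),
}

def device_to_wake(calibrated_point):
    """Return the device whose wake-up area contains the gaze point,
    or None when the point falls outside every area."""
    for device, (xb, yb) in WAKE_AREAS.items():
        if in_wake_area(calibrated_point, xb, yb):
            return device
    return None

device_to_wake((0.2, 0.1))  # -> "vehicle_screen"
```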
An embodiment of the present application further provides a gaze point determination system, as shown in fig. 10, which includes a processor 1002 and a camera 1001, where:
the camera 1001 is configured to capture the eye gaze of the person in the cabin when that person gazes at the intelligent cockpit;
the processor 1002 is configured to determine, based on the eye gaze and when a gaze point calibration condition is satisfied, the gaze point position of the person in the cabin gazing at the intelligent cockpit as a first position; calculate the difference between the first position and a pre-acquired second position to obtain a gaze point deviation, where the second position is a cluster position determined in advance based on the gaze point positions of a plurality of test cabin occupants gazing at the intelligent cockpit; and, when a third position at which the person currently in the cabin gazes at the wake-up area is acquired, determine the calibrated gaze point position based on the third position and the gaze point deviation.
In the solution provided in this embodiment of the present application, when the gaze point calibration condition is satisfied, the processor may obtain, as the first position, the gaze point position of the person currently in the cabin gazing at the intelligent cockpit, and calculate the difference between the first position and a pre-acquired second position to obtain the gaze point deviation, where the second position is a cluster position determined in advance based on the gaze point positions of a plurality of test cabin occupants gazing at the intelligent cockpit; when a third position at which the person currently in the cabin gazes at the wake-up area is acquired, the processor determines the calibrated gaze point position based on the third position and the gaze point deviation. Because the second position is the cluster position of the gaze point positions of multiple test occupants gazing at the intelligent cockpit, the difference between the current occupant's gaze point position and the second position can be calculated to obtain the gaze point deviation, and the third position at which the current occupant gazes at the wake-up area can then be calibrated based on that deviation. In this way the gaze point deviation is compensated, individual differences in line of sight are corrected, and inaccurate interactions such as screen wake-up caused by gaze point deviation are reduced.
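The deviation-and-calibration arithmetic summarized above reduces to two component-wise vector operations. A minimal sketch, with illustrative 2-D coordinates:

```python
def gaze_point_deviation(first_position, second_position):
    """Per-occupant gaze offset: the current occupant's cluster center
    (first position) minus the pre-collected test-population cluster
    center (second position)."""
    return tuple(a - b for a, b in zip(first_position, second_position))

def calibrated_gaze_point(third_position, deviation):
    """Calibrated position: the sum of the raw wake-area gaze point
    (third position) and the per-occupant offset."""
    return tuple(p + d for p, d in zip(third_position, deviation))

# illustrative values only
dev = gaze_point_deviation((1.02, 0.97), (1.00, 1.00))
corrected = calibrated_gaze_point((0.50, 0.40), dev)  # approximately (0.52, 0.37)
```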
As an implementation of this embodiment of the present application, the processor 1002 may be further configured to, before the step of obtaining the gaze point position of the person currently in the cabin gazing at the intelligent cockpit, determine that the gaze point calibration condition is satisfied when at least one of the following conditions is met:
the speed of the vehicle is not less than a preset speed, and the steering wheel angle is not greater than a preset angle; the gaze point corresponding to the line of sight of the person currently in the cabin is concentrated within a preset range; the vehicle is in a safe state.
As an implementation of this embodiment of the present application, the processor 1002 may be further configured to, before the step of obtaining the gaze point position of the person currently in the cabin gazing at the intelligent cockpit, obtain, as test positions, the gaze point positions of a plurality of test cabin occupants gazing at the intelligent cockpit; and cluster the test positions to obtain a cluster center position as the second position.
As an implementation of this embodiment of the present application, the processor 1002 may be specifically configured to obtain the gaze point positions of the person currently in the cabin when that person has gazed at the intelligent cockpit for a preset duration; and cluster those gaze point positions to obtain a cluster center position as the first position.
As an implementation of this embodiment of the present application, the wake-up area is a preset bounded plane in the coordinate system of the camera; the camera 1001 may also be configured to capture the eye gaze of the person currently in the cabin;
the processor 1002 may be further configured to, when the eye gaze intersects the preset bounded plane, determine the position of the intersection of the eye gaze and the preset bounded plane as the third position at which the person currently in the cabin gazes at the wake-up area; and calculate the sum of the third position and the gaze point deviation to determine the calibrated gaze point position.
As an implementation of this embodiment of the present application, the processor 1002 may be further configured to perform a wake-up action of the wake-up area when the calibrated gaze point position is located in the wake-up area.
As an implementation of this embodiment of the present application, the system further includes a controller, which may be specifically configured to wake up a device corresponding to the wake-up area, where the device includes at least one of a vehicle-mounted screen, a sound device, an air conditioner, a windscreen wiper, and an electric heating device.
An embodiment of the present application further provides an intelligent vehicle, as shown in fig. 11, comprising an intelligent cabin 1101, the intelligent cabin 1101 comprising a processor 1102, where:
the processor 1102 is configured to implement the gaze point determination method of any one of the foregoing embodiments when executing a computer program.
In the solution provided in this embodiment of the present application, when the gaze point calibration condition is satisfied, the processor may obtain, as the first position, the gaze point position of the person currently in the cabin gazing at the intelligent cockpit, and calculate the difference between the first position and a pre-acquired second position to obtain the gaze point deviation, where the second position is a cluster position determined in advance based on the gaze point positions of a plurality of test cabin occupants gazing at the intelligent cockpit; when a third position at which the person currently in the cabin gazes at the wake-up area is acquired, the processor determines the calibrated gaze point position based on the third position and the gaze point deviation. Because the second position is the cluster position of the gaze point positions of multiple test occupants gazing at the intelligent cockpit, the difference between the current occupant's gaze point position and the second position can be calculated to obtain the gaze point deviation, and the third position at which the current occupant gazes at the wake-up area can then be calibrated based on that deviation. In this way the gaze point deviation is compensated, individual differences in line of sight are corrected, and inaccurate interactions such as screen wake-up caused by gaze point deviation are reduced.
In a further embodiment provided herein, there is also provided a computer-readable storage medium having a computer program stored therein which, when executed by a processor, implements the steps of the gaze point determination method of any one of the above embodiments.
In a further embodiment provided herein, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the gaze point determination method of any one of the above embodiments.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example by wired means (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), a solid state disk (SSD), or the like.
It is noted that relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
In this specification, the embodiments are described in a related manner; identical or similar parts of the embodiments may be referred to among one another, and each embodiment mainly describes its differences from the others. In particular, the descriptions of the apparatus, system, intelligent vehicle, computer-readable storage medium, and computer program product embodiments are relatively brief, since they are substantially similar to the method embodiments; for relevant details, refer to the method embodiments.
The foregoing description covers only preferred embodiments of the present application and is not intended to limit its scope. Any modifications, equivalent substitutions, improvements, and the like made within the spirit and principles of the present application are intended to be included within its scope.

Claims (10)

1. A gaze point determination method, the method comprising:
obtaining, as a first position, a gaze point position of a person currently in a cabin when the person gazes at an intelligent cockpit, when a gaze point calibration condition is satisfied;
calculating a difference between the first position and a pre-acquired second position to obtain a gaze point deviation, wherein the second position is a cluster position determined in advance based on gaze point positions of a plurality of test cabin occupants gazing at the intelligent cockpit; and
when a third position at which the person currently in the cabin gazes at a wake-up area is acquired, determining a calibrated gaze point position based on the third position and the gaze point deviation.
2. The method of claim 1, wherein before the step of obtaining the gaze point position of the person currently in the cabin gazing at the intelligent cockpit, the method further comprises:
determining that the gaze point calibration condition is satisfied when at least one of the following conditions is met:
the speed of the vehicle is not less than a preset speed, and the steering wheel angle is not greater than a preset angle;
the gaze point corresponding to the line of sight of the person currently in the cabin is concentrated within a preset range;
the vehicle is in a safe state.
3. The method of claim 1, wherein before the step of obtaining the gaze point position of the person currently in the cabin gazing at the intelligent cockpit, the method further comprises:
obtaining, as test positions, gaze point positions of a plurality of test cabin occupants gazing at the intelligent cockpit; and
clustering the test positions to obtain a cluster center position as the second position.
4. The method of claim 1, wherein the step of obtaining, as the first position, the gaze point position of the person currently in the cabin gazing at the intelligent cockpit comprises:
obtaining gaze point positions of the person currently in the cabin when the person has gazed at the intelligent cockpit for a preset duration; and
clustering the gaze point positions of the person currently in the cabin to obtain a cluster center position as the first position.
5. The method of claim 1, wherein the wake-up area is a preset bounded plane in a coordinate system of a camera, and the method further comprises:
capturing, via the camera, an eye gaze of the person currently in the cabin; and
when the eye gaze intersects the preset bounded plane, determining a position of an intersection of the eye gaze and the preset bounded plane as the third position at which the person currently in the cabin gazes at the wake-up area;
wherein the step of determining the calibrated gaze point position based on the third position and the gaze point deviation comprises:
calculating a sum of the third position and the gaze point deviation to determine the calibrated gaze point position.
6. The method of any one of claims 1-5, further comprising:
performing a wake-up action of the wake-up area when the calibrated gaze point position is located in the wake-up area.
7. The method of claim 6, wherein the step of performing the wake-up action of the wake-up area comprises:
waking up a device corresponding to the wake-up area, wherein the device includes at least one of a vehicle-mounted screen, a sound device, an air conditioner, a windscreen wiper, and an electric heating device.
8. A gaze point determination device, the device comprising a processor, wherein:
the processor is configured to obtain, as a first position, a gaze point position of a person currently in a cabin when the person gazes at an intelligent cockpit, when a gaze point calibration condition is satisfied; calculate a difference between the first position and a pre-acquired second position to obtain a gaze point deviation, wherein the second position is a cluster position determined in advance based on gaze point positions of a plurality of test cabin occupants gazing at the intelligent cockpit; and, when a third position at which the person currently in the cabin gazes at a wake-up area is acquired, determine a calibrated gaze point position based on the third position and the gaze point deviation.
9. A gaze point determination system, the system comprising a processor and a camera, wherein:
the camera is configured to capture an eye gaze of a person in a cabin when the person gazes at an intelligent cockpit;
the processor is configured to determine, based on the eye gaze and when a gaze point calibration condition is satisfied, a gaze point position of the person in the cabin gazing at the intelligent cockpit as a first position; calculate a difference between the first position and a pre-acquired second position to obtain a gaze point deviation, wherein the second position is a cluster position determined in advance based on gaze point positions of a plurality of test cabin occupants gazing at the intelligent cockpit; and, when a third position at which the person currently in the cabin gazes at a wake-up area is acquired, determine a calibrated gaze point position based on the third position and the gaze point deviation.
10. An intelligent vehicle comprising an intelligent cabin, the intelligent cabin comprising a processor, wherein:
the processor is configured to implement the method of any one of claims 1-7 when executing a computer program.
CN202310193173.9A 2023-02-27 2023-02-27 Method, device and system for determining gaze point and intelligent vehicle Pending CN116185199A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310193173.9A CN116185199A (en) 2023-02-27 2023-02-27 Method, device and system for determining gaze point and intelligent vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310193173.9A CN116185199A (en) 2023-02-27 2023-02-27 Method, device and system for determining gaze point and intelligent vehicle

Publications (1)

Publication Number Publication Date
CN116185199A true CN116185199A (en) 2023-05-30

Family

ID=86438162

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310193173.9A Pending CN116185199A (en) 2023-02-27 2023-02-27 Method, device and system for determining gaze point and intelligent vehicle

Country Status (1)

Country Link
CN (1) CN116185199A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152411A (en) * 2023-11-01 2023-12-01 安徽蔚来智驾科技有限公司 Sight line calibration method, control device and storage medium


Similar Documents

Publication Publication Date Title
CN112041910B (en) Information processing apparatus, mobile device, method, and program
US10317900B2 (en) Controlling autonomous-vehicle functions and output based on occupant position and attention
US9904362B2 (en) Systems and methods for use at a vehicle including an eye tracking device
US9530065B2 (en) Systems and methods for use at a vehicle including an eye tracking device
CN108137052B (en) Driving control device, driving control method, and computer-readable medium
JP7324716B2 (en) Information processing device, mobile device, method, and program
US9101313B2 (en) System and method for improving a performance estimation of an operator of a vehicle
KR102591432B1 (en) Systems and methods for detecting and dynamically mitigating driver fatigue
US10260898B2 (en) Apparatus and method of determining an optimized route for a highly automated vehicle
CN103568992B (en) Equipment for the driver's location sound image for vehicle and method
WO2015106690A1 (en) Method and device for detecting safe driving state of driver
WO2022000448A1 (en) In-vehicle air gesture interaction method, electronic device, and system
CN102975718B (en) In order to determine that vehicle driver is to method, system expected from object state and the computer-readable medium including computer program
CN116185199A (en) Method, device and system for determining gaze point and intelligent vehicle
JP2008217274A (en) Driver status determination device and operation support device
US20180229654A1 (en) Sensing application use while driving
CN111539333B (en) Method for identifying gazing area and detecting distraction of driver
CN108725444B (en) Driving method and device, electronic device, vehicle, program, and medium
JP2010018201A (en) Driver assistant device, driver assistant method, and driver assistant processing program
CN111366168A (en) AR navigation system and method based on multi-source information fusion
EP4137914A1 (en) Air gesture-based control method and apparatus, and system
CN111105594A (en) Vehicle and recognition method and device for fatigue driving of driver
JP2015085719A (en) Gazing object estimation device
US10268903B2 (en) Method and system for automatic calibration of an operator monitor
JP5644414B2 (en) Awakening level determination device, awakening level determination method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination