WO2020042542A1 - Method and apparatus for acquiring eye movement control calibration data - Google Patents

Method and apparatus for acquiring eye movement control calibration data

Info

Publication number
WO2020042542A1
WO2020042542A1 (PCT/CN2019/073766)
Authority
WO
WIPO (PCT)
Prior art keywords
image
eye
eyeball
calibration data
data
Prior art date
Application number
PCT/CN2019/073766
Other languages
English (en)
Chinese (zh)
Inventor
蒋壮
Original Assignee
深圳市沃特沃德股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市沃特沃德股份有限公司 filed Critical 深圳市沃特沃德股份有限公司
Publication of WO2020042542A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/19Sensors therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/193Preprocessing; Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/197Matching; Classification

Definitions

  • the present application relates to the field of human-computer interaction technology, and in particular, to a method and device for acquiring calibration data for eye movement control.
  • the eye movement control method is a non-contact human-computer interaction method.
  • the position of the eye's fixation point is calculated by tracking the position of the eyeball.
  • Eye movement control is a great help for users who cannot use both hands.
  • For example, gaming computers with eye-tracking capabilities make the game scene more immersive for players.
  • Eye-tracking technology requires special equipment, such as an eye tracker. During the use of these special equipment, users need to control the equipment according to the eye movements defined in the instructions.
  • the trend of human-computer interaction is human-centered, more friendly and convenient, so eye tracking is also moving towards controlling the device according to the user's eye movement habits.
  • Each user can first calibrate the device according to their specific eye movement habits, so that subsequent eye movement control can be operated according to the user's eye movement habits.
  • In the prior art, calibration data are usually collected by processing an image of the user staring at a preset positioning point and calculating the pupil center position corresponding to that point.
  • With this approach, the accuracy of gaze judgment is low and the user experience is poor.
  • the purpose of this application is to provide a method and device for acquiring eye movement control calibration data, which aims to solve the problem that in the prior art, accurate eye movement control calibration data cannot be obtained according to a user's eye movement habits.
  • This application proposes a method for obtaining calibration data for eye movement control, including: sequentially obtaining user images in which the human eye fixates on a plurality of positioning points preset in a designated viewing area; sequentially searching for a human eye image and an eyeball image in each user image and obtaining human eye position data and eyeball position data; and calculating calibration data according to the human eye position data and the eyeball position data, and sequentially recording the calibration data and the position information of the plurality of corresponding positioning points.
  • the present application also proposes an eye movement control calibration data acquisition device, including:
  • An image acquisition module configured to sequentially obtain user images in which a human eye fixates on a plurality of positioning points, wherein the plurality of positioning points are preset in a designated viewing area;
  • An image analysis module configured to sequentially search a human eye image and an eyeball image from the user image, and obtain human eye position data and eyeball position data;
  • a data calculation module is configured to calculate calibration data according to the position data of the human eye and the position data of the eyeball, and sequentially record the calibration data and position information of a plurality of corresponding anchor points.
  • the present application also proposes a computer device including a processor, a memory, and a computer program stored on the memory and executable on the processor.
  • When the computer program is executed, the processor implements the foregoing method for acquiring eye movement control calibration data.
  • At least one positioning point is preset in a designated viewing area; when a human eye looks at one positioning point, an image is acquired through an ordinary camera, and a human eye image and an eyeball image are searched for in the image.
  • The calibration data is then calculated, and the calibration data and the position information of the positioning point are stored in memory; this is repeated until data for all the positioning points have been collected.
  • the calibration data can be used in subsequent eye tracking control to determine whether the distance between the user and the specified viewing area is within a preset range, and track the position of the user's line of sight to improve the accuracy of the line of sight judgment.
  • The method and device for acquiring eye movement control calibration data of the present application require no special equipment, collect data according to the user's eye movement habits, and provide a good user experience.
  • FIG. 1 is a schematic flowchart of an eye movement control calibration data acquisition method according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of the anchor points in a designated viewing area according to an embodiment of the present application (FIG. 2a shows each anchor point, FIG. 2b shows the division into the left and right regions, and FIG. 2c shows the division into the upper and lower regions);
  • FIG. 3 is a schematic block diagram of a structure of an eye movement control calibration data acquisition device according to an embodiment of the present application.
  • FIG. 4 is a schematic block diagram of a structure of an image analysis module in FIG. 3;
  • FIG. 5 is a schematic block diagram of a structure of a data calculation module in FIG. 3;
  • FIG. 6 is a schematic block diagram of a structure of a first data acquisition unit in FIG. 5;
  • FIG. 7 is a schematic block diagram of a structure of a second data obtaining unit in FIG. 5;
  • FIG. 8 is a schematic structural diagram of a computer device according to an embodiment of the present application.
  • an embodiment of the present application provides a method for acquiring eye movement control calibration data, including:
  • the designated viewing area in step S1 includes a terminal device interface for human-computer interaction with a user, such as a smartphone display, a flat-panel display, a smart TV display, a personal computer display, and a laptop display.
  • the user image can be obtained through a camera.
  • The camera includes a front camera built into the terminal device or an external camera, for example the front camera of a mobile phone; the type of camera is not limited in this application.
  • Referring to FIG. 2a, which is a schematic diagram of the anchor points of the designated viewing area, there are nine anchor points: upper left, upper middle, upper right, middle left, middle middle, middle right, lower left, lower middle, and lower right.
  • Referring to FIG. 2b, the part of the designated viewing area bounded by the upper left, middle left, lower left, lower middle, middle middle, and upper middle anchor points is the left area, and the part bounded by the upper right, middle right, lower right, lower middle, middle middle, and upper middle anchor points is the right area.
  • Referring to FIG. 2c, the part of the designated viewing area bounded by the upper left, middle left, middle middle, middle right, upper right, and upper middle anchor points is the upper area, and the part bounded by the lower left, middle left, middle middle, middle right, lower right, and lower middle anchor points is the lower area.
  • Taking a mobile phone as an example, the user looks at an anchor point on the phone display from a distance that suits his or her habits, and an image of the human eye watching the anchor point is collected through the phone's front camera.
  • A fixation duration may be set in advance. In one variant, reminder information is sent to remind the user to keep looking at each anchor point; the device then judges whether the time between the current moment and the moment the reminder information was sent is greater than the preset fixation duration, and if so, an instruction to capture a user image is generated and the camera collects the image in response to that instruction. In another variant, after the information reminding the user to keep looking at each anchor point is sent, the camera continuously collects images in real time and a pre-trained classifier distinguishes the state of the human eye; if the eye is judged to be in the gaze state, any frame captured in that state is taken as the user image.
  • The human eye image and the eyeball image are then searched for in the acquired user image to obtain the human eye position data and the eyeball position data; a series of calibration data is calculated from the human eye position data and the eyeball position data, and the correspondence between the calibration data and the anchor point is recorded in turn.
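  • The timed-capture variant can be illustrated with a minimal sketch. The helper names show_reminder and capture_frame and the preset duration value are assumptions for illustration only, not interfaces defined in this application.

```python
import time

# Sketch of the timed-capture variant: remind the user to fixate the anchor
# point, wait for the preset gaze duration, then trigger the image capture.
PRESET_GAZE_DURATION = 1.5  # seconds; an assumed value, not specified in the text

def capture_after_gaze(anchor, show_reminder, capture_frame):
    show_reminder(anchor)                 # send reminder information for this anchor point
    reminder_time = time.time()
    while time.time() - reminder_time < PRESET_GAZE_DURATION:
        time.sleep(0.05)                  # wait until the preset gaze duration has elapsed
    return capture_frame()                # generate the capture instruction and collect the image
```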
  • the calibration data can be used in subsequent eye tracking control to determine whether the distance between the user and the specified viewing area is within a preset range, and track the position of the user's line of sight to improve the accuracy of the line of sight judgment.
  • In this embodiment the user first looks at the upper left positioning point; the camera collects an image of the human eye gazing at the upper left positioning point, the human eye image and the eyeball image are found in that image, the human eye position data and eyeball position data are obtained, the calibration data is calculated, and the correspondence between the calibration data and the upper left anchor point is recorded. The user then looks at the upper middle anchor point, and the remaining steps are the same as for the upper left anchor point; this continues until the calibration data and the corresponding positioning-point information have been collected for all nine positioning points (upper left, upper middle, upper right, middle left, middle middle, middle right, lower left, lower middle, and lower right).
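  • The overall nine-point collection flow might look like the following schematic sketch. The helper functions stand in for steps S1 to S3 and are placeholders assumed for illustration, not APIs defined by this application.

```python
# Schematic of the nine-point collection loop (helper names are placeholders).
ANCHORS = ["upper left", "upper middle", "upper right",
           "middle left", "middle middle", "middle right",
           "lower left", "lower middle", "lower right"]

def collect_calibration(acquire_user_image, analyse_image, compute_calibration):
    records = []
    for anchor in ANCHORS:
        image = acquire_user_image(anchor)                      # step S1: user fixates this anchor point
        eye_data, eyeball_data = analyse_image(image)           # step S2: face -> eyes -> eyeballs
        d, m, n = compute_calibration(eye_data, eyeball_data)   # step S3: calibration data
        records.append({"anchor": anchor, "d": d, "m": m, "n": n})
    return records                                              # calibration data with anchor-point info
```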
  • step S2 of searching for a human eye image and an eyeball image from the user image in order to obtain human eye image position data and eyeball image position data includes:
  • Step S21 first searches for a face image in the user image. If no face image is found, the process returns to step S1 to adjust the relative position between the user and the designated viewing area, until a face image can be found in the image obtained by the camera.
  • There are many ways to search for a face image, for example: performing face detection on the input image using face rules (such as the distribution of the eyes, nose, and mouth); performing face detection by finding features that are invariant across faces (such as skin color, edges, and textures); or describing the facial features with a standard face template, in which case the correlation value between the input image and the standard face template is first calculated and then compared with a preset threshold to determine whether a face exists in the input image.
  • Alternatively, the face region is treated as a class of patterns, a large amount of face data is used as training samples to learn its underlying regularities, and a classifier is constructed that detects faces by discriminating among all candidate region patterns in the image.
  • The face image that is found is marked with a rectangular frame.
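  • As one concrete possibility for the classifier-based face search, a minimal sketch using OpenCV's bundled Haar cascade is shown below; this detector is an assumed choice for illustration, not the specific method claimed in this application.

```python
import cv2

# Minimal face-search sketch using OpenCV's bundled Haar cascade.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def find_face(user_image_bgr):
    gray = cv2.cvtColor(user_image_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                         # no face found: re-acquire the image (back to step S1)
    # keep the largest detection; (x, y, w, h) is the rectangular frame of the face image
    return max(faces, key=lambda f: f[2] * f[3])
```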
  • Step S22 searches for the human eye image within the rectangular frame of the face image, which narrows the search range and improves the efficiency and accuracy of the human eye search. If no human eye image is found, the process returns to step S1 to re-acquire the image, until a human eye image can be found in step S22.
  • Human eye search methods include template-based methods, statistics-based methods, and knowledge-based methods. Template-matching methods include the gray-projection template and the geometric-feature template.
  • The gray-projection method projects the gray-level face image horizontally and vertically, accumulates the gray values and/or gray-function values in the two directions, locates characteristic change points in each direction, and then combines the positions of the change points in the different directions according to prior knowledge to obtain the position of the human eye; the geometric-feature template performs human eye detection based on the individual features and distribution features of the eyes.
  • Statistics-based methods generally train on a large number of target samples and non-target samples to obtain a set of model parameters, and then build a classifier or filter that detects the target based on the model.
  • The knowledge-based method analyzes the application environment of the image, summarizes the knowledge usable for human eye detection under the given conditions (such as contour information, color information, and position information), and condenses it into rules that guide human eye detection.
  • This embodiment uses rectangular frames to mark the left-eye image and the right-eye image, respectively, obtaining the following human eye position data (see the sketch after this list):
  • r1: the distance from the upper left vertex of the rectangular frame of the left-eye image to the leftmost edge of the face image;
  • t1: the distance from the upper left vertex of the rectangular frame of the left-eye image to the uppermost edge of the face image;
  • w1: the width of the rectangular frame of the left-eye image;
  • h1: the height of the rectangular frame of the left-eye image;
  • r2: the distance from the upper left vertex of the rectangular frame of the right-eye image to the leftmost edge of the face image;
  • t2: the distance from the upper left vertex of the rectangular frame of the right-eye image to the uppermost edge of the face image;
  • w2: the width of the rectangular frame of the right-eye image;
  • h2: the height of the rectangular frame of the right-eye image.
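  • A minimal sketch of how such (r, t, w, h) values might be measured, assuming an OpenCV Haar eye cascade applied inside the face rectangle; the specific detector and the convention that the left eye is the leftmost detection are assumptions for illustration.

```python
import cv2

# Illustrative measurement of (r, t, w, h) for each eye, relative to the face rectangle.
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def find_eyes(face_gray):
    """Return ((r1, t1, w1, h1), (r2, t2, w2, h2)) measured from the top-left
    corner of the face image, or None if two eyes cannot be found."""
    eyes = eye_cascade.detectMultiScale(face_gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) < 2:
        return None                                  # re-acquire the image (back to step S1)
    eyes = sorted(eyes, key=lambda e: e[0])[:2]      # order the two detections by horizontal position
    left_eye, right_eye = tuple(eyes[0]), tuple(eyes[1])
    return left_eye, right_eye
```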
  • Step S23 finds the left eyeball image from the left eye image and the right eyeball image from the right eye image. If no eyeball image is found, the process returns to step S1 to acquire the image again until the eyeball image can be found in step S23.
  • Eyeball search methods include the neural network method, the extreme-point discrimination method on edge-point integral projection curves, the template matching method, the multi-resolution mosaic method, geometric and symmetry detection methods, and Hough-transform-based methods. This embodiment uses rectangular frames to mark the left eyeball image and the right eyeball image, respectively, obtaining the following eyeball position data (see the sketch after this list):
  • r3: the distance from the upper left vertex of the rectangular frame of the left eyeball image to the leftmost edge of the face image;
  • t3: the distance from the upper left vertex of the rectangular frame of the left eyeball image to the uppermost edge of the face image;
  • w3: the width of the rectangular frame of the left eyeball image;
  • h3: the height of the rectangular frame of the left eyeball image;
  • r4: the distance from the upper left vertex of the rectangular frame of the right eyeball image to the leftmost edge of the face image;
  • t4: the distance from the upper left vertex of the rectangular frame of the right eyeball image to the uppermost edge of the face image;
  • w4: the width of the rectangular frame of the right eyeball image;
  • h4: the height of the rectangular frame of the right eyeball image.
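  • For illustration, one simple intensity-based way to frame an eyeball inside an eye image and express its rectangle in face-image coordinates is sketched below; this application lists several eyeball search methods, and this sketch is not presented as the claimed one.

```python
import cv2

# Threshold the darkest pixels (iris/pupil), take the bounding box of the largest
# dark region, and convert the rectangle to face-image coordinates so it matches
# the (r, t, w, h) data above. The threshold value is an assumption.
def find_eyeball(eye_gray, eye_offset_in_face):
    r_eye, t_eye = eye_offset_in_face        # position of the eye rectangle in the face image
    blurred = cv2.GaussianBlur(eye_gray, (5, 5), 0)
    _, dark = cv2.threshold(blurred, 60, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(dark, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                          # no eyeball found: re-acquire the image (back to step S1)
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return (r_eye + x, t_eye + y, w, h)      # eyeball rectangle in face-image coordinates
```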
  • eyeball position data can also be obtained from a human eye image, and this application does not go into details about obtaining eyeball position data from a human eye image.
  • the calibration data includes distance calibration data, horizontal calibration data, and vertical calibration data.
  • Step S3 of calculating the calibration data according to the human eye position data and the eyeball position data, and sequentially recording the calibration data and the position information of the plurality of corresponding positioning points, includes:
  • Steps S31 to S32 are used to calculate the calibration data when the human eye looks at an anchor point, and the calibration data and the corresponding anchor point information are stored in the memory.
  • calculation and data storage are performed on the nine positioning points of upper left, upper middle, upper right, middle left, middle middle, right middle, lower left, middle lower, and lower right.
  • the distance calibration data is used to locate the distance of the human eye from the specified viewing area, and the horizontal calibration data and vertical calibration data are used to indicate the position of the eyeball when the human eye looks at the specified positioning point.
  • the step of calculating distance calibration data when a human eye fixes on one of the positioning points according to the human eye position data includes:
  • In step S321, the coordinates (x1, y1) of the center position of the left eye can be calculated by formula (1).
  • In step S322, the distance d between the center of the left eye and the center of the right eye can be calculated by formula (3), where d is the distance calibration data.
  • the value of d can be used to locate the distance of the human eye from the specified viewing area.
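  • Formulas (1) and (3) are referenced but not reproduced in this text. A plausible reconstruction from the rectangle data defined above, in which each eye center is taken as the center of its bounding box and d is the Euclidean distance between the two centers, is sketched below; the exact form used in this application may differ.

```python
import math

# Assumed reconstruction of formulas (1)-(3) from the (r, t, w, h) rectangle data.
def distance_calibration(left_eye, right_eye):
    r1, t1, w1, h1 = left_eye
    r2, t2, w2, h2 = right_eye
    x1, y1 = r1 + w1 / 2, t1 + h1 / 2        # left eye center, assumed form of formula (1)
    x2, y2 = r2 + w2 / 2, t2 + h2 / 2        # right eye center
    return math.hypot(x1 - x2, y1 - y2)      # distance calibration data d, assumed form of formula (3)
```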
  • The step of calculating, based on the human eye position data and the eyeball position data, the lateral calibration data for the horizontal eyeball position and the vertical calibration data for the vertical eyeball position when a human eye fixates on one of the positioning points includes:
  • nine positioning points are set in a specified viewing area, and the human eye sequentially looks at the nine positioning points, and sequentially records the correspondence between the calibration data and the positioning points when the human eye looks at one positioning point.
  • the camera acquires an image, looks for a face image from the image, then looks for a human eye image, and finally looks for an eyeball image from the human eye image.
  • This search method is fast and highly accurate. The distance calibration data d, the horizontal calibration data m, and the vertical calibration data n are calculated according to the human eye position data and the eyeball position data, and d, m, n and the position information of the positioning point are stored in a memory.
  • the distance calibration data of 9 positioning points can be used to calibrate the distance between the human eye and the specified viewing area, thereby limiting the distance between the user and the specified viewing area within the specified range.
  • The horizontal calibration data and vertical calibration data can be used to estimate the position in the specified viewing area at which the user's line of sight is directed, so the accuracy of line-of-sight tracking is high.
  • the method for acquiring the eye movement control calibration data in this embodiment does not require special equipment, and can collect data according to the eye movement habits of the user, and the user experience is good.
  • An embodiment of the present application further provides a device for acquiring eye movement control calibration data, including: an image acquisition module 10 for sequentially acquiring user images of a user gazing at a plurality of positioning points, wherein the plurality of positioning points are preset in the designated viewing area;
  • An image analysis module 20 configured to sequentially search a human eye image and an eyeball image from the user image, and obtain human eye position data and eyeball position data;
  • a data calculation module 30 is configured to calculate calibration data according to the position data of the human eye and the position data of the eyeball, and sequentially record the calibration data and position information of a plurality of corresponding anchor points.
  • The designated viewing area in the image acquisition module 10 includes a terminal device interface for human-computer interaction with a user, such as a smartphone display, a flat-panel display, a smart TV display, a personal computer display, a notebook computer display, and the like.
  • User images can be obtained through cameras.
  • The camera includes a front camera built into the terminal device or an external camera, such as the front camera of a mobile phone.
  • Referring to FIG. 2a, a schematic diagram of the anchor points of the designated viewing area is provided, including nine anchor points: upper left, upper middle, upper right, middle left, middle middle, middle right, lower left, lower middle, and lower right.
  • The part of the designated viewing area bounded by the upper left, middle left, lower left, lower middle, middle middle, and upper middle anchor points is the left area, and the part bounded by the upper right, middle right, lower right, lower middle, middle middle, and upper middle anchor points is the right area.
  • The part of the designated viewing area bounded by the upper left, middle left, middle middle, middle right, upper right, and upper middle anchor points is the upper area, and the part bounded by the lower left, middle left, middle middle, middle right, lower right, and lower middle anchor points is the lower area.
  • Taking a mobile phone as an example, the user looks at an anchor point on the phone display from a distance that suits his or her habits, and an image of the human eye watching the anchor point is collected through the phone's front camera.
  • The gaze duration may be set in advance. In one variant, the first reminder unit sends reminder information prompting the user to keep looking at each anchor point; the first judgment unit determines whether the time between the current moment and the moment the reminder information was sent is greater than the preset gaze duration, and if so, the first image acquisition unit generates an instruction to capture a user image, upon which the camera takes the shot and collects the image. In another variant, the second reminder unit sends the information prompting the user to keep looking at each anchor point, the real-time image acquisition unit then continuously collects images through the camera in real time, and a pre-trained classifier distinguishes the state of the human eye; if the human eye is determined to be in the gaze state, the second image acquisition unit takes any frame captured in that state as the user image.
  • the calibration data can be used in subsequent eye tracking control to determine whether the distance between the user and the specified viewing area is within a preset range, and track the position of the user's line of sight to improve the accuracy of the line of sight judgment.
  • In this embodiment the user first looks at the upper left positioning point; the camera collects an image of the human eye gazing at the upper left positioning point, the human eye image and the eyeball image are found in that image, the human eye position data and eyeball position data are obtained, the calibration data is calculated, and the correspondence between the calibration data and the upper left anchor point is recorded. The user then looks at the upper middle anchor point, and the remaining steps are the same as for the upper left anchor point; this continues until the calibration data and the corresponding positioning-point information have been collected for all nine positioning points (upper left, upper middle, upper right, middle left, middle middle, middle right, lower left, lower middle, and lower right).
  • the image analysis module 20 includes:
  • a face finding unit 201 configured to find a face image from the user image
  • the human eye searching unit 202 is configured to search for a human eye image from the human face image and obtain human eye position data from the human face image, where the human eye image includes a left eye image and a right eye image;
  • the eyeball search unit 203 is configured to search an eyeball image from the human eye image, and obtain eyeball position data from the human face image.
  • The face search unit 201 first searches for a face image in the user image. If no face image is found, the process returns to step S1 to adjust the relative position between the user and the designated viewing area, until a face image can be found in the image obtained by the camera.
  • There are many ways to search for a face image, for example: performing face detection on the input image using face rules (such as the distribution of the eyes, nose, and mouth); performing face detection by finding features that are invariant across faces (such as skin color, edges, and textures); or describing the facial features with a standard face template, in which case the correlation value between the input image and the standard face template is first calculated and then compared with a preset threshold to determine whether a face exists in the input image.
  • Alternatively, the face region is treated as a class of patterns, a large amount of face data is used as training samples to learn its underlying regularities, and a classifier is constructed that detects faces by discriminating among all candidate region patterns in the image.
  • The face image that is found is marked with a rectangular frame.
  • The human eye search unit 202 searches for the human eye image within the rectangular frame of the face image, which narrows the search range and improves the efficiency and accuracy of the human eye search. If no human eye image is found, the process returns to step S1 to re-acquire the image, until a human eye image can be found.
  • Human eye search methods include template-based methods, statistics-based methods, and knowledge-based methods. Template-matching methods include the gray-projection template and the geometric-feature template.
  • The gray-projection method projects the gray-level face image horizontally and vertically, accumulates the gray values and/or gray-function values in the two directions, locates characteristic change points in each direction, and then combines the positions of the change points in the different directions according to prior knowledge to obtain the position of the human eye; the geometric-feature template performs human eye detection based on the individual features and distribution features of the eyes.
  • Statistics-based methods generally train on a large number of target samples and non-target samples to obtain a set of model parameters, and then build a classifier or filter that detects the target based on the model.
  • The knowledge-based method analyzes the application environment of the image, summarizes the knowledge usable for human eye detection under the given conditions (such as contour information, color information, and position information), and condenses it into rules that guide human eye detection.
  • This embodiment uses rectangular frames to mark the left-eye image and the right-eye image, respectively, obtaining the following human eye position data:
  • r1: the distance from the upper left vertex of the rectangular frame of the left-eye image to the leftmost edge of the face image;
  • t1: the distance from the upper left vertex of the rectangular frame of the left-eye image to the uppermost edge of the face image;
  • w1: the width of the rectangular frame of the left-eye image;
  • h1: the height of the rectangular frame of the left-eye image;
  • r2: the distance from the upper left vertex of the rectangular frame of the right-eye image to the leftmost edge of the face image;
  • t2: the distance from the upper left vertex of the rectangular frame of the right-eye image to the uppermost edge of the face image;
  • w2: the width of the rectangular frame of the right-eye image;
  • h2: the height of the rectangular frame of the right-eye image.
  • the eyeball search unit 203 finds the left eyeball image from the left eye image, and the right eyeball image from the right eye image. If no eyeball image is found, the process returns to step S1 to reacquire the image until the eyeball image can be found in step S23.
  • Eyeball search methods include the neural network method, the extreme-point discrimination method on edge-point integral projection curves, the template matching method, the multi-resolution mosaic method, geometric and symmetry detection methods, and Hough-transform-based methods. This embodiment uses rectangular frames to mark the left eyeball image and the right eyeball image, respectively, obtaining the following eyeball position data:
  • r3: the distance from the upper left vertex of the rectangular frame of the left eyeball image to the leftmost edge of the face image;
  • t3: the distance from the upper left vertex of the rectangular frame of the left eyeball image to the uppermost edge of the face image;
  • w3: the width of the rectangular frame of the left eyeball image;
  • h3: the height of the rectangular frame of the left eyeball image;
  • r4: the distance from the upper left vertex of the rectangular frame of the right eyeball image to the leftmost edge of the face image;
  • t4: the distance from the upper left vertex of the rectangular frame of the right eyeball image to the uppermost edge of the face image;
  • w4: the width of the rectangular frame of the right eyeball image;
  • h4: the height of the rectangular frame of the right eyeball image.
  • eyeball position data can also be obtained from a human eye image, and this application does not go into details about obtaining eyeball position data from a human eye image.
  • the calibration data includes distance calibration data, horizontal calibration data, and vertical calibration data
  • the data calculation module 30 includes:
  • a first data obtaining unit 301 configured to calculate distance calibration data when a human eye looks at one of the positioning points according to the human eye position data
  • A second data obtaining unit 302 configured to calculate, according to the human eye position data and the eyeball position data, the lateral calibration data for the horizontal eyeball position and the vertical calibration data for the vertical eyeball position when a human eye fixates on one of the positioning points;
  • a data storage unit 303 is configured to store the distance calibration data, the horizontal calibration data, the vertical calibration data, and the corresponding position information of the anchor point in a memory.
  • The first data acquisition unit 301, the second data acquisition unit 302, and the data storage unit 303 are used to calculate the calibration data when a human eye looks at an anchor point, and the calibration data and the corresponding anchor point information are stored in a memory.
  • calculation and data storage are performed on the nine positioning points of upper left, upper middle, upper right, middle left, middle middle, right middle, lower left, middle lower, and lower right.
  • the distance calibration data is used to locate the distance of the human eye from the specified viewing area, and the horizontal calibration data and vertical calibration data are used to indicate the position of the eyeball when the human eye looks at the specified positioning point.
  • the first data obtaining unit 301 includes:
  • A first calculation subunit 3011 is configured to calculate the coordinates of the left eye center position according to the left eye position data included in the human eye position data, and to calculate the coordinates of the right eye center position according to the right eye position data included in the human eye position data;
  • a second calculation subunit 3012 configured to calculate the distance between the center of the left eye and the center of the right eye according to the coordinates of the center position of the left eye and the coordinates of the center position of the right eye to obtain the distance calibration data;
  • the second calculation subunit 3012 can calculate the distance d between the center of the left eye and the center of the right eye by using formula (14), where d is distance calibration data.
  • the value of d can be used to locate the distance of the human eye from the specified viewing area.
  • the second data obtaining unit 302 includes:
  • a third calculation subunit 3021 configured to calculate coordinates of the left eyeball center position according to the left eyeball position data included in the eyeball position data; and calculate the right eyeball center position coordinates according to the right eyeball position data included in the eyeball position data;
  • A fourth calculation subunit 3022 is configured to calculate, according to the left eyeball center position coordinates and the left eye position data, a first lateral distance between the left eyeball center and the leftmost edge of the left eye image and a first longitudinal distance between the left eyeball center and the uppermost edge of the left eye image; and to calculate, according to the right eyeball center position coordinates and the right eye position data, a second lateral distance between the right eyeball center and the rightmost edge of the right eye image and a second longitudinal distance between the right eyeball center and the lowermost edge of the right eye image;
  • A fifth calculation subunit 3023 is configured to calculate the ratio of the first lateral distance to the second lateral distance to obtain the lateral calibration data, and to calculate the ratio of the first longitudinal distance to the second longitudinal distance to obtain the longitudinal calibration data.
  • The fifth calculation subunit 3023 can calculate the lateral calibration data m by formula (21).
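  • Formula (21) and its vertical counterpart are referenced but not reproduced in this text. Based on the definitions of the first and second lateral and longitudinal distances above, one plausible reconstruction is sketched below; the exact formulas used in this application may differ.

```python
# Assumed reconstruction of the horizontal calibration data m (formula (21)) and
# the vertical calibration data n, with all rectangles in face-image coordinates.
def gaze_calibration(left_eye, right_eye, left_eyeball, right_eyeball):
    r1, t1, w1, h1 = left_eye                # left-eye rectangle
    r2, t2, w2, h2 = right_eye               # right-eye rectangle
    r3, t3, w3, h3 = left_eyeball            # left-eyeball rectangle
    r4, t4, w4, h4 = right_eyeball           # right-eyeball rectangle
    lx, ly = r3 + w3 / 2, t3 + h3 / 2        # left eyeball center (assumed to be the box center)
    rx, ry = r4 + w4 / 2, t4 + h4 / 2        # right eyeball center
    first_lateral = lx - r1                  # left eyeball center to leftmost edge of the left eye image
    second_lateral = (r2 + w2) - rx          # right eyeball center to rightmost edge of the right eye image
    first_longitudinal = ly - t1             # left eyeball center to uppermost edge of the left eye image
    second_longitudinal = (t2 + h2) - ry     # right eyeball center to lowermost edge of the right eye image
    m = first_lateral / second_lateral                 # horizontal (lateral) calibration data
    n = first_longitudinal / second_longitudinal       # vertical (longitudinal) calibration data
    return m, n
```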
  • nine positioning points are set in a designated viewing area, and the human eye sequentially looks at the nine positioning points, and sequentially records the correspondence between the calibration data and the positioning points when the human eye looks at one positioning point.
  • the camera acquires an image, looks for a face image from the image, then looks for a human eye image, and finally looks for an eyeball image from the human eye image.
  • This search method is fast and highly accurate. The distance calibration data d, the horizontal calibration data m, and the vertical calibration data n are calculated according to the human eye position data and the eyeball position data, and d, m, n and the position information of the positioning point are stored in a memory.
  • the distance calibration data of 9 positioning points can be used to calibrate the distance between the human eye and the specified viewing area, thereby limiting the distance between the user and the specified viewing area within the specified range.
  • The horizontal calibration data and vertical calibration data can be used to estimate the position in the specified viewing area at which the user's line of sight is directed, so the accuracy of line-of-sight tracking is high.
  • the apparatus for acquiring eye movement control calibration data in this embodiment does not need to use special equipment, and can collect data according to a user's eye movement habits, and the user experience is good.
  • This application also proposes a computer device 03, which includes a processor 04, a memory 01, and a computer program 02 stored on the memory 01 and executable on the processor 04.
  • When the processor 04 executes the computer program 02, the above-mentioned method for acquiring eye movement control calibration data is implemented.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to a method and apparatus for acquiring eye movement control calibration data, the method comprising: sequentially acquiring user images of human eyes fixating on a plurality of positioning points; sequentially searching for human eye images and eyeball images in the user images, and acquiring human eye position data and eyeball position data; and calculating calibration data, and sequentially recording the calibration data and the corresponding position information of the plurality of positioning points. According to the present invention, no special device needs to be used, and data collection can be carried out according to the eye movement habits of a user.
PCT/CN2019/073766 2018-08-31 2019-01-29 Procédé et appareil d'acquisition de données d'étalonnage de commande de mouvement oculaire WO2020042542A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811014201.1A CN109343700B (zh) 2018-08-31 2018-08-31 眼动控制校准数据获取方法和装置
CN201811014201.1 2018-08-31

Publications (1)

Publication Number Publication Date
WO2020042542A1 true WO2020042542A1 (fr) 2020-03-05

Family

ID=65292236

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/073766 WO2020042542A1 (fr) 2018-08-31 2019-01-29 Procédé et appareil d'acquisition de données d'étalonnage de commande de mouvement oculaire

Country Status (2)

Country Link
CN (1) CN109343700B (fr)
WO (1) WO2020042542A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111444789A (zh) * 2020-03-12 2020-07-24 深圳市时代智汇科技有限公司 一种基于视频感应技术的近视预防方法及其系统
CN113255476A (zh) * 2021-05-08 2021-08-13 西北大学 一种基于眼动追踪的目标跟踪方法、系统及存储介质
CN114995412A (zh) * 2022-05-27 2022-09-02 东南大学 一种基于眼动追踪技术的遥控小车控制系统及方法

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109976528B (zh) * 2019-03-22 2023-01-24 北京七鑫易维信息技术有限公司 一种基于头动调整注视区域的方法以及终端设备
CN110275608B (zh) * 2019-05-07 2020-08-04 清华大学 人眼视线追踪方法
CN110399930B (zh) * 2019-07-29 2021-09-03 北京七鑫易维信息技术有限公司 一种数据处理方法及系统
CN110780742B (zh) * 2019-10-31 2021-11-02 Oppo广东移动通信有限公司 眼球追踪处理方法及相关装置
CN111290580B (zh) * 2020-02-13 2022-05-31 Oppo广东移动通信有限公司 基于视线追踪的校准方法及相关装置
CN113918007B (zh) * 2021-04-27 2022-07-05 广州市保伦电子有限公司 一种基于眼球追踪的视频交互操作方法
CN116824683B (zh) * 2023-02-20 2023-12-12 广州视景医疗软件有限公司 一种基于移动设备的眼动数据采集方法及系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060110008A1 (en) * 2003-11-14 2006-05-25 Roel Vertegaal Method and apparatus for calibration-free eye tracking
CN101807110A (zh) * 2009-02-17 2010-08-18 由田新技股份有限公司 瞳孔定位方法及系统
CN102802502A (zh) * 2010-03-22 2012-11-28 皇家飞利浦电子股份有限公司 用于跟踪观察者的注视点的系统和方法
CN105094337A (zh) * 2015-08-19 2015-11-25 华南理工大学 一种基于虹膜和瞳孔的三维视线估计方法
CN109375765A (zh) * 2018-08-31 2019-02-22 深圳市沃特沃德股份有限公司 眼球追踪交互方法和装置

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102830793B (zh) * 2011-06-16 2017-04-05 北京三星通信技术研究有限公司 视线跟踪方法和设备
CN102662476B (zh) * 2012-04-20 2015-01-21 天津大学 一种视线估计方法
CN107436675A (zh) * 2016-05-25 2017-12-05 深圳纬目信息技术有限公司 一种视觉交互方法、系统和设备
US9996744B2 (en) * 2016-06-29 2018-06-12 International Business Machines Corporation System, method, and recording medium for tracking gaze using only a monocular camera from a moving screen
CN107633240B (zh) * 2017-10-19 2021-08-03 京东方科技集团股份有限公司 视线追踪方法和装置、智能眼镜
CN108427503B (zh) * 2018-03-26 2021-03-16 京东方科技集团股份有限公司 人眼追踪方法及人眼追踪装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060110008A1 (en) * 2003-11-14 2006-05-25 Roel Vertegaal Method and apparatus for calibration-free eye tracking
CN101807110A (zh) * 2009-02-17 2010-08-18 由田新技股份有限公司 瞳孔定位方法及系统
CN102802502A (zh) * 2010-03-22 2012-11-28 皇家飞利浦电子股份有限公司 用于跟踪观察者的注视点的系统和方法
CN105094337A (zh) * 2015-08-19 2015-11-25 华南理工大学 一种基于虹膜和瞳孔的三维视线估计方法
CN109375765A (zh) * 2018-08-31 2019-02-22 深圳市沃特沃德股份有限公司 眼球追踪交互方法和装置

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111444789A (zh) * 2020-03-12 2020-07-24 深圳市时代智汇科技有限公司 一种基于视频感应技术的近视预防方法及其系统
CN111444789B (zh) * 2020-03-12 2023-06-20 深圳市时代智汇科技有限公司 一种基于视频感应技术的近视预防方法及其系统
CN113255476A (zh) * 2021-05-08 2021-08-13 西北大学 一种基于眼动追踪的目标跟踪方法、系统及存储介质
CN113255476B (zh) * 2021-05-08 2023-05-19 西北大学 一种基于眼动追踪的目标跟踪方法、系统及存储介质
CN114995412A (zh) * 2022-05-27 2022-09-02 东南大学 一种基于眼动追踪技术的遥控小车控制系统及方法

Also Published As

Publication number Publication date
CN109343700A (zh) 2019-02-15
CN109343700B (zh) 2020-10-27

Similar Documents

Publication Publication Date Title
WO2020042542A1 (fr) Procédé et appareil d'acquisition de données d'étalonnage de commande de mouvement oculaire
WO2020042541A1 (fr) Procédé et dispositif interactifs de suivi de globe oculaire
US9791927B2 (en) Systems and methods of eye tracking calibration
Xu et al. Turkergaze: Crowdsourcing saliency with webcam based eye tracking
CN105184246B (zh) 活体检测方法和活体检测系统
Li et al. Learning to predict gaze in egocentric video
US9075453B2 (en) Human eye controlled computer mouse interface
CN104978548B (zh) 一种基于三维主动形状模型的视线估计方法与装置
US9750420B1 (en) Facial feature selection for heart rate detection
KR101288447B1 (ko) 시선 추적 장치와 이를 이용하는 디스플레이 장치 및 그 방법
CN105912126B (zh) 一种手势运动映射到界面的增益自适应调整方法
WO2021135639A1 (fr) Procédé et appareil de détection de corps vivant
Emery et al. OpenNEEDS: A dataset of gaze, head, hand, and scene signals during exploration in open-ended VR environments
CN111696140A (zh) 基于单目的三维手势追踪方法
WO2023071882A1 (fr) Procédé de détection de regard humain, procédé de commande et dispositif associé
CN110051319A (zh) 眼球追踪传感器的调节方法、装置、设备及存储介质
KR20230085901A (ko) 탈모 상태 정보 제공 방법 및 장치
Yang et al. Continuous gaze tracking with implicit saliency-aware calibration on mobile devices
US10036902B2 (en) Method of determining at least one behavioural parameter
Yang et al. vGaze: Implicit saliency-aware calibration for continuous gaze tracking on mobile devices
Kim et al. Gaze estimation using a webcam for region of interest detection
WO2024113275A1 (fr) Procédé et appareil d'acquisition de point de regard, dispositif électronique et support de stockage
JP2016111612A (ja) コンテンツ表示装置
CN115393963A (zh) 运动动作纠正方法、系统、存储介质、计算机设备及终端
CN112527103B (zh) 显示设备的遥控方法、装置、设备及计算机可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19854978

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19854978

Country of ref document: EP

Kind code of ref document: A1