WO2020063021A1 - Eye tracking method, apparatus and system applied to terminal device - Google Patents

Eye tracking method, apparatus and system applied to terminal device

Info

Publication number
WO2020063021A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
image
eye
eye feature
server
Prior art date
Application number
PCT/CN2019/095264
Other languages
French (fr)
Chinese (zh)
Inventor
孔祥晖
严海
黄通兵
Original Assignee
北京七鑫易维信息技术有限公司
Priority date
Filing date
Publication date
Application filed by 北京七鑫易维信息技术有限公司
Publication of WO2020063021A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/193Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements

Definitions

  • the present application relates to the technical field of eye tracking, and in particular, to an eye tracking method, device, and system applied to a terminal device.
  • Eye tracking technology, as an innovative interaction method, is increasingly well known to the public and has been widely used in people's work and study.
  • the process of eye tracking technology mainly includes the following three steps:
  • Step S1: the eye-tracking device acquires a user's face image using a collection device, where the collection device may be an optical device, an electrical device, and so on;
  • Step S2: the eye-tracking device processes the user's face image and extracts the user's eye feature information;
  • Step S3: the eye-tracking device processes the user's eye feature information and obtains the user's gaze direction and gaze point position.
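  • The three steps above can be sketched as a minimal pipeline; the function names and data shapes below are illustrative assumptions, not part of the application:

```python
# Hypothetical sketch of steps S1-S3 of the eye tracking process described
# above. Function names and data shapes are illustrative, not from the text.

def acquire_face_image():
    """S1: the collection device (optical, electrical, etc.) captures a face image."""
    return [[0] * 4 for _ in range(4)]  # stand-in for pixel data

def extract_eye_features(face_image):
    """S2: process the face image and extract the user's eye feature information."""
    return {"pupil_center": (2, 2), "pupil_diameter": 3}

def estimate_gaze(eye_features):
    """S3: process the eye feature information to obtain gaze direction and gaze point."""
    cx, cy = eye_features["pupil_center"]
    return {"gaze_point": (cx * 10, cy * 10)}  # toy mapping, not a real model

gaze = estimate_gaze(extract_eye_features(acquire_face_image()))
```

  • In the prior-art setup criticized below, all three stages of this pipeline run on the device side.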
  • the above three steps are implemented on the device side.
  • system computing capacity, storage space, and other resources are consumed on the device side.
  • when the eye tracking program runs for a long time, problems such as device heating and increased power consumption may occur, and these problems reduce the user experience.
  • Embodiments of the present application provide an eye tracking method, device, and system applied to a terminal device, so as to at least solve the technical problem that prior-art eye tracking algorithms all run on the device side, which causes large resource consumption on the device side.
  • an eye tracking method applied to a terminal device includes: collecting image information; sending image information to a cloud processor; and receiving and analyzing gaze information fed back by the cloud processor.
  • the eye tracking method applied to the terminal device further includes: acquiring transmission information; determining an image identifier corresponding to the transmission information, wherein the image identifier is used to identify a transmission sequence of the first frame image; and sending the transmission information to the cloud processor in a first regular manner.
  • the eye tracking method applied to the terminal device further includes: determining a time stamp corresponding to the first frame image of the transmission information; and generating an image identifier corresponding to the first frame image according to the time stamp.
  • the sending information includes: a first frame header, a first control word, a first check code, and cloud information.
  • the eye tracking method applied to the terminal device further includes: receiving reception information sent in a second regular manner; parsing the reception information, wherein the reception information includes a second frame header, a second control word, a second check code, and gaze information; completing verification of the second check code; and, if the verification result of the second check code satisfies a preset condition, determining the gaze information as the gaze information of the target object.
  • an eye tracking method applied to a terminal device includes: acquiring image information sent by the device end; extracting eye feature information from the image information to form an eye feature information set, wherein the image information includes eye feature information; determining gaze information according to the eye feature information set; and sending the gaze information to the device.
  • an eye tracking method applied to a terminal device includes: collecting image information; extracting eye feature information from the image information to form an eye feature information set, wherein the image information includes eye feature information; sending the eye feature information set to the server; and receiving and parsing the gaze information fed back by the server.
  • the eye tracking method applied to the terminal device further includes: converting the image information into corresponding eye feature information; configuring the corresponding eye feature information with data labels; and sorting the eye feature information configured with data labels in a specified order.
  • the corresponding eye feature information includes one or more of pupil information, corneal information, spot information, iris information and/or eyelid information, eye image information, and gaze depth information of the target object.
  • the eye feature information set is configured as one or more groups of eye feature information with data labels.
  • the eye tracking method applied to the terminal device further includes: acquiring transmission information; determining an image identifier corresponding to the transmission information, and acquiring the set of eye feature information corresponding to the image identifier, wherein the image identifier is used to identify the sending order of the first frame image; and sending the transmission information to the server in a first regular manner.
  • the eye tracking method applied to the terminal device further includes: determining a time stamp corresponding to the first frame image of the transmission information; and generating an image identifier corresponding to the first frame image according to the time stamp.
  • the sending information further includes: a first frame header, a first control word, and a first check code.
  • the eye tracking method applied to the terminal device further includes: receiving reception information sent in a second regular manner; parsing the reception information, wherein the reception information includes a second frame header, a second control word, a second check code, and gaze information; completing verification of the second check code; and, if the verification result of the second check code satisfies a preset condition, determining the gaze information as the gaze information of the target object.
  • an eye tracking device applied to a terminal device includes: a first acquisition module configured to acquire image information; a first sending module configured to send the image information to the cloud processor; and a first receiving module configured to receive and parse gaze information fed back by the cloud processor.
  • an eye tracking device applied to a terminal device includes: a second acquisition module configured to acquire image information; a second extraction module configured to extract eye feature information from the image information to form an eye feature information set, wherein the image information includes eye feature information; a second sending module configured to send the eye feature information set to the server; and a second receiving module configured to receive and parse the gaze information fed back by the server.
  • an eye tracking system applied to a terminal device includes: a device end configured to collect image information, extract eye feature information from the image information to form an eye feature information set, send the eye feature information set to the server, and receive and parse the gaze information returned by the server; and the server, configured to receive the eye feature information sent by the device end and process the eye feature information to obtain the gaze information of the target object.
  • a storage medium includes a stored program, where the program executes an eye tracking method applied to a terminal device.
  • a processor is further provided.
  • the processor is configured to run a program, wherein the program executes an eye tracking method applied to a terminal device when the program is run.
  • the method of combining the device side with the server is used to collect image information on the device side and extract eye feature information from the image information to form an eye feature information set, and then send the eye feature information set to the server.
  • the server processes according to the eye feature information set to obtain the fixation information of the target object, and sends the fixation information of the target object to the device side, and then the device side receives and analyzes the fixation information fed back by the server.
  • the collection of image information of the target object and the extraction of the eye feature information set run on the device side, and the processing of the eye feature information set is implemented in the server.
  • because the gaze point algorithm that processes the eye feature information to obtain gaze information consumes more resources, running it on the server effectively reduces resource consumption on the device side.
  • the device side can extract the eye feature information set from the image information and send the eye feature information set to the server instead of sending the image information directly, which reduces the bandwidth occupied by uploading image information to the server and further reduces resource consumption on the device side.
  • the solution provided in the present application can solve the technical problem that the eye tracking algorithms in the prior art all run on the device side, which causes a large resource consumption on the device side.
  • FIG. 1 is a flowchart of an eye tracking method applied to a terminal device according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of an optional frame format according to an embodiment of the present application.
  • FIG. 3 is a flowchart of an eye tracking method applied to a terminal device according to an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of an eye tracking device applied to a terminal device according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an eye tracking device applied to a terminal device according to an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of an eye tracking system applied to a terminal device according to an embodiment of the present application.
  • FIG. 7 is a flowchart of an eye tracking method applied to a terminal device according to an embodiment of the present application.
  • FIG. 8 is a flowchart of an eye tracking method applied to a terminal device according to an embodiment of the present application.
  • FIG. 9 is a flowchart of an eye tracking method applied to a terminal device according to an embodiment of the present application.
  • an embodiment of an eye tracking method applied to a terminal device is provided. It should be noted that the steps shown in the flowcharts of the accompanying drawings can be executed in a computer system, for example as a set of computer-executable instructions. Moreover, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one here.
  • the device end can execute the eye tracking method applied to the terminal device provided in this embodiment.
  • the device end includes a mobile device and a non-mobile device, and the mobile device may be, but is not limited to, a mobile phone, a tablet, and the like.
  • FIG. 1 is a flowchart of an eye tracking method applied to a terminal device according to an embodiment of the present application. As shown in FIG. 1, the method includes the following steps:
  • step S102 image information is collected.
  • the device has an image collector, which can collect image information of the target object, wherein the image information of the target object includes an eye image of the target object.
  • the image collector may be a camera of the mobile phone.
  • although the resolution of the mobile phone's rear camera is higher than that of its front camera, the resolution of the front camera reaches 10 megapixels, and the image information it collects meets the needs of eye tracking technology.
  • the image collector in this embodiment uses the front camera of the mobile phone.
  • Step S104 Send the image information to the cloud processor.
  • because the cloud server has strong processing capability and can run more complex algorithms, after collecting the image information of the target object, the device side sends the image information of the target object to the cloud processor for processing.
  • the cloud processor is a processor in a cloud server.
  • the device side and the cloud processor may communicate through a wireless communication method.
  • the wireless communication method may include, but is not limited to, WIFI, 3G, 4G, and GPRS.
  • after receiving the image information, the cloud processor extracts, based on a data extraction algorithm, at least one of the following from the image information: pupil information, corneal information, iris information and/or eyelid information, eye image information, and gaze depth information, and uses the at least one piece of information as the eye feature information.
  • the cloud processor calculates the fixation information of the target object according to the eye tracking algorithm.
  • the gaze information includes at least gaze direction information and gaze point information.
  • the gaze point information may be coordinate information of the target object's gaze point on the screen of the device.
  • the cloud processor sends the calculated gaze information to the device side through wireless communication, and the device side can receive the gaze information.
  • Step S106 Receive and analyze gaze information fed back by the cloud processor.
  • the device side and the cloud processor can communicate through wireless communication.
  • the wireless communication methods may include, but are not limited to, WIFI, 3G, 4G, and GPRS.
  • the cloud processor does not need to send the image information to the device side, and only sends the gaze information to the device side.
  • the communication rate between the cloud processor and the device side can meet the real-time requirements, and the throughput is small; even GPRS, with its slower communication rate, can meet the requirements.
  • the method of combining the device side with the cloud processor is used: the device side collects the image information of the target object and sends it to the cloud processor; after receiving the image information, the cloud processor extracts eye feature information from the image information, processes the eye feature information to obtain the gaze information of the target object, and sends the gaze information to the device side; the device side then receives and parses the gaze information fed back by the cloud processor.
  • because the gaze point algorithm that processes eye feature information to obtain gaze information consumes a lot of resources, processing it on the cloud processor effectively reduces resource consumption on the device side.
  • because the cloud processor has strong processing power and can run more complex algorithms, the cloud processor can directly process the image information of the target object to obtain eye feature information, extract gaze information based on the eye feature information, and finally send the gaze information to the device side, further reducing the resource consumption of the device side.
  • the solution provided in the present application can solve the technical problem that the eye tracking algorithms in the prior art all run on the device side, which causes a large resource consumption on the device side.
  • the device end can extract a series of eye feature information from the image information. Specifically, the device side extracts, based on a data extraction algorithm, at least one of the following from the image information: pupil information, corneal information, spot information, iris information and/or eyelid information, eye image information, and gaze depth information, and uses the at least one piece of information as the eye feature information.
  • the image information collected by the device side may include image information from multiple frames of images.
  • the eye image information and gaze depth information are a set of eye feature information extracted from the same frame image.
  • Step S1060: obtain the sending information;
  • Step S1062: determine an image identifier corresponding to the sending information, where the image identifier is used to identify the sending order of the sending information;
  • Step S1064: send the sending information to the cloud processor in a first regular manner.
  • the image identifier may be, but is not limited to, a time stamp, a name, or a randomly assigned identifier.
  • the timestamp is a timestamp corresponding to the time when the image information was collected.
  • the foregoing sending information may be information corresponding to the current frame image.
  • the device end may determine a timestamp corresponding to the first frame image of the sending information, and generate an image identifier corresponding to the first frame image according to the timestamp.
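  • As a sketch, an image identifier derived from the capture timestamp might look like the following; the format (a zero-padded millisecond count) is an assumption chosen so that identifiers sort in transmission order, and is not specified by the application:

```python
import time

def make_image_id(capture_time=None):
    """Generate an image identifier from the frame's capture timestamp."""
    t = time.time() if capture_time is None else capture_time
    # zero-padded millisecond timestamp, so string order matches capture order
    return f"img-{int(t * 1000):013d}"

# two consecutive frames: identifiers sort in the order the frames were sent
ids = [make_image_id(1.0), make_image_id(2.5)]
```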
  • the image identification is also sent to the cloud processor during the process of sending image information to the cloud processor.
  • the cloud processor can receive each frame in sequence according to its image identifier, which ensures that the cloud processor correctly parses the identified image information and then processes the correct eye feature information to obtain accurate gaze information of the target object.
  • before sending the image information corresponding to the image identifier, the device side needs to configure the sending information corresponding to the image information, and then sends the sending information to the cloud processor in a first regular manner.
  • the sending information includes: a first frame header, a first control word, a first check code, and image information.
  • the first rule mode may be a frame format.
  • FIG. 2 shows a schematic diagram of an optional frame format. As can be seen from FIG. 2, the frame format includes at least four parts: a frame header, a control word, data (that is, the image information), and a check code, where the check code may be, but is not limited to, a number obtained by a CRC (Cyclic Redundancy Check).
  • a CRC check can ensure that, after the device side communicates with the cloud processor, the cloud processor parses the correct data, guaranteeing data correctness and preventing interference signals from affecting the parsing result.
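  • A sending frame of this kind might be assembled as follows; the concrete header and control word values, and the CRC-16 variant, are assumptions (the application only fixes the four-part layout of FIG. 2):

```python
import binascii
import struct

FRAME_HEADER = 0xAA  # hypothetical 1-byte frame header value
CTRL_IMAGE = 0x01    # hypothetical control word marking image data

def build_frame(ctrl, payload):
    """Frame header + control word + data + CRC check code (FIG. 2 layout)."""
    body = bytes([FRAME_HEADER, ctrl]) + payload
    crc = binascii.crc_hqx(body, 0)       # CRC-16 over header, ctrl and data
    return body + struct.pack(">H", crc)  # append 2-byte check code

frame = build_frame(CTRL_IMAGE, b"\x03\x20\x03\x48")  # toy 4-byte payload
```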
  • the processor of the server parses the eye feature information, processes the parsed data to obtain the gaze information of the target object, and sends the gaze information to the device side in a second regular manner.
  • the data in the frame format is the gaze information.
  • the device end receives the reception information sent in the second regular manner, parses the reception information, and then completes verification of the second check code. If the verification result of the second check code satisfies a preset condition, the device end determines the gaze information as the gaze information of the target object.
  • the received information includes a second frame header, a second control word, a second check code, and gaze information.
  • the device end uses the check code to verify the parsed data, which ensures the correctness of the data and prevents interference signals from affecting the parsing result, so that the correct gaze information of the target object is obtained.
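  • The device-side receive path described above can be sketched like this; the frame layout and CRC variant are illustrative assumptions, not values fixed by the application:

```python
import binascii
import struct

def parse_frame(frame):
    """Parse header, control word, gaze data and check code; verify the CRC."""
    payload = frame[2:-2]
    received_crc = struct.unpack(">H", frame[-2:])[0]
    if binascii.crc_hqx(frame[:-2], 0) != received_crc:
        return None  # verification failed: do not treat payload as gaze info
    return {"header": frame[0], "ctrl": frame[1], "gaze": payload}

# a well-formed frame: header, control word, 4 bytes of gaze data, 2-byte CRC
good = bytes([0xAA, 0x02]) + b"\x01\x90\x01\x2c"
good += struct.pack(">H", binascii.crc_hqx(good, 0))
parsed = parse_frame(good)             # accepted, gaze data extracted
bad = bytearray(good)
bad[3] ^= 0xFF                         # one data byte corrupted in transit
rejected = parse_frame(bytes(bad))     # CRC mismatch, frame rejected
```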
  • the solution provided by the present application can effectively reduce the computing capability that eye tracking technology requires of the device side.
  • the cloud processor updates the eye tracking algorithm in real time to complete the extraction of gaze information, which can improve the eye tracking processing rate on the device side.
  • the solution provided in this application runs the image processing algorithm on the device side and the gaze point algorithm on the cloud processor, which can greatly reduce the throughput of communication data and increase the communication rate.
  • FIG. 9 is a flowchart of an eye tracking method applied to a terminal device according to an embodiment of the present application. As shown in FIG. 9, the method includes the following steps:
  • step S902 image information is collected.
  • the device has an image collector, which can collect image information of the target object, wherein the image information of the target object includes an eye image of the target object.
  • the image collector may be a camera of the mobile phone.
  • although the resolution of the mobile phone's rear camera is higher than that of its front camera, the resolution of the front camera reaches 10 megapixels, and the image information it collects meets the needs of eye tracking technology.
  • the image collector in this embodiment selects the front camera of the mobile phone.
  • Step S904 extracting eye feature information from the image information to form an eye feature information set, where the image information includes eye feature information.
  • the eye feature information set is configured as one or more groups of eye feature information with data labels.
  • before forming the eye feature information set, the device side also converts the image information of the target object into corresponding eye feature information, configures the corresponding eye feature information with data labels, and then sorts the eye feature information configured with data labels in a specified order.
  • the corresponding eye feature information includes one or more of pupil information, corneal information, spot information, iris information and/or eyelid information, eye image information, and gaze depth information of the target object.
  • the pupil information includes at least the pupil center position and pupil diameter; the corneal information includes at least corneal spot reflection information; and the iris information includes at least iris edge information.
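  • One labelled group of eye feature information could be modelled as below; the field names are illustrative assumptions covering the items listed above, not a structure defined by the application:

```python
from dataclasses import dataclass, field

@dataclass
class EyeFeatures:
    """One group of eye feature information with its data label."""
    label: str                      # data label used to sort/group the data
    pupil_center: tuple             # pupil information: center position
    pupil_diameter: float           # pupil information: diameter
    corneal_spot: tuple = (0, 0)    # corneal information: spot reflection
    iris_edge: list = field(default_factory=list)  # iris edge points

features = EyeFeatures(label="frame-0001",
                       pupil_center=(800, 840),
                       pupil_diameter=6.0)
```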
  • Step S906 Send the eye feature information set to the server.
  • the server in step S906 may be a cloud processor, wherein the device side and the server may communicate through wireless communication, and the wireless communication method may include, but is not limited to, WIFI, 3G, 4G, GPRS, and the like.
  • the device does not need to send the entire collected image information to the server, and only sends the eye feature information to the server.
  • the device end may send the eye feature information to the server in the form of a frame format.
  • the frame format may be composed of a frame header, a control word, human eye data, and a check code.
  • the device end may send the eye feature information on the device side as a 1-byte frame header, a 1-byte control word, 3 to 6 bytes of eye feature information (that is, the human eye data), and a 2-byte CRC check (that is, the check code), so the total amount sent to the server is about 10 bytes.
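  • The byte budget above (1-byte header, 1-byte control word, up to 6 bytes of human eye data, 2-byte CRC) can be sketched as follows; the header and control values, and the choice of fields packed into the 6 data bytes, are assumptions:

```python
import binascii
import struct

def pack_eye_frame(x, y, diameter):
    """1-byte header + 1-byte control word + 6 bytes of eye data + 2-byte CRC."""
    data = struct.pack(">HHH", x, y, diameter)  # pupil center and diameter
    body = b"\xaa\x01" + data                   # header + control word
    return body + struct.pack(">H", binascii.crc_hqx(body, 0))

# total sent per frame is 10 bytes, matching the "about 10 bytes" above
frame = pack_eye_frame(800, 840, 6)
```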
  • the device side collects a two-dimensional face image of the target object, and the resolution of the two-dimensional face image is 1920 * 1080.
  • the device side processes the two-dimensional face image to obtain a solid circle of 6 pixels, where the center coordinates of the solid circle are (800 * 800, 840 * 800).
  • the device side sends the center coordinates of the solid circle to the server through wireless communication.
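  • The bandwidth saving implied by this example can be put in rough numbers; the pixel depth below (uncompressed 8-bit grayscale) is an assumption, since the application does not state it:

```python
# rough comparison: a full 1920x1080 face image vs. a ~10-byte feature frame
image_bytes = 1920 * 1080 * 1   # assumed 8-bit grayscale, uncompressed
feature_bytes = 10              # header + control word + eye data + CRC
ratio = image_bytes // feature_bytes
# sending features instead of the image cuts the upload by this factor
```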
  • Step S908 Receive and analyze the gaze information fed back by the server.
  • the server calculates the gaze information of the target object according to the eye tracking algorithm, wherein the gaze information includes at least gaze direction information and gaze point information, and the gaze point information may be coordinate information of the target object's gaze point on the screen of the device.
  • the server sends the calculated gaze information to the device side through wireless communication, and the device side receives the gaze information. It should be noted that, in the above process, the server does not need to send the image information to the device side, and only sends the gaze information to the device side.
  • the communication rate between the server and the device side can meet the real-time requirements, and the throughput is small; even GPRS, with its slower communication rate, can meet the requirements.
  • the eye tracking algorithm requires high computing power on the device side and places strict requirements on the device's processor cores and clock frequency, so it is difficult for a mid-range device to meet these hardware requirements.
  • if the eye tracking algorithm always runs on the device side, resource consumption on the device side is large, and the device side may even heat up and freeze.
  • the solution provided in this application runs the eye tracking algorithm on the server, which can effectively avoid the above problems.
  • the battery capacity on the device side is limited, and the eye tracking algorithm is a technology based on video streaming, which consumes high power. Therefore, running the eye tracking algorithm on the server can reduce energy consumption on the device side.
  • the method of combining the device side with the server collects image information of the target object on the device side and extracts eye feature information from the image information to form an eye feature information set, and then sends the eye feature information set to the server; the server processes the eye feature information set to obtain the gaze information of the target object and sends it to the device side; the device side then receives and parses the gaze information returned by the server.
  • in addition, the device side can extract the eye feature information set from the image information and send it to the server instead of sending the image information directly, which reduces the bandwidth occupied by uploading image information to the server and further reduces resource consumption on the device side.
  • the solution provided in the present application can solve the technical problem that the eye tracking algorithms in the prior art all run on the device side, which causes a large resource consumption on the device side.
  • the device end can extract the eye feature information from the image information. Specifically, the device side extracts, based on a data extraction algorithm, at least one of the following from the image information: pupil information, corneal information, spot information, iris information and/or eyelid information, eye image information, and gaze depth information, and uses the at least one piece of information as the eye feature information.
  • the eye feature information set extracted from the image information may contain eye feature information in multiple frames of images.
  • the eye image information and gaze depth information are eye feature information extracted from the same frame image.
  • image identification is performed on the current frame image. The specific process may include the following steps:
  • Step S2060: obtain the sending information;
  • Step S2062: determine an image identifier corresponding to the sending information, and obtain the set of eye feature information corresponding to the image identifier, where the image identifier is used to identify the sending order of the sending information;
  • Step S2064: send the sending information to the server in a first regular manner.
  • the image identifier may be, but is not limited to, a time stamp, a name, or a randomly assigned identifier.
  • the timestamp is a timestamp corresponding to the time when the image information was collected.
  • the foregoing sending information may be information corresponding to the current frame image.
  • the device end may determine a timestamp corresponding to the first frame image of the sending information, and generate an image identifier corresponding to the first frame image according to the timestamp.
  • the image identification is also sent to the server during the process of sending the eye feature information set to the server.
  • the server can receive each frame in sequence according to its image identifier, which ensures that the server correctly parses the identified eye feature information and then processes the correct eye feature information to obtain accurate gaze information of the target object.
  • the device end needs to configure the sending information corresponding to the eye characteristic information before sending the set of eye characteristic information corresponding to the image identification series, and then sends the sending information to the server in a first regular manner.
  • the sending information includes: a first frame header, a first control word, a first check code, and an eye feature information set.
  • the first rule mode may be a frame format.
  • FIG. 2 shows a schematic diagram of an optional frame format.
  • the frame format includes at least four parts: a frame header, a control word, data (i.e., eye feature information), and a check code, where the check code may be, but is not limited to, a number obtained by a CRC (Cyclic Redundancy Check).
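A minimal sketch of such a frame, assuming a 1-byte header, a 1-byte control word, and a CRC-CCITT check code; the application does not fix the byte layout, the header/control values, or the CRC polynomial, so all of these are illustrative.

```python
# Hypothetical sketch of the "first rule" frame: header, control word,
# eye-feature data, CRC-16 check code (CRC-CCITT via binascii.crc_hqx).
import binascii
import struct

FRAME_HEADER = 0xAA  # assumed sync byte
CONTROL_WORD = 0x01  # assumed "eye feature data" message type

def pack_frame(payload: bytes) -> bytes:
    """Prefix header and control word, append a big-endian CRC-16 over both."""
    body = struct.pack("BB", FRAME_HEADER, CONTROL_WORD) + payload
    crc = binascii.crc_hqx(body, 0xFFFF)
    return body + struct.pack(">H", crc)

# 4 bytes of eye-feature data -> 1 + 1 + 4 + 2 = 8-byte frame
frame = pack_frame(bytes([10, 20, 30, 40]))
assert len(frame) == 8
```

With 3 to 6 bytes of eye-feature data, the total frame size is 7 to 10 bytes, consistent with the roughly 10-byte figure given later in the application.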
  • the server parses the eye feature information, processes the parsed data to obtain the fixation information of the target object, and sends the fixation information to the device in the second regular manner.
  • the data in the frame format is gaze information.
  • the device end receives the information sent in the second regular manner, parses it, and verifies the second check code; if the check passes, the parsed gaze information is taken as the gaze information of the target object.
  • the received information includes a second frame header, a second control word, a second check code, and gaze information.
  • the device end uses the check code to verify the parsed data, which ensures the correctness of the data and prevents interference signals from affecting the parsing result, so that the correct gaze information of the target object is obtained.
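A sketch of this device-side parse-and-verify step, assuming a frame of one header byte, one control byte, gaze data, and a trailing CRC-16; all layout details are assumptions for illustration.

```python
# Sketch: parse header, control word, and gaze data, then verify the
# trailing check code before trusting the gaze information.
import binascii
import struct

def parse_frame(frame: bytes):
    """Return (header, control, data) if the CRC-16 check passes, else None."""
    body, (crc,) = frame[:-2], struct.unpack(">H", frame[-2:])
    if binascii.crc_hqx(body, 0xFFFF) != crc:
        return None  # checksum failed: discard, likely interference
    return body[0], body[1], body[2:]  # body[2:] carries the gaze data

good = struct.pack("BB", 0xAA, 0x02) + bytes([5, 6])
good += struct.pack(">H", binascii.crc_hqx(good, 0xFFFF))
assert parse_frame(good) == (0xAA, 0x02, bytes([5, 6]))
```

A frame whose bytes were corrupted in transit fails the CRC comparison and is rejected instead of being passed to the application.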
  • the solution provided by the present application can effectively reduce the computing-capability requirement that eye tracking technology places on the device end.
  • the eye tracking algorithm is updated in real time by the server to complete the extraction of gaze information, which can improve the processing rate of eye tracking on the device side.
  • the solution provided in this application runs the image processing algorithm on the device side and the gaze point algorithm on the server, which can greatly reduce the throughput of communication data and increase the communication rate.
  • FIG. 3 is a flowchart of an eye tracking method applied to a terminal device according to an embodiment of the present application. As shown in FIG. 3, the method includes the following steps:
  • Step S302 Receive eye feature information of the target object sent by the device.
  • the device end includes a mobile device and a non-mobile device, and the mobile device may be, but is not limited to, a mobile phone, a tablet, and the like.
  • the device end has an image collector, which can collect image information of the target object, wherein the image information of the target object includes an eye image of the target object.
  • the image collector may be a camera of the mobile phone.
  • although the resolution of a mobile phone's rear camera is generally higher than that of its front camera, a front camera with a resolution of about 10 megapixels collects image information that meets the needs of eye tracking technology.
  • the image collector of this embodiment selects the front camera of the mobile phone.
  • the eye feature information of the target object includes at least one of the following: pupil information, corneal information, spot information, iris information, eyelid information, eye image information, and gaze depth information, where the pupil information includes at least the pupil center position and pupil diameter; the corneal information includes at least the corneal spot reflection information; and the iris information includes at least the iris edge information.
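The categories above could be grouped into a record like the following sketch. The field names and types are illustrative, since the application only enumerates the kinds of information, not a concrete data structure.

```python
# Sketch: a record type for the eye feature information sent to the server.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class EyeFeatures:
    pupil_center: Tuple[float, float]                     # pupil information
    pupil_diameter: float
    corneal_spot: Optional[Tuple[float, float]] = None    # corneal reflection
    iris_edge: Optional[List[Tuple[float, float]]] = None # iris edge points
    gaze_depth: Optional[float] = None

f = EyeFeatures(pupil_center=(312.0, 240.5), pupil_diameter=34.2)
```

Only the few numeric fields need to be serialized into the frame, which is why the eye-feature payload can be a handful of bytes rather than a full image.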
  • Step S304 Determine gaze information of the target object according to the eye feature information.
  • the server calculates the fixation information of the target object according to the eye tracking algorithm, where the fixation information may be the coordinate information of the target object's fixation point on the device-side screen.
  • Step S306 Send gaze information to the device.
  • the device and the server can communicate through wireless communication.
  • the wireless communication methods may include, but are not limited to, WIFI, 3G, 4G, and GPRS.
  • the server does not need to send the image information to the device side; it only needs to send the gaze information to the device side.
  • the communication rate between the server and the device can meet the real-time requirements, and the throughput is small; even GPRS, with its slower communication rate, can meet the requirements.
  • the server calculates the fixation information of the target object according to the eye tracking algorithm, where the fixation information may be coordinate information of the fixation point of the target object on the screen of the device. Then, the server sends the calculated gaze information to the device side through wireless communication, and the device side receives the gaze information.
  • the device side and the server are used to collect the image information of the target object on the device side, and extract the eye feature information of the target object from the image information, and then The eye feature information is sent to the server, and the server processes the eye feature information to obtain the fixation information of the target object, and sends the fixation information of the target object to the device side, so that the fixation information of the target object can be displayed on the device side.
  • the device can extract the eye feature information from the image information and send the eye feature information to the server instead of sending the image information directly, which reduces the bandwidth occupied by uploading image information to the server and further reduces resource consumption on the device side.
  • the solution provided in the present application can solve the technical problem that the eye tracking algorithms in the prior art all run on the device side, which causes a large resource consumption on the device side.
  • the device sends the eye feature information of the target object in a frame format, and determining the fixation information of the target object according to the eye feature information may include the following steps:
  • Step S3020 Parse the data in the frame format to obtain an analysis result, where the data in the frame format includes eye feature information;
  • Step S3022 Verify the analysis result to obtain a verification result;
  • Step S3024 If the verification result meets a preset condition, determine the fixation information of the target object according to the eye feature information.
  • the foregoing frame format may be the frame format shown in FIG. 2.
  • the server parses the eye feature information and checks the check code in the frame format. When the check code passes the check, the server processes the parsed data to obtain the fixation information of the target object.
  • the server sends the gaze information to the device side, and the specific steps may include:
  • Step S3060 Determine an image identifier corresponding to the eye feature information;
  • Step S3062 Obtain gaze information corresponding to the image identifier;
  • Step S3064 Send the gaze information and the image identifier to the device in a predetermined format.
  • the server may send the gaze information and the image identifier corresponding to the current frame image in the following frame format: a frame header, a control word, human eye data, and a check code, where the human eye data includes the gaze information.
  • the image identifier may be, but is not limited to, a timestamp, a name, or a randomly assigned identifier. When the image identifier is a timestamp, it is the timestamp corresponding to the time when the image information was collected.
  • the server processes the eye feature information based on the eye tracking algorithm to obtain fixation information, and feeds the fixation information to the device according to the frame format.
  • at this time, the data in the frame format is the gaze information.
  • the device side parses the gaze information and passes the gaze information to the application side (for example, a display device).
  • the predetermined format includes at least: a frame header, a control word, human eye data, and a check code.
  • the check code may be, but is not limited to, a number obtained by a CRC (Cyclic Redundancy Check).
  • after receiving the eye feature information, the server parses it, processes the parsed data to obtain the fixation information of the target object, and sends the fixation information to the device according to the frame format. At this time, the data in the frame format is the gaze information. After receiving it, the device side parses the gaze information and passes it to the application side (for example, a display device).
  • the solution provided by the present application can effectively reduce the computing-capability requirement that eye tracking technology places on the device end.
  • the eye tracking algorithm is updated in real time by the server to complete the extraction of gaze information, which can improve the processing rate of eye tracking on the device side.
  • the solution provided in this application runs the image processing algorithm on the device side and the gaze point algorithm on the server, which can greatly reduce the throughput of communication data and increase the communication rate.
  • FIG. 7 is a flowchart of an eye tracking method applied to a terminal device according to an embodiment of the present application. As shown in FIG. 7, the method includes the following steps:
  • Step S702 Extract the eye feature information of the target object from the image information of the target object.
  • the eye feature information of the target object includes at least one of the following: pupil information, corneal information, iris information, eyelid information, eye image information, and gaze depth information, where the pupil information includes at least the pupil center position and pupil diameter; the corneal information includes at least the corneal spot reflection information; and the iris information includes at least the iris edge information.
  • the device has an image collector, and the image collector can collect image information of the target object.
  • the device uses the pupil-corneal reflection method to implement eye tracking technology to extract the eye feature information of the target object.
  • the device side uses an image processing algorithm to extract the eye feature information of the target object, where the eye feature information is expressed as coordinate data in the image information, for example, the coordinates of the pupil center position, the coordinates of the corneal spot center position, and the like.
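As a toy illustration of where the pupil-center coordinates come from: the pupil is typically the darkest region of an eye image, so thresholding dark pixels and taking their centroid gives a rough pupil center. A real image-processing pipeline is far more robust; this only sketches the idea.

```python
# Sketch: estimate the pupil center as the centroid of dark pixels in a
# grayscale eye image (given as a nested list of 0-255 intensity values).
def pupil_center(image, threshold=50):
    xs, ys, n = 0.0, 0.0, 0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v < threshold:  # dark pixel -> candidate pupil pixel
                xs, ys, n = xs + x, ys + y, n + 1
    return (xs / n, ys / n) if n else None  # None: no pupil found

img = [[200] * 5 for _ in range(5)]
img[2][1] = img[2][3] = 10        # two dark pixels centered around (2, 2)
assert pupil_center(img) == (2.0, 2.0)
```

Only the resulting coordinate pair, not the image, then needs to be transmitted to the server.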
  • Step S704 Send eye feature information to the server, where the server processes the eye feature information to obtain gaze information of the target object.
  • the device and the server can communicate through wireless communication.
  • the wireless communication methods may include, but are not limited to, WIFI, 3G, 4G, and GPRS.
  • the device does not need to send the entire collected image information to the server; only eye feature information such as the pupil information, corneal information, iris information, eyelid information, eye image information, and gaze depth information needs to be transmitted to the server.
  • for example, the information can be sent to the server as a 1-byte frame header, a 1-byte control word, 3 to 6 bytes of eye feature information, and a 2-byte CRC check, for a total of about 10 bytes.
  • the server calculates the fixation information of the target object according to the eye tracking algorithm, where the fixation information may be the coordinate information of the target object's fixation point on the device-side screen. Then, the server sends the calculated gaze information to the device side through wireless communication, and the device side receives the gaze information.
  • the server does not need to send the image information to the device side, and only sends the gaze information to the device side.
  • the communication rate between the server and the device can meet the real-time requirements, and the throughput is small; even GPRS, with its slower communication rate, can meet the requirements.
  • the eye tracking algorithm requires high computing power on the device side: it places strict requirements on the device's processor cores and clock frequency, and mid-range devices find it difficult to meet these hardware requirements.
  • if the eye tracking algorithm runs continuously on the device side, resource consumption is large, and the device may even overheat and freeze.
  • the solution provided in this application runs the eye tracking algorithm on the server, which can effectively avoid the above problems.
  • the device side and the server are used to collect the image information of the target object on the device side, and extract the eye feature information of the target object from the image information, and then The eye feature information is sent to the server, and the server processes the eye feature information to obtain the fixation information of the target object, and sends the fixation information of the target object to the device side, so that the fixation information of the target object can be displayed on the device side.
  • the solution provided in the present application can solve the technical problem that the eye tracking algorithms in the prior art all run on the device side, which causes a large resource consumption on the device side.
  • FIG. 8 is a flowchart of an eye tracking method applied to a terminal device according to an embodiment of the present application. As shown in FIG. 8, the method includes the following steps:
  • Step S802 Acquire image information sent by the device.
  • the device has an image collector, which can collect image information of the target object, wherein the image information of the target object includes an eye image of the target object.
  • the image collector may be a camera of the mobile phone.
  • although the resolution of a mobile phone's rear camera is generally higher than that of its front camera, a front camera with a resolution of about 10 megapixels collects image information that meets the needs of eye tracking technology.
  • the image collector uses the front camera of the mobile phone.
  • after collecting the image information of the target object, the device sends the image information to the server for processing in order to reduce device-side resource consumption.
  • Step S804 Extract eye feature information from the image information to form an eye feature information set, where the image information includes the eye feature information.
  • after receiving the image information, the server extracts, based on a data extraction algorithm, at least one of the following from the image information: pupil information, corneal information, iris information, eyelid information, eye image information, and gaze depth information, and uses the extracted information as the eye feature information.
  • Step S806 Determine the gaze information according to the eye feature information set.
  • the server calculates the fixation information of the target object according to the eye tracking algorithm.
  • the fixation information includes at least gaze direction information and fixation point information, where the fixation point information may be the coordinate information of the target object's fixation point on the device-side screen. Then, the server sends the calculated gaze information to the device side through wireless communication, and the device side receives the gaze information; that is, step S808 is performed.
  • the eye tracking algorithm requires high computing power on the device side: it places strict requirements on the device's processor cores and clock frequency, and mid-range devices find it difficult to meet these hardware requirements.
  • if the eye tracking algorithm runs continuously on the device side, resource consumption is large, and the device may even overheat and freeze.
  • the solution provided in this application runs the eye tracking algorithm on the server, which can effectively avoid the above problems.
  • Step S808 Send gaze information to the device.
  • the device and the server can communicate through wireless communication.
  • the wireless communication methods may include, but are not limited to, WIFI, 3G, 4G, and GPRS.
  • the server does not need to send the image information to the device side; it only needs to send the gaze information to the device side.
  • the communication rate between the server and the device can meet the real-time requirements, and the throughput is small; even GPRS, with its slower communication rate, can meet the requirements.
  • in this embodiment, the server extracts the eye feature information of the target object from the image information, determines the gaze information of the target object based on the eye feature information, and then sends the gaze information to the device.
  • the server can directly process the image information of the target object to obtain the eye feature information, extract the gaze information based on the eye feature information, and finally send the gaze information to the device side, which further reduces resource consumption on the device side.
  • the solution provided in the present application can solve the technical problem that the eye tracking algorithms in the prior art all run on the device side, which causes a large resource consumption on the device side.
  • an embodiment of an eye tracking device applied to a terminal device is provided, and the device can execute the eye tracking method applied to a terminal device provided in Embodiment 1.
  • FIG. 4 is a schematic structural diagram of an eye tracking device applied to a terminal device according to an embodiment of the present application. As shown in FIG. 4, the device includes a first acquisition module 401, a first sending module 403, and a first receiving module 405.
  • the first collection module 401 is configured to collect image information; the first sending module 403 is configured to send an eye feature information set to a processor; and the first receiving module 405 is configured to receive and analyze gaze information fed back by the processor.
  • the first acquisition module 401, the first sending module 403, and the first receiving module 405 correspond to steps S102 to S104 in Embodiment 1. The three modules implement the same examples and application scenarios as the corresponding steps, but are not limited to the content disclosed in Embodiment 1.
  • the first sending module includes a first obtaining module, a first determining module, and a second sending module.
  • the first obtaining module is configured to obtain the sending information;
  • the first determining module is configured to determine the image identifier corresponding to the sending information, wherein the image identifier is used to identify the sending order of the sending information;
  • the second sending module is configured to send the sending information to the cloud processor in the first regular manner.
  • the first obtaining module, the first determining module, and the second sending module correspond to steps S1060 to S1064 in Embodiment 1. The three modules implement the same examples and application scenarios as the corresponding steps, but are not limited to the content disclosed in Embodiment 1.
  • the first determining module includes a second determining module and a first generating module.
  • the second determining module is configured to determine a timestamp corresponding to the first frame image of the transmitted information; the first generating module is configured to generate an image identifier corresponding to the first frame image according to the timestamp.
  • the sending information includes: a first frame header, a first control word, a first check code, and image information.
  • the first receiving module includes a second receiving module, a parsing module, a verifying module, and a third determining module.
  • the second receiving module is configured to receive the received information sent in a second regular manner; the parsing module is configured to analyze the received information, and the received information includes a second frame header, a second control word, a second check code, and gaze information.
  • a verification module configured to verify the second check code; and a third determining module configured to determine the gaze information as the gaze information of the target object if the verification result of the second check code meets a preset condition.
  • an embodiment of an eye tracking device applied to a terminal device is provided.
  • the device can execute the eye tracking method applied to a terminal device provided in Embodiment 2.
  • FIG. 5 is a schematic structural diagram of an eye tracking device applied to a terminal device according to an embodiment of the present application. As shown in FIG. 5, the device includes a second acquisition module 501, a second extraction module 503, a second sending module 505, and a second receiving module 507.
  • the second acquisition module 501 is configured to collect image information; the second extraction module 503 is configured to extract eye characteristic information from the image information to form an eye characteristic information set, where the image information includes eye characteristic information;
  • the second sending module 505 is configured to send the eye feature information set to the server;
  • the second receiving module 507 is configured to receive and analyze the gaze information fed back by the server.
  • the second acquisition module 501, the second extraction module 503, the second sending module 505, and the second receiving module 507 correspond to steps S902 to S908 in Embodiment 2. The four modules implement the same examples and application scenarios as the corresponding steps, but are not limited to the content disclosed in Embodiment 2.
  • the second sending module includes an obtaining module, a determining module, and a sending module.
  • the obtaining module is configured to obtain the sending information;
  • the determining module is configured to determine the image identifier corresponding to the sending information and obtain the eye feature information set corresponding to the image identifier, wherein the image identifier is used to identify the sending order of the sending information;
  • the sending information further includes: a first frame header, a first control word, a first check code, and an eye feature information set.
  • the obtaining module, the determining module, and the sending module correspond to steps S2060 to S2064 in Embodiment 2. The three modules implement the same examples and application scenarios as the corresponding steps, but are not limited to the content disclosed in Embodiment 2.
  • FIG. 6 is a schematic structural diagram of an eye tracking system applied to a terminal device according to an embodiment of the present application. As shown in FIG. 6, the system includes a device end 601 and a server 603.
  • the device end 601 is configured to collect image information of the target object, sort the eye feature information of the target object to form a series of eye feature information, send the series of eye feature information to the server, and receive and parse the gaze information fed back by the server;
  • the server 603 is configured to receive the eye feature information sent by the device, and process the eye feature information to obtain the gaze information of the target object.
  • the device side and the server are used to collect the image information of the target object on the device side, and extract the eye feature information of the target object from the image information, and then send the eye feature information to the server and the server Process according to the eye feature information to obtain the fixation information of the target object, and send the fixation information of the target object to the device side, and then the fixation information of the target object can be displayed on the device side.
  • the device side extracts the eye feature information from the image information and sends the eye feature information to the server instead of sending the image information directly, which reduces the bandwidth occupied by uploading image information to the server and further reduces resource consumption on the device side.
  • the solution provided in the present application can solve the technical problem that the eye tracking algorithms in the prior art all run on the device side, which causes a large resource consumption on the device side.
  • the device side 601 may perform the eye tracking method applied to the terminal device provided in Embodiments 1, 2, and 4; the server 603 may perform the eye tracking method applied to the terminal device provided in Embodiments 3 and 5.
  • the specific content has been described in detail in Embodiments 1-5, and details are not described herein again.
  • a storage medium includes a stored program, where the program executes the eye tracking method applied to the terminal device provided in Embodiments 1 to 5.
  • a processor is further provided for running a program, wherein the program executes the eye tracking method applied to the terminal device provided in the embodiments 1 to 5 when the program is run.
  • the disclosed technical content can be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of units may be a logical function division; in actual implementation, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, units or modules, and may be electrical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on multiple units. Some or all of the units may be selected according to actual needs to achieve the objective of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each of the units may exist separately physically, or two or more units may be integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware or in the form of software functional unit.
  • when the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium and including a number of instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method described in the embodiments of the present application.
  • the foregoing storage media include: USB flash drives, read-only memory (ROM), random access memory (RAM), removable hard disks, magnetic disks, optical disks, and other media that can store program code.
  • the solution provided by the embodiment of the present application can be applied to the eye tracking technology.
  • the method of processing images jointly on the device side and the server solves the technical problem in the prior art that eye tracking algorithms all run on the device side and cause large resource consumption there, reduces the bandwidth occupied by transmitting image information, and reduces resource consumption on the device side.


Abstract

Disclosed in the present application are an eye tracking method, apparatus and system applied to a terminal device. The method comprises: acquiring image information; extracting eye feature information from the image information to form an eye feature information set, wherein the image information comprises the eye feature information; sending the eye feature information set to a server; and receiving and parsing gaze information fed back from the server.

Description

应用于终端设备的眼球追踪方法、装置以及系统Eye tracking method, device and system applied to terminal equipment 技术领域Technical field
本申请涉及眼球追踪技术领域,具体而言,涉及一种应用于终端设备的眼球追踪方法、装置以及系统。The present application relates to the technical field of eye tracking, and in particular, to an eye tracking method, device, and system applied to a terminal device.
背景技术Background technique
眼球追踪技术作为革新的交互方式,越来越多被大众所熟知,在人们的工作、学习等方面得到了广泛的应用。其中,眼球追踪技术的处理流程主要包括如下三个步骤:Eye tracking technology, as an innovative interactive method, is more and more well known to the public, and has been widely used in people's work and learning. The process of eye tracking technology mainly includes the following three steps:
步骤S1,眼球追踪设备使用采集设备获取用户的面部图像,其中,采集设备可以为光学装置、电学装置等;Step S1, the eye-tracking device acquires a user's face image using a collection device, where the collection device may be an optical device, an electrical device, etc .;
步骤S2,眼球追踪设备对用户的面部图像进行处理,并提取用户的眼部特征信息;Step S2: The eye-tracking device processes the face image of the user, and extracts eye feature information of the user;
步骤S3,眼球追踪设备对用户的眼部特征信息进行处理,求取用户的注视方向以及注视点位置。In step S3, the eye tracking device processes the eye feature information of the user, and obtains the user's gaze direction and gaze point position.
目前,上述三个步骤均在设备端实现。然而,在设备端实现对用户的眼球追踪的过程中,设备端的系统运行能力、存储空间等资源的消耗量比较大,此外,如果长时间运行眼球追踪的程序,设备端可能出现发热发烫,耗电量增加等问题,上述问题降低了用户的使用体验。At present, the above three steps are implemented on the device side. However, in the process of implementing eye tracking for the user on the device side, the system running capacity, storage space and other resources are consumed on the device side. In addition, if the eye tracking program is run for a long time, the device side may become hot and hot. Problems such as increased power consumption, and the above problems reduce the user experience.
针对上述现有技术的眼球追踪算法均在设备端运行导致设备端的资源消耗大的问题,目前尚未提出有效的解决方案。Aiming at the problem that the aforementioned eye tracking algorithms of the prior art all run on the device side, causing a large resource consumption on the device side, no effective solution has been proposed at present.
发明内容Summary of the Invention
Embodiments of the present application provide an eye tracking method, apparatus, and system applied to a terminal device, so as to at least solve the technical problem in the prior art that eye tracking algorithms all run on the device side, resulting in large device-side resource consumption.
According to one aspect of the embodiments of the present application, an eye tracking method applied to a terminal device is provided, including: capturing image information; sending the image information to a cloud processor; and receiving and parsing gaze information fed back by the cloud processor.
Further, the eye tracking method applied to a terminal device further includes: obtaining transmission information; determining an image identifier corresponding to the transmission information, where the image identifier is used to identify the transmission order of the first frame image; and sending the transmission information to the cloud processor in a first rule manner.

Further, the eye tracking method applied to a terminal device further includes: determining a timestamp corresponding to the first frame image of the transmission information; and generating the image identifier corresponding to the first frame image according to the timestamp.

Further, the transmission information includes: a first frame header, a first control word, a first check code, and cloud information.

Further, the eye tracking method applied to a terminal device further includes: receiving reception information sent in a second rule manner; parsing the reception information, where the reception information includes a second frame header, a second control word, a second check code, and gaze information; verifying the second check code; and if the verification result of the second check code satisfies a preset condition, determining the gaze information as the gaze information of the target object.
According to another aspect of the embodiments of the present application, an eye tracking method applied to a terminal device is further provided, including: obtaining image information sent by the device side; extracting eye feature information from the image information to form an eye feature information set, where the image information contains the eye feature information; determining gaze information according to the eye feature information set; and sending the gaze information to the device side.

According to another aspect of the embodiments of the present application, an eye tracking method applied to a terminal device is further provided, including: capturing image information; extracting eye feature information from the image information to form an eye feature information set, where the image information contains the eye feature information; sending the eye feature information set to a server; and receiving and parsing gaze information fed back by the server.
Further, before forming the eye feature information set, the eye tracking method applied to a terminal device further includes: converting the image information into corresponding eye feature information; attaching data labels to the corresponding eye feature information; and sorting the labeled eye feature information in a specified order.

Further, the corresponding eye feature information includes one or more of the target object's pupil information, cornea information, light spot information, iris information and/or eyelid information, eye image information, and gaze depth information.

Further, the eye feature information set is configured as one or more groups of eye feature information with data labels.
Further, the eye tracking method applied to a terminal device further includes: obtaining transmission information; determining an image identifier corresponding to the transmission information, and obtaining the series of eye feature information sets corresponding to the image identifier, where the image identifier is used to identify the transmission order of the first frame image; and sending the transmission information to the server in a first rule manner.

Further, the eye tracking method applied to a terminal device further includes: determining a timestamp corresponding to the first frame image of the transmission information; and generating the image identifier corresponding to the first frame image according to the timestamp.

Further, the transmission information further includes: a first frame header, a first control word, and a first check code.

Further, the eye tracking method applied to a terminal device further includes: receiving reception information sent in a second rule manner; parsing the reception information, where the reception information includes a second frame header, a second control word, a second check code, and gaze information; verifying the second check code; and if the verification result of the second check code satisfies a preset condition, determining the gaze information as the gaze information of the target object.
According to another aspect of the embodiments of the present application, an eye tracking apparatus applied to a terminal device is further provided, including: a first capture module configured to capture image information; a first sending module configured to send the image information to a cloud processor; and a first receiving module configured to receive and parse gaze information fed back by the cloud processor.

According to another aspect of the embodiments of the present application, an eye tracking apparatus applied to a terminal device is further provided, including: a second capture module configured to capture image information; a second extraction module configured to extract eye feature information from the image information to form an eye feature information set, where the image information contains the eye feature information; a second sending module configured to send the eye feature information set to a server; and a second receiving module configured to receive and parse gaze information fed back by the server.

According to another aspect of the embodiments of the present application, an eye tracking system applied to a terminal device is further provided, including: a device side configured to capture image information, extract eye feature information from the image information to form an eye feature information set, send the eye feature information to a server, and receive and parse gaze information fed back by the server; and a server configured to receive the eye feature information sent by the device side and process the eye feature information to obtain gaze information of the target object.
According to another aspect of the embodiments of the present application, a storage medium is further provided. The storage medium includes a stored program, where the program executes the eye tracking method applied to a terminal device.

According to another aspect of the embodiments of the present application, a processor is further provided. The processor is configured to run a program, where the program, when run, executes the eye tracking method applied to a terminal device.
In the embodiments of the present application, a combination of the device side and the server is adopted: image information is captured on the device side, and eye feature information is extracted from the image information to form an eye feature information set, which is then sent to the server; the server processes the eye feature information set to obtain the gaze information of the target object and sends it to the device side, which in turn receives and parses the gaze information fed back by the server.

In the above process, the capture of the target object's image information and the extraction of the eye feature information set run on the device side, while the processing of the eye feature information set is carried out on the server. Since the gaze point algorithm that processes the eye feature information set to obtain the gaze information consumes considerable resources, running it on the server effectively reduces device-side resource consumption. In addition, uploading images directly to the server would occupy considerable bandwidth and consume resources; therefore, in the present application, the device side extracts the eye feature information set from the image information and sends the set to the server instead of sending the image information directly, which reduces the bandwidth that uploading image information directly to the server would occupy and further reduces device-side resource consumption.

It can thus be seen that the solution provided in the present application can solve the technical problem in the prior art that eye tracking algorithms all run on the device side, resulting in large device-side resource consumption.
Brief Description of the Drawings
The drawings described here are used to provide a further understanding of the present application and constitute a part of the present application. The schematic embodiments of the present application and their descriptions are used to explain the present application and do not constitute an improper limitation of the present application. In the drawings:
Fig. 1 is a flowchart of an eye tracking method applied to a terminal device according to an embodiment of the present application;

Fig. 2 is a schematic diagram of an optional frame format according to an embodiment of the present application;

Fig. 3 is a flowchart of an eye tracking method applied to a terminal device according to an embodiment of the present application;

Fig. 4 is a schematic structural diagram of an eye tracking apparatus applied to a terminal device according to an embodiment of the present application;

Fig. 5 is a schematic structural diagram of an eye tracking apparatus applied to a terminal device according to an embodiment of the present application;

Fig. 6 is a schematic structural diagram of an eye tracking system applied to a terminal device according to an embodiment of the present application;

Fig. 7 is a flowchart of an eye tracking method applied to a terminal device according to an embodiment of the present application;

Fig. 8 is a flowchart of an eye tracking method applied to a terminal device according to an embodiment of the present application; and

Fig. 9 is a flowchart of an eye tracking method applied to a terminal device according to an embodiment of the present application.
Detailed Description of the Embodiments
In order to enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. Based on the embodiments in the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.

It should be noted that the terms "first", "second", and the like in the specification, claims, and drawings of the present application are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present application described here can be implemented in orders other than those illustrated or described here. Furthermore, the terms "including" and "having", and any variations thereof, are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units not explicitly listed or inherent to the process, method, product, or device.
Embodiment 1
According to an embodiment of the present application, an embodiment of an eye tracking method applied to a terminal device is provided. It should be noted that the steps shown in the flowcharts of the drawings may be executed in a computer system, such as a set of computer-executable instructions, and although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one here.

In addition, it should also be noted that the device side can execute the eye tracking method applied to a terminal device provided in this embodiment. The device side includes mobile devices and non-mobile devices, and the mobile device may be, but is not limited to, a mobile phone, a tablet, or the like.

Fig. 1 is a flowchart of an eye tracking method applied to a terminal device according to an embodiment of the present application. As shown in Fig. 1, the method includes the following steps:
Step S102: capture image information.

It should be noted that the device side has an image capturer that can capture image information of the target object, where the image information of the target object contains an eye image of the target object. Optionally, when the device side is a mobile phone, the image capturer may be a camera of the mobile phone. The resolution of a mobile phone's rear camera is usually higher than that of its front camera, but a front camera's resolution is on the order of ten million pixels, and the image information it captures meets the requirements of eye tracking technology. When the mobile phone has a front camera, the image capturer in this embodiment uses the front camera of the mobile phone.
Step S104: send the image information to the cloud processor.

It should be noted that since the cloud server has strong processing capability and can run relatively complex algorithms, after capturing the image information of the target object, the device side sends the image information of the target object to the cloud processor for processing, so as to reduce device-side resource consumption. The cloud processor is a processor in a cloud server.

In addition, the device side and the cloud processor may communicate wirelessly, where the wireless communication method may include, but is not limited to, WIFI, 3G, 4G, GPRS, and the like.

In an optional solution, after receiving the image information, the cloud processor extracts at least one of the following from the image information based on a data extraction algorithm: pupil information, cornea information, iris information and/or eyelid information, eye image information, and gaze depth information, and uses the extracted information as eye feature information. After obtaining the eye feature information, the cloud processor calculates the gaze information of the target object according to an eye tracking algorithm, where the gaze information includes at least gaze direction information and gaze point information, and the gaze point information may be the coordinates of the target object's gaze point on the device screen. The cloud processor then sends the calculated gaze information to the device side via wireless communication, and the device side receives the gaze information.
Step S106: receive and parse the gaze information fed back by the cloud processor.

It should be noted that the device side and the cloud processor may communicate wirelessly, where the wireless communication method may include, but is not limited to, WIFI, 3G, 4G, GPRS, and the like.

In addition, it should also be noted that the cloud processor does not need to send the image information back to the device side; it only needs to send the gaze information. The communication rate between the cloud processor and the device side can meet real-time requirements with a small throughput; even GPRS, with its slower communication rate, can meet the requirements.
Based on the solution defined in steps S102 to S106 above, it can be seen that a combination of device-side and cloud processing is adopted: the image information of the target object is captured on the device side and sent to the cloud processor; after receiving the image information, the cloud processor extracts eye feature information from it, processes the eye feature information to obtain the gaze information of the target object, and sends the gaze information of the target object to the device side, which then receives and parses the gaze information fed back by the cloud processor.

It is easy to notice that since the gaze point algorithm that processes the eye feature information to obtain the gaze information consumes considerable resources, running it on the cloud processor effectively reduces device-side resource consumption. In addition, since the cloud processor has strong processing capability and can run relatively complex algorithms, it can directly process the image information of the target object to obtain the eye feature information, derive the gaze information from the eye feature information, and finally send the gaze information to the device side, further reducing device-side resource consumption.

It can thus be seen that the solution provided in the present application can solve the technical problem in the prior art that eye tracking algorithms all run on the device side, resulting in large device-side resource consumption.
It should be noted that after capturing the image information of the target object, the device side can extract a series of eye feature information from the image information. Specifically, the device side extracts at least one of the following from the image information based on a data extraction algorithm: pupil information, cornea information, light spot information, iris information and/or eyelid information, eye image information, and gaze depth information, and uses the extracted information as eye feature information.

In addition, it should also be noted that the image information captured by the device side may contain image information from multiple frames. To ensure that the pupil information, cornea information, light spot information, iris information and/or eyelid information, eye image information, and gaze depth information form a set of eye feature information extracted from the same frame image, the current frame image is given an image identifier during the process of sending the image information to the cloud processor. The specific process may include the following steps:
Step S1060: obtain the transmission information;

Step S1062: determine the image identifier corresponding to the transmission information, where the image identifier is used to identify the transmission order of the transmission information;

Step S1064: send the transmission information to the cloud processor in a first rule manner.
It should be noted that the image identifier may be, but is not limited to, a timestamp, a name, or a randomly assigned identifier. When the image identifier is a timestamp, the timestamp corresponds to the time at which the image information was captured. Optionally, the above transmission information may be the information corresponding to the current frame image, where the device side can determine the timestamp corresponding to the first frame image of the transmission information and generate the image identifier corresponding to the first frame image according to the timestamp.
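As a sketch of the timestamp-based identifier, one simple scheme derives the image identifier from the capture time in milliseconds; the millisecond resolution and epoch reference are assumptions here, since the text does not fix a concrete format:

```python
import time

def make_image_id(capture_time=None):
    # Derive the frame's image identifier from its capture time (seconds since
    # the epoch); millisecond resolution keeps successive frames distinct and ordered.
    t = time.time() if capture_time is None else capture_time
    return int(t * 1000)

# Identifiers generated from increasing capture times preserve the send order,
# e.g. for three frames captured at 90 fps:
ids = [make_image_id(1700000000.0 + i / 90) for i in range(3)]
```

Because the identifiers increase with capture time, the receiver can detect out-of-order or duplicated frames by comparing identifiers alone.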
It is easy to notice that in the solution defined in steps S1060 to S1064 above, the image identifier is also sent to the cloud processor during the process of sending the image information to the cloud processor. The cloud processor can receive each frame in order according to its image identifier, which ensures that the cloud processor correctly parses the identified image information and in turn processes the correct eye feature information to obtain accurate gaze information of the target object.
In an optional solution, before sending the image information corresponding to an image identifier, the device side needs to configure the transmission information corresponding to the image information, and then send the transmission information to the cloud processor in a first rule manner. The transmission information includes: a first frame header, a first control word, a first check code, and the image information. Optionally, the first rule manner may be a frame format. Fig. 2 shows a schematic diagram of an optional frame format. As can be seen from Fig. 2, the frame format includes at least four parts, namely a frame header, a control word, data (i.e., the image information), and a check code, where the check code may be, but is not limited to, a number obtained through a CRC (Cyclic Redundancy Check).
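A minimal sketch of assembling such a frame follows. The header and control-word values and the specific CRC variant (CRC-16/CCITT-FALSE) are illustrative assumptions; Fig. 2 fixes only the four-part layout, not these constants:

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    # CRC-16/CCITT-FALSE (polynomial 0x1021); the text only requires "a CRC",
    # so this particular variant is an illustrative choice
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

FRAME_HEADER = 0xAA  # hypothetical first frame header value
CTRL_IMAGE = 0x01    # hypothetical first control word for image data

def build_frame(payload: bytes, ctrl: int = CTRL_IMAGE) -> bytes:
    # frame header | control word | data | 2-byte check code (the Fig. 2 layout)
    body = bytes([FRAME_HEADER, ctrl]) + payload
    return body + crc16_ccitt(body).to_bytes(2, "big")

frame = build_frame(b"\x03\x20\x03\x40")  # e.g. a 4-byte packed payload
```

A receiver can recompute the CRC over everything but the last two bytes and compare it with the transmitted check code to detect corruption.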
It should be noted that using a CRC check ensures that, after the device side communicates with the cloud processor, the cloud processor can parse the correct data, which guarantees data correctness and also prevents interference signals from affecting the parsing result.

In addition, it should also be noted that after receiving the eye feature information, the processor of the server parses the eye feature information, processes the parsed data to obtain the gaze information of the target object, and sends the gaze information to the device side in a second rule manner; at this time, the data in the frame format is the gaze information. The device side receives the reception information sent in the second rule manner, parses the reception information, and then verifies the second check code. If the verification result of the second check code satisfies a preset condition, the device side determines the gaze information as the gaze information of the target object. Optionally, the reception information includes: a second frame header, a second control word, a second check code, and the gaze information.
It should be noted that after the device side receives the data sent in the second rule manner, verifying the parsed data with the check code ensures data correctness and also prevents interference signals from affecting the parsing result, so that the correct gaze information of the target object is obtained.
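The receive-side handling can be sketched as follows. Since the text does not fix the check algorithm or field sizes for the reception information, this sketch uses the standard library's `zlib.crc32` as a stand-in check code, with hypothetical header and control-word values:

```python
import struct
import zlib

def parse_gaze_frame(frame: bytes):
    # second frame header | second control word | gaze point | 4-byte check code
    body, received = frame[:-4], int.from_bytes(frame[-4:], "big")
    if zlib.crc32(body) != received:
        return None  # verification failed: do not accept the gaze information
    x, y = struct.unpack(">HH", body[2:6])  # gaze point coordinates on the screen
    return {"header": body[0], "ctrl": body[1], "gaze_point": (x, y)}

# A well-formed frame is accepted; flipping a single bit makes verification fail.
body = bytes([0x55, 0x02]) + struct.pack(">HH", 640, 360)
frame = body + zlib.crc32(body).to_bytes(4, "big")
parsed = parse_gaze_frame(frame)
corrupted = parse_gaze_frame(bytes([frame[0] ^ 0x01]) + frame[1:])
```

Returning `None` on a failed check models the preset condition: only gaze information whose check code verifies is accepted as the target object's gaze information.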
From the above, it can be seen that the solution provided in the present application can effectively reduce the computing capability that eye tracking technology requires of the device side. In addition, the cloud processor updates the eye tracking algorithm in real time and completes the extraction of the gaze information, which can increase the eye tracking processing rate on the device side. Finally, in the solution provided in the present application, the device side runs the image processing algorithm and the cloud processor runs the gaze point algorithm, which can greatly reduce the throughput of communication data and increase the communication rate.
Embodiment 2
According to an embodiment of the present application, an embodiment of an eye tracking method applied to a terminal device is provided. It should be noted that the server may execute the eye tracking method applied to a terminal device provided in this embodiment. Fig. 9 is a flowchart of an eye tracking method applied to a terminal device according to an embodiment of the present application. As shown in Fig. 9, the method includes the following steps:

Step S902: capture image information.

It should be noted that the device side has an image capturer that can capture image information of the target object, where the image information of the target object contains an eye image of the target object. Optionally, when the device side is a mobile phone, the image capturer may be a camera of the mobile phone. The resolution of a mobile phone's rear camera is usually higher than that of its front camera, but a front camera's resolution is on the order of ten million pixels, and the image information it captures meets the requirements of eye tracking technology. When the mobile phone has a front camera, the image capturer in this embodiment uses the front camera of the mobile phone.

Step S904: extract eye feature information from the image information to form an eye feature information set, where the image information contains the eye feature information.
It should be noted that the eye feature information set is configured as one or more groups of eye feature information with data labels. In addition, before forming the eye feature information set, the device side also converts the image information of the target object into corresponding eye feature information, attaches data labels to the corresponding eye feature information, and then sorts the labeled eye feature information in a specified order. The corresponding eye feature information includes one or more of the target object's pupil information, cornea information, light spot information, iris information and/or eyelid information, eye image information, and gaze depth information, where the pupil information includes at least the pupil center position and the pupil diameter; the cornea information includes at least corneal glint reflection information; and the iris information includes at least iris edge information.
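One plausible in-memory organization of such a labeled, ordered feature set is sketched below; the field names, the label scheme, and the specified order are illustrative assumptions:

```python
def build_feature_set(image_id, features):
    # Attach a data label (source frame id + feature kind) to each feature group
    # and emit the groups in a fixed, specified order.
    specified_order = ["pupil", "cornea", "iris"]  # assumed ordering
    return [
        {"label": (image_id, kind), **features[kind]}
        for kind in specified_order if kind in features
    ]

feature_set = build_feature_set(1001, {
    "iris": {"iris_edge": [(760, 380), (860, 450)]},
    "pupil": {"pupil_center": (812, 415), "pupil_diameter": 34},
    "cornea": {"glints": [(790, 402), (835, 404)]},  # corneal glint reflections
})
```

Because every group carries the frame identifier in its label, the server can tell which groups belong to the same frame even if sets from several frames arrive interleaved.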
步骤S906,发送眼部特征信息集合至服务器。Step S906: Send the eye feature information set to the server.
需要说明的是，步骤S906中的服务器可以为云端处理器，其中，设备端与服务器可通过无线通信的方式进行通信，其中，无线通信方式可以包括但不限于WIFI、3G、4G、GPRS等方式。It should be noted that the server in step S906 may be a cloud processor. The device side and the server may communicate wirelessly, where the wireless communication methods may include, but are not limited to, WIFI, 3G, 4G, and GPRS.
此外，还需要说明的是，在设备端向服务器发送眼部特征信息集合的过程中，设备端无需将采集到的整个图像信息发送至服务器，仅将眼部特征信息发送至服务器即可。可选的，设备端可以按照帧格式的形式发送眼部特征信息至服务器，其中，帧格式可以由帧头、控制字、人眼数据、校验码构成，设备端在发送眼部特征信息至服务器的过程中，可按照1字节帧头、1字节控制字、3至6字节的眼部特征信息(即人眼数据)、2字节CRC校验(即校验码)发送至服务器，其中，发送的数量总量约为10字节。另外，在每秒处理90帧图像数据的情况下，设备端向服务器发送的数据量为90*10*8=7.2kbps。由此可见，设备端与服务器的通信速率可以满足实时性要求，并且吞吐量较小，即使使用通信速率较慢的GPRS也可以满足要求。In addition, it should be noted that when sending the eye feature information set to the server, the device side does not need to send the entire collected image information; sending only the eye feature information is sufficient. Optionally, the device side may send the eye feature information to the server in a frame format, where the frame format may consist of a frame header, a control word, human eye data, and a check code. Specifically, the device side may send a 1-byte frame header, a 1-byte control word, 3 to 6 bytes of eye feature information (i.e., the human eye data), and a 2-byte CRC check code to the server, for a total of about 10 bytes per frame. Accordingly, when processing 90 frames of image data per second, the amount of data the device side sends to the server is 90 * 10 * 8 = 7.2 kbps. It can be seen that the communication rate between the device side and the server meets real-time requirements with a small throughput; even GPRS, with its relatively slow communication rate, is sufficient.
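As an illustrative sketch of the frame layout and throughput estimate described above, the following Python fragment packs a 1-byte frame header, a 1-byte control word, 3 to 6 bytes of eye feature data, and a 2-byte CRC. The header value, control word value, and CRC variant (CRC-16/CCITT-FALSE) are assumptions for illustration; the patent does not specify them.

```python
import struct

FRAME_HEADER = 0xAA   # hypothetical header value; the patent does not fix one
CTRL_EYE_DATA = 0x01  # hypothetical control word for "eye feature data"

def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """CRC-16/CCITT-FALSE; the patent only says 'CRC', so the variant is assumed."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def pack_frame(eye_data: bytes) -> bytes:
    """Build: frame header | control word | 3-6 bytes of eye data | 2-byte CRC."""
    if not 3 <= len(eye_data) <= 6:
        raise ValueError("eye feature payload must be 3 to 6 bytes")
    body = bytes([FRAME_HEADER, CTRL_EYE_DATA]) + eye_data
    return body + struct.pack(">H", crc16_ccitt(body))

frame = pack_frame(bytes([0x12, 0x34, 0x56, 0x78, 0x9A, 0xBC]))
print(len(frame), 90 * len(frame) * 8)  # 10 7200
```

A 6-byte payload yields a 10-byte frame, so at 90 frames per second the link carries 90 * 10 * 8 = 7200 bits/s, i.e. the 7.2 kbps figure above.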
在一种可选的方案中,设备端采集到目标对象的二维人脸图像,其中,二维人脸图像的分辨率为1920*1080。设备端对二维人脸图像进行处理,得到6个像素的实心圆,其中,该实心圆的中心坐标为(800*800,840*800)。此时,设备端将该实心圆的中心坐标通过无线通信的方式发送至服务器。In an optional solution, the device side collects a two-dimensional face image of the target object, and the resolution of the two-dimensional face image is 1920 * 1080. The device side processes the two-dimensional face image to obtain a solid circle of 6 pixels, where the center coordinates of the solid circle are (800 * 800, 840 * 800). At this time, the device side sends the center coordinates of the solid circle to the server through wireless communication.
步骤S908,接收并解析服务器反馈的注视信息。Step S908: Receive and analyze the gaze information fed back by the server.
在一种可选的方案中，服务器在接收到眼部特征信息集合之后，根据眼球追踪算法计算目标对象的注视信息，其中，注视信息至少包括注视方向信息以及注视点信息，其中，注视点信息可以为目标对象的注视点在设备端屏幕上的坐标信息。然后，服务器将计算得到的注视信息通过无线通信的方式发送至设备端，设备端接收注视信息。需要说明的是，在上述过程中，服务器无需将图像信息发送至设备端，仅将注视信息发送至设备端即可。服务器与设备端的通信速率可以满足实时性要求，并且吞吐量较小，即使使用通信速率较慢的GPRS也可以满足要求。In an optional solution, after receiving the eye feature information set, the server calculates the gaze information of the target object according to an eye tracking algorithm, where the gaze information includes at least gaze direction information and gaze point information, and the gaze point information may be the coordinate information of the target object's gaze point on the device-side screen. The server then sends the calculated gaze information to the device side via wireless communication, and the device side receives it. It should be noted that in this process the server does not need to send image information to the device side; sending only the gaze information is sufficient. The communication rate between the server and the device side meets real-time requirements with a small throughput; even GPRS, with its relatively slow communication rate, is sufficient.
此外，还需要说明的是，由于眼球追踪算法对设备端的计算能力要求较高，眼球追踪算法对设备端的内核以及主频的要求严格，中端水平的设备端很难满足该硬件指标。而且，如果在设备端一直运行眼球追踪算法，则对设备端的资源消耗较大，甚至导致设备端的发热、卡顿。而本申请所提供的方案将眼球追踪算法在服务器运行，可以有效避免上述问题。In addition, it should be noted that because the eye tracking algorithm demands substantial device-side computing power, imposing strict requirements on the device's processor cores and clock frequency, it is difficult for a mid-range device to meet these hardware requirements. Moreover, if the eye tracking algorithm runs continuously on the device side, it consumes considerable device-side resources and may even cause the device to heat up or stutter. The solution provided in this application runs the eye tracking algorithm on the server, which effectively avoids these problems.
另外,设备端的电池容量有限,而眼球追踪算法是基于视频流的一项技术,其耗电量较高,因此,将眼球追踪算法在服务器运行,可以降低设备端的能耗。In addition, the battery capacity on the device side is limited, and the eye tracking algorithm is a technology based on video streaming, which consumes high power. Therefore, running the eye tracking algorithm on the server can reduce energy consumption on the device side.
基于上述步骤S902至步骤S908所限定的方案，可以获知，采用设备端与服务器相结合的方式，在设备端采集目标对象的图像信息，并从图像信息中提取眼部特征信息，形成眼部特征信息集合，然后将眼部特征信息集合发送至服务器，由服务器根据眼部特征信息集合进行处理，得到目标对象的注视信息，并将目标对象的注视信息发送至设备端，进而设备端接收并解析服务器反馈的注视信息。Based on the solutions defined in steps S902 to S908 above, it can be seen that, by combining the device side with the server, the device side collects image information of the target object and extracts eye feature information from it to form an eye feature information set, which it then sends to the server. The server processes the eye feature information set to obtain the gaze information of the target object and sends that gaze information back to the device side, which receives and parses it.
容易注意到的是，目标对象的图像信息的采集、眼部特征信息集合的提取在设备端运行，而眼部特征信息集合的处理过程在服务器中实现。由于根据眼部特征信息集合进行处理得到注视信息的注视点算法消耗的资源较多，因此，将其在服务器进行处理，有效降低了设备端的资源消耗。另外，如果直接将图像上传至服务器将会占用较多的带宽，造成资源消耗，因此，由于服务器的处理能力较强，可以运行较为复杂的算法，因此，在本申请中，设备端还可从图像信息中提取出眼部特征信息集合，并将眼部特征信息集合发送至服务器，而不是将图像信息直接发送至服务器，可以降低直接将图像信息上传至服务器所占用的带宽，进一步降低设备端的资源消耗。It is easy to notice that the collection of the target object's image information and the extraction of the eye feature information set run on the device side, while the processing of the eye feature information set is implemented on the server. Because the gaze point algorithm that derives gaze information from the eye feature information set consumes considerable resources, running it on the server effectively reduces device-side resource consumption. In addition, uploading images directly to the server would occupy considerable bandwidth and waste resources, while the server's stronger processing capability allows it to run more complex algorithms. Therefore, in this application the device side extracts the eye feature information set from the image information and sends that set to the server instead of the image information itself, which reduces the bandwidth that uploading images directly would occupy and further reduces device-side resource consumption.
由此可见,本申请所提供的方案可以解决现有技术的眼球追踪算法均在设备端运行导致设备端的资源消耗大的技术问题。It can be seen that the solution provided in the present application can solve the technical problem that the eye tracking algorithms in the prior art all run on the device side, which causes a large resource consumption on the device side.
需要说明的是，在采集到目标对象的图像信息之后，设备端可从图像信息中提取眼部特征信息。具体的，设备端基于数据提取算法提取图像信息中的以下至少之一信息：瞳孔信息、角膜信息、光斑信息、虹膜信息和/或眼睑信息、眼部图像信息、注视深度信息，并将至少之一信息作为眼部特征信息。It should be noted that after collecting the image information of the target object, the device side can extract the eye feature information from it. Specifically, the device side extracts, based on a data extraction algorithm, at least one of the following from the image information: pupil information, corneal information, light spot information, iris information and/or eyelid information, eye image information, and gaze depth information, and uses the extracted information as the eye feature information.
此外，还需要说明的是，从图像信息中提取到的眼部特征信息集合可能包含多帧图像中的眼部特征信息，为了保证瞳孔信息、角膜信息、光斑信息、虹膜信息和/或眼睑信息、眼部图像信息、注视深度信息是同一帧图像提取的眼部特征信息，在将眼部特征信息集合发送至服务器的过程中，对当前帧图像进行图像标识。具体过程可以包括如下步骤：In addition, it should be noted that the eye feature information set extracted from the image information may contain eye feature information from multiple frames of images. To ensure that the pupil information, corneal information, light spot information, iris information and/or eyelid information, eye image information, and gaze depth information are all eye feature information extracted from the same frame image, image identification is performed on the current frame image when the eye feature information set is sent to the server. The specific process may include the following steps:
步骤S2060,获取发送信息;Step S2060, obtaining the sending information;
步骤S2062，确定发送信息对应的图像标识，获取图像标识对应系列的眼部特征信息集合，其中，图像标识用于标识发送信息的发送顺序；Step S2062: Determine the image identifier corresponding to the sending information, and obtain the eye feature information set of the series corresponding to the image identifier, where the image identifier is used to identify the sending order of the sending information;
步骤S2064,将发送信息以第一规则方式发送至服务器。Step S2064: Send the sending information to the server in a first regular manner.
需要说明的是，图像标识可以为但不限于时间戳、名称或者随机分配的标识。其中，在图像标识为时间戳的情况下，该时间戳为采集图像信息的时间所对应的时间戳。可选的，上述发送信息可以为当前帧图像所对应的信息，其中，设备端可确定发送信息的第一帧图像对应的时间戳，并根据时间戳生成第一帧图像对应的图像标识。It should be noted that the image identifier may be, but is not limited to, a timestamp, a name, or a randomly assigned identifier. Where the image identifier is a timestamp, the timestamp corresponds to the time at which the image information was collected. Optionally, the sending information may be the information corresponding to the current frame image, where the device side may determine the timestamp corresponding to the first frame image of the sending information and generate the image identifier of the first frame image according to that timestamp.
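As a minimal sketch of the timestamp-based image identifier described above (the exact identifier layout is an assumption for illustration, not taken from the patent):

```python
import time
from itertools import count
from typing import Optional

_seq = count()

def make_image_id(capture_time: Optional[float] = None) -> str:
    """Tag a frame with its capture timestamp in milliseconds plus a sequence
    number, so the server can order frames even when two frames share the
    same millisecond. The id layout here is a hypothetical choice."""
    if capture_time is None:
        capture_time = time.time()
    return f"{int(capture_time * 1000):013d}-{next(_seq):06d}"

a = make_image_id(1_600_000_000.0)
b = make_image_id(1_600_000_000.0)
print(a, b, a < b)  # lexicographic order matches sending order
```

Because the millisecond field is zero-padded and the sequence number breaks ties, sorting the identifiers lexicographically reproduces the sending order the server relies on.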
容易注意到的是，上述步骤S2060至步骤S2064所限定的方案中，在发送眼部特征信息集合至服务器的过程中，也将图像标识发送至服务器。服务器可根据每帧图像的图像标识按照顺序接收，保证了服务器对标识后的眼部特征信息的正确解析，进而对正确的眼部信息进行处理，得到准确的目标对象的注视信息。It is easy to notice that in the solution defined in steps S2060 to S2064 above, the image identifier is sent to the server along with the eye feature information set. The server can receive each frame in order according to its image identifier, which ensures that the server correctly parses the identified eye feature information, then processes the correct eye information to obtain accurate gaze information of the target object.
在一种可选的方案中，设备端在发送图像标识对应系列的眼部特征信息集合之前，需要配置眼部特征信息对应的发送信息，然后再将发送信息以第一规则方式发送至服务器。其中，发送信息包括：第一帧头、第一控制字、第一校验码以及眼部特征信息集合。可选的，第一规则方式可以为帧格式，其中，图2示出了一种可选的帧格式的示意图，由图2可知，帧格式至少包括四部分，即帧头、控制字、数据(即眼部特征信息)以及校验码，其中，校验码可以为但不限于通过CRC(Cyclic Redundancy Check,循环冗余校验)校验得到的数字。需要说明的是，采用CRC校验可以保证设备端与服务器通信之后，服务器可解析到正确的数据，并保证数据的正确性，同时还可防止干扰信号对解析结果的影响。In an optional solution, before sending the eye feature information set of the series corresponding to the image identifier, the device side needs to configure the sending information corresponding to the eye feature information and then send that sending information to the server in the first rule manner. The sending information includes: a first frame header, a first control word, a first check code, and the eye feature information set. Optionally, the first rule manner may be a frame format. FIG. 2 shows a schematic diagram of an optional frame format; as can be seen from FIG. 2, the frame format includes at least four parts, namely a frame header, a control word, data (i.e., the eye feature information), and a check code, where the check code may be, but is not limited to, a number obtained by a CRC (Cyclic Redundancy Check). It should be noted that using a CRC ensures that, after the device side communicates with the server, the server can parse the correct data, guarantees data correctness, and also prevents interference signals from affecting the parsing result.
此外，还需要说明的是，服务器在接收到眼部特征信息之后，对眼部特征信息进行解析，并对解析后的数据进行处理，得到目标对象的注视信息，并按照第二规则方式将注视信息发送至设备端，此时，帧格式中的数据为注视信息。设备端在接收以第二规则方式发送的接收信息，并解析接收信息，然后完成对第二校验码的校验。其中，如果第二校验码的校验结果满足预设条件，则设备端将注视信息确定为目标对象的注视信息。可选的，接收信息包括：第二帧头、第二控制字以及第二校验码以及注视信息。需要说明的是，设备端在接收到第二规则方式的数据之后，采用校验码对解析后的数据进行校验可以保证数据的正确性，同时还可防止干扰信号对解析结果的影响，从而得到正确的目标对象的注视信息。In addition, it should be noted that after receiving the eye feature information, the server parses it, processes the parsed data to obtain the gaze information of the target object, and sends the gaze information to the device side in a second rule manner; at this time, the data field in the frame format carries the gaze information. The device side receives the received information sent in the second rule manner, parses it, and then checks the second check code. If the check result of the second check code satisfies a preset condition, the device side determines the gaze information as the gaze information of the target object. Optionally, the received information includes: a second frame header, a second control word, a second check code, and the gaze information. It should be noted that, after receiving the data sent in the second rule manner, the device side uses the check code to verify the parsed data, which guarantees data correctness and also prevents interference signals from affecting the parsing result, so that the correct gaze information of the target object is obtained.
由上述内容可知，本申请所提供的方案可以有效降低眼球追踪技术对设备端的计算能力的要求。另外，通过服务器实时更新眼球追踪算法，完成对注视信息的提取，可以提升设备端的眼球追踪处理速率。最后，本申请所提供的方案在设备端运行图像处理算法，由服务器运行注视点算法，可以大大降低通信数据的吞吐量，提高通信速率。From the above, it can be seen that the solution provided in this application can effectively reduce the computing-power requirements that eye tracking technology imposes on the device side. In addition, the server updates the eye tracking algorithm in real time and completes the extraction of gaze information, which can increase the device-side eye tracking processing rate. Finally, the solution provided in this application runs the image processing algorithm on the device side and the gaze point algorithm on the server, which can greatly reduce the throughput of communication data and increase the communication rate.
实施例3Example 3
根据本申请实施例,提供了一种应用于终端设备的眼球追踪方法实施例,需要说明的是,服务器可执行本实施例所提供的应用于终端设备的眼球追踪方法,服务器可以为云服务器。其中,图3是根据本申请实施例的应用于终端设备的眼球追踪方法的流程图,如图3所示,该方法包括如下步骤:According to the embodiment of the present application, an embodiment of an eye tracking method applied to a terminal device is provided. It should be noted that the server can execute the eye tracking method applied to a terminal device provided in this embodiment, and the server may be a cloud server. FIG. 3 is a flowchart of an eye tracking method applied to a terminal device according to an embodiment of the present application. As shown in FIG. 3, the method includes the following steps:
步骤S302,接收设备端发送的目标对象的眼部特征信息。Step S302: Receive eye feature information of the target object sent by the device.
需要说明的是，设备端包括可移动设备和不可移动设备，可移动设备可以为但不限于手机、平板等设备。设备端具有图像采集器，该图像采集器可采集目标对象的图像信息，其中，目标对象的图像信息中包含目标对象的眼部图像。可选的，在设备端为手机的情况下，图像采集器可以为手机的摄像头。通常手机的后置摄像头的分辨率要高于手机的前置摄像头的分辨率，但手机的前置摄像头的分辨率为千万级像素，其采集到的图像信息满足眼球追踪技术的需求。在手机具有前置摄像头的情况下，本实施例的图像采集器选用手机的前置摄像头。It should be noted that the device side includes mobile devices and non-mobile devices, and a mobile device may be, but is not limited to, a mobile phone, a tablet, or the like. The device side has an image collector that can collect image information of the target object, where the image information of the target object includes an eye image of the target object. Optionally, when the device side is a mobile phone, the image collector may be a camera of the mobile phone. Generally, the resolution of a mobile phone's rear camera is higher than that of its front camera, but the front camera's resolution is in the ten-megapixel class, and the image information it collects meets the needs of eye tracking technology. Where the mobile phone has a front camera, the image collector of this embodiment uses the front camera of the mobile phone.
另外,目标对象的眼部特征信息包括如下至少之一:瞳孔信息、角膜信息、光斑信息、虹膜信息、眼睑信息、眼部图像信息、注视深度信息,其中,瞳孔信息至少包括:瞳孔中心位置、瞳孔直径;角膜信息至少包括:角膜光斑反射信息;虹膜信息至少包括:虹膜边缘信息。In addition, the eye feature information of the target object includes at least one of the following: pupil information, corneal information, spot information, iris information, eyelid information, eye image information, and gaze depth information, where the pupil information includes at least the pupil center position, Pupil diameter; corneal information includes at least: corneal spot reflection information; iris information includes at least: iris edge information.
步骤S304,依据眼部特征信息确定目标对象的注视信息。Step S304: Determine gaze information of the target object according to the eye feature information.
在一种可选的方案中,服务器在接收到眼部特征信息之后,服务器根据眼球追踪算法计算目标对象的注视信息,其中,注视信息可以为目标对象的注视点在设备端屏幕上的坐标信息。In an optional solution, after the server receives the eye feature information, the server calculates the fixation information of the target object according to the eye tracking algorithm, where the fixation information may be coordinate information of the fixation point of the target object on the device-side screen .
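The patent does not disclose the server-side eye tracking algorithm itself. As one hedged illustration, a common pupil-corneal-reflection approach maps the pupil-center-to-glint vector to screen coordinates with a calibration polynomial; the mapping form and all coefficient values below are hypothetical, not taken from the patent:

```python
def gaze_point(pupil, glint, coeffs_x, coeffs_y):
    """Map the pupil-center-to-glint vector (vx, vy) to screen coordinates
    with a first-order polynomial: x = a0 + a1*vx + a2*vy (likewise for y).
    Coefficients come from a per-user calibration; values used here are
    illustrative stand-ins."""
    vx = pupil[0] - glint[0]
    vy = pupil[1] - glint[1]
    x = coeffs_x[0] + coeffs_x[1] * vx + coeffs_x[2] * vy
    y = coeffs_y[0] + coeffs_y[1] * vx + coeffs_y[2] * vy
    return (x, y)

# Hypothetical calibration for a 1080x2340 phone screen.
cx = (540.0, 25.0, 0.0)
cy = (1170.0, 0.0, 40.0)
print(gaze_point((805.0, 402.0), (800.0, 400.0), cx, cy))  # (665.0, 1250.0)
```

The server would evaluate such a mapping per frame and return only the resulting screen coordinates, which is what keeps the downlink payload small.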
步骤S306,发送注视信息至设备端。Step S306: Send gaze information to the device.
需要说明的是,设备端与服务器可通过无线通信的方式进行通信,其中,无线通信方式可以包括但不限于WIFI、3G、4G、GPRS等方式。此外,还需要说明的是,服务器无需将图像信息发送至设备端,仅将注视信息发送至设备端即可。服务器与设备端的通信速率可以满足实时性要求,并且吞吐量较小,即使使用通信速率较慢的GPRS也可以满足要求。It should be noted that the device and the server can communicate through wireless communication. The wireless communication methods may include, but are not limited to, WIFI, 3G, 4G, and GPRS. In addition, it should be noted that the server does not need to send the image information to the device side, it only needs to send the gaze information to the device side. The communication rate between the server and the device can meet the real-time requirements, and the throughput is small, even if the GPRS with a slower communication rate can meet the requirements.
可选的,服务器在接收到眼部特征信息之后,服务器根据眼球追踪算法计算目标对象的注视信息,其中,注视信息可以为目标对象的注视点在设备端屏幕上的坐标信息。然后,服务器将计算得到的注视信息通过无线通信的方式发送至设备端,设备端接收注视信息。Optionally, after receiving the eye feature information, the server calculates the fixation information of the target object according to the eye tracking algorithm, where the fixation information may be coordinate information of the fixation point of the target object on the screen of the device. Then, the server sends the calculated gaze information to the device side through wireless communication, and the device side receives the gaze information.
基于上述步骤S302至步骤S306所限定的方案，可以获知，采用设备端与服务器相结合的方式，在设备端采集目标对象的图像信息，并从图像信息中提取目标对象的眼部特征信息，然后将眼部特征信息发送至服务器，由服务器根据眼部特征信息进行处理，得到目标对象的注视信息，并将目标对象的注视信息发送至设备端，进而可在设备端显示目标对象的注视信息。Based on the solutions defined in steps S302 to S306 above, it can be seen that, by combining the device side with the server, the device side collects image information of the target object and extracts the target object's eye feature information from it, and then sends the eye feature information to the server. The server processes the eye feature information to obtain the gaze information of the target object and sends it to the device side, where the gaze information of the target object can then be displayed.
容易注意到的是，目标对象的图像信息的采集、眼部特征信息的提取在设备端运行，而眼部特征信息的处理过程在服务器实现。由于根据眼部特征信息进行处理得到注视信息的注视点算法消耗的资源较多，因此，将其在服务器进行处理，有效降低了设备端的资源消耗。另外，如果直接将图像上传至服务器将会占用较多的带宽，造成资源消耗，因此，在本申请中，设备端可从图像信息中提取出眼部特征信息，并将眼部特征信息发送至服务器，而不是将图像信息直接发送至服务器，可以降低直接将图像信息上传至服务器所占用的带宽，进一步降低设备端的资源消耗。It is easy to notice that the collection of the target object's image information and the extraction of eye feature information run on the device side, while the processing of eye feature information is implemented on the server. Because the gaze point algorithm that derives gaze information from the eye feature information consumes considerable resources, running it on the server effectively reduces device-side resource consumption. In addition, uploading images directly to the server would occupy considerable bandwidth and waste resources. Therefore, in this application the device side extracts the eye feature information from the image information and sends that information to the server instead of the image information itself, which reduces the bandwidth that uploading images directly would occupy and further reduces device-side resource consumption.
由此可见,本申请所提供的方案可以解决现有技术的眼球追踪算法均在设备端运行导致设备端的资源消耗大的技术问题。It can be seen that the solution provided in the present application can solve the technical problem that the eye tracking algorithms in the prior art all run on the device side, which causes a large resource consumption on the device side.
在一种可选的方案中,设备端按照帧格式发送目标对象的眼部特征信息,其中,依据眼部特征信息确定目标对象的注视信息,可以包括如下步骤:In an optional solution, the device sends the eye feature information of the target object in a frame format, and determining the fixation information of the target object according to the eye feature information may include the following steps:
步骤S3020,对帧格式的数据进行解析,得到解析结果,其中,帧格式的数据包括眼部特征信息;Step S3020: Parse the data in the frame format to obtain an analysis result, where the data in the frame format includes eye feature information;
步骤S3022,对解析结果进行校验,得到校验结果;Step S3022, verify the analysis result to obtain a verification result;
步骤S3024,在校验结果满足预设条件的情况下,依据眼部特征信息确定目标对象的注视信息。In step S3024, if the verification result meets a preset condition, fixation information of the target object is determined according to the eye feature information.
需要说明的是，上述帧格式可以为图2所示的帧格式。可选的，服务器在接收到眼部特征信息之后，对眼部特征信息进行解析，并对帧格式中的校验码进行校验。在校验码通过校验的情况下，服务器对解析后的数据进行处理，得到目标对象的注视信息。It should be noted that the above frame format may be the frame format shown in FIG. 2. Optionally, after receiving the eye feature information, the server parses it and verifies the check code in the frame format. When the check code passes verification, the server processes the parsed data to obtain the gaze information of the target object.
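A minimal sketch of the server-side receive path described above: parse the frame, verify the check code, and only then hand the eye data to the gaze algorithm. The frame layout (header | control word | eye data | 2-byte CRC) and the CRC variant are assumptions for illustration; the patent fixes neither:

```python
import struct

def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """CRC-16/CCITT-FALSE; the patent only says 'CRC', so the variant is assumed."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def parse_frame(frame: bytes):
    """Split header | control word | eye data | CRC, verify the checksum,
    and return (control_word, eye_data); raise ValueError on a bad frame."""
    if len(frame) < 5:
        raise ValueError("frame too short")
    body = frame[:-2]
    (received_crc,) = struct.unpack(">H", frame[-2:])
    if crc16_ccitt(body) != received_crc:
        raise ValueError("CRC mismatch: frame corrupted in transit")
    return body[1], body[2:]

body = bytes([0xAA, 0x01, 0x12, 0x34, 0x56])
frame = body + struct.pack(">H", crc16_ccitt(body))
print(parse_frame(frame))
```

Rejecting a frame on CRC mismatch is what realizes the "check result satisfies a preset condition" gate: corrupted eye data never reaches the gaze point algorithm.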
进一步的，在得到注视信息之后，服务器将注视信息发送至设备端，具体步骤可以包括：Further, after obtaining the gaze information, the server sends the gaze information to the device side. The specific steps may include:
步骤S3060,确定眼部特征信息对应的图像标识;Step S3060, determining an image identifier corresponding to the eye feature information;
步骤S3062,获取与图像标识对应的注视信息;Step S3062, obtaining gaze information corresponding to the image identification;
步骤S3064,将注视信息以及图像标识格式发送至设备端。Step S3064, sending the gaze information and the image identification format to the device.
可选的，服务器可按照以下帧格式发送当前帧图像对应的眼部特征信息以及图像标识：帧头、控制字、人眼数据以及校验码，其中，人眼数据包括注视信息。需要说明的是，图像标识可以为但不限于时间戳、名称或者随机分配的标识。其中，在图像标识为时间戳的情况下，该时间戳为采集图像信息的时间所对应的时间戳。Optionally, the server may send the eye feature information and the image identifier corresponding to the current frame image in the following frame format: frame header, control word, human eye data, and check code, where the human eye data includes the gaze information. It should be noted that the image identifier may be, but is not limited to, a timestamp, a name, or a randomly assigned identifier. Where the image identifier is a timestamp, the timestamp corresponds to the time at which the image information was collected.
可选的，服务器在解析得到眼部特征信息之后，基于眼球追踪算法对眼部特征信息进行处理，得到注视信息，并按照帧格式将注视信息反馈至设备端，此时，帧格式中的数据为注视信息。设备端在接收到注视信息之后，完成对注视信息进行解析，并将注视信息传递给应用端（例如，显示设备）。需要说明的是，预定格式至少包括：帧头、控制字、人眼数据以及校验码，其中，校验码可以为但不限于通过CRC(Cyclic Redundancy Check,循环冗余校验)校验得到的数字。采用CRC校验可以保证设备端与服务器通信之后，服务器可解析到正确的数据，并保证数据的正确性，同时还可防止干扰信号对解析结果的影响。Optionally, after parsing out the eye feature information, the server processes it based on the eye tracking algorithm to obtain the gaze information and feeds the gaze information back to the device side in the frame format; at this time, the data field in the frame format carries the gaze information. After receiving the gaze information, the device side parses it and passes it to the application side (for example, a display device). It should be noted that the predetermined format includes at least: a frame header, a control word, human eye data, and a check code, where the check code may be, but is not limited to, a number obtained by a CRC (Cyclic Redundancy Check). Using a CRC ensures that, after the device side communicates with the server, the server can parse the correct data, guarantees data correctness, and also prevents interference signals from affecting the parsing result.
在一种可选的方案中，服务器在接收到眼部特征信息之后，对眼部特征信息进行解析，并对解析后的数据进行处理，得到目标对象的注视信息，并按照帧格式将注视信息发送至设备端，此时，帧格式中的数据为注视信息。设备端在接收到注视信息之后，完成对注视信息进行解析，并将注视信息传递给应用端（例如，显示设备）。In an optional solution, after receiving the eye feature information, the server parses it, processes the parsed data to obtain the gaze information of the target object, and sends the gaze information to the device side in the frame format; at this time, the data field in the frame format carries the gaze information. After receiving the gaze information, the device side parses it and passes it to the application side (for example, a display device).
由上述内容可知，本申请所提供的方案可以有效降低眼球追踪技术对设备端的计算能力的要求。另外，通过服务器实时更新眼球追踪算法，完成对注视信息的提取，可以提升设备端的眼球追踪处理速率。最后，本申请所提供的方案在设备端运行图像处理算法，在服务器运行注视点算法，可以大大降低通信数据的吞吐量，提高通信速率。From the above, it can be seen that the solution provided in this application can effectively reduce the computing-power requirements that eye tracking technology imposes on the device side. In addition, the server updates the eye tracking algorithm in real time and completes the extraction of gaze information, which can increase the device-side eye tracking processing rate. Finally, the solution provided in this application runs the image processing algorithm on the device side and the gaze point algorithm on the server, which can greatly reduce the throughput of communication data and increase the communication rate.
实施例4Example 4
根据本申请实施例,提供了一种应用于终端设备的眼球追踪法实施例,其中,图7是根据本申请实施例的应用于终端设备的眼球追踪方法的流程图,如图7所示,该方法包括如下步骤:According to an embodiment of the present application, an embodiment of an eye tracking method applied to a terminal device is provided, wherein FIG. 7 is a flowchart of an eye tracking method applied to a terminal device according to an embodiment of the present application, as shown in FIG. 7, The method includes the following steps:
步骤S702,从目标对象的图像信息中提取目标对象的眼部特征信息。In step S702, the eye feature information of the target object is extracted from the image information of the target object.
需要说明的是，目标对象的眼部特征信息包括如下至少之一：瞳孔信息、角膜信息、虹膜信息、眼睑信息、眼部图像信息、注视深度信息，其中，瞳孔信息至少包括：瞳孔中心位置、瞳孔直径；角膜光斑至少包括：角膜光斑反射信息；虹膜信息至少包括：虹膜边缘信息。It should be noted that the eye feature information of the target object includes at least one of the following: pupil information, corneal information, iris information, eyelid information, eye image information, and gaze depth information, where the pupil information includes at least the pupil center position and pupil diameter; the corneal glint information includes at least corneal glint reflection information; and the iris information includes at least iris edge information.
在一种可选的方案中，设备端具有图像采集器，图像采集器可采集目标对象的图像信息。在采集到目标对象的图像信息之后，设备端采用瞳孔-角膜反射法实现眼球追踪技术，以提取目标对象的眼部特征信息。可选的，以瞳孔-角膜反射法为例，在图像采集设备采集到目标对象的图像信息之后，设备端使用图像处理算法提取出目标对象的眼部特征信息，其中，眼部特征信息表现为图像信息中的坐标数据，例如，瞳孔中心位置的坐标、角膜光斑的中心位置的坐标等。In an optional solution, the device side has an image collector that can collect image information of the target object. After the image information is collected, the device side uses the pupil-corneal reflection method to implement eye tracking and extract the target object's eye feature information. Optionally, taking the pupil-corneal reflection method as an example, after the image collection device collects the image information of the target object, the device side uses an image processing algorithm to extract the target object's eye feature information, where the eye feature information is expressed as coordinate data in the image information, for example, the coordinates of the pupil center position, the coordinates of the center position of the corneal glint, and so on.
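As a toy illustration of extracting a pupil-center coordinate from an eye image (a drastic simplification of the unspecified image processing algorithm above): in an infrared eye image the pupil is the darkest region, so its center can be approximated by the centroid of below-threshold pixels:

```python
def pupil_center(image, threshold=50):
    """Estimate the pupil center as the centroid of pixels darker than
    `threshold` (the pupil is the darkest region in an IR eye image).
    A toy stand-in for the device-side image processing algorithm."""
    xs, ys, n = 0, 0, 0
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            if value < threshold:
                xs += x
                ys += y
                n += 1
    if n == 0:
        raise ValueError("no pupil-dark pixels found")
    return (xs / n, ys / n)

# 5x5 grayscale patch: a dark 2x2 "pupil" on a bright background.
patch = [
    [200, 200, 200, 200, 200],
    [200,  10,  10, 200, 200],
    [200,  10,  10, 200, 200],
    [200, 200, 200, 200, 200],
    [200, 200, 200, 200, 200],
]
print(pupil_center(patch))  # (1.5, 1.5)
```

Only the resulting coordinate pair (a few bytes) would be sent upstream, which is exactly why the uplink payload stays at 3 to 6 bytes per frame.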
步骤S704,发送眼部特征信息至服务器,其中,服务器对眼部特征信息进行处理得到目标对象的注视信息。Step S704: Send eye feature information to the server, where the server processes the eye feature information to obtain gaze information of the target object.
需要说明的是，设备端与服务器可通过无线通信的方式进行通信，其中，无线通信方式可以包括但不限于WIFI、3G、4G、GPRS等方式。此外，还需要说明的是，在设备端向服务器发送眼部特征信息的过程中，设备端无需将采集到的整个图像信息发送至服务器，仅将瞳孔信息、角膜信息、虹膜信息以及眼睑信息、眼部图像信息、注视深度信息等眼部特征信息发送至服务器即可。其发送时，可按照1字节帧头、1字节控制字、3至6字节的眼部特征信息、2字节CRC校验发送至服务器，其中，发送的数量总量约为10字节。另外，在每秒处理90帧图像数据的情况下，设备端向服务器发送的数据量为90*10*8=7.2kbps。由此可见，设备端与服务器的通信速率可以满足实时性要求，并且吞吐量较小，即使使用通信速率较慢的GPRS也可以满足要求。It should be noted that the device side and the server may communicate wirelessly, where the wireless communication methods may include, but are not limited to, WIFI, 3G, 4G, and GPRS. In addition, it should be noted that when sending eye feature information to the server, the device side does not need to send the entire collected image information; sending only eye feature information such as the pupil information, corneal information, iris information, eyelid information, eye image information, and gaze depth information is sufficient. When sending, the data may be sent to the server as a 1-byte frame header, a 1-byte control word, 3 to 6 bytes of eye feature information, and a 2-byte CRC check code, for a total of about 10 bytes. Accordingly, when processing 90 frames of image data per second, the amount of data the device side sends to the server is 90 * 10 * 8 = 7.2 kbps. It can be seen that the communication rate between the device side and the server meets real-time requirements with a small throughput; even GPRS, with its relatively slow communication rate, is sufficient.
在一种可选的方案中,服务器在接收到眼部特征信息之后,服务器根据眼球追踪算法计算目标对象的注视信息,其中,注视信息可以为目标对象的注视点在设备端屏幕上的坐标信息。然后,服务器将计算得到的注视信息通过无线通信的方式发送至设备端,设备端接收注视信息。In an optional solution, after the server receives the eye feature information, the server calculates the fixation information of the target object according to the eye tracking algorithm, where the fixation information may be coordinate information of the fixation point of the target object on the device-side screen . Then, the server sends the calculated gaze information to the device side through wireless communication, and the device side receives the gaze information.
需要说明的是,在上述过程中,服务器无需将图像信息发送至设备端,仅将注视信息发送至设备端即可。服务器与设备端的通信速率可以满足实时性要求,并且吞吐量较小,即使使用通信速率较慢的GPRS也可以满足要求。It should be noted that, in the above process, the server does not need to send the image information to the device side, and only sends the gaze information to the device side. The communication rate between the server and the device can meet the real-time requirements, and the throughput is small, even if the GPRS with a slower communication rate can meet the requirements.
此外,还需要说明的是,由于眼球追踪算法对设备端的计算能力要求较高,眼球追踪算法对设备端的内核以及主频的要求严格,中端水平的设备端很难满足该硬件指标。而且,如果在设备端一直运行眼球追踪算法,则对设备端的资源消耗较大,甚至导致设备端的发热、卡顿。而本申请所提供的方案将眼球追踪算法在服务器运行,可以有效避免上述问题。In addition, it should also be noted that, because the eye tracking algorithm requires high computing power on the device side, the eye tracking algorithm has strict requirements on the device core and the main frequency, and it is difficult for the mid-level device side to meet the hardware index. In addition, if the eye tracking algorithm is always running on the device side, the resource consumption on the device side is large, and even the device side generates heat and freezes. The solution provided in this application runs the eye tracking algorithm on the server, which can effectively avoid the above problems.
Based on the solution defined by the above steps S702 to S704, it can be seen that the device and the server cooperate: the device collects image information of the target object and extracts the target object's eye feature information from it, then sends the eye feature information to the server; the server processes the eye feature information to obtain the gaze information of the target object and sends it back to the device, which can then display it.
It is easy to notice that the collection of the target object's image information and the extraction of eye feature information run on the device, while the processing of the eye feature information is performed on the server. Since the gaze point algorithm that derives gaze information from eye feature information consumes considerable resources, performing it on the server effectively reduces resource consumption on the device.
It can thus be seen that the solution provided in this application solves the technical problem in the prior art that eye tracking algorithms run entirely on the device, causing high resource consumption on the device.
Embodiment 5
According to an embodiment of this application, an embodiment of an eye tracking method applied to a terminal device is provided. FIG. 8 is a flowchart of an eye tracking method applied to a terminal device according to an embodiment of this application. As shown in FIG. 8, the method includes the following steps:
Step S802: acquire image information sent by the device.
It should be noted that the device has an image collector that can capture image information of the target object, where the image information contains an eye image of the target object. Optionally, when the device is a mobile phone, the image collector may be a camera of the phone. Although the rear camera of a phone usually has a higher resolution than the front camera, the front camera's resolution is on the order of ten megapixels, and the image information it captures meets the needs of eye tracking technology. When the phone has a front camera, the front camera is used as the image collector in this embodiment.
It should also be noted that, because the server has strong processing power and can run relatively complex algorithms, the device sends the captured image information of the target object to the server for processing, thereby reducing resource consumption on the device.
Step S804: extract eye feature information from the image information to form an eye feature information set, where the image information contains the eye feature information.
In an optional solution, after receiving the image information, the server extracts, based on a data extraction algorithm, at least one of the following from the image information: pupil information, cornea information, iris information and/or eyelid information, eye image information, and gaze depth information, and uses the extracted information as the eye feature information.
Step S806: determine gaze information according to the eye feature information set.
In an optional solution, after obtaining the eye feature information, the server computes the gaze information of the target object according to an eye tracking algorithm, where the gaze information includes at least gaze direction information and gaze point information, and the gaze point information may be the coordinates of the target object's gaze point on the device screen. The server then sends the computed gaze information to the device wirelessly, and the device receives it, i.e., step S808 is performed.
It should be noted that the eye tracking algorithm places high demands on the computing power of the device, with strict requirements on its CPU cores and clock frequency, which mid-range devices can hardly meet. Moreover, running the eye tracking algorithm continuously on the device consumes substantial resources and can even cause the device to overheat or stall. The solution provided in this application runs the eye tracking algorithm on the server, which effectively avoids these problems.
Step S808: send the gaze information to the device.
It should be noted that the device and the server may communicate wirelessly, where the wireless communication method may include, but is not limited to, WiFi, 3G, 4G, and GPRS.
It should also be noted that the server does not need to send any image information to the device; it only sends the gaze information. The communication rate between the server and the device meets the real-time requirement with a small throughput; even GPRS, which has a relatively low communication rate, is sufficient.
Based on the solution defined by the above steps S802 to S808, it can be seen that the device and the server cooperate: after acquiring the image information of the target object sent by the device, the server extracts the target object's eye feature information from the image information, determines the gaze information of the target object according to the eye feature information, and then sends the gaze information to the device.
It is easy to notice that, because the gaze point algorithm that derives gaze information from eye feature information consumes considerable resources, performing it on the server effectively reduces resource consumption on the device. In addition, since the server has strong processing power and can run relatively complex algorithms, it can directly process the image information of the target object to obtain the eye feature information, derive the gaze information from it, and finally send the gaze information to the device, further reducing resource consumption on the device.
It can thus be seen that the solution provided in this application solves the technical problem in the prior art that eye tracking algorithms run entirely on the device, causing high resource consumption on the device.
Embodiment 6
According to an embodiment of this application, an embodiment of an eye tracking apparatus applied to a terminal device is provided, and the apparatus can execute the eye tracking method applied to a terminal device provided in Embodiment 1. FIG. 4 is a schematic structural diagram of an eye tracking apparatus applied to a terminal device according to an embodiment of this application. As shown in FIG. 4, the apparatus includes: a first collection module 401, a first sending module 403, and a first receiving module 405.
The first collection module 401 is configured to collect image information; the first sending module 403 is configured to send the eye feature information set to a processor; and the first receiving module 405 is configured to receive and parse the gaze information fed back by the processor.
It should be noted that the above first collection module 401, first sending module 403, and first receiving module 405 correspond to steps S102 to S104 in Embodiment 1. The examples and application scenarios implemented by the three modules and their corresponding steps are the same, but are not limited to the content disclosed in Embodiment 1.
In an optional solution, the first sending module includes: a first acquiring module, a first determining module, and a second sending module. The first acquiring module is configured to acquire sending information; the first determining module is configured to determine an image identifier corresponding to the sending information, where the image identifier is used to identify the sending order of the sending information; and the second sending module is configured to send the sending information to the cloud processor in a first regular manner.
It should be noted that the above first acquiring module, first determining module, and second sending module correspond to steps S1060 to S1064 in Embodiment 1. The examples and application scenarios implemented by the three modules and their corresponding steps are the same, but are not limited to the content disclosed in Embodiment 1.
In an optional solution, the first determining module includes: a second determining module and a first generating module. The second determining module is configured to determine a timestamp corresponding to the first frame image of the sending information; the first generating module is configured to generate an image identifier corresponding to the first frame image according to the timestamp. The sending information includes: a first frame header, a first control word, a first check code, and the image information.
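The image identifier described above is derived from the timestamp of the first frame so that the receiver can restore the sending order. A minimal sketch of one way to generate such identifiers follows; combining a millisecond timestamp with a sequence counter is an assumed scheme, since the embodiment only requires that the identifier encode the sending order.

```python
import itertools
import time

_seq = itertools.count()  # per-process sequence counter (assumed tie-breaker)

def make_image_id(first_frame_timestamp: float) -> int:
    """Combine a millisecond timestamp with a sequence number so that
    identifiers are unique and strictly increasing in sending order."""
    millis = int(first_frame_timestamp * 1000)
    return millis * 1000 + next(_seq) % 1000

a = make_image_id(time.time())
b = make_image_id(time.time())
print(b > a)  # True: later frames receive larger identifiers
```

Because the counter breaks ties within a single millisecond, the server can sort incoming frames by identifier even when the wireless link reorders them.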
In an optional solution, the first receiving module includes: a second receiving module, a parsing module, a checking module, and a third determining module. The second receiving module is configured to receive received information sent in a second regular manner; the parsing module is configured to parse the received information, where the received information includes a second frame header, a second control word, a second check code, and the gaze information; the checking module is configured to check the second check code; and the third determining module is configured to determine the gaze information as the gaze information of the target object if the check result of the second check code meets a preset condition.
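The receive path just described — parse the second frame header, second control word, check code, and gaze information, then accept the gaze information only if the check passes — might look like the sketch below. The field widths, the CRC variant, and the encoding of the gaze point as two 16-bit screen coordinates are assumptions made for illustration.

```python
import struct
import binascii

def parse_gaze_frame(frame: bytes):
    """Parse a downstream frame: header, control word, payload, CRC-16 trailer.

    Returns the (x, y) gaze point if the check code matches, else None.
    """
    payload = frame[2:-2]                              # skip header and control word
    received_crc = struct.unpack(">H", frame[-2:])[0]  # 2-byte check code
    if binascii.crc_hqx(frame[:-2], 0xFFFF) != received_crc:
        return None                                    # preset condition not met
    return struct.unpack(">HH", payload)               # assumed (x, y) encoding

# Build a well-formed frame for demonstration (0xBB / 0x02 are assumed values).
body = bytes([0xBB, 0x02]) + struct.pack(">HH", 640, 360)
frame = body + struct.pack(">H", binascii.crc_hqx(body, 0xFFFF))
print(parse_gaze_frame(frame))  # (640, 360)
```

A corrupted frame fails the CRC comparison and is discarded rather than being reported as a gaze point, which is the behavior the third determining module's preset condition describes.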
Embodiment 7
According to an embodiment of this application, an embodiment of an eye tracking apparatus applied to a terminal device is provided, and the apparatus can execute the eye tracking method applied to a terminal device provided in Embodiment 2. FIG. 5 is a schematic structural diagram of an eye tracking apparatus applied to a terminal device according to an embodiment of this application. As shown in FIG. 5, the apparatus includes: a second collection module 501, a second extraction module 503, a second sending module 505, and a second receiving module 507.
The second collection module 501 is configured to collect image information; the second extraction module 503 is configured to extract eye feature information from the image information to form an eye feature information set, where the image information contains the eye feature information; the second sending module 505 is configured to send the eye feature information set to a server; and the second receiving module 507 is configured to receive and parse the gaze information fed back by the server.
It should be noted that the above second collection module 501, second extraction module 503, second sending module 505, and second receiving module 507 correspond to steps S902 to S908 in Embodiment 2. The examples and application scenarios implemented by the four modules and their corresponding steps are the same, but are not limited to the content disclosed in Embodiment 2.
In an optional solution, the second sending module includes: an acquiring module, a determining module, and a sending module. The acquiring module is configured to acquire sending information; the determining module is configured to determine an image identifier corresponding to the sending information and acquire the eye feature information set corresponding to the image identifier, where the image identifier is used to identify the sending order of the sending information; and the sending module is configured to send the sending information to the server. The sending information further includes: a first frame header, a first control word, a first check code, and the eye feature information set.
It should be noted that the above acquiring module, determining module, and sending module correspond to steps S2060 to S2064 in Embodiment 2. The examples and application scenarios implemented by the three modules and their corresponding steps are the same, but are not limited to the content disclosed in Embodiment 2.
Embodiment 8
According to an embodiment of this application, an embodiment of an eye tracking system applied to a terminal device is provided. FIG. 6 is a schematic structural diagram of an eye tracking system applied to a terminal device according to an embodiment of this application. As shown in FIG. 6, the system includes: a device 601 and a server 603. The device 601 is configured to collect image information of a target object, sort the target object's eye feature information to form a series of eye feature information, send the series of eye feature information to the server, and receive and parse the gaze information fed back by the server. The server 603 is configured to receive the eye feature information sent by the device and process it to obtain the gaze information of the target object.
As can be seen from the above, the device and the server cooperate: the device collects image information of the target object and extracts the target object's eye feature information from it, then sends the eye feature information to the server; the server processes the eye feature information to obtain the gaze information of the target object and sends it back to the device, which can then display it.
It is easy to notice that the collection of the target object's image information and the extraction of eye feature information run on the device, while the processing of the eye feature information is performed on the server. Since the gaze point algorithm that derives gaze information from eye feature information consumes considerable resources, performing it on the server effectively reduces resource consumption on the device. In addition, in this application, the device extracts the eye feature information from the image information and sends the eye feature information, rather than the image information itself, to the server, which reduces the bandwidth that uploading the raw image information to the server would occupy and further reduces resource consumption on the device.
It can thus be seen that the solution provided in this application solves the technical problem in the prior art that eye tracking algorithms run entirely on the device, causing high resource consumption on the device.
It should be noted that the device 601 can execute the eye tracking methods applied to a terminal device provided in Embodiments 1, 2, and 4, and the server 603 can execute the eye tracking methods applied to a terminal device provided in Embodiments 3 and 5. The details have been described in Embodiments 1 to 5 and are not repeated here.
Embodiment 9
According to another aspect of the embodiments of this application, a storage medium is further provided. The storage medium includes a stored program, where the program executes the eye tracking method applied to a terminal device provided in Embodiments 1 to 5.
Embodiment 10
According to another aspect of the embodiments of this application, a processor is further provided. The processor is configured to run a program, where the program, when run, executes the eye tracking method applied to a terminal device provided in Embodiments 1 to 5.
The serial numbers of the above embodiments of this application are for description only and do not represent the superiority or inferiority of the embodiments.
In the above embodiments of this application, the description of each embodiment has its own emphasis. For a part not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed technical content may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division of the units may be a division of logical functions, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disc, or another medium that can store program code.
The above are merely optional implementations of this application. It should be noted that a person of ordinary skill in the art may make several improvements and refinements without departing from the principles of this application, and these improvements and refinements shall also fall within the protection scope of this application.
Industrial applicability
The solution provided by the embodiments of this application can be applied to eye tracking technology. By processing images through the cooperation of the device and the server, it solves the technical problem in the prior art that eye tracking algorithms run entirely on the device, causing high resource consumption on the device, reduces the bandwidth occupied by image information transmission, and reduces resource consumption on the device.

Claims (19)

  1. An eye tracking method applied to a terminal device, comprising:
    collecting image information;
    sending the image information to a cloud processor; and
    receiving and parsing gaze information fed back by the cloud processor.
  2. The eye tracking method according to claim 1, wherein sending the image information to the cloud processor comprises:
    acquiring sending information;
    determining an image identifier corresponding to the sending information, wherein the image identifier is used to identify a sending order of the sending information; and
    sending the sending information to the cloud processor in a first regular manner.
  3. The eye tracking method according to claim 2, wherein determining the image identifier corresponding to the sending information comprises:
    determining a timestamp corresponding to a first frame image of the sending information; and
    generating an image identifier corresponding to the first frame image according to the timestamp.
  4. The eye tracking method according to claim 3, wherein the sending information comprises: a first frame header, a first control word, a first check code, and the image information.
  5. The method according to claim 1, wherein receiving and parsing the gaze information fed back by the cloud processor comprises:
    receiving received information sent in a second regular manner;
    parsing the received information, the received information comprising a second frame header, a second control word, a second check code, and the gaze information;
    checking the second check code; and
    if a check result of the second check code meets a preset condition, determining the gaze information as gaze information of a target object.
  6. An eye tracking method applied to a terminal device, comprising:
    acquiring image information sent by a device;
    extracting eye feature information from the image information to form an eye feature information set, wherein the image information contains the eye feature information;
    determining gaze information according to the eye feature information set; and
    sending the gaze information to the device.
  7. An eye tracking method applied to a terminal device, comprising:
    collecting image information;
    extracting eye feature information from the image information to form an eye feature information set, wherein the image information contains the eye feature information;
    sending the eye feature information set to a server; and
    receiving and parsing gaze information fed back by the server.
  8. The eye tracking method according to claim 7, wherein before forming the eye feature information set, the method further comprises:
    converting the image information into corresponding eye feature information;
    configuring the corresponding eye feature information with a data label; and
    sorting the corresponding eye feature information configured with the data label in a specified order.
  9. The eye tracking method according to claim 8, wherein the corresponding eye feature information comprises one or more of pupil information, cornea information, light spot information, iris information and/or eyelid information, eye image information, and gaze depth information of a target object.
  10. The eye tracking method according to claim 7, wherein the eye feature information set is configured as one or more groups of eye feature information with data labels.
  11. The eye tracking method according to claim 7, wherein sending the eye feature information set to the server comprises:
    acquiring sending information;
    determining an image identifier corresponding to the sending information, and acquiring the eye feature information set corresponding to the image identifier, wherein the image identifier is used to identify a sending order of the sending information; and
    sending the sending information to the server in a first regular manner.
  12. The eye tracking method according to claim 11, wherein determining the image identifier corresponding to the sending information comprises:
    determining a timestamp corresponding to a first frame image of the sending information; and
    generating an image identifier corresponding to the first frame image according to the timestamp.
  13. The eye tracking method according to claim 12, wherein the sending information comprises: a first frame header, a first control word, a first check code, and the eye feature information set.
  14. The method according to claim 7, wherein receiving and parsing the gaze information fed back by the server specifically comprises:
    receiving received information sent in a second regular manner;
    parsing the received information, the received information comprising a second frame header, a second control word, a second check code, and the gaze information;
    checking the second check code; and
    if a check result of the second check code meets a preset condition, determining the gaze information as gaze information of a target object.
  15. An eye tracking apparatus applied to a terminal device, comprising:
    a first collection module, configured to collect image information;
    a first sending module, configured to send the image information to a cloud processor; and
    a first receiving module, configured to receive and parse gaze information fed back by the cloud processor.
  16. An eye tracking apparatus applied to a terminal device, comprising:
    a second collection module, configured to collect image information;
    a second extraction module, configured to extract eye feature information from the image information to form an eye feature information set, wherein the image information contains the eye feature information;
    a second sending module, configured to send the eye feature information set to a server; and
    a second receiving module, configured to receive and parse gaze information fed back by the server.
  17. An eye tracking system applied to a terminal device, comprising:
    a device end, configured to collect image information, extract eye feature information from the image information to form an eye feature information set, send the eye feature information to a server, and receive and parse gaze information fed back by the server;
    the server, configured to receive the eye feature information sent by the device end and process the eye feature information to obtain gaze information of a target object.
  18. A storage medium comprising a stored program, wherein the program, when executed, performs the eye tracking method applied to a terminal device according to any one of claims 1 to 14.
  19. A processor configured to run a program, wherein the program, when run, performs the eye tracking method applied to a terminal device according to any one of claims 1 to 14.
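Claims 12 and 13 above describe deriving an image identifier from the timestamp of the first image frame and packaging the eye feature information set into transmission information built from a first frame header, a first control word, and a first check code. A minimal sketch of such a frame builder follows; the header and control-word values, the 64-bit millisecond timestamp used as the identifier, and CRC-32 as the check code are all illustrative assumptions, since the claims do not specify any of them:

```python
import struct
import zlib

# Hypothetical constants; the claims do not specify the actual values.
FIRST_FRAME_HEADER = 0xA55A  # "first frame header"
FIRST_CONTROL_WORD = 0x01    # "first control word"

def make_image_identifier(timestamp_ms: int) -> bytes:
    """Generate an image identifier for the first image frame from its
    timestamp (claim 12); here simply the big-endian 64-bit millisecond value."""
    return struct.pack(">Q", timestamp_ms)

def build_transmission_info(eye_feature_set: bytes, timestamp_ms: int) -> bytes:
    """Assemble transmission information per claim 13:
    frame header | control word | image identifier | eye feature set | check code."""
    body = struct.pack(">HB", FIRST_FRAME_HEADER, FIRST_CONTROL_WORD)
    body += make_image_identifier(timestamp_ms)
    body += eye_feature_set
    check = zlib.crc32(body)  # CRC-32 stands in for the unspecified "first check code"
    return body + struct.pack(">I", check)
```

The server can recompute the CRC over everything except the trailing four bytes to validate a frame before extracting the eye feature set.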
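Claim 14 describes the receiving side of the same exchange: parse the reception information into a second frame header, a second control word, a second check code and the gaze payload, then accept the gaze information only when the check-code verification satisfies the preset condition. A hedged sketch, again assuming a big-endian layout and CRC-32 as the check code (neither is specified by the claims):

```python
import struct
import zlib

def parse_reception_info(frame: bytes):
    """Parse reception information per claim 14 and verify the second check
    code; the gaze information is returned only if the CRC matches."""
    if len(frame) < 7:  # header (2) + control word (1) + check code (4)
        raise ValueError("frame too short")
    header, control = struct.unpack(">HB", frame[:3])
    (check,) = struct.unpack(">I", frame[-4:])
    if zlib.crc32(frame[:-4]) != check:  # preset condition: CRC must match
        raise ValueError("second check code mismatch; gaze information rejected")
    return header, control, frame[3:-4]  # gaze information of the target object
```

Rejecting the frame on a CRC mismatch models the claim's conditional step: the payload is only "determined as gaze information of the target object" after the check code verifies.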
PCT/CN2019/095264 2018-09-30 2019-07-09 Eye tracking method, apparatus and system applied to terminal device WO2020063021A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811163385.8 2018-09-30
CN201811163385.8A CN109522789A (en) 2018-09-30 2018-09-30 Eyeball tracking method, apparatus and system applied to terminal device

Publications (1)

Publication Number Publication Date
WO2020063021A1 true WO2020063021A1 (en) 2020-04-02

Family

ID=65771747

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/095264 WO2020063021A1 (en) 2018-09-30 2019-07-09 Eye tracking method, apparatus and system applied to terminal device

Country Status (2)

Country Link
CN (1) CN109522789A (en)
WO (1) WO2020063021A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109522789A (en) * 2018-09-30 2019-03-26 北京七鑫易维信息技术有限公司 Eyeball tracking method, apparatus and system applied to terminal device
CN110058693B (en) * 2019-04-23 2022-07-15 北京七鑫易维科技有限公司 Data acquisition method and device, electronic equipment and storage medium
CN110334579B (en) * 2019-05-06 2021-08-03 北京七鑫易维信息技术有限公司 Iris recognition image determining method and device, terminal equipment and storage medium
CN110941344B (en) * 2019-12-09 2022-03-15 Oppo广东移动通信有限公司 Method for obtaining gazing point data and related device
CN111225157B (en) * 2020-03-03 2022-01-14 Oppo广东移动通信有限公司 Focus tracking method and related equipment

Citations (3)

Publication number Priority date Publication date Assignee Title
CN106101824A (en) * 2016-06-30 2016-11-09 联想(北京)有限公司 Information processing method, electronic equipment and server
CN107422842A (en) * 2017-03-16 2017-12-01 联想(北京)有限公司 A kind of information processing method and device
CN109522789A (en) * 2018-09-30 2019-03-26 北京七鑫易维信息技术有限公司 Eyeball tracking method, apparatus and system applied to terminal device

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US8510166B2 (en) * 2011-05-11 2013-08-13 Google Inc. Gaze tracking system
WO2014033306A1 (en) * 2012-09-03 2014-03-06 SensoMotoric Instruments Gesellschaft für innovative Sensorik mbH Head mounted system and method to compute and render a stream of digital images using a head mounted system
US20140340639A1 (en) * 2013-05-06 2014-11-20 Langbourne Rust Research Inc. Method and system for determining the relative gaze-attracting power of visual stimuli
CN103336576B (en) * 2013-06-28 2016-12-28 广州爱九游信息技术有限公司 A kind of moving based on eye follows the trail of the method and device carrying out browser operation
CN107991775B (en) * 2016-10-26 2020-06-05 中国科学院深圳先进技术研究院 Head-mounted visual equipment capable of tracking human eyes and human eye tracking method

Also Published As

Publication number Publication date
CN109522789A (en) 2019-03-26

Similar Documents

Publication Publication Date Title
WO2020063021A1 (en) Eye tracking method, apparatus and system applied to terminal device
US10769465B2 (en) Method for biometric recognition and terminal device
US10664693B2 (en) Method, device, and system for adding contacts in social network
CN109543633A (en) A kind of face identification method, device, robot and storage medium
CN109116991A (en) Control method, device, storage medium and the wearable device of wearable device
CN105022981A (en) Method and device for detecting health state of human eyes and mobile terminal
CN112101124B (en) Sitting posture detection method and device
EP3610738B1 (en) Method and system for unlocking electronic cigarette
WO2019062347A1 (en) Facial recognition method and related product
WO2018099295A1 (en) Image-based determination method and apparatus, and calculation device
WO2018214528A1 (en) Exercise effect displaying method and apparatus
CN107465873B (en) Image information processing method, equipment and storage medium
US20230206093A1 (en) Music recommendation method and apparatus
CN113657195A (en) Face image recognition method, face image recognition equipment, electronic device and storage medium
CN112036262A (en) Face recognition processing method and device
CN110826410B (en) Face recognition method and device
CN114302088A (en) Frame rate adjusting method and device, electronic equipment and storage medium
CN107678541A (en) Intelligent glasses and its information gathering and transmission method, computer-readable recording medium
CN113657154A (en) Living body detection method, living body detection device, electronic device, and storage medium
CN107808081A (en) A kind of based reminding method and relevant device
WO2023051215A1 (en) Gaze point acquisition method and apparatus, electronic device and readable storage medium
CN110222206A (en) A kind of client, server and image identification system
CN109856800A (en) Display control method, device and the split type AR glasses of split type AR glasses
WO2022089220A1 (en) Image data processing method and apparatus, device, storage medium, and product
CN114970761A (en) Model training method, device and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19867658

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08/07/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19867658

Country of ref document: EP

Kind code of ref document: A1