CN109389082B - Sight line acquisition method, device, system and computer readable storage medium - Google Patents

Sight line acquisition method, device, system and computer readable storage medium

Info

Publication number
CN109389082B
CN109389082B CN201811166331.7A CN201811166331A
Authority
CN
China
Prior art keywords
picture
sight
target picture
coordinates
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811166331.7A
Other languages
Chinese (zh)
Other versions
CN109389082A (en)
Inventor
陈曦
王塑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhu Maichi Zhixing Technology Co ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kuangshi Technology Co Ltd
Priority to CN201811166331.7A
Publication of CN109389082A
Application granted
Publication of CN109389082B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Position Input By Displaying (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention provides a sight line acquisition method, apparatus, system and computer-readable storage medium. When a controller obtains a picture replacement instruction, it determines a current target picture including a sight line point based on pre-stored target picture generation data, controls a display device to display the current target picture, and controls an image acquisition device to acquire a face picture of a tester. The controller then caches, in one-to-one correspondence, the face picture acquired by the image acquisition device and the coordinates of the sight line point of the current target picture within the display device, forming alternative picture information. When the controller acquires a sight line point acquisition signal sent by a sensing device, it determines target picture information from the alternative picture information corresponding to the image acquisition device and stores this target picture information together with the device coordinates of the image acquisition device, forming sight line acquisition data and thereby improving the accuracy of sight line data samples.

Description

Sight line acquisition method, device, system and computer readable storage medium
Technical Field
The invention relates to the field of image acquisition, in particular to a sight line acquisition method, a sight line acquisition device, a sight line acquisition system and a computer-readable storage medium.
Background
The gaze detection model requires a large number of gaze data samples as a training basis, which are generally acquired by gaze data acquisition techniques.
In the existing sight line data acquisition technology, a collector directs the person being collected to look at the sight line points to be acquired, an acquisition tool is then used to photograph the person observing those sight line points, and labels for the acquired sight line points are recorded at the same time; the labels are generally simple, such as watching area A or not watching area A.
However, with this technology the person being collected must closely cooperate with the collector's instructions when gazing at the sight line points; once the person being collected is unsupervised and fails to follow those instructions, the lapse is difficult to detect, so the error rate of the resulting sight line data samples is high.
Disclosure of Invention
In view of the above, embodiments of the present invention provide a method, an apparatus, a system, and a computer-readable storage medium for line-of-sight acquisition, so as to improve accuracy of line-of-sight data samples.
In a first aspect, an embodiment of the present invention provides a sight line acquisition method applied to a controller in a sight line acquisition system, where the sight line acquisition system further includes a display device, a sensing device, a memory and an image acquisition device coupled to the controller, and device coordinates of the image acquisition device relative to the display device and target picture generation data are stored in the memory in advance. The method includes: when the controller acquires a picture replacement instruction, determining a current target picture including a sight line point based on the target picture generation data, controlling the display device to display the current target picture, and controlling the image acquisition device to acquire a face picture of a tester; caching, by the controller in one-to-one correspondence, the face picture acquired by the image acquisition device and the coordinates of the sight line point of the current target picture within the display device, to form alternative picture information; when the controller acquires a sight line point acquisition signal sent by the sensing device, determining target picture information from at least one piece of alternative picture information corresponding to the image acquisition device, where the sight line point acquisition signal is a signal generated when the sensing device detects that the tester clicks the sight line point; and storing, by the controller, the target picture information and the device coordinates corresponding to the image acquisition device into the memory, to form sight line acquisition data.
In a second aspect, an embodiment of the present invention provides a sight line acquisition apparatus applied to a controller in a sight line acquisition system, where the sight line acquisition system further includes a display device, a sensing device, a memory and an image acquisition device coupled to the controller, and the memory stores, in advance, device coordinates of the image acquisition device relative to the display device and target picture generation data. The apparatus includes a selection display module, a cache module, a determining module and a saving module. The selection display module is configured to, when a picture replacement instruction is acquired, determine a current target picture including a sight line point based on the target picture generation data, control the display device to display the current target picture, and control the image acquisition device to acquire a face picture of a tester. The cache module is configured to cache, in one-to-one correspondence, the face picture acquired by the image acquisition device and the coordinates of the sight line point of the current target picture within the display device, to form alternative picture information. The determining module is configured to, when a sight line point acquisition signal sent by the sensing device is acquired, determine target picture information from at least one piece of alternative picture information corresponding to the image acquisition device, where the sight line point acquisition signal is a signal generated when the sensing device detects that the tester clicks the sight line point. The saving module is configured to store the target picture information and the device coordinates corresponding to the image acquisition device into the memory, to form sight line acquisition data.
In a third aspect, an embodiment of the present invention provides a sight line acquisition system, including a controller, and a display device, a sensing device, a memory and an image acquisition device coupled to the controller, where device coordinates of the image acquisition device relative to the display device and target picture generation data are stored in the memory in advance, and the memory further stores a computer program which, when executed by the controller, causes the sight line acquisition system to perform the method described in any implementation of the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium, in which a computer program is stored, and when the computer program runs on a computer, the computer is caused to execute the method described in any one of the implementation manners in the first aspect.
Compared with the prior art, in the sight line acquisition method, apparatus, system and computer-readable storage medium provided by the embodiments of the invention, when the controller acquires a picture replacement instruction, a current target picture including a sight line point is determined based on pre-stored target picture generation data, the display device is controlled to display the current target picture, and the image acquisition device is controlled to acquire a face picture of the tester. For each image acquisition device, the controller then caches, in one-to-one correspondence, the face picture acquired by that device and the coordinates of the sight line point of the current target picture within the display device, to form alternative picture information. Subsequently, when the controller acquires a sight line point acquisition signal sent by the sensing device, it determines, for each image acquisition device, one piece of target picture information from the at least one piece of alternative picture information corresponding to that device, and stores the target picture information and the device coordinates corresponding to that device into the memory, to form sight line acquisition data. Because the face picture used is one captured before the moment the sight line point is clicked, the acquired sight line acquisition data is as correct as possible, which improves the accuracy of the sight line data samples. In addition, the whole process can be completed by a single tester, which avoids the situations in which a director's instructions and the tester's actions are inconsistent, the tester ignores the director's instructions, or the director has to supervise the tester.
Additional features and advantages of the disclosure will be set forth in the description which follows, or may in part be learned by practicing the techniques disclosed herein.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a schematic structural diagram of a sight line acquisition system according to an embodiment of the present invention;
fig. 2 is a flowchart of a sight line acquisition method according to an embodiment of the present invention;
fig. 3 is a schematic view of a sight line point of the sight line acquisition method according to the embodiment of the present invention;
fig. 4 is a block diagram of a view line acquisition device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
Training a gaze detection model requires a large number of gaze data samples as a training basis. The existing sight line data acquisition technology depends on cooperation between a collector and the person being collected: the collector directs the person to look at the sight line points to be acquired, an acquisition tool is then used to photograph the person observing those sight line points, and labels for the acquired sight line points are recorded at the same time; the labels are generally simple, such as watching area A or not watching area A. In this scheme, once the person being collected is unsupervised and fails to follow the collector's instructions, the lapse is difficult to detect, so the error rate of the resulting sight line data samples is high.
In order to solve the above problems, embodiments of the present invention provide a method, an apparatus, a system, and a computer-readable storage medium for line-of-sight acquisition, which may be implemented by using corresponding software, hardware, and a combination of software and hardware. The following describes embodiments of the present invention in detail.
First, a gaze acquisition system 100 for implementing the gaze acquisition method and apparatus according to an embodiment of the present invention will be described with reference to fig. 1.
The gaze acquisition system 100 may include a controller 120, a memory 110 coupled to the controller 120, a display device 130, a sensing device 140, an image acquisition device 150, and a gaze acquisition apparatus, and these components of the memory 110, the controller 120, the display device 130, the sensing device 140, the image acquisition device 150, and the gaze acquisition apparatus may be interconnected by a bus system and/or other form of connection mechanism (not shown). It should be noted that the components and configuration of the gaze acquisition system 100 shown in fig. 1 are exemplary only, and not limiting, and that the gaze acquisition system 100 may have other components and configurations as desired. The memory 110, the display device 130, the sensing device 140, and the image capturing device 150 may be integrated into a module, or may be separately configured as a module.
The gaze acquisition device comprises at least one software functional module which may be stored in the memory 110 in the form of software or firmware or may be fixed in an Operating System (OS) of the gaze acquisition system 100. The controller 120 is configured to execute executable modules stored in the memory 110, such as software functional modules or computer programs included in the gaze acquisition apparatus.
The memory 110 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory (cache). The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the controller 120 to implement the functions of the embodiments of the invention described below. Various applications and various data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium. The device coordinates of the image capturing device 150 relative to the display device 130 and the target picture generation data may be stored in the memory 110 in advance.
The controller 120 may be an integrated circuit chip having signal processing capabilities. The controller 120 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components. The controller 120 may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present invention.
The display device 130 provides an interactive interface (e.g., a user operation interface) between the gaze acquisition system 100 and the tester or is used for displaying image data to the tester for reference, and in this embodiment, the display device 130 may be a tablet with a display function, a mobile terminal, or the like.
The sensing device 140 is an input device, and may be a capacitive touch screen or a resistive touch screen supporting single-point and multi-point touch operations. Supporting single-point and multi-point touch operations means that the touch display screen can sense touch operations simultaneously generated from one or more positions on the touch display screen, and the sensed touch operations are sent to the controller 120 for calculation and processing. Of course, in an alternative embodiment, the sensing device 140 may also be a mouse for sensing the position of the user click.
The image capturing device 150 is used for capturing a picture of a face of a tester, and may be a camera, or the like.
The following describes a sight line acquisition method aimed at optimizing sight line data acquisition:
referring to fig. 2, fig. 2 is a flowchart of a gaze acquisition method according to an embodiment of the present invention, which is described from the perspective of the controller 120 of the gaze acquisition system 100. The memory 110 of the gaze acquisition system 100 stores therein target picture generation data and device coordinates of the plurality of image acquisition devices 150 with respect to the display device 130 in advance.
The flow shown in fig. 2 will be described in detail below, and the method includes:
step S110: when the controller acquires a picture replacing instruction, a current target picture comprising a sight point is determined based on the target picture generation data, the display equipment is controlled to display the current target picture, and the image acquisition equipment is controlled to acquire a face picture of a tester.
Since the sight line acquisition data requires image data of the tester's pupils, the face picture may be a picture of the tester's face, including the eyes, acquired by each image acquisition device 150 connected to the controller 120.
As an optional implementation manner, the target picture generation data may include a plurality of pictures, each of which includes a preset sight point, and the controller 120 may determine, when the picture replacement instruction is obtained, a target picture from the plurality of pictures as the current target picture.
In this embodiment, optionally, the controller 120 may randomly select one picture from the plurality of pictures to be determined as the current target picture when the command for replacing a picture is obtained.
In this embodiment, optionally, the multiple pictures may be stored in the memory 110 in advance according to a time sequence, and the controller 120 may sequentially select one picture from the multiple pictures according to the sequence of the storage times of the multiple pictures when the picture replacement instruction is obtained, and determine the selected picture as the current target picture. Alternatively, a plurality of pictures may be stored in a queue in the memory 110, and since the reading rule of the queue is first in first out, the picture stored in the queue first is read by the controller 120 and determined as the current target picture.
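As a rough illustration of the two selection strategies described above (random choice versus first-in-first-out reading of a stored queue), the following Python sketch shows how a controller might pick the current target picture from pre-stored pictures. The class, method and field names are illustrative assumptions rather than anything specified in the patent, and recycling a consumed picture back into the queue is only one possible way of keeping the session running.

```python
import random
from collections import deque

class TargetPictureSelector:
    """Illustrative sketch: choose the current target picture from pre-stored pictures."""

    def __init__(self, pictures, strategy="fifo"):
        # pictures: e.g. [{"image": ..., "gaze_point": (x, y)}, ...], assumed to be
        # stored in the memory in the order in which they were saved.
        self._queue = deque(pictures)    # FIFO: the picture stored first is read first
        self._pictures = list(pictures)  # kept for random selection
        self._strategy = strategy

    def on_picture_replacement_instruction(self):
        """Return the next current target picture when a replacement instruction arrives."""
        if self._strategy == "random":
            return random.choice(self._pictures)
        picture = self._queue.popleft()  # earliest stored picture
        self._queue.append(picture)      # recycle it so selection never runs dry (assumption)
        return picture
```

The controller would then call on_picture_replacement_instruction() each time it receives a picture replacement instruction.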
The controller 120 controls the display device 130 to display the current target picture after determining the current target picture. Each target picture may include a sight line point, and the memory 110 stores coordinates of the sight line point included in each target picture. When displaying the current target picture, the display device 130 displays the sight points included in the current target picture together.
The sight point included in the target picture may be a specific shape displayed in the target picture, such as the cross shown in fig. 3, or any other irregular shape. The shape of the sight point may also differ between target pictures. In either case, to make the sight point stand out and be easily found by the tester, the background of the target picture may be set to a solid color and the sight point may be set to another color with a strong contrast against that background.
As another optional implementation, the target picture generation data may include: the device comprises a picture, a sight point identifier and a plurality of sight point coordinates, wherein the picture is used as a background picture. The controller 120 may determine a target sight point coordinate from the plurality of sight point coordinates when the image replacement instruction is obtained, and then display the sight point identifier at the target sight point coordinate in the background image, so as to obtain the current target image.
In this embodiment, optionally, the plurality of sight point coordinates may be generated by a function stored in the memory 110 and then stored in the memory 110, or may be generated by a function stored outside the memory 110 and transmitted to the memory 110 for storage. The plurality of sight point coordinates may be regular coordinates generated by a function, such as { (1, 1), (1, 2), (1, 3), (1, 4), (2, 1), (2, 2), (2, 3), (2, 4) }, in which case the sight point identifier appears on the picture in order from left to right and from top to bottom; alternatively, the plurality of sight point coordinates may be random coordinates generated by a random function.
As another optional implementation, the target picture generation data may include: the method comprises the steps of one picture, one sight point mark and a random number generation function, wherein the picture is used as a background picture. The controller 120 may control the random number generation function to randomly generate a target sight point coordinate when the image replacement instruction is obtained, and then display the sight point identifier at the target sight point coordinate in the background image, so as to obtain the current target image.
In this embodiment, optionally, in order to increase the diversity of the target gaze point coordinates, the target picture generation data further includes a plurality of existing gaze point coordinates, for example { (1, 1), (1, 2), (1, 3), (1, 4), (2, 1), (2, 2), (2, 3), (2, 4) } and the like. The controller 120 controls the random number generating function to randomly generate a coordinate, superimposes the coordinate onto one of the existing multiple sight point coordinates to generate a target sight point coordinate, and displays the sight point identifier at the target sight point coordinate in the background picture to obtain the current target picture. It is to be noted that one of the plurality of sight point coordinates may be selected at random from the plurality of sight point coordinates, or may be selected sequentially in the order of storing the plurality of sight point coordinates.
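The coordinate-generation variants above can be sketched in a few lines of Python. The grid of existing gaze-point coordinates, the jitter range and the function names below are illustrative assumptions, not values taken from the patent; the sketch only shows the idea of superimposing a randomly generated offset onto one of the pre-stored coordinates and placing the gaze-point identifier there.

```python
import random

# Assumed grid of existing gaze-point coordinates, matching the example above.
EXISTING_COORDS = [(x, y) for x in (1, 2) for y in (1, 2, 3, 4)]

def generate_target_gaze_point(max_jitter=0.5, sequential_index=None):
    """Pick a stored coordinate (randomly or in stored order) and add a random offset."""
    if sequential_index is None:
        base = random.choice(EXISTING_COORDS)
    else:
        base = EXISTING_COORDS[sequential_index % len(EXISTING_COORDS)]
    dx = random.uniform(-max_jitter, max_jitter)
    dy = random.uniform(-max_jitter, max_jitter)
    return (base[0] + dx, base[1] + dy)

def compose_current_target_picture(background, marker, target_coord):
    """Describe the current target picture: the marker drawn at target_coord on the background."""
    return {"background": background, "marker": marker, "gaze_point": target_coord}
```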
Optionally, the above-mentioned picture replacing instruction may be automatically generated by the controller 120 every preset time (for example, 2 seconds), or may be generated by the controller 120 when acquiring the gaze point acquisition signal sent by the sensing device 140. Of course, the image replacement command may also be generated by combining the above two situations, for example, when the controller 120 does not acquire the sight point acquisition signal sent by the sensing device 140 within the preset time, the image replacement command may be automatically generated.
Since the sight line acquisition data required to be obtained by the embodiment of the present invention is pupil data of the sight line point included in the current target picture seen by the eyes of the tester when the display device 130 displays the current target picture, if the controller 120 controls the image acquisition device 150 to acquire the face picture of the tester when the tester looks at the sight line point included in the current target picture a and gives feedback, the face picture is most likely to be a face picture when the tester looks at the sight line point included in the current target picture B presented at the next time point, and the accuracy of the sight line acquisition data is reduced. In order to avoid the above problem and ensure the accuracy of the sight line acquisition data as much as possible, the controller 120 needs to control the image acquisition device 150 to acquire a face image of the tester while displaying the current target image.
Step S120: and the controller caches the coordinates of the sight points corresponding to the face picture acquired by the image acquisition equipment and the current target picture in the display equipment in a one-to-one correspondence manner to form alternative picture information.
Regardless of the manner in which the current target picture is determined, the controller 120 may obtain the coordinates of the gaze point corresponding to the current target picture in the display device 130. Of course, it is worth noting that one vertex of the display interface of the display device 130 may be defined as the origin of coordinates to determine the coordinates of the gaze point within the display device.
The number of the image acquisition devices 150 may be one, and after the image acquisition devices 150 acquire the face picture, the coordinates of the sight points included in the face picture and the current target picture in the display device 130 may be cached in a one-to-one correspondence manner, so as to form alternative picture information. As time goes by, the image capturing device 150 may capture a plurality of face pictures, each of which corresponds to a different current target picture, and finally form a plurality of candidate picture information to be stored in the memory 110.
Optionally, in order to speed up the acquisition of the gaze acquisition data, the number of image acquisition devices 150 may also be multiple, and each image acquisition device 150 has different device coordinates with respect to the display device 130.
In this embodiment, for the same current target picture, when the controller 120 controls the image capturing device 150 to capture a face picture of the tester, a plurality of face pictures can be obtained according to the difference of the image capturing device 150. Similarly, when the candidate picture information is formed subsequently, for the coordinates of the same sight point in the display device 130, the controller 120 may obtain multiple sets of candidate picture information corresponding to the image capturing devices 150 one to one according to the differences of the image capturing devices 150.
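A minimal sketch of the alternative (candidate) picture information cache is given below, assuming one candidate list per image acquisition device and a timestamp recorded at caching time; the timestamp is used later when a gaze-point acquisition signal arrives. The field and function names are assumptions for illustration only.

```python
import time
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class CandidatePictureInfo:
    face_picture: bytes                     # frame captured by one image acquisition device
    gaze_point_coord: Tuple[float, float]   # gaze-point coordinates within the display device
    stored_at: float = field(default_factory=time.time)  # cache time, compared later

# One candidate list per image acquisition device, keyed by an assumed device identifier.
candidate_cache: Dict[str, List[CandidatePictureInfo]] = {}

def cache_candidate(device_id: str, face_picture: bytes,
                    gaze_point_coord: Tuple[float, float]) -> None:
    """Pair a face picture with the on-screen gaze-point coordinates, one-to-one."""
    candidate_cache.setdefault(device_id, []).append(
        CandidatePictureInfo(face_picture, gaze_point_coord)
    )
```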
Since the face image acquired by the image acquisition device 150 may not meet the requirement of forming the sight line data, optionally, a preset condition of the face orientation state may be stored in the memory 110 in advance. The preset condition of the face orientation state may be: in the face picture corresponding to the image acquisition device 150, the distance between the face and the display device 130 is within a preset distance range; or in the face picture corresponding to the image acquisition device 150, the orientation of the face is within a preset angle range; or in the face picture corresponding to the image acquisition device 150, human eyes are not shielded; any two or all three of the above three conditions may be satisfied simultaneously.
Before the controller 120 caches the coordinates of the sight points corresponding to the face picture acquired by one image acquisition device 150 and the current target picture in the display device in a one-to-one correspondence manner to form alternative picture information, the controller 120 may determine whether the face picture corresponding to the image acquisition device 150 meets the preset condition of the face orientation state; when the condition is met, the controller 120 caches the coordinates of the sight point in the display device 130 corresponding to the face picture acquired by the image acquisition device 150 and the current target picture in a one-to-one correspondence manner, so as to form alternative picture information. If not, the controller 120 may re-determine the next current target picture and re-capture the face picture of the tester via the image capture device 150.
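A sketch of that validity check might look as follows; the distance range, angle limit and the attributes of face_info are assumptions standing in for the preset face-orientation conditions, since the patent does not fix concrete thresholds.

```python
def face_picture_is_usable(face_info,
                           distance_range=(0.3, 0.8),  # metres, assumed range
                           max_yaw_deg=30.0):          # assumed orientation limit
    """Check the preset face-orientation conditions before caching candidate picture info.

    face_info is assumed to come from a separate face-analysis step and to expose
    distance_to_display, yaw_deg and eyes_occluded.
    """
    if not (distance_range[0] <= face_info.distance_to_display <= distance_range[1]):
        return False, "too far from or too close to the display, please adjust your distance"
    if abs(face_info.yaw_deg) > max_yaw_deg:
        return False, "please face the display"
    if face_info.eyes_occluded:
        return False, "eyes are occluded, please remove the obstruction"
    return True, ""
```

When the check fails, the returned message could serve as the acquisition error prompt described next, after which the controller would re-determine the next current target picture.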
Of course, optionally, the controller 120 may further control the display device 130 to issue a capture error prompt before the next current target picture is determined again, so that the tester can adjust his posture. The acquisition error prompt may be prompt content for correcting the posture of the tester displayed according to a specific situation that the preset condition of the face orientation state is not satisfied, for example: "too far away, please get close to the display device".
The following description will be made with respect to the controller 120 re-determining the next current target picture.
As an alternative embodiment, when the controller 120 re-determines the next current target picture, the next current target picture may be the same as the last current target picture, i.e. the controller 120 may directly re-display the last current target picture.
As another alternative, when the controller 120 re-determines the next current target picture, the next current target picture may be different from the last current target picture.
In this embodiment, if the target picture generation data includes a plurality of pictures, the controller 120 determines a picture as the next current target picture after excluding the current target picture from the plurality of pictures.
In this embodiment, if the target picture generation data includes a picture, a sight point identifier, and a random number generation function, the controller 120 may obtain the next target sight point coordinate generated by the random number generation function, so as to determine a target picture as the next current target picture.
In this embodiment, if the target picture generation data includes a picture, a sight point identifier, and a plurality of sight point coordinates, the controller 120 determines a new target sight point coordinate after excluding the target sight point coordinate corresponding to the current target picture from the plurality of sight point coordinates, thereby determining a target picture as the next current target picture.
In this embodiment, if the target picture generation data includes a picture, a sight point identifier, a random number generation function, and a plurality of sight point coordinates, the controller 120 may superimpose the random coordinates onto the target sight point coordinates corresponding to the current target picture after the random number generation function generates a random coordinate, so as to form a new target sight point coordinate, thereby determining a target picture as the next current target picture.
Step S130: and when the controller acquires a sight point acquisition signal sent by the induction equipment, determining target picture information from at least one piece of alternative picture information corresponding to the image acquisition equipment.
The sight-line point acquisition signal may be a signal generated when the sensing device 140 detects that the tester touches the sight-line point with a finger or a stylus, or may be a signal generated when the tester clicks the sight-line point with a mouse.
When the controller 120 acquires the sight point acquisition signal, determining a target picture information from at least one candidate picture information acquired by the image acquisition device 150 may include:
the controller 120 may obtain the generation time of the sight-line point acquisition signal, and then subtract a preset value from the generation time to obtain a comparison time. The preset value represents the time interval for triggering the gaze point to acquire the signal after the tester sees the gaze point, and can be set to 300 milliseconds generally. Then, the controller 120 selects one candidate picture information from at least one candidate picture information acquired by the image acquisition device 150 as the target picture information, wherein the storage time of the target picture information is closest to the comparison time.
Optionally, when the controller 120 detects that the mouse moves in the display device 130 or detects that the finger or the stylus slides on the sensing device 140, the controller 120 may further collect coordinates of an operation position of the mouse, the finger or the stylus in the display device 130, and store the coordinates as reference coordinates in the memory 110.
It is worth pointing out that, if the controller 120 detects that a finger or stylus is sliding on the sensing device 140, it may further detect how long the finger or stylus stays at the same position of the sensing device 140. If the stay time does not exceed a preset stay time, the operating position coordinate at that moment is saved as a reference coordinate; if the stay time exceeds the preset stay time, the finger or stylus is regarded as being long-pressed on the sensing device 140, and the sight point acquisition signal is triggered.
In this embodiment, when the controller 120 caches the coordinates of the sight point corresponding to the face picture acquired by the image acquisition device 150 and the current target picture in the display device 130 in a one-to-one correspondence manner to form the candidate picture information, the controller 120 may also cache the face picture acquired by the image acquisition device 150, the reference coordinates corresponding to the face picture acquired by the image acquisition device 150, and the coordinates of the sight point corresponding to the current target picture in the display device 130 in a one-to-one correspondence manner to form the candidate picture information.
It should be noted that a face picture may be captured at a time point for which no reference coordinate exists, because the stylus, mouse or the tester's finger was not sliding at that moment. In this case, the controller 120 may use, among the already stored reference coordinates, the one whose storage time is closest to the capture time of that face picture as the reference coordinate of the face picture. Accordingly, when the controller 120 determines a piece of target picture information from the at least one piece of candidate picture information corresponding to the image capturing device 150, it may select the candidate picture information whose reference coordinate is at a distance greater than a preset value from the coordinates of the sight point within the display device 130 (that is, the reference coordinate lies outside the sight point coordinate range) and whose storage time is closest to the comparison time. In this way, a picture can be found in which the tester has not yet moved the pointer but is about to move it to the sight point, which is when the eyes are most concentrated on the sight point.
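Under the assumption that each cached candidate also carries a reference coordinate as described above, the refined selection might look like the sketch below; the pixel threshold and the fallback to the plain time-based choice are illustrative assumptions.

```python
import math

REACTION_INTERVAL_S = 0.3     # assumed reaction interval, as in the previous sketch
MIN_POINTER_DISTANCE = 50.0   # assumed preset value, in display pixels

def pick_target_with_reference(candidates, signal_time):
    """Prefer candidates captured before the pointer reached the gaze point."""
    comparison_time = signal_time - REACTION_INTERVAL_S
    eligible = [
        c for c in candidates
        if math.dist(c.reference_coord, c.gaze_point_coord) > MIN_POINTER_DISTANCE
    ]
    pool = eligible or candidates   # fall back if no candidate qualifies (assumption)
    return min(pool, key=lambda c: abs(c.stored_at - comparison_time))
```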
Because the coordinate of the point in contact with the sensing device 140 is deviated from the coordinate of the sight point corresponding to the current target picture when the tester clicks the sight point by using a finger, a stylus, or a mouse to trigger generation of the sight point acquisition signal, optionally, before determining one target picture information from at least one candidate picture information corresponding to the image acquisition device, the controller 120 determines whether an error between the coordinate triggering the sight point acquisition signal and the coordinate of the sight point corresponding to the current target picture in the display device exceeds a preset error threshold; if not, the controller 120 determines a target picture information from at least one candidate picture information corresponding to the image capturing device 150; when yes, the controller 120 re-determines the next current target picture.
The case where the controller 120 re-determines the next current target picture here is the same as the case where the controller 120 re-determines the next current target picture described in step S120 above, and is not described here again to avoid repetition.
Step S140: and the controller stores the target picture information corresponding to the image acquisition equipment and the equipment coordinates into the memory to form sight line acquisition data.
Since the device coordinates of the image capturing device 150 with respect to the display device 130 are stored in the memory 110 in advance, after the target picture information is determined, the controller 120 stores the device coordinates of the image capturing device 150 corresponding to the target picture information in the memory 110, and forms the line-of-sight capturing data.
Wherein one line of sight acquisition data is uniquely determined by the device coordinates, the face picture and the coordinates of the line of sight point within the display device 130.
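For illustration, one stored record might be assembled as in the sketch below, combining the device coordinates with the selected target picture information; the record layout is an assumption, not a storage format defined by the patent.

```python
def store_gaze_acquisition_data(storage, device_coord, target_info):
    """Append one gaze acquisition record: device coordinates plus target picture information."""
    record = {
        "device_coord": device_coord,                      # camera position relative to the display
        "face_picture": target_info.face_picture,          # frame selected in the previous step
        "gaze_point_coord": target_info.gaze_point_coord,  # where the tester was looking on screen
    }
    storage.append(record)
    return record
```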
According to the sight line acquisition method provided by the embodiment of the invention, when the controller obtains a picture replacement instruction, a current target picture including a sight line point is determined based on pre-stored target picture generation data, the display device is controlled to display the current target picture, and the image acquisition device is controlled to acquire a face picture of the tester. The coordinates of the sight line point of the current target picture within the display device are then cached in one-to-one correspondence with the face picture acquired by the image acquisition device, to form alternative picture information. Subsequently, when the controller acquires a sight line point acquisition signal sent by the sensing device, one piece of target picture information is determined from the at least one piece of alternative picture information corresponding to the image acquisition device, and the target picture information and the device coordinates corresponding to the image acquisition device are stored in the memory to form sight line acquisition data. Because the face picture used is one captured before the moment the sight line point is clicked, the acquired sight line acquisition data is as correct as possible, which improves the accuracy of the sight line data samples. In addition, the whole process can be completed by a single tester, which avoids the situations in which a director's instructions and the tester's actions are inconsistent, the tester ignores the director's instructions, or the director has to supervise the tester.
Referring to fig. 4, in response to the view line acquiring method provided in fig. 2, an embodiment of the invention further provides a view line acquiring apparatus 400 applied to the controller 120 in the view line acquiring system 100, where the view line acquiring apparatus 400 may include: a selection display module 410, a caching module 420, a determination module 430, and a saving module 440.
The selection display module 410 is used for determining a current target picture comprising a sight point based on target picture generation data when a picture replacement instruction is obtained, controlling the display device to display the current target picture, and controlling the image acquisition device to acquire a face picture of a tester;
a cache module 420, configured to cache coordinates of a sight point in the display device, where the sight point corresponds to the face picture acquired by the image acquisition device and the current target picture, in a one-to-one correspondence manner, so as to form alternative picture information;
a determining module 430, configured to determine, when a gaze point acquisition signal sent by the sensing device is acquired, target picture information from at least one piece of candidate picture information corresponding to the image acquisition device, where the gaze point acquisition signal is a signal generated by the sensing device when it is detected that the tester clicks the gaze point;
and a saving module 440, configured to save the target picture information and the device coordinates corresponding to the image capturing device to the memory to form the gaze capturing data.
Optionally, the target picture generation data includes a picture, a sight point identifier, and a plurality of sight point coordinates, and the selection display module 410 is configured to determine a target sight point coordinate from the plurality of sight point coordinates when the picture replacement instruction is obtained; and displaying the sight point identification at the position of the target sight point coordinate on the picture to obtain the current target picture.
Optionally, the plurality of sight point coordinates are generated by a function stored in the memory, or generated by a function stored outside the memory and then sent to the memory.
Optionally, the target picture generation data includes a picture, a sight point identifier, and a random number generation function, and the selection display module 410 is configured to control the random number generation function to randomly generate a target sight point coordinate when the picture replacement instruction is acquired; and displaying the sight point identification at the position of the target sight point coordinate on the picture to obtain the current target picture.
Optionally, the target picture generation data further includes a plurality of sight point coordinates, and the selection display module 410 is configured to control the random number generation function to randomly generate one coordinate; and superposing the coordinate to one of the plurality of sight point coordinates to generate a target sight point coordinate.
Optionally, the target picture generation data includes a plurality of pictures, each of the pictures includes a preset sight point, and the selection display module 410 is configured to determine a target picture from the plurality of pictures as the current target picture when the picture change instruction is acquired.
Optionally, the selection display module 410 is configured to randomly select one picture from the multiple target pictures to determine that the picture is the current target picture when the picture replacement instruction is obtained.
Optionally, the multiple target pictures are stored in the memory in advance according to a time sequence, and the selection display module 410 is configured to, when the picture replacement instruction is obtained, sequentially select one picture from the multiple target pictures according to the sequence of the storage times of the multiple target pictures and determine the selected picture as the current target picture.
Optionally, the image replacement instruction is automatically generated at preset time intervals or generated by the device when the sight point acquisition signal is acquired.
Optionally, the memory stores the preset condition of the face orientation state in advance, and the apparatus further includes a judging module and a re-determining module. The judging module is configured to judge whether the face picture corresponding to the image acquisition device meets the preset condition of the face orientation state. If so, the caching module 420 caches, in one-to-one correspondence, the coordinates of the sight point within the display device corresponding to the face picture acquired by the image acquisition device and the current target picture, so as to form alternative picture information; otherwise, the re-determining module is configured to re-determine the next current target picture.
Optionally, the re-determining module may be configured to re-determine the next current target picture after the display device issues the acquisition error prompt.
Optionally, the preset condition of the face orientation state may be: in the face picture corresponding to the image acquisition equipment, the distance between the face and the display equipment is within a preset distance range, and/or in the face picture corresponding to the image acquisition equipment, the orientation of the face is within a preset angle range, and/or the eyes are not shielded.
The judging module is further configured to judge whether an error between the coordinate triggering the sight point acquisition signal and the coordinate of the sight point corresponding to the current target picture in the display device exceeds a preset error threshold. If the determination result is negative, the determining module 430 determines a target picture information from at least one candidate picture information corresponding to the image capturing device; and when the judgment result is yes, the re-determining module is used for re-determining the next current target picture.
Optionally, the next current target picture and the last current target picture may be the same or different.
Optionally, the determining module 430 is configured to acquire, by the controller, a generation time of the gaze point acquisition signal; the controller subtracts a preset value from the generation time to obtain comparison time; and the controller selects one piece of alternative picture information from at least one piece of alternative picture information acquired by the image acquisition equipment as the target picture information, wherein the storage time of the target picture information is closest to the comparison time.
Optionally, the sight line point acquisition signal is a signal generated when a finger or a touch pen touches the sight line point or a signal generated when a mouse clicks the sight line point.
Optionally, when the controller 120 detects that the mouse moves in the display device 130 or detects that the finger or the stylus slides on the sensing device 140, the controller 120 may further collect coordinates of an operation position of the mouse, the finger or the stylus in the display device 130, and store the coordinates as reference coordinates in the memory 110.
It is worth pointing out that, if the controller 120 detects that a finger or stylus is sliding on the sensing device 140, it may further detect how long the finger or stylus stays at the same position of the sensing device 140. If the stay time does not exceed a preset stay time, the operating position coordinate at that moment is saved as a reference coordinate; if the stay time exceeds the preset stay time, the finger or stylus is regarded as being long-pressed on the sensing device 140, and the sight point acquisition signal is triggered.
In this embodiment, when the controller 120 caches the coordinates of the sight point corresponding to the face picture acquired by the image acquisition device 150 and the current target picture in the display device 130 in a one-to-one correspondence manner to form the candidate picture information, the controller 120 may also cache the face picture acquired by the image acquisition device 150, the reference coordinates corresponding to the face picture acquired by the image acquisition device 150, and the coordinates of the sight point corresponding to the current target picture in the display device 130 in a one-to-one correspondence manner to form the candidate picture information.
Accordingly, when a piece of target picture information is determined from the at least one piece of candidate picture information corresponding to the image capturing device 150, the controller 120 may select, as the target picture information, the candidate picture information whose reference coordinate is at a distance greater than a preset value from the coordinates of the sight point within the display device 130 (that is, the reference coordinate lies outside the sight point coordinate range) and whose storage time is closest to the comparison time. In this way, a picture can be found in which the tester has not yet moved the pointer but is about to move it to the sight point, which is when the eyes are most concentrated on the sight point.
The apparatus provided in this embodiment has the same implementation principle and technical effects as the foregoing method embodiments; for brevity, where this apparatus embodiment does not mention a detail, reference may be made to the corresponding content of the foregoing method embodiments described with reference to fig. 1 to 3.
In addition, an embodiment of the present invention further provides a sight line acquisition system, including a controller, and a display device, a sensing device, a memory and an image acquisition device coupled to the controller, where device coordinates of the image acquisition device relative to the display device and target picture generation data are pre-stored in the memory, and the memory further stores a computer program which, when executed by the controller, causes the sight line acquisition system to perform the sight line acquisition method provided by any of the foregoing embodiments. The structure of the sight line acquisition system can be as shown in fig. 1.
For the specific implementation process of the sight line acquisition system, reference is made to the foregoing embodiments, and details are not repeated here.
In addition, an embodiment of the present invention further provides a computer-readable storage medium, in which a computer program is stored, and when the computer program runs on a computer, the computer is caused to execute the gaze acquisition method provided in any one of the embodiments of the present invention.
In addition, an embodiment of the present invention further provides a computer program, where the computer program may be stored in a cloud or a local storage medium, and when the computer program runs on a computer, the computer is enabled to execute the gaze acquisition method provided in any embodiment of the present invention.
In summary, according to the sight line acquisition method, apparatus, system and computer-readable storage medium provided by the embodiments of the present invention, when the controller acquires a picture replacement instruction, a current target picture including a sight line point is determined based on pre-stored target picture generation data, the display device is controlled to display the current target picture, and the image acquisition device is controlled to acquire a face picture of the tester. The controller then caches, in one-to-one correspondence, the face picture acquired by the image acquisition device and the coordinates of the sight line point of the current target picture within the display device, to form alternative picture information. Subsequently, when the controller acquires a sight line point acquisition signal sent by the sensing device, it determines one piece of target picture information from the at least one piece of alternative picture information corresponding to the image acquisition device, and stores the target picture information and the device coordinates corresponding to the image acquisition device in the memory, to form sight line acquisition data. Because the face picture used is one captured before the moment the sight line point is clicked, the acquired sight line acquisition data is as correct as possible, which improves the accuracy of the sight line data samples. In addition, the whole process can be completed by a single tester, which avoids the situations in which a director's instructions and the tester's actions are inconsistent, the tester ignores the director's instructions, or the director has to supervise the tester.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only of specific embodiments of the present invention, but the scope of the present invention is not limited thereto. Any person skilled in the art could readily conceive of changes or substitutions within the technical scope of the present invention, and all such changes or substitutions shall be covered by the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (15)

1. A sight line acquisition method, applied to a controller in a sight line acquisition system, wherein the sight line acquisition system further comprises a display device, a sensing device, a memory and an image acquisition device that are coupled with the controller, and device coordinates of the image acquisition device relative to the display device and target picture generation data are pre-stored in the memory, the method comprising:
when a picture replacement instruction is obtained, determining a current target picture including a sight point based on the target picture generation data, controlling the display device to display the current target picture, and controlling the image acquisition device to acquire a face picture of a tester;
caching, in one-to-one correspondence, the face picture acquired by the image acquisition device and the coordinates, in the display device, of the sight point of the current target picture, to form alternative picture information;
when a sight point acquisition signal sent by the sensing device is acquired, determining target picture information from at least one piece of alternative picture information corresponding to the image acquisition device, wherein the sight point acquisition signal is a signal generated when the sensing device detects that the tester clicks the sight point;
and storing the target picture information corresponding to the image acquisition device and the device coordinates into the memory to form sight line acquisition data.
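(Illustrative sketch, not part of the claims.) The event handling recited in claim 1 could be organized roughly as follows; the injected `display`, `camera` and `memory` objects, all method names, and the choice of the most recent cached entry are assumptions made for the sketch (claim 10 describes a more specific selection rule).

```python
import time
from typing import List, Tuple

class SightAcquisitionController:
    """Hypothetical controller skeleton mirroring the steps of claim 1."""

    def __init__(self, display, camera, memory, device_xy):
        self.display = display      # display device
        self.camera = camera        # image acquisition device
        self.memory = memory        # persistent store, e.g. a list or database
        self.device_xy = device_xy  # device coordinates relative to the display
        # cache of (face picture, sight point display coordinates, cache time)
        self.cache: List[Tuple[bytes, Tuple[float, float], float]] = []

    def on_picture_replacement_instruction(self, picture, sight_point_xy):
        # Display the current target picture, capture a face picture, and cache
        # the pair together with the sight point's display coordinates.
        self.display.show(picture)
        face = self.camera.capture()
        self.cache.append((face, sight_point_xy, time.time()))

    def on_sight_point_acquisition_signal(self):
        # Determine one target picture information entry from the cache and
        # store it with the device coordinates to form sight line acquisition data.
        if not self.cache:
            return
        face, sight_point_xy, _ = self.cache[-1]   # simplest choice: newest entry
        self.memory.append({"face_picture": face,
                            "sight_point_xy": sight_point_xy,
                            "device_xy": self.device_xy})
```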
2. The method of claim 1, wherein the target picture generation data includes a picture, a sight point identifier, and a plurality of sight point coordinates, and wherein determining a current target picture including a sight point based on the target picture generation data when the picture replacement instruction is obtained comprises:
when the picture replacement instruction is obtained, determining a target sight point coordinate from the plurality of sight point coordinates;
and displaying the sight point identifier at the position of the target sight point coordinate on the picture to obtain the current target picture.
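(Illustrative sketch, not part of the claims.) A straightforward reading of claim 2 might look like the Python below; `draw_marker` stands in for whatever routine renders the sight point identifier, and choosing the coordinate at random is only one possible way of determining it.

```python
import random
from typing import Callable, Sequence, Tuple

Coord = Tuple[float, float]

def make_current_target_picture(picture,
                                sight_point_coords: Sequence[Coord],
                                draw_marker: Callable[[object, Coord], object]):
    # Determine a target sight point coordinate from the pre-stored list, then
    # display the sight point identifier at that position on the picture.
    target_xy = random.choice(list(sight_point_coords))
    current_target_picture = draw_marker(picture, target_xy)
    return current_target_picture, target_xy
```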
3. The method according to claim 2, wherein the plurality of sight point coordinates are generated by a function stored in the memory, or are generated by a function stored outside the memory and then transmitted to the memory.
4. The method of claim 1, wherein the target picture generation data includes a picture, a sight point identifier, and a random number generation function, and wherein determining a current target picture including a sight point based on the target picture generation data when the picture replacement instruction is obtained comprises:
when the picture replacement instruction is obtained, controlling the random number generation function to randomly generate a target sight point coordinate;
and displaying the sight point identifier at the position of the target sight point coordinate on the picture to obtain the current target picture.
5. The method of claim 4, wherein the target picture generation data further includes a plurality of sight point coordinates, and wherein controlling the random number generation function to randomly generate a target sight point coordinate comprises:
controlling the random number generation function to randomly generate a coordinate;
and superposing the randomly generated coordinate on one of the plurality of sight point coordinates to generate the target sight point coordinate.
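(Illustrative sketch, not part of the claims.) Claims 4 and 5 could be realized roughly as below; the uniform distribution and the ±20-pixel jitter range are assumptions introduced for the example.

```python
import random
from typing import Sequence, Tuple

def random_target_coordinate(base_coords: Sequence[Tuple[float, float]],
                             max_jitter: float = 20.0) -> Tuple[float, float]:
    # Randomly generate a coordinate (here: a small offset) and superpose it on
    # one of the pre-stored sight point coordinates.
    base_x, base_y = random.choice(list(base_coords))
    dx = random.uniform(-max_jitter, max_jitter)
    dy = random.uniform(-max_jitter, max_jitter)
    return base_x + dx, base_y + dy
```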
6. The method according to claim 1, wherein the target picture generation data includes a plurality of pictures, each of the pictures includes a preset sight point, and determining a current target picture including a sight point based on the target picture generation data when the picture replacement instruction is obtained comprises:
when the picture replacement instruction is obtained, randomly selecting one picture from the plurality of pictures as the current target picture; or
when the picture replacement instruction is obtained, sequentially selecting one picture from the plurality of pictures as the current target picture according to the order of the storage times of the plurality of pictures, wherein the plurality of pictures are stored in the memory in advance in chronological order.
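(Illustrative sketch, not part of the claims.) The two alternatives of claim 6 map naturally onto two picker functions; whether the sequential variant wraps around after the last stored picture is an assumption of this sketch.

```python
import random
from itertools import cycle
from typing import Callable, Sequence

def random_picker(pictures: Sequence) -> Callable[[], object]:
    # First alternative: select any pre-stored picture at random.
    return lambda: random.choice(list(pictures))

def sequential_picker(pictures_in_storage_order: Sequence) -> Callable[[], object]:
    # Second alternative: step through the pictures in storage-time order
    # (cycling back to the first picture once the list is exhausted).
    it = cycle(pictures_in_storage_order)
    return lambda: next(it)
```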
7. The method according to any one of claims 1 to 6, wherein the picture replacement instruction is automatically generated by the controller at preset time intervals, or is generated by the controller when the sight point acquisition signal is acquired.
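(Illustrative sketch, not part of the claims.) The timer-driven variant of claim 7 could be wired up as follows; the 3-second interval, the callback name and the use of a background thread are all assumptions.

```python
import threading

def start_periodic_replacement(emit_replacement, interval_s: float = 3.0):
    # Emit a picture replacement instruction at a preset time interval until the
    # returned event is set. The signal-driven variant of claim 7 simply calls
    # emit_replacement() whenever a sight point acquisition signal arrives.
    stop_event = threading.Event()

    def loop():
        while not stop_event.wait(interval_s):
            emit_replacement()

    threading.Thread(target=loop, daemon=True).start()
    return stop_event
```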
8. The method according to any one of claims 1 to 6, wherein a preset condition on a face orientation state is pre-stored in the memory, and before the face picture acquired by the image acquisition device and the coordinates, in the display device, of the sight point of the current target picture are cached in one-to-one correspondence to form the alternative picture information, the method further comprises:
judging whether the face picture corresponding to the image acquisition device meets the preset condition on the face orientation state;
if so, caching, in one-to-one correspondence, the face picture acquired by the image acquisition device and the coordinates, in the display device, of the sight point of the current target picture to form the alternative picture information;
if not, re-determining a next current target picture.
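(Illustrative sketch, not part of the claims.) One way to express the precondition of claim 8; `estimate_yaw_pitch` stands for an unspecified head-pose estimator, and the angle thresholds are purely illustrative.

```python
from typing import Callable, Tuple

def face_orientation_ok(face_picture,
                        estimate_yaw_pitch: Callable[[object], Tuple[float, float]],
                        max_yaw_deg: float = 30.0,
                        max_pitch_deg: float = 20.0) -> bool:
    # Cache the face picture only if its orientation meets the preset condition;
    # otherwise the controller re-determines the next current target picture.
    yaw, pitch = estimate_yaw_pitch(face_picture)
    return abs(yaw) <= max_yaw_deg and abs(pitch) <= max_pitch_deg
```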
9. The method according to any one of claims 1-6, wherein before determining target picture information from at least one piece of alternative picture information corresponding to the image acquisition device, the method further comprises:
judging whether the error between the coordinates triggering the sight point acquisition signal and the coordinates, in the display device, of the sight point of the current target picture exceeds a preset error threshold;
if not, determining the target picture information from the at least one piece of alternative picture information corresponding to the image acquisition device;
if so, re-determining a next current target picture.
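(Illustrative sketch, not part of the claims.) The error check of claim 9 reduces to a distance comparison; the Euclidean metric and the 50-pixel threshold are assumptions of the sketch.

```python
import math
from typing import Tuple

def click_within_threshold(click_xy: Tuple[float, float],
                           sight_point_xy: Tuple[float, float],
                           max_error_px: float = 50.0) -> bool:
    # Accept the sample only if the coordinates that triggered the sight point
    # acquisition signal are close enough to the displayed sight point.
    return math.dist(click_xy, sight_point_xy) <= max_error_px
```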
10. The method according to any one of claims 1-6, wherein determining target picture information from at least one piece of alternative picture information corresponding to the image acquisition device comprises:
acquiring the generation time of the sight point acquisition signal;
subtracting a preset value from the generation time to obtain a comparison time;
and selecting, from the at least one piece of alternative picture information corresponding to the image acquisition device, the piece of alternative picture information whose storage time is closest to the comparison time as the target picture information.
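(Illustrative sketch, not part of the claims.) Claim 10's selection rule in Python; the 0.2-second preset value (a rough reaction-time allowance) and the cache layout are assumptions.

```python
from typing import List, Tuple

def select_target_picture_info(cache: List[Tuple[object, float]],
                               signal_time: float,
                               preset_offset_s: float = 0.2):
    # Subtract the preset value from the signal's generation time, then pick the
    # cached entry whose storage time is closest to that comparison time.
    compare_time = signal_time - preset_offset_s
    info, _ = min(cache, key=lambda entry: abs(entry[1] - compare_time))
    return info
```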
11. The method according to claim 10, wherein the sight point acquisition signal is a signal generated by a finger or a stylus touching the sight point, or a signal generated by a mouse clicking the sight point.
12. The method of claim 11, wherein, when the controller detects that the mouse moves within the display device, or detects that the finger or the stylus slides on the sensing device, the controller collects the coordinates of the operation position of the mouse, the finger or the stylus within the display device and saves the coordinates in the memory as reference coordinates; the alternative picture information further includes the reference coordinates corresponding to the moment the face picture is collected; and the distance between the reference coordinates corresponding to the target picture information and the coordinates, in the display device, of the sight point included in the target picture information is greater than a preset value.
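(Illustrative sketch, not part of the claims.) The reference-coordinate condition of claim 12 is again a distance test; the 100-pixel preset value is an assumption.

```python
import math
from typing import Tuple

def reference_far_from_sight_point(reference_xy: Tuple[float, float],
                                   sight_point_xy: Tuple[float, float],
                                   preset_distance_px: float = 100.0) -> bool:
    # A cached entry qualifies as target picture information only if the pointer
    # position recorded when its face picture was captured lies farther from the
    # sight point's display coordinates than the preset value.
    return math.dist(reference_xy, sight_point_xy) > preset_distance_px
```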
13. A sight line acquisition device, applied to a controller in a sight line acquisition system, wherein the sight line acquisition system further comprises a display device, a sensing device, a memory and an image acquisition device that are coupled with the controller, and device coordinates of the image acquisition device relative to the display device and target picture generation data are pre-stored in the memory, the sight line acquisition device comprising:
a selection and display module, configured to, when a picture replacement instruction is obtained, determine a current target picture including a sight point based on the target picture generation data, control the display device to display the current target picture, and control the image acquisition device to acquire a face picture of a tester;
a caching module, configured to cache, in one-to-one correspondence, the face picture acquired by the image acquisition device and the coordinates, in the display device, of the sight point of the current target picture, to form alternative picture information;
a determining module, configured to, when a sight point acquisition signal sent by the sensing device is acquired, determine target picture information from at least one piece of alternative picture information corresponding to the image acquisition device, wherein the sight point acquisition signal is a signal generated when the sensing device detects that the tester clicks the sight point;
and a storage module, configured to store the target picture information corresponding to the image acquisition device and the device coordinates into the memory to form sight line acquisition data.
14. A sight line acquisition system, comprising: a controller, and a display device, a sensing device, a memory and an image acquisition device coupled to the controller, wherein device coordinates of the image acquisition device relative to the display device and target picture generation data are pre-stored in the memory, and the memory further stores a computer program that, when executed by the controller, causes the sight line acquisition system to perform the method of any one of claims 1-12.
15. A computer-readable storage medium, in which a computer program is stored which, when run on a computer, causes the computer to carry out the method according to any one of claims 1-12.
CN201811166331.7A 2018-09-30 2018-09-30 Sight line acquisition method, device, system and computer readable storage medium Active CN109389082B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811166331.7A CN109389082B (en) 2018-09-30 2018-09-30 Sight line acquisition method, device, system and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811166331.7A CN109389082B (en) 2018-09-30 2018-09-30 Sight line acquisition method, device, system and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN109389082A CN109389082A (en) 2019-02-26
CN109389082B true CN109389082B (en) 2021-05-04

Family

ID=65426586

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811166331.7A Active CN109389082B (en) 2018-09-30 2018-09-30 Sight line acquisition method, device, system and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN109389082B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112363626B (en) * 2020-11-25 2021-10-01 广东魅视科技股份有限公司 Large screen interaction control method based on human body posture and gesture posture visual recognition
CN112633165A (en) * 2020-12-23 2021-04-09 远光软件股份有限公司 Vehicle compartment-based sampling supervision method and system, storage medium and electronic equipment
CN114706484A (en) * 2022-04-18 2022-07-05 Oppo广东移动通信有限公司 Sight line coordinate determination method and device, computer readable medium and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6899271B2 (en) * 2003-05-05 2005-05-31 Symbol Technologies, Inc. Arrangement for and method of collecting and displaying information in real time along a line of sight
CN108200340A (en) * 2018-01-12 2018-06-22 深圳奥比中光科技有限公司 The camera arrangement and photographic method of eye sight line can be detected
CN108427503B (en) * 2018-03-26 2021-03-16 京东方科技集团股份有限公司 Human eye tracking method and human eye tracking device
CN108447159B (en) * 2018-03-28 2020-12-18 百度在线网络技术(北京)有限公司 Face image acquisition method and device and entrance and exit management system

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1623498A (en) * 2004-12-14 2005-06-08 江苏科技大学 Vision detecting device based on computer media technology and its detecting method
CN101419672A (en) * 2008-12-03 2009-04-29 中国科学院计算技术研究所 Device and method for synchronistically acquiring human face image and gazing angle
CN202615668U (en) * 2012-03-07 2012-12-19 安徽工程大学 Teaching demonstration device
CN103870058A (en) * 2012-12-17 2014-06-18 Lg伊诺特有限公司 Method of designing random pattern, apparatus for designing random pattern, and optical substrate including random pattern according to the same method
CN104966070A (en) * 2015-06-30 2015-10-07 北京汉王智远科技有限公司 Face recognition based living body detection method and apparatus
CN108171152A (en) * 2017-12-26 2018-06-15 深圳大学 Deep learning human eye sight estimation method, equipment, system and readable storage medium storing program for executing
CN108519824A (en) * 2018-04-13 2018-09-11 京东方科技集团股份有限公司 A kind of virtual reality display device, equipment and sight angle computational methods

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Determine Your Line of Sight; Jesse Sostrin; The Manager's Dilemma; 20151231; pp. 55-67 *
Design of a head-mounted gaze tracking control system for moving objects; Qian Chenghui et al.; Microcontrollers & Embedded Systems; 20170831 (No. 8); pp. 56-60 *

Also Published As

Publication number Publication date
CN109389082A (en) 2019-02-26

Similar Documents

Publication Publication Date Title
US10585473B2 (en) Visual gestures
US9727135B2 (en) Gaze calibration
US9001006B2 (en) Optical-see-through head mounted display system and interactive operation
CN109389082B (en) Sight line acquisition method, device, system and computer readable storage medium
JP2017054201A5 (en)
WO2014106219A1 (en) User centric interface for interaction with visual display that recognizes user intentions
KR20160108388A (en) Eye gaze detection with multiple light sources and sensors
US9916043B2 (en) Information processing apparatus for recognizing user operation based on an image
EP3021206B1 (en) Method and device for refocusing multiple depth intervals, and electronic device
WO2015133889A1 (en) Method and apparatus to combine ocular control with motion control for human computer interaction
KR101631011B1 (en) Gesture recognition apparatus and control method of gesture recognition apparatus
US20160048214A1 (en) Using distance between objects in touchless gestural interfaces
JP6604271B2 (en) Gaze position detection device, gaze position detection method, and computer program for gaze position detection
JP2016514865A (en) Real-world analysis visualization
US9400575B1 (en) Finger detection for element selection
JP2012238293A (en) Input device
KR20160061699A (en) Electronic device and method for controlling dispaying
US20150153834A1 (en) Motion input apparatus and motion input method
US10185399B2 (en) Image processing apparatus, non-transitory computer-readable recording medium, and image processing method
US20150185851A1 (en) Device Interaction with Self-Referential Gestures
JP2017219942A (en) Contact detection device, projector device, electronic blackboard system, digital signage device, projector device, contact detection method, program and recording medium
JP2018109899A (en) Information processing apparatus, operation detecting method, and computer program
US10416814B2 (en) Information processing apparatus to display an image on a flat surface, method of controlling the same, and storage medium
KR102325684B1 (en) Eye tracking input apparatus thar is attached to head and input method using this
CN109344757B (en) Data acquisition method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230331

Address after: 1201, China Vision Valley Building, 88 Ruixiang Road, Guandou Street, Jiujiang District, Wuhu City, Anhui Province, 241005

Patentee after: Wuhu Maichi Zhixing Technology Co.,Ltd.

Address before: 313, block a, No.2, south academy of Sciences Road, Haidian District, Beijing

Patentee before: BEIJING KUANGSHI TECHNOLOGY Co.,Ltd.