WO2019128676A1 - Method and apparatus for filtering light spots - Google Patents

Method and apparatus for filtering light spots

Info

Publication number
WO2019128676A1
Authority
WO
WIPO (PCT)
Prior art keywords
spot
type
image
vector
light spot
Prior art date
Application number
PCT/CN2018/119880
Other languages
English (en)
French (fr)
Inventor
刘伟
任冬淳
宫小虎
杨孟
聂凤梅
Original Assignee
北京七鑫易维信息技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京七鑫易维信息技术有限公司 filed Critical 北京七鑫易维信息技术有限公司
Priority to US16/347,821 priority Critical patent/US11250588B2/en
Publication of WO2019128676A1 publication Critical patent/WO2019128676A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/19 Sensors therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30168 Image quality inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Definitions

  • the present application relates to the field of gaze tracking, and in particular to a method and apparatus for filtering a spot.
  • VR (Virtual Reality)
  • based on a 3D approximate-sphere model of the eyeball, the VR device can estimate the line of sight to the fixation point according to the pupil-center coordinates and the corneal reflections. If the VR device uses multiple cameras and multiple light sources, only a single-point calibration is required to estimate the line of sight. However, because the relative positions of the light sources and the cameras differ, a camera at some positions may fail to capture an image, or the captured image may be poor, so that interference spots or stray spots cannot be filtered out effectively, and the spots cannot be matched exactly to the light sources.
  • the embodiments of the present application provide a method and apparatus for filtering light spots, to solve at least the technical problem that the prior art cannot accurately filter out stray spots.
  • a method of filtering a light spot is provided, comprising: acquiring a first image and a reference image matching the first image, wherein a first type of spot is displayed on the first image; determining a second type of spot of the first image according to the reference image, wherein the second type of spot is a spot obtained by estimating the spots on the first image based on the reference image; determining a matching result of the first type of spot and the second type of spot according to a first position of the first type of spot and a second position of the second type of spot; and filtering the first type of spot according to the matching result.
  • an apparatus for filtering a light spot is provided, comprising: an obtaining module, configured to acquire a first image and a reference image matching the first image, wherein the first type of spot is displayed on the first image; a first determining module, configured to determine a second type of spot of the first image according to the reference image, wherein the second type of spot is a spot obtained by estimating the spots on the first image based on the reference image; a second determining module, configured to determine a matching result of the first type of spot and the second type of spot according to the first position of the first type of spot and the second position of the second type of spot; and a filtering module, configured to filter the first type of spot according to the matching result.
  • a storage medium including a stored program, wherein the program performs a method of filtering a spot.
  • a processor configured to execute a program, wherein a method of filtering a spot is executed while the program is running.
  • a gaze tracking device including a device for filtering a light spot.
  • in the embodiments of the present application, the first image and a reference image matching the first image are acquired by means of map matching, the second type of light spot of the first image is determined according to the reference image, a matching result of the first type of spot and the second type of spot is then determined according to the first position of the first type of spot and the second position of the second type of spot, and finally the first type of spot is filtered according to the matching result, wherein the first type of spot is displayed on the first image and the second type of spot is a spot obtained by estimating the spots on the first image based on the reference image. This achieves the purpose of accurately filtering out the stray spots, thereby realizing the technical effect of accurately matching the light sources and the spots and solving the technical problem that the prior art cannot accurately filter out the stray spots.
  • FIG. 1 is a flow chart of a method of filtering a spot according to an embodiment of the present application
  • FIG. 2 is a schematic structural diagram of an optional gaze tracking device according to an embodiment of the present application.
  • FIG. 3 is a flow chart of an alternative method of filtering a spot according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of matching of an optional first type of spot and a second type of spot according to an embodiment of the present application
  • FIG. 5 is a schematic structural diagram of an apparatus for filtering a light spot according to an embodiment of the present application.
  • an embodiment of a method of filtering a spot is provided. It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system, such as with a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one described herein.
  • FIG. 1 is a flow chart of a method for filtering a light spot according to an embodiment of the present application. As shown in FIG. 1, the method includes the following steps:
  • Step S102 acquiring a first image and a reference image matching the first image, wherein the first type of light spot is displayed on the first image.
  • the first image is an image of a human eye collected by the camera, and a first type of light spot is present on the image. The first type of light spot includes not only the spots corresponding to the light sources, that is, the light projected onto the camera after the light sources are reflected by the cornea, but also stray spots.
  • the above-mentioned stray spot is an abnormal spot and needs to be filtered out.
  • the reference image refers to an image that matches the first image, wherein the reference image also includes an image of the human eye and a spot corresponding to the light source. There is no stray spot on the reference image.
  • the executing body of the method in all embodiments of the present application is a gaze tracking device, which may be, but is not limited to, a virtual reality device or a smart terminal capable of gaze tracking, for example a mobile phone, a computer, or a wearable device.
  • FIG. 2 shows a schematic structural view of an optional gaze tracking device.
  • a is an eyeball
  • b is a corneal surface
  • c is a corneal curvature center
  • d is an eyeball rotation center
  • p is a pupil center.
  • r is the pupil radius
  • O1 is the camera
  • I1 and I2 are the two light sources
  • u21 and u11 are the imaging points of the light sources after reflection by the cornea.
  • the light source is reflected by the cornea and reaches the imaging point of the camera, that is, the light spot.
  • the gaze tracking device can obtain the first image by acquiring the image captured by the camera, and the reference image can be extracted from a plurality of images acquired from different initial positions and different viewing angles, trained by the bilinear-model method described in "Separating style and content with bilinear models"; the reference image is the image that best matches the first image.
  • Step S104 determining a second type of spot of the first image according to the reference image, wherein the second type of spot is a spot obtained by estimating the spot on the first image based on the reference image.
  • the second type of spot in the first image can be roughly determined based on the reference image.
  • the second type of spot of the first image represents a spot corresponding to the light source.
  • the stray spot in the first type of spot can be filtered according to the second type of spot.
  • the relative position of the spots and the pupil in the first image can be determined from the relative position of the spots and the pupil in the reference image. Since the position of the pupil can be obtained accurately by standard image-processing algorithms, the position of the pupil in the first image can be matched with the position of the pupil in the reference image, and the position of the second type of spot in the first image can then be roughly determined from the spots in the reference image.
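  • The position transfer described above can be sketched as follows; this is a minimal illustration, and the function and variable names are assumptions rather than identifiers from the application:

```python
import numpy as np

def estimate_second_type_spots(ref_spots, ref_pupil, first_pupil):
    """Transfer the spot-to-pupil offsets of the reference image onto
    the pupil centre detected in the first image, giving rough positions
    for the second type of spots."""
    offsets = np.asarray(ref_spots, float) - np.asarray(ref_pupil, float)
    return np.asarray(first_pupil, float) + offsets

# Worked numbers from the text: spot-to-pupil offset (30, 40),
# pupil of the first image at (500, 500) -> spot at (530, 540).
spots = estimate_second_type_spots([[130, 140]], [100, 100], [500, 500])
```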
  • Step S106 determining a matching result of the first type of spot and the second type of spot according to the first position of the first type of spot and the second position of the second type of spot.
  • the distance between the position of each spot of the first type and the position of each spot of the second type may be used to determine a distance weight value between the two; after the distance weight value between the first position of each first-type spot and the second position of each second-type spot is obtained, a matching algorithm is used to complete the matching of the first type of spot with the second type of spot.
  • the KM algorithm (Kuhn-Munkres algorithm), that is, the optimal bipartite-graph matching algorithm introduced in the book "Matching Theory", can be used to match the first type of spot with the second type of spot.
  • Step S108 filtering the first type of spot according to the matching result.
  • each spot in the first type of spot forms vectors with the other first-type spots, and its corresponding second-type spot forms vectors with the other second-type spots; whether a spot in the first type of spot is an abnormal spot is determined according to the angles between the corresponding vectors.
  • the first type of spot includes five spots, A, B, C, D, and E.
  • the spots in the first type of spot that correspond to second-type spots are A, C, and E, and the corresponding second-type spots are A', C', and E'.
  • A and C form the vector AC, and A and E form the vector AE; A' and C' form the vector A'C', and A' and E' form the vector A'E'. The angle between AC and A'C' is α, and the angle between AE and A'E' is β.
  • the number of abnormal vector angles of spot A can thus be determined, and whether spot A is an abnormal spot is then determined according to that number; if A is an abnormal spot, spot A is filtered out. Specifically, if α is greater than a preset vector angle, the vector AC is determined to be an abnormal vector; likewise, if β is greater than the preset vector angle, the vector AE is determined to be an abnormal vector.
  • in this embodiment, the first image and the reference image are acquired, the second type of light spot of the first image is determined according to the reference image, the distance weight values between the first type of spot and the second type of spot are then determined according to the first position of the first type of spot and the second position of the second type of spot, the two types of spot are matched according to the distance weight values to obtain a matching result, and finally the first type of spot is filtered according to the matching result; the first type of spot is displayed on the first image, and the second type of spot is a spot obtained by estimating the spots on the first image based on the reference image.
  • determining the second position of the second type of spot in the first image from the reference image is simple, and the accuracy of the determined positions is high, which further improves the accuracy of filtering the stray spots. After the positions of the second type of spot are determined, the positions of the normal spots in the first type of spot on the first image, that is, of the spots corresponding to the light sources, are roughly known; the matching result of the first type of spot and the second type of spot is then determined from the distance weight values, and the stray spots in the first type of spot are removed according to the matching result. The above process is thus equivalent to filtering the stray spots twice, further achieving the purpose of accurately filtering them out.
  • the above embodiments can achieve the purpose of accurately filtering out the stray spots, thereby realizing the technical effect of accurately matching the light source with the spot and solving the technical problem that the prior art cannot accurately filter out the stray spots.
  • the reference image can be obtained by:
  • Step S1022 constructing a bilinear model
  • Step S1024 input the first image to the bilinear model
  • Step S1026, determining output data of the bilinear model
  • step S1028 a reference image is determined based on the output data.
  • the bilinear model can be constructed by an image-training method. After the bilinear model is obtained, the first image can be used as its input; on receiving the first image, the bilinear model analyses it, finds the image that best matches the first image, and outputs that image, where the image output by the bilinear model is the reference image that matches the first image.
  • the method for constructing a bilinear model by the method of image training specifically includes the following steps:
  • Step S1022a collecting a plurality of images, wherein each of the plurality of images includes a spot corresponding to the light source;
  • Step S1022b Acquire parameter information of each image, where the parameter information includes at least one of: a starting position of a spot in each image and a shooting angle of each image;
  • step S1022c a bilinear model is obtained according to the parameter information of each image and the relative position of the spot and the pupil in each image.
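  • As one concrete reading of these steps, the asymmetric bilinear ("style and content") factorization of Tenenbaum and Freeman, which the application cites, can be fitted with a plain SVD. The sketch below uses random placeholder data, and all array shapes and names are assumptions rather than details from the application:

```python
import numpy as np

# Hypothetical training data: one feature vector (e.g. flattened
# spot-to-pupil offsets) per combination of S shooting angles
# ("styles") and C eye poses ("contents"), each of dimension k.
S, C, k = 4, 6, 10
rng = np.random.default_rng(0)
Y = rng.normal(size=(S, C, k))  # placeholder for real measurements

# Asymmetric bilinear model: y_{s,c} ~ A_s @ b_c.  Stack the styles
# vertically and factor with a truncated SVD keeping J components.
J = 3
Ystack = Y.transpose(0, 2, 1).reshape(S * k, C)   # (S*k) x C
U, sv, Vt = np.linalg.svd(Ystack, full_matrices=False)
A = (U[:, :J] * sv[:J]).reshape(S, k, J)          # per-style maps A_s
B = Vt[:J, :]                                     # content vectors b_c

# Rank-J reconstruction; its relative error measures model fit.
recon = np.einsum('skj,jc->sck', A, B)
err = np.linalg.norm(recon - Y) / np.linalg.norm(Y)
```

At test time, the content vector of a new first image can be estimated by least squares against the known style maps, and the best-matching stored image returned as the reference image.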
  • the shooting angle of each of the plurality of images may be different, so that the relative positions of spot and pupil produced by the same cornea at different shooting angles can be determined accurately.
  • the reference image that best matches the first image may be determined according to the parameter information of the first image.
  • the first image is input into the bilinear model, and the reference image output by the bilinear model is obtained; the gaze tracking device can then determine the second type of spot of the first image according to the reference image. The specific steps are as follows:
  • Step S1040 Obtain a relative position of each spot in the reference image and the pupil
  • Step S1042 acquiring a position of the pupil in the first image
  • Step S1044 determining the position of the second type of spot according to the position of the pupil in the first image and the relative position of each spot in the reference image and the pupil.
  • the gaze tracking device can determine the relative position of the spots and the pupil in the reference image, and can also obtain the position of the pupil in the first image. The relative position of spot and pupil in the reference image is the same as the relative position of the second type of spot and the pupil in the first image, so the position of the second type of spot can be determined from the relative position of spot and pupil in the reference image together with the position of the pupil in the first image.
  • for example, if the relative position of spot A' to the pupil in the reference image is (30, 40), and the position coordinate of the pupil in the first image is (500, 500), then the second-type spot A in the first image matches spot A' in the reference image, and the position coordinates of spot A are (530, 540).
  • the number of spots corresponding to the light sources and the positions of the spots in the first image are uncertain; therefore, in order to determine the second type of spot in the first image more effectively, the first type of spot and the second type of spot in the first image are matched by a graph-model method, and the abnormal spots (i.e. the stray spots) in the first image are removed according to the matching result; the specific method is shown in FIG. 3.
  • FIG. 3 shows an alternative flow chart of a method for filtering a spot.
  • after determining the positions of the second type of spot, the gaze tracking device first calculates a distance weight value between each spot of the first type and each spot of the second type, then matches the first type of spot with the second type of spot using the KM algorithm according to the obtained distance weight values, and finally removes the abnormal spots that do not satisfy the condition according to the matching result.
  • determining the matching result of the first type of light spot and the second type of light spot according to the first position of the first type of light spot and the second position of the second type of light spot specifically includes the following steps:
  • Step S1060 determining a distance weight value between the first type of spot and the second type of spot according to the first position of the first type of spot and the second position of the second type of spot;
  • Step S1062 matching the first type of spot and the second type of spot according to the distance weight value to obtain a matching result.
  • the distance weight value between the first type of spot and the second type of spot may be determined according to a distance weight value c_ii' between each spot in the first type of spot and each spot in the second type of spot. In the formula, ρ is the distance weight value between the first type of spot and the second type of spot, and the value of σ may be 1; c_ii' is the distance weight value between each spot in the first type of spot and each spot in the second type of spot; V(G_i) is the position coordinate of the i-th first-type spot; V(G_i') is the position coordinate of the i'-th second-type spot; and θ is the distance threshold.
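  • The formula itself does not survive in this text; under the assumption that c_ii' is a Gaussian kernel of the spot distance with scale σ, cut off at the distance threshold θ, one plausible reconstruction reads:

```python
import numpy as np

def distance_weight(p_first, p_second, sigma=1.0, theta=50.0):
    """Assumed form of c_ii': a Gaussian of the distance between the
    first-type and second-type spot positions, set to zero beyond the
    distance threshold theta.  This is a reconstruction from the
    variables defined in the text, not the application's exact formula."""
    d = np.linalg.norm(np.asarray(p_first, float) - np.asarray(p_second, float))
    return float(np.exp(-d**2 / sigma**2)) if d < theta else 0.0
```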
  • when the distance weight values obtained by the above method are applied to the KM algorithm, the matching degree of the first type of spot and the second type of spot is relatively low. Therefore, the present application uses the EM algorithm (Expectation-Maximization algorithm) to correct the distance weight value ρ between the first type of spot and the second type of spot, the corrected distance weight value being ρ*. In the corresponding formula, V(G_i) is the first position of the i-th first-type spot G_i; V(G_i') is the second position of the i'-th second-type spot G_i'; θ is the distance threshold; ρ* is the corrected distance weight value between the first type of spot and the second type of spot; q_ii' is the distance weight value between the i-th first-type spot and the i'-th second-type spot; I is the set of first-type spots; I' is the set of second-type spots; and Y_i' is the position weight value of the i'-th second-type spot.
  • the first type of spot and the second type of spot may be matched according to the KM algorithm. The specific method includes the following steps:
  • Step S1080 determining a matching combination of the first type of light spot and the second type of light spot, wherein the matching combination includes a plurality of corresponding relationships between the first type of light spot and the second type of light spot;
  • Step S1082 respectively calculating a weight sum of the distance weight values between each of the first type of spots and each of the second type of spots in each correspondence relationship;
  • Step S1084 Determine the correspondence with the maximum weight sum among the plurality of correspondences, wherein the correspondence whose weight sum is the maximum indicates that the first type of spot matches the second type of spot.
  • the first type of spot includes five spots A, B, C, D, and E, wherein the matching combination of the second type of spots corresponding to the first type of spot includes the following three types:
  • the weight value between the spots A and A' is a 1
  • the weight value between the spots B and B' is b 1
  • the weight value between the spots C and C' is c 1
  • the weight value between the spots D and D' is d 1
  • the weight value between the spots E and E' is e 1 .
  • the weight sum corresponding to combination one is L1 = α1·a1 + β1·b1 + γ1·c1 + δ1·d1 + ε1·e1, where α1, β1, γ1, δ1, and ε1 are weighting coefficients.
  • the weight sum corresponding to combination two is L2, and the weight sum of combination three is L3. If L2 is the largest of the three weight sums, the spots corresponding to combination two are selected as the second-type spots matching the first type of spot, that is, the spots A', B', F', D' and E' are selected as the second-type spots matching the first-type spots A, B, C, D, and E.
  • the first type of spot corresponding to each spot in the second type of spot can be found, that is, the matching of the first type of spot and the second type of spot is completed.
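  • The weight-sum maximization that the KM algorithm performs can be illustrated with a brute-force search over assignments, which finds the same optimum for small spot counts; all weights below are invented for illustration:

```python
from itertools import permutations

def best_matching(weights):
    """weights[i][j]: distance weight between first-type spot i and
    second-type spot j (square matrix).  Returns the assignment of
    second-type spots to first-type spots with the largest weight sum,
    which is the quantity the KM algorithm optimizes efficiently."""
    n = len(weights)
    best, best_perm = float('-inf'), None
    for perm in permutations(range(n)):
        s = sum(weights[i][perm[i]] for i in range(n))
        if s > best:
            best, best_perm = s, perm
    return best, list(best_perm)

# Example with three spots; the optimal matching is the diagonal.
w = [[0.9, 0.1, 0.0],
     [0.2, 0.8, 0.1],
     [0.0, 0.3, 0.7]]
total, match = best_matching(w)
# match == [0, 1, 2]
```

In practice the KM (Hungarian) algorithm replaces this factorial search with a polynomial-time one, but the selected correspondence is the same.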
  • the first type of spot can be filtered according to the matching result, as follows:
  • Step S1102 dividing the first type of light spot into a first light spot and at least one second light spot, and acquiring a first vector group composed of each of the first light spot and the at least one second light spot;
  • Step S1104 dividing the second type of light spot into a third light spot and at least one fourth light spot, and acquiring a second vector group consisting of the third light spot and each of the at least one fourth light spot;
  • Step S1106 calculating a vector angle between each vector in the first vector group and a corresponding vector in the second vector group;
  • Step S1108 Determine whether the first spot is an abnormal spot according to the vector angle, and filter the abnormal spot when the first spot is an abnormal spot.
  • step S1108 determining whether the first spot is an abnormal spot according to the vector angle comprises the following steps:
  • Step S1108a determining whether a vector angle between each vector in the first vector group and a corresponding vector in the second vector group is greater than a preset vector angle
  • Step S1108b in the case that the vector angle is greater than the preset vector angle, determining that the vector angle is an abnormal vector angle
  • Step S1108c determining the number of abnormal vector angles
  • step S1108d if the number of abnormal vector angles is greater than a preset number, the first spot is determined to be an abnormal spot.
  • the first-type spots that match the second type of spot are A, B, C, and D, and the corresponding second-type spots are A', B', C', and D', respectively.
  • with A as the first spot, the first vector group includes AB, AC, and AD, and the corresponding second vector group includes A'B', A'C', and A'D'; the vector angle between AB and A'B' is θ1, the vector angle between AC and A'C' is θ2, and the vector angle between AD and A'D' is θ3.
  • the number of abnormal vector angles when A is the first spot is 1.
  • the number of abnormal vector angles is then calculated with B, C, and D in turn as the first spot; in this example, the numbers of abnormal vector angles when B, C, and D are the first spot are 3 and 2, respectively.
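  • Steps S1108a to S1108d can be sketched as follows; the preset vector angle and preset number used here are illustrative assumptions, not values from the application:

```python
import math

def angle_between(u, v):
    # Angle in degrees between two 2-D vectors.
    dot = u[0] * v[0] + u[1] * v[1]
    cos = max(-1.0, min(1.0, dot / (math.hypot(*u) * math.hypot(*v))))
    return math.degrees(math.acos(cos))

def is_abnormal_spot(first, others, first2, others2,
                     max_angle=10.0, max_count=0):
    """Steps S1108a-d: compare each vector from the candidate first spot
    with the corresponding second-type vector, count angles above the
    preset vector angle (max_angle), and flag the spot when the count
    exceeds the preset number (max_count)."""
    count = 0
    for o, o2 in zip(others, others2):
        u = (o[0] - first[0], o[1] - first[1])
        v = (o2[0] - first2[0], o2[1] - first2[1])
        if angle_between(u, v) > max_angle:
            count += 1
    return count > max_count

# A matched pair where one second-type vector points the opposite way:
abn = is_abnormal_spot((0, 0), [(10, 0), (0, 10)],
                       (0, 0), [(-10, 0), (0, 10)])
# abn is True
```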
  • in FIG. 4, the black solid dots indicate the second type of spot, the white dots indicate the first type of spot, and the white dot marked by the dotted frame is the abnormal spot.
  • FIG. 5 is a schematic structural diagram of a device for filtering a light spot according to an embodiment of the present application.
  • the apparatus includes: an obtaining module 501, a first determining module 503, a second determining module 505, and a filtering module 507.
  • the obtaining module 501 is configured to acquire a first image and a reference image that matches the first image, wherein the first type of light spot is displayed on the first image; the first determining module 503 is configured to determine the second type of spot of the first image according to the reference image, wherein the second type of spot is a spot obtained by estimating the spots on the first image based on the reference image; the second determining module 505 is configured to determine a matching result of the first type of spot and the second type of spot according to the first position of the first type of spot and the second position of the second type of spot; and the filtering module 507 is configured to filter the first type of spot according to the matching result.
  • the foregoing obtaining module 501, first determining module 503, second determining module 505, and filtering module 507 correspond to steps S102 to S108 in Embodiment 1; the examples and application scenarios implemented by the four modules and the corresponding steps are the same, but are not limited to the content disclosed in Embodiment 1.
  • the first type of spot comprises a spot and a stray spot corresponding to the light source
  • the second type of spot comprises a spot corresponding to the light source
  • the obtaining module includes: a building module, an input module, a third determining module, and a fourth determining module.
  • the building module is configured to construct a bilinear model
  • the input module is configured to input the first image to the bilinear model
  • the third determining module is configured to determine the output data of the bilinear model
  • the fourth determining module is configured to determine the reference image based on the output data.
  • the foregoing building module, the input module, the third determining module, and the fourth determining module correspond to steps S1022 to S1028 in Embodiment 1, and the four modules are the same as the examples and application scenarios implemented by the corresponding steps. However, it is not limited to the contents disclosed in the above embodiment 1.
  • the building module includes: an acquisition module, a first acquisition module, and a processing module.
  • the acquiring module is configured to collect a plurality of images, wherein each of the plurality of images includes a spot corresponding to the light source;
  • the first acquiring module is configured to acquire parameter information of each image, wherein the parameter information includes at least one of the following: the starting position of the spot in each image and the shooting angle of each image;
  • the processing module is configured to obtain a bilinear model according to the parameter information of each image and the relative position of the spot and the pupil in each image.
  • the first determining module includes: a second acquiring module, a third acquiring module, and a fifth determining module.
  • the second acquiring module is configured to acquire the relative position of each spot in the reference image and the pupil;
  • the third acquiring module is configured to acquire the position of the pupil in the first image;
  • the fifth determining module is configured to determine the position of the second type of spot according to the position of the pupil in the first image and the relative position of each spot in the reference image and the pupil.
  • the second acquiring module, the third acquiring module, and the fifth determining module correspond to steps S1040 to S1044 in Embodiment 1; the three modules are the same as the examples and application scenarios implemented by the corresponding steps, but are not limited to the content disclosed in Embodiment 1.
  • the second determining module comprises: a sixth determining module and a matching module.
  • the sixth determining module is configured to determine a distance weight value between the first type of spot and the second type of spot according to the first position of the first type of spot and the second position of the second type of spot; the matching module is configured to match the first type of spot with the second type of spot according to the distance weight value to obtain a matching result.
  • the sixth determining module and the matching module correspond to steps S1060 to S1062 in Embodiment 1; the two modules implement the same examples and application scenarios as the corresponding steps, but are not limited to the contents disclosed in Embodiment 1 above.
  • the matching module includes: a seventh determining module, a first calculating module, and an eighth determining module.
  • the seventh determining module is configured to determine matching combinations of the first type of spot and the second type of spot, wherein the matching combinations include multiple correspondences between the first type of spot and the second type of spot;
  • the first calculating module is configured to calculate, for each correspondence, the weight sum of the distance weight values between each first-type spot and each second-type spot;
  • the eighth determining module is configured to determine the correspondence with the largest weight sum among the multiple correspondences, wherein the correspondence with the largest weight sum indicates that the first type of spot matches the second type of spot.
  • the foregoing seventh determining module, first calculating module, and eighth determining module correspond to steps S1080 to S1084 in Embodiment 1; the three modules implement the same examples and application scenarios as the corresponding steps, but are not limited to the contents disclosed in Embodiment 1 above.
  • the filtering module includes: a fourth acquiring module, a fifth acquiring module, a second calculating module, and a ninth determining module.
  • the fourth obtaining module is configured to divide the first type of spot into a first spot and at least one second spot, and to acquire a first vector group composed of vectors from the first spot to each of the at least one second spot;
  • the fifth obtaining module is configured to divide the second type of spot into a third spot and at least one fourth spot, and to acquire a second vector group composed of vectors from the third spot to each of the at least one fourth spot;
  • the second calculating module is configured to calculate the vector angle between each vector in the first vector group and the corresponding vector in the second vector group;
  • the ninth determining module is configured to determine, according to the vector angles, whether the first spot is an abnormal spot, and to filter out the abnormal spot when the first spot is an abnormal spot.
  • the foregoing fourth obtaining module, fifth obtaining module, second calculating module, and ninth determining module correspond to steps S1102 to S1108 in Embodiment 1; the four modules implement the same examples and application scenarios as the corresponding steps, but are not limited to the contents disclosed in Embodiment 1 above.
  • the eighth determining module includes: a tenth determining module, an eleventh determining module, a twelfth determining module, and a thirteenth determining module.
  • the tenth determining module is configured to determine whether the vector angle between each vector in the first vector group and the corresponding vector in the second vector group is greater than a preset vector angle; the eleventh determining module is configured to determine a vector angle as an abnormal vector angle when it is greater than the preset vector angle; the twelfth determining module is configured to determine the number of abnormal vector angles; and the thirteenth determining module is configured to determine the first spot as an abnormal spot when the number of abnormal vector angles is greater than a preset number.
  • the tenth determining module, the eleventh determining module, the twelfth determining module, and the thirteenth determining module correspond to steps S1108a to S1108d in Embodiment 1; the four modules implement the same examples and application scenarios as the corresponding steps, but are not limited to the contents disclosed in Embodiment 1 above.
  • a storage medium comprising a stored program, wherein the program executes the method of filtering a spot in Embodiment 1 above.
  • the various functional modules provided by the embodiments of the present application may be operated in a device for filtering spots or a similar computing device, or may be stored as part of a storage medium.
  • a computer program is stored in the storage medium, wherein the computer program is configured to execute the method of filtering a spot at runtime.
  • the storage medium is configured to store program code for performing the following steps: acquiring a first image and a reference image matching the first image, wherein a first type of spot is displayed on the first image; determining a second type of spot of the first image based on the reference image, wherein the second type of spot is a spot obtained by estimating the spots on the first image based on the reference image; determining a matching result of the first type of spot and the second type of spot according to the first position of the first type of spot and the second position of the second type of spot; and filtering the first type of spot according to the matching result.
  • the storage medium may also be configured to store program code for the various preferred or optional method steps provided by the method of filtering the spot.
  • a processor configured to execute a program, wherein the method of filtering the light spot in Embodiment 1 above is executed while the program is running.
  • the processor may execute an operation program of a method of filtering a spot.
  • the processor may be configured to perform the following steps: acquiring a first image and a reference image that matches the first image, wherein a first type of spot is displayed on the first image; determining a second type of spot of the first image according to the reference image, wherein the second type of spot is a spot obtained by estimating the spots on the first image based on the reference image; determining a matching result of the first type of spot and the second type of spot according to the first position of the first type of spot and the second position of the second type of spot; and filtering the first type of spot according to the matching result.
  • the above processor can execute various functional applications and data processing by running the software programs and modules stored in the memory, that is, implement the above-described method of filtering a spot in the gaze tracking device.
  • the storage medium may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
  • a gaze tracking device comprising the device for filtering a spot in Embodiment 2.
  • the disclosed technical contents may be implemented in other manners.
  • the device embodiments described above are merely schematic.
  • the division of the units may be a logical function division; in actual implementation there may be another division manner, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, unit or module, and may be electrical or otherwise.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit, if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • the computer readable storage medium includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present application.
  • the foregoing storage medium includes various media that can store program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
  • the solution provided by the embodiments of the present application can be applied to gaze tracking; by filtering out stray spots through graph matching, it solves the technical problem that the prior art cannot accurately filter out stray spots, accurately removes the stray spots, and improves the accuracy of matching between the light sources and the spots.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Ophthalmology & Optometry (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Processing (AREA)

Abstract

一种过滤光斑的方法和装置。其中,该方法包括:获取第一图像以及与第一图像相匹配的基准图像(S102),其中,在第一图像上显示有第一类光斑;根据基准图像确定第一图像的第二类光斑(S104),其中,第二类光斑为基于基准图像对第一图像上的光斑进行估计所得到的光斑;根据第一类光斑的第一位置以及第二类光斑的第二位置确定第一类光斑和第二类光斑的匹配结果(S106);根据匹配结果对第一类光斑进行过滤(S108)。

Description

过滤光斑的方法和装置 技术领域
本申请涉及视线追踪领域,具体而言,涉及一种过滤光斑的方法和装置。
背景技术
VR(Virtual Reality,虚拟现实)技术是一种可以创建和体验虚拟世界的计算机技术,其在视线追踪领域得到了广泛的应用。
在实际应用中,VR设备可根据基于眼球的3D近似圆球模型中的瞳孔中心坐标和角膜反射,对注视点的远距离设备进行视线估计。如果VR设备使用多相机多光源,则只需要进行单点校正即可估计视线。然而,由于光源和相机的相对位置的不同,可能导致处于某些位置上的相机不能捕获到图像,或者捕获到的图像不佳,从而导致不能有效的滤除干扰光斑或者杂光斑,进一步使得光斑与光源不能进行精确的匹配。
针对上述相关技术不能准确滤除杂光斑的问题,目前尚未提出有效的解决方案。
发明内容
本申请实施例提供了一种过滤光斑的方法和装置,以至少解决现有技术不能准确滤除杂光斑的技术问题。
根据本申请实施例,提供了一种过滤光斑的方法,包括:获取第一图像以及与第一图像相匹配的基准图像,其中,在第一图像上显示有第一类光斑;根据基准图像确定第一图像的第二类光斑,其中,第二类光斑为基于基准图像对第一图像上的光斑进行估计所得到的光斑;根据第一类光斑的第一位置以及第二类光斑的第二位置确定第一类光斑和第二类光斑的匹配结果;根据匹配结果对第一类光斑进行过滤。
根据本申请实施例,还提供了一种过滤光斑的装置,包括:获取模块,设置为获取第一图像以及与第一图像相匹配的基准图像,其中,在第一图像上显示有第一类光斑;第一确定模块,设置为根据基准图像确定第一图像的第二类光斑,其中,第二类光斑为基于基准图像对第一图像上的光斑进行估计所得到的光斑;第二确定模块,设置为根据第一类光斑的第一位置以及第二类光斑的第二位置确定第一类光斑和第二类光斑的匹配结果;过滤模块,设置为根据匹配结果对第一类光斑进行过滤。
根据本申请实施例,还提供了一种存储介质,该存储介质包括存储的程序,其中,程序执行过滤光斑的方法。
根据本申请实施例,还提供了一种处理器,该处理器设置为运行程序,其中,程序运行时执行过滤光斑的方法。
根据本申请实施例,还提供了一种视线追踪设备,该视线追踪设备包括过滤光斑的装置。
在本申请实施例中,采用图匹配的方式,通过获取第一图像以及与第一图像相匹配的基准图像,并根据基准图像确定第一图像的第二类光斑,然后根据第一类光斑的第一位置以及第二类光斑的第二位置确定第一类光斑和第二类光斑的匹配结果,最后根据匹配结果对第一类光斑进行过滤,其中,在第一图像上显示有第一类光斑,第二类光斑为基于基准图像对第一图像上的光斑进行估计所得到的光斑达到了准确滤除杂光斑的目的,从而实现了对光源和光斑进行精确匹配的技术效果,进而解决了现有技术不能准确滤除杂光斑的技术问题。
附图说明
此处所说明的附图用来提供对本申请的进一步理解,构成本申请的一部分,本申请的示意性实施例及其说明用于解释本申请,并不构成对本申请的不当限定。在附图中:
图1是根据本申请实施例的一种过滤光斑的方法流程图;
图2是根据本申请实施例的一种可选的视线追踪设备的结构示意图;
图3是根据本申请实施例的一种可选的过滤光斑的方法流程图;
图4是根据本申请实施例的一种可选的第一类光斑与第二类光斑的匹配示意图;以及
图5是根据本申请实施例的一种过滤光斑的装置结构示意图。
具体实施方式
为了使本技术领域的人员更好地理解本申请方案,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分的实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都应当属于 本申请保护的范围。
需要说明的是,本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本申请的实施例能够以除了在这里图示或描述的那些以外的顺序实施。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
实施例1
根据本申请实施例,提供了一种过滤光斑的方法实施例,需要说明的是,在附图的流程图示出的步骤可以在诸如一组计算机可执行指令的计算机系统中执行,并且,虽然在流程图中示出了逻辑顺序,但是在某些情况下,可以以不同于此处的顺序执行所示出或描述的步骤。
图1是根据本申请其中一实施例的过滤光斑的方法流程图,如图1所示,该方法包括如下步骤:
步骤S102,获取第一图像以及与第一图像相匹配的基准图像,其中,在第一图像上显示有第一类光斑。
需要说明的是,上述第一图像为相机采集到的包含人的眼睛的图像,并且在该图像上具有第一类光斑,其中,第一类光斑不仅包含与光源相对应的光斑,即光源经角膜反射在相机上投影得到的光斑,还包括杂光斑。上述杂光斑为异常光斑,需要被滤除。基准图像是指与第一图像相匹配的图像,其中,基准图像也包含人的眼睛的图像以及与光源相对应的光斑。基准图像上不具有杂光斑。
可选的,本申请中所有实施例中执行方法的主体为视线追踪设备,其中,视线追踪设备可以为但不限于虚拟现实设备、可进行视线追踪的智能终端、例如,手机、电脑、可穿戴设备等。
具体的，图2示出了一种可选的视线追踪设备的结构示意图，在图2中，a为眼球，b为角膜表面，c为角膜曲率中心，d为眼球旋转中心，p为瞳孔中心，r为瞳孔半径，O₁为相机，I₁和I₂为两个光源，u₂₁和u₁₁为光源经角膜反射后到达相机的成像点。其中，光源经角膜反射后到达相机的成像点即为上述光斑。由此，视线追踪设备可通过获取相机拍摄到的图像来得到第一图像，而基准图像可通过《Separating style and content with bilinear models》一文中所介绍的双线性模型训练的方法从不同初始位置和不同视角采集到的多张图像中提取出，其中，基准图像是与第一图像最匹配的图像。
步骤S104,根据基准图像确定第一图像的第二类光斑,其中,第二类光斑为基于基准图像对第一图像上的光斑进行估计所得到的光斑。
需要说明的是,由于基准图像中的光斑为与光源相对应的光斑,并且,基准图像与第一图像的匹配度最高,因此,以基准图像为基准可粗略确定第一图像中的第二类光斑。其中,第一图像的第二类光斑表示与光源相对应的光斑。由此,可根据第二类光斑来滤除第一类光斑中的杂光斑。
具体的,可通过基准图像中光斑与瞳孔的相对位置来确定第一图像中的光斑与瞳孔的相对位置,由于瞳孔的位置可通过图像处理的相关算法准确得到,因此,可通过将第一图像中瞳孔的位置与基准图像中瞳孔的位置进行匹配,进而,可根据基准图像中的光斑粗略确定第一图像中第二类光斑的位置。
步骤S106,根据第一类光斑的第一位置以及第二类光斑的第二位置确定第一类光斑和第二类光斑的匹配结果。
需要说明的是,在获取到第一类光斑和第二类光斑的位置之后,可根据第一类光斑中每个光斑的位置与第二类光斑中每个光斑的位置之间的距离,来确定两者之间的距离权重值,在获得每个第一类光斑的第一位置与每个第二类光斑的第二位置之间的距离权重值之后,使用匹配算法来完成第一类光斑与第二类光斑的匹配。
此外，还需要说明的是，可采用《Matching Theory》一书中所介绍的KM算法（Kuhn-Munkres算法，即二分图最佳匹配算法）来对第一类光斑和第二类光斑进行匹配。
步骤S108,根据匹配结果对第一类光斑进行过滤。
需要说明的是，在获取到第一类光斑与第二类光斑的匹配结果之后，将匹配出的第一类光斑中的某一个光斑与其他的光斑组成向量，同时，将第二类光斑中的与该光斑对应的光斑也与其他的第二类光斑组成向量，并根据两个向量之间的夹角的大小来确定第一类光斑中的该光斑是否为异常光斑。例如，第一类光斑包括五个光斑，分别为A、B、C、D和E，第一类光斑中与第二类光斑相对应的光斑为A、C和E，对应的第二类光斑为A′、C′和E′，则A与C和E分别组成向量 $\vec{AC}$ 和向量 $\vec{AE}$，A′与C′和E′分别组成向量 $\vec{A'C'}$ 和向量 $\vec{A'E'}$，则向量 $\vec{AC}$ 和向量 $\vec{A'C'}$ 之间的夹角为θ，向量 $\vec{AE}$ 和向量 $\vec{A'E'}$ 之间的夹角为δ，通过对比θ、δ与预设向量角的大小可确定光斑A的异常向量角的个数，进而根据光斑A的异常向量角的个数来确定光斑A是否为异常光斑。如果A为异常光斑，则对光斑A进行滤除。其中，如果θ大于预设向量角，则确定向量 $\vec{AC}$ 为异常向量，同样，如果δ大于预设向量角，则确定向量 $\vec{AE}$ 为异常向量。
基于上述步骤S102至步骤S108所限定的方案,可以获知,通过获取第一图像以及基准图像,并根据基准图像确定第一图像的第二类光斑,然后根据第一类光斑的第一位置以及第二类光斑的第二位置确定第一类光斑和第二类光斑之间的距离权重值,并根据距离权重值对第一类光斑和第二类光斑进行匹配,得到匹配结果,最后根据匹配结果对第一类光斑进行过滤,其中,在第一图像上显示有第一类光斑,第二类光斑为基于基准图像对第一图像上的光斑进行估计所得到的光斑。
容易注意到的是,由于采用基准图像确定第一图像中的第二类光斑的第二位置的方法简单,并且确定第二类光斑的位置的准确率较高,由此,可以进一步提高滤除杂光斑的精度。另外,在确定了第二类光斑的位置之后,即粗略的确定了第一图像上的第一类光斑中的正常光斑,即与光源相对应的光斑的位置,然后再根据第一类光斑与第二类光斑之间的距离权重值来确定第一类光斑与第二类光斑的匹配结果,并根据匹配结果进一步剔除第一类光斑中的杂光斑。由上述内容可知,上述过程相当于对杂光斑进行了两次过滤,从而进一步达到了对杂光斑进行准确滤除的目的。
由上述内容可知,上述实施例可以达到准确滤除杂光斑的目的,从而实现了对光源的光斑进行精确匹配的技术效果,进而解决了现有技术不能准确滤除杂光斑的技术问题。
在一种可选的实施例中,可通过如下方法获取基准图像:
步骤S1022,构建双线性模型;
步骤S1024,将第一图像输入至双线性模型;
步骤S1026,确定双线性模型的输出数据;
步骤S1028,根据输出数据确定基准图像。
具体的,可通过图像训练的方法来构建双线性模型,在得到双线性模型之后,可将第一图像作为双线性模型的输入,双线性模型在接收到第一图像之后,对第一图像处理分析,查找到与第一图像最匹配的图像,并输出该图像,其中,双线性模型的输出图像即为与第一图像相匹配的基准图像。
需要说明的是,通过图像训练的方法构建双线性模型的方法具体包括如下步骤:
步骤S1022a,采集多张图像,其中,多张图像中的每张图像均包含与光源相对应的光斑;
步骤S1022b,获取每张图像的参数信息,其中,参数信息至少包括如下之一:每张图像中的光斑的起始位置以及每张图像的拍摄视角;
步骤S1022c,根据每张图像的参数信息以及每张图像中的光斑与瞳孔的相对位置得到双线性模型。
需要说明的是,采集多张图像中的每张图像的拍摄视角可以是不同的,从而可以保证对同一角膜在不同拍摄视角下确定的光斑与瞳孔的相对位置是准确的。由此,当双线性模型接收到第一图像之后,可根据第一图像的参数信息来确定与第一图像最匹配的基准图像。
此外,还需要说明的是,在构建好双线性模型之后,将第一图像输入至双线性模型中,并获取到双线性模型输出的基准图像,视线追踪设备可根据基准图像确定第一图像的第二类光斑,具体步骤如下:
步骤S1040,获取基准图像中的每个光斑与瞳孔的相对位置;
步骤S1042,获取第一图像中的瞳孔的位置;
步骤S1044,根据第一图像中的瞳孔的位置以及基准图像中的每个光斑与瞳孔的相对位置确定第二类光斑的位置。
具体的,视线追踪设备可确定基准图像中的光斑与瞳孔的相对位置,同时也可得到第一图像中的瞳孔的位置,基准图像中光斑与瞳孔的相对位置和第一图像中第二类光斑与瞳孔的相对位置相同,由此,根据基准图像中光斑与瞳孔的相对位置以及第一图像中的瞳孔位置即可确定第二类光斑的位置。例如,基准图像中的光斑A’与瞳孔的相对位置为(30,40),第一图像中瞳孔的位置坐标为(500,500),则第一图像中与基准图像中的光斑A’相匹配的第二类光斑A的位置坐标为(530,540)。
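The offset arithmetic in the example above can be sketched as follows (a minimal illustration; the function name and the tuple-based data layout are our own, not taken from the patent):

```python
def estimate_second_type_spots(pupil_pos, relative_positions):
    """Estimate second-type spot positions in the first image by adding each
    reference-image (spot - pupil) offset to the first image's pupil position."""
    px, py = pupil_pos
    return [(px + dx, py + dy) for (dx, dy) in relative_positions]

# Example from the text: offset (30, 40) and pupil at (500, 500) give spot (530, 540)
spots = estimate_second_type_spots((500, 500), [(30, 40)])
print(spots)  # [(530, 540)]
```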
需要说明的是,在实际应用中,第一图像中与光源相对应的光斑的个数以及光斑的位置是不确定的,因此,为了更有效地确定第一图像中的第二类光斑,需要通过图模型方法来对第一图像中的第一类光斑和第二类光斑进行匹配,并根据匹配结果去除第一图像中的异常光斑(即杂光斑),具体方法如图3所示。
具体的,图3示出了一种可选的过滤光斑的方法流程图。由图3可知,在获取确定第二类光斑的位置之后,视线追踪设备首先计算第一类光斑中的每个光斑与第二类光斑中的每个光斑的距离权重值,然后根据得到的距离权重值采用KM算法对第一类光 斑和第二类光斑进行匹配,最后根据匹配结果去除不满足条件的异常光斑。
在一种可选的实施例中,根据第一类光斑的第一位置以及第二类光斑的第二位置确定第一类光斑和第二类光斑的匹配结果具体包括如下步骤:
步骤S1060,根据第一类光斑的第一位置以及第二类光斑的第二位置确定第一类光斑与第二类光斑之间的距离权重值;
步骤S1062,根据距离权重值对第一类光斑和第二类光斑进行匹配,得到匹配结果。
具体的，可根据第一类光斑中的每个光斑与第二类光斑中的每个光斑之间的距离权重值 c_{ii′} 来确定第一类光斑与第二类光斑之间的距离权重值 γ，具体公式如下：
［γ 的计算公式见原文公式图像］
在上式中，γ为第一类光斑与第二类光斑之间的距离权重值，γ的取值可为1，c_{ii′} 为第一类光斑中的每个光斑与第二类光斑中的每个光斑之间的距离权重值，V(G_i) 为第 i 个第一类光斑的位置坐标，V(G_{i′}) 为第 i′ 个第二类光斑的位置坐标。
由上式可知，当第 i 个第一类光斑与第 i′ 个第二类光斑的距离超过距离阈值时，c_{ii′} 的权重值为0，即当第 i 个第一类光斑与第 i′ 个第二类光斑的距离趋于无穷大时，每个权重值均接近于0，即下式成立：
［极限关系式见原文公式图像］
其中，上式中的σ为距离阈值。
需要说明的是，在实际情况下，将上述方法得到的距离权重值应用到KM算法中，得到的第一类光斑与第二类光斑的匹配度都比较低，由此，本申请采用EM算法（Expectation Maximization Algorithm，即期望最大化算法）来对第一类光斑与第二类光斑之间的距离权重值γ进行修正，修正后的距离权重值为 γ*，具体公式如下：
［γ* 的修正公式见原文公式图像，其中 y_{ii′} = q_{ii′}］
其中，V(G_i) 为第 i 个第一类光斑 G_i 的第一位置，V(G_{i′}) 为第 i′ 个第二类光斑 G_{i′} 的第二位置，σ为距离阈值，γ* 为第一类光斑与第二类光斑之间的距离权重值，q_{ii′} 为第 i 个第一类光斑与第 i′ 个第二类光斑之间的距离权重值，I 为第一类光斑的集合，I′ 为第二类光斑的集合，Y_{i′} 为第 i′ 个第二类光斑的位置权重值。
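As an illustration only: the patent gives the weight formulas as images that are not reproduced here, so the sketch below assumes a Gaussian-style kernel that merely reproduces the stated behavior (weights near 1 for close spot pairs, decaying toward 0 once the distance exceeds the scale σ). The function name and the exact kernel are our assumptions, not the patent's formula:

```python
import math

def distance_weight(p, q, sigma):
    """Assumed Gaussian-style distance weight between a detected first-type
    spot p and an estimated second-type spot q: close pairs score near 1,
    pairs much farther apart than sigma decay toward 0."""
    d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return math.exp(-d2 / (2.0 * sigma ** 2))

w_near = distance_weight((0, 0), (1, 1), sigma=10.0)
w_far = distance_weight((0, 0), (100, 100), sigma=10.0)
print(w_near > 0.9, w_far < 1e-6)  # True True
```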
在一种可选的实施例中,在确定了第一类光斑与第二类光斑之间的距离权重值之后,可根据KM算法来对第一类光斑和第二类光斑进行匹配,其中,KM算法的步骤如下:
(1)初始化可行顶标的值;
(2)在等价子图中寻找完备匹配;
(3)若未找到完备匹配则修改可行顶标的值;
(4)重复步骤(2)和(3),直至找到所有相等子图的完备匹配位置。
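For a handful of spots, the assignment that the KM (Kuhn-Munkres) algorithm computes — the matching with maximum total weight — can be illustrated with a brute-force search over permutations (an illustrative stand-in that is only feasible for small n, not an implementation of KM itself; names and the example matrix are our own):

```python
from itertools import permutations

def best_matching(weight):
    """Return the assignment maximizing the total weight sum, i.e. the result
    the KM algorithm computes on a square matrix; weight[i][j] is the distance
    weight between first-type spot i and second-type spot j."""
    n = len(weight)
    best_perm, best_sum = None, float("-inf")
    for perm in permutations(range(n)):
        s = sum(weight[i][perm[i]] for i in range(n))
        if s > best_sum:
            best_perm, best_sum = perm, s
    return best_perm, best_sum

w = [[0.9, 0.1, 0.0],
     [0.2, 0.8, 0.1],
     [0.0, 0.3, 0.7]]
print(best_matching(w)[0])  # (0, 1, 2)
```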
将上述KM算法应用到本申请中,可得到第一类光斑与第二类光斑之间的最佳匹配,即根据距离权重值对第一类光斑和第二类光斑进行匹配,得到匹配结果,具体方法包括如下步骤:
步骤S1080,确定第一类光斑与第二类光斑的匹配组合,其中,匹配组合包含第一类光斑与第二类光斑的多个对应关系;
步骤S1082,分别计算每个对应关系中的每个第一类光斑与每个第二类光斑之间的距离权重值的权重和;
步骤S1084,确定多个对应关系中权重和最大的对应关系,其中,权重和最大时所对应的对应关系表征第一类光斑与第二类光斑相匹配。
在一种可选的实施例中,第一类光斑包括A、B、C、D和E五个光斑,其中,与第一类光斑相对应的第二类光斑的匹配组合包含以下三种:
(1)组合一:A′、B′、C′、D′和E′;
(2)组合二:A′、B′、F′、D′和E′;
(3)组合三:A′、B′、C′、G′和E′。
以组合一为例进行说明，光斑A与A′之间的权重值为a₁，光斑B与B′之间的权重值为b₁，光斑C与C′之间的权重值为c₁，光斑D与D′之间的权重值为d₁，光斑E与E′之间的权重值为e₁。则组合一所对应的权重和为 L₁=α₁a₁+β₁b₁+γ₁c₁+μ₁d₁+ε₁e₁，其中，α₁、β₁、γ₁、μ₁和ε₁为加权系数。同样，可得到组合二对应的权重和为L₂，组合三对应的权重和为L₃。如果三个组合所对应的权重和的大小关系为：
L₂ > L₃ > L₁，
由于组合二对应的权重和最大,因此,选择组合二所对应的光斑作为与第一类光斑相匹配的第二类光斑,即选择光斑A′、B′、F′、D′和E′作为与第一类光斑A、B、C、D和E相匹配的第二类光斑。
需要说明的是,通过上述过程可找到与第二类光斑中的每个光斑相对应的第一类光斑,即完成了第一类光斑与第二类光斑的匹配。在完成第一类光斑与第二类光斑的匹配之后,可根据匹配结果对第一类光斑进行过滤,具体方法如下:
步骤S1102,将第一类光斑划分为第一光斑和至少一个第二光斑,并获取第一光斑与至少一个第二光斑中的每个光斑组成的第一向量组;
步骤S1104,将第二类光斑划分为第三光斑和至少一个第四光斑,并获取第三光斑与至少一个第四光斑中的每个光斑组成的第二向量组;
步骤S1106,计算第一向量组中的每个向量与第二向量组中的对应的向量之间的向量角;
步骤S1108,根据向量角确定第一光斑是否为异常光斑,并在第一光斑为异常光斑的情况下对异常光斑进行过滤处理。
其中,步骤S1108,根据向量角确定第一光斑是否为异常光斑包括如下步骤:
步骤S1108a,确定第一向量组中的每个向量与第二向量组中的对应的向量之间的向量角是否大于预设向量角;
步骤S1108b,在向量角大于预设向量角的情况下,确定向量角为异常向量角;
步骤S1108c,确定异常向量角的个数;
步骤S1108d,在异常向量角的个数大于预设数量的情况下,确定第一光斑为异常光斑。
在一种可选的实施例中，与第二类光斑匹配的第一类光斑为A、B、C和D，对应的第二类光斑分别为A′、B′、C′和D′。首先，将A作为第一光斑，则B、C和D作为第二光斑，A′为第三光斑，则B′、C′和D′作为第四光斑，则第一向量组包括 $\vec{AB}$、$\vec{AC}$ 和 $\vec{AD}$，对应的第二向量组包括 $\vec{A'B'}$、$\vec{A'C'}$ 和 $\vec{A'D'}$，其中，$\vec{AB}$ 与 $\vec{A'B'}$ 之间的向量角为θ₁，$\vec{AC}$ 与 $\vec{A'C'}$ 之间的向量角为θ₂，$\vec{AD}$ 与 $\vec{A'D'}$ 之间的向量角为θ₃。在得到第一向量组和第二向量组之后，确定第一向量组中的每个向量和第二向量组中的对应的向量之间的向量角与预设向量角θ之间的大小关系，例如，如果θ₁>θ，θ₂<θ，θ₃<θ，则A作为第一光斑时的异常向量角的个数为1。同样，根据上述方法，分别计算B、C和D作为第一光斑时的异常向量角的个数，例如，B、C和D作为第一光斑时的异常向量角的个数分别为3、2、0，在预设数量为2的情况下，由于B作为第一光斑时的异常向量角个数大于2，则确定B为异常光斑，并对其进行滤除。如图4所示，黑色实点表示第二类光斑，白色点表示第一类光斑，虚线框所标出的白色点为上述异常光斑。
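The abnormal-spot test described above (count the corresponding-vector angles that exceed a preset angle, and flag the spot when that count exceeds a preset number, as in steps S1108a to S1108d) can be sketched as follows; the function names and threshold values are illustrative assumptions:

```python
import math

def vector_angle(u, v):
    """Angle in radians between 2D vectors u and v."""
    dot = u[0] * v[0] + u[1] * v[1]
    cos_a = dot / (math.hypot(*u) * math.hypot(*v))
    return math.acos(max(-1.0, min(1.0, cos_a)))  # clamp for float safety

def is_abnormal(first_vectors, second_vectors, max_angle, max_count):
    """A first spot is abnormal when more than max_count of its vectors deviate
    from the corresponding second-vector-group vectors by more than max_angle."""
    abnormal = sum(
        1 for u, v in zip(first_vectors, second_vectors)
        if vector_angle(u, v) > max_angle
    )
    return abnormal > max_count

# All three vectors point opposite to the reference, so all three angles are abnormal:
first = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
second = [(-1.0, 0.0), (0.0, -1.0), (1.0, 0.0)]
print(is_abnormal(first, second, max_angle=math.radians(20), max_count=2))  # True
```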
实施例2
根据本申请实施例,还提供了一种过滤光斑的装置实施例,其中,图5是根据本申请实施例的过滤光斑的装置结构示意图,如图5所示,该装置包括:获取模块501、第一确定模块503、第二确定模块505以及过滤模块507。
其中,获取模块501,设置为获取第一图像以及与第一图像相匹配的基准图像,其中,在第一图像上显示有第一类光斑;第一确定模块503,设置为根据基准图像确定第一图像的第二类光斑,其中,第二类光斑为基于基准图像对第一图像上的光斑进行估计所得到的光斑;第二确定模块505,设置为根据第一类光斑的第一位置以及第二类光斑的第二位置确定第一类光斑和第二类光斑的匹配结果;过滤模块507,设置为根据匹配结果对第一类光斑进行过滤。
需要说明的是,上述获取模块501、第一确定模块503、第二确定模块505以及过滤模块507对应于实施例1中的步骤S102至步骤S108,四个模块与对应的步骤所实现的示例和应用场景相同,但不限于上述实施例1所公开的内容。
在一种可选的实施例中,第一类光斑包括与光源相对应的光斑和杂光斑,第二类光斑包括与光源相对应的光斑。
在一种可选的实施例中,获取模块包括:构建模块、输入模块、第三确定模块以及第四确定模块。其中,构建模块,设置为构建双线性模型;输入模块,设置为将第一图像输入至双线性模型;第三确定模块,设置为确定双线性模型的输出数据;第四 确定模块,设置为根据输出数据确定基准图像。
需要说明的是,上述构建模块、输入模块、第三确定模块以及第四确定模块对应于实施例1中的步骤S1022至步骤S1028,四个模块与对应的步骤所实现的示例和应用场景相同,但不限于上述实施例1所公开的内容。
在一种可选的实施例中,构建模块包括:采集模块、第一获取模块以及处理模块。其中,采集模块,设置为采集多张图像,其中,多张图像中的每张图像均包含与光源相对应的光斑;第一获取模块,设置为获取每张图像的参数信息,其中,参数信息至少包括如下之一:每张图像中的光斑的起始位置以及每张图像的拍摄视角;处理模块,设置为根据每张图像的参数信息以及每张图像中的光斑与瞳孔的相对位置得到双线性模型。
需要说明的是,上述采集模块、第一获取模块以及处理模块对应于实施例1中的步骤S1022a至步骤S1022c,三个模块与对应的步骤所实现的示例和应用场景相同,但不限于上述实施例1所公开的内容。
在一种可选的实施例中,第一确定模块包括:第二获取模块、第三获取模块以及第五确定模块。其中,第二获取模块,设置为获取基准图像中的每个光斑与瞳孔的相对位置;第三获取模块,设置为获取第一图像中的瞳孔的位置;第五确定模块,设置为根据第一图像中的瞳孔的位置以及基准图像中的每个光斑与瞳孔的相对位置确定第二类光斑的位置。
需要说明的是,上述第二获取模块、第三获取模块以及第五确定模块对应于实施例1中的步骤S1040至步骤S1044,三个模块与对应的步骤所实现的示例和应用场景相同,但不限于上述实施例1所公开的内容。
在一种可选的实施例中,第二确定模块包括:第六确定模块以及匹配模块。其中,第六确定模块,设置为根据第一类光斑的第一位置以及第二类光斑的第二位置确定第一类光斑与第二类光斑之间的距离权重值;匹配模块,设置为根据距离权重值对第一类光斑和第二类光斑进行匹配,得到匹配结果。
需要说明的是,上述第六确定模块以及匹配模块对应于实施例1中的步骤S1060至步骤S1062,两个模块与对应的步骤所实现的示例和应用场景相同,但不限于上述实施例1所公开的内容。
在一种可选的实施例中,匹配模块包括:第七确定模块、第一计算模块以及第八确定模块。其中,第七确定模块,设置为确定第一类光斑与第二类光斑的匹配组合,其中,匹配组合包含第一类光斑与第二类光斑的多个对应关系;第一计算模块,设置 为分别计算每个对应关系中的每个第一类光斑与每个第二类光斑之间的距离权重值的权重和;第八确定模块,设置为确定多个对应关系中权重和最大的对应关系,其中,权重和最大时所对应的对应关系表征第一类光斑与第二类光斑相匹配。
需要说明的是,上述第七确定模块、第一计算模块以及第八确定模块对应于实施例1中的步骤S1080至步骤S1084,三个模块与对应的步骤所实现的示例和应用场景相同,但不限于上述实施例1所公开的内容。
在一种可选的实施例中,过滤模块包括:第四获取模块、第五获取模块、第二计算模块以及第九确定模块。其中,第四获取模块,设置为将第一类光斑划分为第一光斑和至少一个第二光斑,并获取第一光斑与至少一个第二光斑中的每个光斑组成的第一向量组;第五获取模块,设置为将第二类光斑划分为第三光斑和至少一个第四光斑,并获取第三光斑与至少一个第四光斑中的每个光斑组成的第二向量组;第二计算模块,设置为计算第一向量组中的每个向量与第二向量组中的对应的向量之间的向量角;第九确定模块,设置为根据向量角确定第一光斑是否为异常光斑,并在第一光斑为异常光斑的情况下对异常光斑进行过滤处理。
需要说明的是,上述第四获取模块、第五获取模块、第二计算模块以及第九确定模块对应于实施例1中的步骤S1102至步骤S1108,四个模块与对应的步骤所实现的示例和应用场景相同,但不限于上述实施例1所公开的内容。
在一种可选的实施例中,第八确定模块包括:第十确定模块、第十一确定模块、第十二确定模块以及第十三确定模块。其中,第十确定模块,设置为确定第一向量组中的每个向量与第二向量组中的对应的向量之间的向量角是否大于预设向量角;第十一确定模块,设置为在向量角大于预设向量角的情况下,确定向量角为异常向量角;第十二确定模块,设置为确定异常向量角的个数;第十三确定模块,设置为在异常向量角的个数大于预设数量的情况下,确定第一光斑为异常光斑。
需要说明的是,上述第十确定模块、第十一确定模块、第十二确定模块以及第十三确定模块对应于实施例1中的步骤S1108a至步骤S1108d,四个模块与对应的步骤所实现的示例和应用场景相同,但不限于上述实施例1所公开的内容。
实施例3
根据本申请其中一实施例的另一方面,还提供了一种存储介质,该存储介质包括存储的程序,其中,程序执行上述实施例1中的过滤光斑的方法。
本申请实施例所提供的各个功能模块可以在过滤光斑的装置或者类似的运算装置中运行,也可以作为存储介质的一部分进行存储。
可选地,在本实施例中,上述存储介质中存储有计算机程序,其中,所述计算机程序被设置为运行时可以用于执行过滤光斑的方法。
可选地,在本实施例中,存储介质被设置为存储用于执行以下步骤的程序代码:获取第一图像以及与第一图像相匹配的基准图像,其中,在第一图像上显示有第一类光斑;根据基准图像确定第一图像的第二类光斑,其中,第二类光斑为基于基准图像对第一图像上的光斑进行估计所得到的光斑;根据第一类光斑的第一位置以及第二类光斑的第二位置确定第一类光斑和第二类光斑的匹配结果;根据匹配结果对第一类光斑进行过滤。
可选地,在本实施例中,存储介质还可以被设置为过滤光斑的方法提供的各种优选地或可选的方法步骤的程序代码。
实施例4
根据本申请其中一实施例的另一方面,还提供了一种处理器,该处理器设置为运行程序,其中,程序运行时执行上述实施例1中的过滤光斑的方法。
在本发明实施例中，上述处理器可以执行过滤光斑的方法的运行程序。
可选地,在本实施例中,处理器可以被设置为执行下述步骤:获取第一图像以及与第一图像相匹配的基准图像,其中,在第一图像上显示有第一类光斑;根据基准图像确定第一图像的第二类光斑,其中,第二类光斑为基于基准图像对第一图像上的光斑进行估计所得到的光斑;根据第一类光斑的第一位置以及第二类光斑的第二位置确定第一类光斑和第二类光斑的匹配结果;根据匹配结果对第一类光斑进行过滤。
上述处理器可以通过运行存储在存储器内的软件程序以及模块，从而执行各种功能应用以及数据处理，即实现上述的过滤光斑的方法。
本领域普通技术人员可以理解，上述实施例的各种方法中的全部或部分步骤是可以通过程序来指示过滤光斑的装置相关的硬件来完成的，该程序可以存储于一过滤光斑的装置可读存储介质中，存储介质可以包括：闪存盘、只读存储器（Read-Only Memory，ROM）、随机存取存储器（Random Access Memory，RAM）、磁盘或光盘等。
如上参照附图以示例的方式描述了根据本发明的过滤光斑的方法和装置。但是,本领域技术人员应当理解,对于上述本发明所提出的过滤光斑的方法和装置,还可以在不脱离本发明内容的基础上做出各种改进。因此,本发明的保护范围应当由所附的权利要求书的内容确定。
实施例5
根据本申请其中一实施例的另一方面,还提供了一种视线追踪设备,该视线追踪 设备包括实施例2中的过滤光斑的装置。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
在本申请的上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述的部分,可以参见其他实施例的相关描述。
在本申请所提供的几个实施例中,应该理解到,所揭露的技术内容,可通过其它的方式实现。其中,以上所描述的装置实施例仅仅是示意性的,例如所述单元的划分,可以为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,单元或模块的间接耦合或通信连接,可以是电性或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可为个人计算机、服务器或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、移动硬盘、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述仅是本申请的优选实施方式,应当指出,对于本技术领域的普通技术人员来说,在不脱离本申请原理的前提下,还可以做出若干改进和润饰,这些改进和润饰也应视为本申请的保护范围。
工业实用性
本申请实施例提供的方案,可以应用于视线追踪,通过采用图匹配的方式滤除杂 光斑,解决了现有技术不能准确滤除杂光斑的技术问题,准确滤除了杂光斑,提高了光源与光斑匹配的精确度。

Claims (21)

  1. 一种过滤光斑的方法,包括:
    获取第一图像以及与所述第一图像相匹配的基准图像,其中,在所述第一图像上显示有第一类光斑;
    根据所述基准图像确定所述第一图像的第二类光斑,其中,所述第二类光斑为基于所述基准图像对所述第一图像上的光斑进行估计所得到的光斑;
    根据所述第一类光斑的第一位置以及第二类光斑的第二位置确定所述第一类光斑和所述第二类光斑的匹配结果;
    根据所述匹配结果对所述第一类光斑进行过滤。
  2. 根据权利要求1所述的方法,其中,所述第一类光斑包括与光源相对应的光斑和杂光斑,所述第二类光斑包括所述与光源相对应的光斑。
  3. 根据权利要求2所述的方法,其中,获取与所述第一图像相匹配的基准图像,包括:
    构建双线性模型;
    将所述第一图像输入至所述双线性模型;
    确定所述双线性模型的输出数据;
    根据所述输出数据确定所述基准图像。
  4. 根据权利要求3所述的方法,其中,构建双线性模型,包括:
    采集多张图像,其中,所述多张图像中的每张图像均包含与所述光源相对应的光斑;
    获取所述每张图像的参数信息,其中,所述参数信息至少包括如下之一:所述每张图像中的光斑的起始位置以及所述每张图像的拍摄视角;
    根据所述每张图像的参数信息以及所述每张图像中的光斑与瞳孔的相对位置得到所述双线性模型。
  5. 根据权利要求1所述的方法,其中,根据所述基准图像确定所述第一图像的第二类光斑,包括:
    获取所述基准图像中的每个光斑与瞳孔的相对位置;
    获取所述第一图像中的瞳孔的位置;
    根据所述第一图像中的瞳孔的位置以及所述基准图像中的所述每个光斑与所述瞳孔的相对位置确定所述第二类光斑的位置。
  6. 根据权利要求1所述的方法,其中,根据所述第一类光斑的第一位置以及第二类光斑的第二位置确定所述第一类光斑和所述第二类光斑的匹配结果,包括:
    根据所述第一类光斑的第一位置以及所述第二类光斑的第二位置确定所述第一类光斑与所述第二类光斑之间的距离权重值;
    根据所述距离权重值对所述第一类光斑和所述第二类光斑进行匹配,得到匹配结果。
  7. 根据权利要求6所述的方法,其中,根据所述距离权重值对所述第一类光斑和所述第二类光斑进行匹配,得到匹配结果,包括:
    确定所述第一类光斑与所述第二类光斑的匹配组合,其中,所述匹配组合包含所述第一类光斑与所述第二类光斑的多个对应关系;
    分别计算每个对应关系中的每个第一类光斑与每个第二类光斑之间的距离权重值的权重和;
    确定所述多个对应关系中所述权重和最大的对应关系,其中,所述权重和最大时所对应的对应关系表征所述第一类光斑与所述第二类光斑相匹配。
  8. 根据权利要求7所述的方法,其中,根据所述匹配结果对所述第一类光斑进行过滤,包括:
    将第一类光斑划分为第一光斑和至少一个第二光斑,并获取所述第一光斑与所述至少一个第二光斑中的每个光斑组成的第一向量组;
    将第二类光斑划分为第三光斑和至少一个第四光斑,并获取第三光斑与所述至少一个第四光斑中的每个光斑组成的第二向量组;
    计算所述第一向量组中的每个向量与第二向量组中的对应的向量之间的向量角;
    根据所述向量角确定所述第一光斑是否为异常光斑,并在所述第一光斑为所述异常光斑的情况下对所述异常光斑进行过滤处理。
  9. 根据权利要求8所述的方法,其中,根据所述向量角确定所述第一光斑是否为异常光斑,包括:
    确定所述第一向量组中的每个向量与第二向量组中的对应的向量之间的向量角是否大于预设向量角;
    在所述向量角大于所述预设向量角的情况下,确定所述向量角为异常向量角;
    确定所述异常向量角的个数;
    在所述异常向量角的个数大于预设数量的情况下,确定所述第一光斑为所述异常光斑。
  10. 一种过滤光斑的装置,包括:
    获取模块,设置为获取第一图像以及与第一图像相匹配的基准图像,其中,在所述第一图像上显示有第一类光斑;
    第一确定模块,设置为根据所述基准图像确定所述第一图像的第二类光斑,其中,所述第二类光斑为基于所述基准图像对所述第一图像上的光斑进行估计所得到的光斑;
    第二确定模块,设置为根据所述第一类光斑的第一位置以及第二类光斑的第二位置确定所述第一类光斑和所述第二类光斑的匹配结果;
    过滤模块,设置为根据所述匹配结果对所述第一类光斑进行过滤。
  11. 根据权利要求10所述的装置,其中,所述第一类光斑包括与光源相对应的光斑和杂光斑,所述第二类光斑包括所述与光源相对应的光斑。
  12. 根据权利要求11所述的装置,其中,所述获取模块包括:
    构建模块,设置为构建双线性模型;
    输入模块,设置为将所述第一图像输入至所述双线性模型;
    第三确定模块,设置为确定所述双线性模型的输出数据;
    第四确定模块,设置为根据所述输出数据确定所述基准图像。
  13. 根据权利要求12所述的装置,其中,所述构建模块包括:
    采集模块,设置为采集多张图像,其中,所述多张图像中的每张图像均包含与所述光源相对应的光斑;
    第一获取模块,设置为获取所述每张图像的参数信息,其中,所述参数信息至少包括如下之一:所述每张图像中的光斑的起始位置以及所述每张图像的拍摄视角;
    处理模块,设置为根据所述每张图像的参数信息以及所述每张图像中的光斑与瞳孔的相对位置得到所述双线性模型。
  14. 根据权利要求10所述的装置,其中,所述第一确定模块包括:
    第二获取模块,设置为获取所述基准图像中的每个光斑与瞳孔的相对位置;
    第三获取模块,设置为获取所述第一图像中的瞳孔的位置;
    第五确定模块,设置为根据所述第一图像中的瞳孔的位置以及所述基准图像中的所述每个光斑与所述瞳孔的相对位置确定所述第二类光斑的位置。
  15. 根据权利要求10所述的装置,其中,所述第二确定模块包括:
    第六确定模块,设置为根据所述第一类光斑的第一位置以及所述第二类光斑的第二位置确定所述第一类光斑与所述第二类光斑之间的距离权重值;
    匹配模块,设置为根据所述距离权重值对所述第一类光斑和所述第二类光斑进行匹配,得到匹配结果。
  16. 根据权利要求15所述的装置,其中,所述匹配模块包括:
    第七确定模块,设置为确定所述第一类光斑与所述第二类光斑的匹配组合,其中,所述匹配组合包含所述第一类光斑与所述第二类光斑的多个对应关系;
    第一计算模块,设置为分别计算每个对应关系中的每个第一类光斑与每个第二类光斑之间的距离权重值的权重和;
    第八确定模块,设置为确定所述多个对应关系中所述权重和最大的对应关系,其中,所述权重和最大时所对应的对应关系表征所述第一类光斑与所述第二类光斑相匹配。
  17. 根据权利要求16所述的装置,其中,所述过滤模块包括:
    第四获取模块,设置为将第一类光斑划分为第一光斑和至少一个第二光斑,并获取所述第一光斑与所述至少一个第二光斑中的每个光斑组成的第一向量组;
    第五获取模块,设置为将第二类光斑划分为第三光斑和至少一个第四光斑,并获取第三光斑与所述至少一个第四光斑中的每个光斑组成的第二向量组;
    第二计算模块,设置为计算所述第一向量组中的每个向量与第二向量组中的对应的向量之间的向量角;
    第九确定模块,设置为根据所述向量角确定所述第一光斑是否为异常光斑,并在所述第一光斑为所述异常光斑的情况下对所述异常光斑进行过滤处理。
  18. 根据权利要求17所述的装置,其中,所述第八确定模块包括:
    第十确定模块,设置为确定所述第一向量组中的每个向量与第二向量组中的对应的向量之间的向量角是否大于预设向量角;
    第十一确定模块,设置为在所述向量角大于所述预设向量角的情况下,确定所述向量角为异常向量角;
    第十二确定模块,设置为确定所述异常向量角的个数;
    第十三确定模块,设置为在所述异常向量角的个数大于预设数量的情况下,确定所述第一光斑为所述异常光斑。
  19. 一种存储介质,所述存储介质包括存储的程序,其中,所述程序执行权利要求1至9中任意一项所述的过滤光斑的方法。
  20. 一种处理器,所述处理器设置为运行程序,其中,所述程序运行时执行权利要求1至9中任意一项所述的过滤光斑的方法。
  21. 一种视线追踪设备,所述视线追踪设备包括权利要求10至18中任意一项所述的装置。
PCT/CN2018/119880 2017-12-27 2018-12-07 过滤光斑的方法和装置 WO2019128676A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/347,821 US11250588B2 (en) 2017-12-27 2018-12-07 Method and apparatus for filtering glints

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711455781.3 2017-12-27
CN201711455781.3A CN108257112B (zh) 2017-12-27 2017-12-27 过滤光斑的方法和装置

Publications (1)

Publication Number Publication Date
WO2019128676A1 true WO2019128676A1 (zh) 2019-07-04

Family

ID=62724269

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/119880 WO2019128676A1 (zh) 2017-12-27 2018-12-07 过滤光斑的方法和装置

Country Status (4)

Country Link
US (1) US11250588B2 (zh)
CN (1) CN108257112B (zh)
TW (1) TWI691937B (zh)
WO (1) WO2019128676A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111368784A (zh) * 2020-03-16 2020-07-03 广州文远知行科技有限公司 一种目标识别方法、装置、计算机设备和存储介质

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108257112B (zh) * 2017-12-27 2020-08-18 北京七鑫易维信息技术有限公司 过滤光斑的方法和装置
JP2020025745A (ja) * 2018-08-13 2020-02-20 日本電信電話株式会社 瞳孔特徴量抽出装置、瞳孔特徴量抽出方法、プログラム
CN109034108B (zh) * 2018-08-16 2020-09-22 北京七鑫易维信息技术有限公司 一种视线估计的方法、装置和系统
CN114428398B (zh) * 2020-10-29 2023-12-26 北京七鑫易维信息技术有限公司 一种光斑与光源的匹配方法、装置、设备及存储介质
CN112995502B (zh) * 2021-02-07 2023-04-07 维沃移动通信有限公司 图像处理方法、装置和电子设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103544420A (zh) * 2013-08-15 2014-01-29 马建 用于智能眼镜的防伪虹膜身份认证方法
CN104182938A (zh) * 2014-08-18 2014-12-03 国家电网公司 一种全天空云图的太阳光斑修复方法
CN104732191A (zh) * 2013-12-23 2015-06-24 北京七鑫易维信息技术有限公司 利用交比不变性实现虚拟显示屏视线追踪的装置及其方法
CN105844712A (zh) * 2016-03-16 2016-08-10 山东大学 一种改进的面向3d打印的半色调投影与模型生成方法
CN108257112A (zh) * 2017-12-27 2018-07-06 北京七鑫易维信息技术有限公司 过滤光斑的方法和装置

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2568253B1 (en) * 2010-05-07 2021-03-10 Shenzhen Taishan Online Technology Co., Ltd. Structured-light measuring method and system
EP2751609B1 (en) * 2011-08-30 2017-08-16 Microsoft Technology Licensing, LLC Head mounted display with iris scan profiling
CN102999230A (zh) * 2012-11-05 2013-03-27 南京芒冠光电科技股份有限公司 一种电子白板设备自动像素映射方法
TWI557004B (zh) * 2014-01-10 2016-11-11 Utechzone Co Ltd Identity authentication system and its method
WO2015113479A1 (zh) * 2014-01-28 2015-08-06 北京中科虹霸科技有限公司 一种具有人机交互机制的移动终端虹膜识别装置和方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103544420A (zh) * 2013-08-15 2014-01-29 马建 用于智能眼镜的防伪虹膜身份认证方法
CN104732191A (zh) * 2013-12-23 2015-06-24 北京七鑫易维信息技术有限公司 利用交比不变性实现虚拟显示屏视线追踪的装置及其方法
CN104182938A (zh) * 2014-08-18 2014-12-03 国家电网公司 一种全天空云图的太阳光斑修复方法
CN105844712A (zh) * 2016-03-16 2016-08-10 山东大学 一种改进的面向3d打印的半色调投影与模型生成方法
CN108257112A (zh) * 2017-12-27 2018-07-06 北京七鑫易维信息技术有限公司 过滤光斑的方法和装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111368784A (zh) * 2020-03-16 2020-07-03 广州文远知行科技有限公司 一种目标识别方法、装置、计算机设备和存储介质
CN111368784B (zh) * 2020-03-16 2024-04-02 广州文远知行科技有限公司 一种目标识别方法、装置、计算机设备和存储介质

Also Published As

Publication number Publication date
TWI691937B (zh) 2020-04-21
US20210287396A1 (en) 2021-09-16
US11250588B2 (en) 2022-02-15
CN108257112A (zh) 2018-07-06
TW201928875A (zh) 2019-07-16
CN108257112B (zh) 2020-08-18

Similar Documents

Publication Publication Date Title
WO2019128676A1 (zh) 过滤光斑的方法和装置
CN110147744B (zh) 一种人脸图像质量评估方法、装置及终端
CN112330526A (zh) 一种人脸转换模型的训练方法、存储介质及终端设备
US11004179B2 (en) Image blurring methods and apparatuses, storage media, and electronic devices
CN106981078B (zh) 视线校正方法、装置、智能会议终端及存储介质
WO2019128675A1 (zh) 视线追踪设备中确定参数的方法和装置
WO2021008205A1 (zh) 图像处理
JP2009157767A (ja) 顔画像認識装置、顔画像認識方法、顔画像認識プログラムおよびそのプログラムを記録した記録媒体
WO2019144710A1 (zh) 确定瞳孔位置的方法和装置
CN108550167B (zh) 深度图像生成方法、装置及电子设备
US10791321B2 (en) Constructing a user's face model using particle filters
TWI680309B (zh) 匹配光源與光斑的方法和裝置
CN110866873A (zh) 内腔镜图像的高光消除方法及装置
CN112261399B (zh) 胶囊内窥镜图像三维重建方法、电子设备及可读存储介质
WO2021017308A1 (zh) 舌象匹配方法、电子装置、计算机设备及存储介质
US9786030B1 (en) Providing focal length adjustments
CN111222448B (zh) 图像转换方法及相关产品
CN115841602A (zh) 基于多视角的三维姿态估计数据集的构建方法及装置
CN111462337B (zh) 一种图像处理方法、设备及计算机可读存储介质
CN113763421A (zh) 一种目标跟踪方法、装置及计算机可读存储介质
CN112711984A (zh) 注视点定位方法、装置和电子设备
CN113129252A (zh) 一种图像评分方法及电子设备
WO2024113275A1 (zh) 凝视点获取方法和装置、电子设备和存储介质
CN112529848B (zh) 全景图更新方法、装置、可读介质以及设备
CN113473227B (zh) 图像处理方法、装置、电子设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18896774

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 02.10.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18896774

Country of ref document: EP

Kind code of ref document: A1