CN115471557B - Monocular camera image target point three-dimensional positioning method, pupil positioning method and device - Google Patents


Info

Publication number
CN115471557B
CN115471557B (application CN202211156609.9A)
Authority
CN
China
Prior art keywords
reference point
coordinates
coordinate system
point
dimensional
Prior art date
Legal status
Active
Application number
CN202211156609.9A
Other languages
Chinese (zh)
Other versions
CN115471557A (en)
Inventor
李凯文
Current Assignee
Nanjing Boshi Medical Technology Co ltd
Original Assignee
Nanjing Boshi Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Nanjing Boshi Medical Technology Co ltd filed Critical Nanjing Boshi Medical Technology Co ltd
Priority to CN202211156609.9A priority Critical patent/CN115471557B/en
Publication of CN115471557A publication Critical patent/CN115471557A/en
Application granted granted Critical
Publication of CN115471557B publication Critical patent/CN115471557B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/68Analysis of geometric attributes of symmetry
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The obtained three-dimensional coordinates represent the relative position between the pupil and the monocular camera more accurately and intuitively, so the pupil is positioned more accurately and rapidly. The change of the pixel gradient is characterized by the mean of pixel values accumulated along an arc within each divided region, so the edge of the iris or the pupil can be identified more accurately; the identification is not affected even if the iris is partly blocked by the canthus or the eyelid, and the stability is high. Combined with the precisely known moving distance of the monocular camera, image features can be accurately mapped to actual three-dimensional coordinates without additionally measuring other parameters; the operation is convenient, the error is small, and the precision is high. An acquired image satisfies the conditions as long as it contains one eye, so the requirement on image capture is low and the application range is wide.

Description

Monocular camera image target point three-dimensional positioning method, pupil positioning method and device
Technical Field
The present disclosure relates to the field of target detection technologies, and in particular to a monocular camera image target point three-dimensional positioning method, a pupil positioning method, and corresponding devices.
Background
In many computer vision applications (e.g., in the field of vision measurement), it is often necessary to locate the three-dimensional position of a target point so that an instrument can perform a corresponding operation on the target region based on that position.
Currently, three-dimensional automatic positioning of a target point mainly relies on adding an extra optical path to measure the distance of the target point in space, or adding a distance sensor to detect the front-to-back distance between the target point and the instrument. Both approaches add hardware and increase the cost of positioning.
Disclosure of Invention
In order to solve the above technical problems, the embodiments of the present application provide a monocular camera image target point three-dimensional positioning method, a pupil positioning method, and corresponding devices, so as to locate the three-dimensional position of a target point. The technical solution is as follows:
in one aspect, the present application provides a method for three-dimensionally positioning an image target point of a monocular camera, including:
acquiring a first target image acquired by a monocular camera, and determining coordinates of a first reference point and a second reference point in the first target image;
acquiring a second target image acquired by the monocular camera after moving a first distance, and determining coordinates of the first reference point and the second reference point in the second target image; wherein the relative positions of the first reference point and the second reference point are kept unchanged before and after the monocular camera moves;
based on camera parameters of the monocular camera, the first distance, and the coordinates of the first reference point and the second reference point in the first target image and the second target image respectively, obtaining a mapping relation between the coordinates of the two reference points in the images and the actual three-dimensional coordinates of the reference points, and obtaining the three-dimensional coordinates of the target point positioning according to the mapping relation; the target point is the first reference point, the second reference point, or a point with a known positional relation to the first reference point and the second reference point.
Optionally, the obtaining a mapping relationship between coordinates of two reference points in the image and actual three-dimensional coordinates of the reference points based on the camera parameters of the monocular camera, the first distance, the coordinates of the first reference point and the coordinates of the second reference point in the first target image and the second target image, and obtaining the three-dimensional coordinates of the target point positioning according to the mapping relationship includes:
establishing a three-dimensional model based on camera parameters of the monocular camera, the coordinates of the first reference point and the second reference point in an image, the unknown actual distance between the first reference point and the second reference point, and the unknown actual three-dimensional coordinates of the two reference points;
substituting the first distance and the coordinates of the first reference point and the second reference point in the first target image and the second target image into the three-dimensional model and solving it, to obtain the actual three-dimensional coordinates of the first reference point or the second reference point as the three-dimensional coordinates of the target point positioning, or to obtain the three-dimensional coordinates of the target point positioning according to the actual three-dimensional coordinates of the first and second reference points and the known positional relations between the target point and each of the two reference points.
Optionally, when the straight line on which the first reference point and the second reference point lie is parallel to the monocular camera imaging plane, the three-dimensional model is built as follows (the formula is rendered as an image in the original and is not reproduced in this copy):
wherein each coordinate axis of the three-dimensional model coincides with the world coordinate system calibrated by the monocular camera; (X, Y, Z) are the three-dimensional coordinates of the first reference point or the second reference point in the three-dimensional coordinate system; (u1, v1) are the pixel coordinates of the first reference point in the image acquired by the monocular camera; (u2, v2) are the pixel coordinates of the second reference point in the image acquired by the monocular camera; m11, m13, m22, m23 and m33 are elements of the camera matrix; C represents the actual distance between the first reference point and the second reference point; and θ is the angle between the X-axis of the three-dimensional model and the straight line on which the first reference point and the second reference point lie.
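The model equation itself appears only as an image in the original publication. A hedged reconstruction from the surrounding definitions — assuming a pinhole projection with camera matrix elements m11, m13, m22, m23, m33 and both reference points at a common depth Z — would read:

```latex
% Hypothetical reconstruction; the patent's exact formula is not reproduced in this copy.
\begin{aligned}
u_1 - u_2 &= \frac{m_{11}\, C \cos\theta}{m_{33}\, Z}, &
v_1 - v_2 &= \frac{m_{22}\, C \sin\theta}{m_{33}\, Z}, \\
X &= \frac{(u_1 - m_{13})\, Z}{m_{11}}, &
Y &= \frac{(v_1 - m_{23})\, Z}{m_{22}}.
\end{aligned}
```

Solving the first pair for Z and substituting into the second pair yields the three-dimensional coordinates of either reference point.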
Optionally, when the target point is a pupil center, and the first reference point and the second reference point are two intersection points where a preset straight line passing through the pupil center intersects with an iris edge, the determining coordinates of the first reference point and the second reference point in the first target image or the second target image includes:
determining coordinates of pupil centers in the first target image and the second target image under a pixel coordinate system based on the first target image or the second target image;
determining a distance between the pupil center and an iris edge in the pixel coordinate system based on coordinates of the pupil center in the pixel coordinate system;
and determining coordinates of a first reference point and a second reference point in the first target image or the second target image based on the coordinates of the pupil center under a pixel coordinate system, the distance between the pupil center and the iris edge under the pixel coordinate system and the direction of a preset straight line.
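As a minimal sketch of this last step — the function name, and expressing the preset straight line as an angle `phi`, are illustrative assumptions, not from the patent:

```python
import math

def reference_points(center, radius, phi):
    """Return the two intersection points of a straight line through
    `center`, at angle `phi` (radians), with a circle of `radius`
    (the pupil-center-to-iris-edge distance) around it."""
    cx, cy = center
    dx, dy = math.cos(phi), math.sin(phi)
    p1 = (cx + radius * dx, cy + radius * dy)
    p2 = (cx - radius * dx, cy - radius * dy)
    return p1, p2
```

For a horizontal preset line, `phi = 0` and the two reference points lie level with the pupil center.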
Optionally, the determining, based on the coordinates of the pupil center in the pixel coordinate system, a distance between the pupil center and an iris edge in the pixel coordinate system includes:
setting a search radius range with the pupil center as the search center, and dividing the search area into N sub-regions by central angle;
searching within each sub-region at the current search radius, determining the coordinates corresponding to a plurality of central angles at the current search radius, and determining the pixel value of the pixel point corresponding to each coordinate;
determining the average of the pixel values over the plurality of central angles at the current search radius as the pixel mean corresponding to the current search radius;
determining the absolute value of the difference between the pixel mean corresponding to the current search radius and the pixel mean corresponding to the previous, adjacent search radius;
updating the current search radius to the sum of the current search radius and a set search step, and repeating the search within the sub-region at the current search radius until the updated current search radius exceeds the set search radius range;
determining, for each sub-region, the smaller of the two adjacent search radii corresponding to the largest of the absolute values obtained for that sub-region as the minimum search radius;
and determining the distance between the pupil center and the iris edge in the pixel coordinate system based on the minimum search radius corresponding to each sub-region.
Optionally, the determining, based on the minimum search radius corresponding to each sub-region, a distance between the pupil center and the iris edge in the pixel coordinate system includes:
determining the mode of the minimum search radius corresponding to each sub-region;
or determining an average of the minimum search radii corresponding to each sub-region;
or selecting, from the minimum search radii corresponding to the sub-regions, a group of candidate minimum search radii whose pairwise differences do not exceed a threshold, and determining the average of this group of candidate minimum search radii;
determining the mode, the average, or the average of the set of candidate minimum search radii as the distance between the pupil center and iris edge in the pixel coordinate system.
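The whole search procedure above can be sketched on a grayscale image array as follows. The sampling density per arc and the use of the mean (rather than the mode) to combine the per-sector radii are simplifying assumptions; the names are illustrative:

```python
import numpy as np

def iris_radius(img, center, r_min, r_max, step=1.0, n_sectors=8, n_samples=16):
    """For each angular sector, find the radius at which the arc-averaged
    pixel value changes most between adjacent radii, then combine the
    per-sector radii (here: their mean) into one edge distance."""
    cx, cy = center
    h, w = img.shape
    best = []
    for k in range(n_sectors):
        a0 = 2 * np.pi * k / n_sectors
        # sample points along the arc spanned by this sector's central angle
        angles = a0 + 2 * np.pi / n_sectors * np.linspace(0, 1, n_samples, endpoint=False)
        radii = np.arange(r_min, r_max + step, step)
        means = []
        for r in radii:
            xs = np.clip((cx + r * np.cos(angles)).astype(int), 0, w - 1)
            ys = np.clip((cy + r * np.sin(angles)).astype(int), 0, h - 1)
            means.append(img[ys, xs].mean())  # arc-accumulated pixel mean
        diffs = np.abs(np.diff(means))        # change between adjacent radii
        i = int(np.argmax(diffs))             # largest jump -> edge crossing
        best.append(radii[i])                 # smaller of the two adjacent radii
    return float(np.mean(best))
```

The `np.clip` calls simply guard against sample points falling outside the image near its borders.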
Another aspect of the present application provides a pupil positioning method, including:
acquiring an eye image acquired by a monocular camera;
determining coordinates of a first reference point and a second reference point under a pixel coordinate system, wherein the first reference point and the second reference point are two intersection points at which a preset straight line passing through the pupil center intersects with the iris edge, and the pupil center is a target point;
According to a pre-established mapping relation between coordinates of a target point in an image acquired by a monocular camera in the image and actual three-dimensional coordinates of the target point, performing three-dimensional positioning on the target point;
the mapping relation is obtained based on the monocular camera image target point three-dimensional positioning method.
Optionally, the method further comprises:
determining the radius of the pupil in the pixel coordinate system;
determining a distance between the first reference point and the second reference point in the three-dimensional coordinate system based on the mapping relation;
and determining the diameter of the pupil under the three-dimensional coordinate system based on the radius of the pupil under the pixel coordinate system, the distance between the pupil center and the iris edge under the pixel coordinate system and the distance between the first reference point and the second reference point under the three-dimensional coordinate system, and outputting the diameter.
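With the distance between the two reference points known in both pixels and millimetres, the conversion is a single scale factor. A hedged sketch — the formula is inferred from the quantities listed above, and the names are illustrative:

```python
def pupil_diameter_mm(pupil_radius_px, iris_radius_px, iris_diameter_mm):
    """Scale recovery: the two reference points span the iris diameter,
    known both in pixels (2 * iris_radius_px, the pupil-center-to-iris-edge
    distance doubled) and in mm from the three-dimensional model."""
    mm_per_px = iris_diameter_mm / (2.0 * iris_radius_px)
    return 2.0 * pupil_radius_px * mm_per_px
```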
A third aspect of the present application provides a monocular camera image target point three-dimensional positioning device, including:
the first determining module is used for acquiring a first target image acquired by the monocular camera and determining coordinates of a first reference point and a second reference point in the first target image;
The second determining module is used for acquiring a second target image acquired by the monocular camera after moving a first distance and determining coordinates of the first reference point and the second reference point in the second target image; wherein the relative positions of the first reference point and the second reference point are kept unchanged before and after the monocular camera moves;
the third determining module is used for obtaining a mapping relation between coordinates of two reference points in an image and actual three-dimensional coordinates of the reference points based on camera parameters of the monocular camera, the first distance, coordinates of the first reference point and coordinates of the second reference point in the first target image and the second target image respectively, and obtaining the three-dimensional coordinates of target point positioning according to the mapping relation; the target point is a first reference point or a second reference point or a point with a known position relation with the first reference point and the second reference point.
A fourth aspect of the present application provides a pupil positioning device, comprising:
the acquisition module is used for acquiring the eye images acquired by the monocular camera;
the coordinate determining module is used for determining coordinates of a first reference point and a second reference point under a pixel coordinate system, wherein the first reference point and the second reference point are two intersection points where a preset straight line passing through the pupil center intersects with the iris edge, and the pupil center is a target point;
The positioning module is used for carrying out three-dimensional positioning on the target point according to a pre-established mapping relation between the coordinate of the target point in the image acquired by the monocular camera and the actual three-dimensional coordinate of the target point;
the mapping relation is obtained based on the monocular camera image target point three-dimensional positioning method.
Compared with the prior art, the beneficial effects of this application are:
In the method, a first target image acquired by the monocular camera is acquired and the coordinates of the first reference point and the second reference point in it are determined; exploiting the fact that the relative position of the two reference points remains unchanged before and after the camera moves, a second target image acquired after the camera moves a first distance is acquired and the coordinates of the two reference points in it are determined. Combining the precisely known moving distance of the monocular camera (i.e., the first distance) with the camera parameters, the mapping relation between the coordinates of the two reference points in the image and their actual three-dimensional coordinates can be obtained stably and accurately, without additionally measuring other parameters and without extra optical path design or hardware; the three-dimensional coordinates of the target point positioning are then obtained from this mapping relation.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort to a person skilled in the art.
Fig. 1 is a flowchart of a method for three-dimensional positioning of a monocular camera image target point provided in embodiment 1 of the present application;
fig. 2 is a flowchart of a method for three-dimensional positioning of a monocular camera image target point provided in embodiment 2 of the present application;
fig. 3 is a flowchart of a method for three-dimensional positioning of a monocular camera image target point provided in embodiment 3 of the present application;
fig. 4 is a flowchart of a pupil positioning method provided in embodiment 4 of the present application;
fig. 5 is a flowchart of a pupil positioning method provided in embodiment 5 of the present application;
fig. 6 is a schematic structural diagram of a monocular camera image target point three-dimensional positioning device provided in the present application;
fig. 7 is a schematic logic structure diagram of a pupil positioning device provided in the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
In order that the above-recited objects, features and advantages of the present application will become more readily apparent, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments that are illustrated in the appended drawings.
As shown in fig. 1, a flowchart of a monocular camera image target point three-dimensional positioning method provided in embodiment 1 of the present application may include the following steps:
step S11, a first target image acquired by a monocular camera is acquired, and coordinates of a first reference point and a second reference point in the first target image are determined.
In this embodiment, the first reference point and the second reference point may be, but are not limited to: two pixel points with a relatively fixed position relationship in an image acquired by the monocular camera. For example, the first reference point may be the pupil center, and the second reference point is a pixel point on the first target edge. Alternatively, the first reference point and the second reference point may be: two intersection points where a preset straight line passing through the pupil center intersects with the first target edge.
The first target edge includes an iris edge or a pupil edge in a mydriatic state.
It can be appreciated that the iris size is not easily changed, and if the first target edge is an iris edge, the stability of the coordinates of the determined first reference point and the determined second reference point in the first target image can be ensured.
Of course, the pupil in the mydriatic state remains unchanged in size within a period of time, and if the first target edge is the pupil edge in the mydriatic state, the stability of the coordinates of the determined first reference point and the determined second reference point in the first target image can be ensured.
Step S12, acquiring a second target image acquired by the monocular camera after moving a first distance, and determining coordinates of the first reference point and the second reference point in the second target image; wherein the relative position of the first reference point and the second reference point is kept unchanged before and after the monocular camera moves.
Step S13, based on camera parameters of the monocular camera, the first distance, coordinates of the first reference point and the second reference point in the first target image and the second target image respectively, a mapping relation between coordinates of the two reference points in the images and actual three-dimensional coordinates of the reference points is obtained, and three-dimensional coordinates of target point positioning are obtained according to the mapping relation.
The target point may be a first reference point or a second reference point, or a point having a known positional relationship with the first reference point and the second reference point.
In the implementation where the target point is the first reference point, obtaining the three-dimensional coordinates of the target point positioning may include, but is not limited to:
and obtaining the three-dimensional coordinate of the first reference point positioning based on the coordinate of the first reference point in the image and the mapping relation.
In the implementation where the target point is the second reference point, obtaining the three-dimensional coordinates of the target point positioning may include, but is not limited to:
and obtaining the three-dimensional coordinates of the second reference point positioning based on the coordinates of the second reference point in the image and the mapping relation.
In the implementation where the target point is a point with a known positional relation to the first reference point and the second reference point, obtaining the three-dimensional coordinates of the target point positioning according to the mapping relation may include, but is not limited to:
determining three-dimensional coordinates of the first reference point and the second reference point based on the coordinates of the first reference point and the second reference point in the image and the mapping relation;
and obtaining the three-dimensional coordinates of the target point positioning based on the three-dimensional coordinates of the first reference point and the second reference point and the known position relation between the target point and the first reference point and the known position relation between the target point and the second reference point.
For example, if the first reference point and the second reference point are two points of intersection where a preset straight line passing through the pupil center intersects the iris edge, the point having a known positional relationship with the first reference point and the second reference point may be the pupil center, and the known positional relationship may be approximated as: the pupil center is located at the midpoint of the two intersection points.
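Under this approximation the pupil center is simply the midpoint of the two intersection points, in pixel or three-dimensional coordinates alike:

```python
def midpoint(p1, p2):
    """Approximate the pupil center as the midpoint of the two
    iris-edge intersection points (works for 2-D or 3-D tuples)."""
    return tuple((a + b) / 2.0 for a, b in zip(p1, p2))
```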
According to this embodiment, the three-dimensional position of the pupil can be located without any additional optical path design or hardware, and the located position can be used by a vision measuring instrument to aim at the pupil region and perform the corresponding measurement.
Optional embodiment 2 of the present application mainly refines the method described in the foregoing embodiment 1; as shown in fig. 2, it may include, but is not limited to, the following steps:
step S21, a first target image acquired by a monocular camera is acquired, and coordinates of a first reference point and a second reference point in the first target image are determined.
Step S22, acquiring a second target image acquired by the monocular camera after moving a first distance, and determining coordinates of the first reference point and the second reference point in the second target image.
Wherein the relative position of the first reference point and the second reference point is kept unchanged before and after the monocular camera moves.
The detailed procedure of steps S21-S22 can be referred to in the related description of steps S11-S12 in embodiment 1, and will not be described herein.
Step S23, a three-dimensional model is built based on camera parameters of the monocular camera, coordinates of a first reference point and a second reference point in an image, an unknown actual distance between the first reference point and the second reference point, and unknown actual three-dimensional coordinates of the first reference point and the second reference point.
In this embodiment, multiple frames of images acquired by the monocular camera may be obtained and grouped into pairs; the distance between the first reference point and the second reference point is determined for each pair according to the above steps, and the unknown actual distance between the two reference points is then determined more accurately from the multiple groups of results. For example, the distances obtained from the groups are averaged, and the averaged result is used as the unknown actual distance between the first reference point and the second reference point.
The unknown actual three-dimensional coordinates of the first reference point and the second reference point can be understood as their unknown three-dimensional coordinates in the three-dimensional coordinate system.
In this embodiment, when the straight line where the first reference point and the second reference point are located is parallel to the monocular camera imaging plane, this step may include, but is not limited to:
s231, taking a monocular camera as an origin of the three-dimensional coordinate system, and establishing a mapping formula between the monocular camera from a pixel coordinate system to the three-dimensional coordinate system as follows
Order the
Wherein, (u,v) is the pixel coordinate of the point in the image acquired by the monocular camera, (X, Y, Z) is the three-dimensional coordinate of the corresponding point in the three-dimensional coordinate system, Z c Is a scale factor, R is a rotation matrix, T is a translation matrix, R, T is an external parameter of camera parameters, in this embodiment, the camera does not perform rotation translation transformation, thereforeThe camera matrix represents an internal reference matrix among camera parameters of the monocular camera.
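The mapping formula itself is an image in the original; the standard pinhole form it describes (a hedged reconstruction) is:

```latex
% Standard pinhole mapping; with no rotation-translation it reduces to the intrinsic matrix.
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
  = \texttt{cameraMatrix}\,[\,R \mid T\,]
    \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}
  \;\xrightarrow{\;R = I,\; T = 0\;}\;
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
  = \begin{bmatrix} m_{11} & 0 & m_{13} \\ 0 & m_{22} & m_{23} \\ 0 & 0 & m_{33} \end{bmatrix}
    \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}
```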
The monocular camera in this embodiment is a calibrated monocular camera, and its intrinsic matrix is obtained during the calibration process. The process of calibrating the monocular camera may include, but is not limited to:
1: the preparation stage: a checkerboard image is prepared, and the number of black and white crossing points in the transverse and vertical directions is preferably different. The checkerboard image can be placed standing and can be adhered to a piece of steel plate or hard paperboard.
2: the prepared checkerboard is placed in front of the assembled monocular camera, the monocular camera can shoot the whole content of the checkerboard, and the position or the posture of the checkerboard is adjusted every time an image is shot, so that N Zhang Qipan-grid images are shot.
3: for each checkerboard image, using a corner detection algorithm to detect the corner coordinates of black and white cross points in the image, the corner detection can be performed by using a findchessbard filters algorithm, and thus a set of corner coordinates is obtained. Because one checkerboard image corresponds to one pose of the checkerboard under a real world coordinate system, three-dimensional coordinates (x, y, z) corresponding to corner points can be obtained at the same time, the three-dimensional coordinates of the corner point at the upper left corner are set to be (0, 0) from the corner point at the upper left corner, and the three-dimensional coordinates of adjacent corner points are amplified to be side, namely the side length side of each square in the checkerboard is in mm. Taking two adjacent crossing points on the left and right as an example, the coordinates are (x, y, z) and (x+side, y, z) respectively. To facilitate the subsequent calibration process, z is set to 0.
4: and 3, establishing a one-to-one correspondence between the checkerboard image and corresponding points on the real checkerboard. The point pairs with corresponding relation can be processed by using a calibration function calibretecamera, and an internal reference matrix cameraMatrix of the monocular camera can be calculated. The internal matrix camera matrix is a 3*3 size matrix.
S232: set the unknown actual distance between the first reference point and the second reference point to C, and expand the mapping formula to obtain Formula 1,
where (u_1, v_1) are the pixel coordinates of the first reference point in the image acquired by the monocular camera, and (u_2, v_2) are the pixel coordinates of the second reference point in the image acquired by the monocular camera.
S233: simplify Formula 1 to obtain a three-dimensional model,
Wherein each coordinate axis of the three-dimensional model is consistent with the world coordinate system calibrated by the monocular camera; (X, Y, Z) are the three-dimensional coordinates of the first reference point or the second reference point in the three-dimensional coordinate system; m_11, m_13, m_22, m_23 and m_33 are parameters of the model; C represents the actual distance between the first reference point and the second reference point; and θ is the included angle between the X-axis of the three-dimensional model and the straight line on which the first reference point and the second reference point lie. If the first reference point and the second reference point are two points in the horizontal direction, θ is 0 and v_1 equals v_2, and the formula can be further simplified as:
the world coordinate system calibrated by the monocular camera can be referred to the related actual coordinate system in the prior art, and will not be described herein.
Step S24: substitute the first distance and the coordinates of the first reference point and the second reference point in the first target image and the second target image into the three-dimensional model and solve it, to obtain the actual three-dimensional coordinates of the first reference point or the second reference point as the three-dimensional coordinates located for the target point; or obtain the three-dimensional coordinates located for the target point from the actual three-dimensional coordinates of the first reference point and the second reference point together with the positional relationships between the target point and the two reference points.
It can be understood that substituting the first distance and the coordinates of the first reference point and the second reference point in the first target image and the second target image into the three-dimensional model and solving it means the following:
first, substitute the first distance and the coordinates of the first reference point and the second reference point in the two target images into the three-dimensional model and solve for the value of C; then, with the value of C known, substitute the coordinates of the first reference point and the second reference point in the first target image and the second target image into the three-dimensional model again and solve for the actual three-dimensional coordinates of the first reference point or the second reference point.
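Because the published formulas in this excerpt are rendered as images that are not reproduced here, the following Python sketch only illustrates one way such a system can be solved under a standard pinhole model, assuming the camera moves the first distance d straight toward the target between the two shots and the two reference points are horizontally aligned (θ = 0). The function name and the model details are illustrative assumptions, not the patent's exact formula:

```python
def locate_reference_points(K, d, pts_img1, pts_img2):
    """Solve the depth and 3-D coordinates of the first reference point.

    K: 3x3 intrinsic matrix [[fx, 0, cx], [0, fy, cy], [0, 0, 1]];
    d: camera displacement toward the target between the two shots;
    pts_img1 / pts_img2: ((u1, v1), (u2, v2)) pixel coordinates of the two
    reference points in the first and second target images.
    """
    fx, cx = K[0][0], K[0][2]
    fy, cy = K[1][1], K[1][2]
    # pixel separation of the two reference points in each image
    p1 = abs(pts_img1[1][0] - pts_img1[0][0])
    p2 = abs(pts_img2[1][0] - pts_img2[0][0])
    # pinhole model: p = fx * C / Z, hence p1 * Z1 = p2 * (Z1 - d) = fx * C
    Z1 = d * p2 / (p2 - p1)      # depth of the reference line at the first shot
    C = p1 * Z1 / fx             # actual distance between the two reference points
    u1, v1 = pts_img1[0]
    X = (u1 - cx) * Z1 / fx      # back-project the first reference point
    Y = (v1 - cy) * Z1 / fy
    return (X, Y, Z1), C
```

Moving the camera closer makes the pixel separation grow, and the known displacement d fixes the absolute scale, which is what lets a single camera recover depth.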
The embodiment achieves the purpose of positioning the three-dimensional position of the target point on the basis of not adding additional light path design and hardware equipment.
As another optional embodiment 3 of the present application, which is mainly a refinement of the method described in the foregoing embodiment 1, as shown in fig. 3, the method may include, but is not limited to, the following steps:
step S31, a first target image acquired by a monocular camera is acquired, and the coordinates of the pupil center in the first target image under a pixel coordinate system are determined based on the first target image.
And S32, determining the distance between the pupil center and the iris edge in the first target image under the pixel coordinate system based on the coordinates of the pupil center in the first target image under the pixel coordinate system.
This step may include, but is not limited to:
s321, setting a search radius range by taking the pupil center in the first target image as a search center, and dividing a search area into N sub-areas according to a central angle.
Specifically, the search radius range may be set according to the object. For example, given that the human iris diameter is about 11.4 mm and the pupil diameter is normally in the range of 2.5-5 mm, the search radius range may be set to [2R, 4.5R] with a search step of 1 pixel, where R is the pupil radius, so as to ensure that the iris edge falls within the search. The search radius range may also be set according to practical situations in which the pupil diameter exceeds the normal range, such as a mydriatic state or intense light stimulation.
In this embodiment, each sub-region is a sector spanning 360/N degrees. The size of N is not limited in this application; specifically, N may be, but is not limited to, 10.
S322, searching in the subarea according to the current searching radius, determining coordinates corresponding to a plurality of central angle angles under the current searching radius, and determining a pixel value of a pixel point corresponding to each coordinate.
In this embodiment, the coordinates corresponding to the central angle under the current search radius may be determined by searching in the sub-area with the current search radius according to the following formula:
x = x_pupil + r·cos θ, y = y_pupil + r·sin θ

wherein θ is the central angle, r is the current search radius, (x_pupil, y_pupil) are the coordinates of the pupil center in the pixel coordinate system, and (x, y) are the coordinates corresponding to the central angle θ at the current search radius.
In this embodiment, the difference between every two adjacent central angles among the plurality of central angles may be 2 degrees, which avoids the coincident coordinates that can occur when sampling every 1 degree. For example, when each sub-region is a 36-degree sector, the coordinates corresponding to 18 central angles at the current search radius are determined.
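For illustration, the sampling of step S322 can be sketched as follows in Python, assuming 2-degree angular steps within one sector (the function name and defaults are illustrative):

```python
import math

def sector_sample_points(center, r, sector_start_deg, sector_deg=36, step_deg=2):
    """Pixel coordinates on the arc of radius r inside one sector.

    center: (x_pupil, y_pupil); angles are sampled every step_deg degrees
    starting at sector_start_deg, giving sector_deg / step_deg points.
    """
    x0, y0 = center
    pts = []
    a = sector_start_deg
    while a < sector_start_deg + sector_deg:
        theta = math.radians(a)
        # x = x_pupil + r*cos(theta), y = y_pupil + r*sin(theta)
        pts.append((x0 + r * math.cos(theta), y0 + r * math.sin(theta)))
        a += step_deg
    return pts
```

With a 36-degree sector and 2-degree steps this yields exactly the 18 sample coordinates mentioned in the text.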
S323, determining the average value of the pixel values of a plurality of central angle angles under the current searching radius as the pixel average value corresponding to the current searching radius.
For example, following the case in step S322 where each sub-region is a 36-degree sector and the coordinates corresponding to 18 central angles at the current search radius have been determined, this step may obtain the pixel mean corresponding to the current search radius through the following formula:
P̄(r_j) = (1/18) · Σ_{i=1}^{18} P_i

wherein P̄(r_j) represents the pixel mean value corresponding to the current search radius, P_i represents the pixel value at the i-th central angle, and r_j is the current search radius.
S324: determine the absolute value of the difference between the pixel mean corresponding to the current search radius and the pixel mean corresponding to the previous, adjacent search radius.
In this embodiment, the absolute value of the difference value of the pixel mean value corresponding to the current search radius and the pixel mean value corresponding to the previous search radius adjacent to the current search radius may be obtained by the following formula:
D_j = |P̄(r_j) − P̄(r_{j−1})|

wherein |·| represents the absolute value, P̄(r_j) represents the pixel mean value corresponding to the current search radius, and P̄(r_{j−1}) represents the pixel mean value corresponding to the previous search radius adjacent to the current search radius.
And S325, updating the current search radius to be the sum of the current search radius and a set search step length, and repeating the step of searching in the subarea with the current search radius until the updated current search radius exceeds the set search radius range.
S326: determine the two adjacent search radii corresponding to the maximum absolute value among the absolute values obtained for each sub-region. In this embodiment, for convenience of processing, the smaller of the two adjacent search radii is used in the subsequent steps; alternatively, the larger of the two, or their average, may be used, with step S327 adjusted accordingly, which is not described again here.
S327, determining the distance between the pupil center and the iris edge in the first target image under the pixel coordinate system based on the minimum search radius corresponding to each sub-region.
By executing steps S321-S327, the pixels of a plurality of points on the arc segment corresponding to each search radius are aggregated within each sub-region, and the results of the sub-regions are considered together, so that the gradient change of the pixels is represented more accurately.
This step may include, but is not limited to:
s3271, determining the mode of the minimum search radius corresponding to each sub-region;
or, S3272, determining an average number of the minimum search radii corresponding to each sub-region;
or, S3273: select, from the minimum search radii corresponding to the sub-regions, a group of candidate minimum search radii whose pairwise differences do not exceed a threshold, and determine the average of this group of candidate minimum search radii;
S3274, determining the mode, the average, or the average of the set of candidate minimum search radii as a distance between a pupil center and an iris edge in the first target image under the pixel coordinate system.
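The whole search of steps S321-S327 can be sketched as follows, assuming a grayscale image indexed as image[y][x], 2-degree angular sampling, and the average of the per-sector edge radii as the final distance (one of the options of S327); the names and defaults are illustrative:

```python
import math

def iris_edge_distance(image, pupil_center, r_min, r_max, step=1, n_sectors=10):
    """Radial search for the pupil-center-to-iris-edge distance.

    For each sector, sample an arc at every search radius, average the
    sampled pixel values, and take the radius where the ring mean jumps
    most (the pupil-to-iris brightness transition).
    """
    x0, y0 = pupil_center
    sector_deg = 360 / n_sectors
    radii = []
    for s in range(n_sectors):
        means, rs = [], []
        r = r_min
        while r <= r_max:
            vals = []
            a = s * sector_deg
            while a < (s + 1) * sector_deg:
                t = math.radians(a)
                x = int(round(x0 + r * math.cos(t)))
                y = int(round(y0 + r * math.sin(t)))
                vals.append(image[y][x])
                a += 2
            means.append(sum(vals) / len(vals))
            rs.append(r)
            r += step
        # absolute difference between adjacent ring means; take its maximum
        diffs = [abs(means[i] - means[i - 1]) for i in range(1, len(means))]
        k = diffs.index(max(diffs))
        radii.append(rs[k])          # the smaller of the two adjacent radii
    return sum(radii) / len(radii)   # average over the sub-regions (S3272)
```

On a synthetic dark disk of radius 22 on a bright background, the returned distance lands at the transition, illustrating why averaging over sectors tolerates local noise on the iris boundary.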
And step S33, determining coordinates of a first reference point and a second reference point in the first target image based on the coordinates of the pupil center in the first target image under a pixel coordinate system, the distance between the pupil center and the iris edge in the first target image under the pixel coordinate system and the direction of a preset straight line, wherein the first reference point and the second reference point are two intersection points where the preset straight line passing through the pupil center intersects with the iris edge.
This step may include, but is not limited to:
determining coordinates of a first reference point and a second reference point in the first target image based on the direction of a preset straight line through the following formula:
where (x_A, y_A) represents the coordinates of the first reference point in the first target image, (x_B, y_B) represents the coordinates of the second reference point in the first target image, (x_pupil, y_pupil) represents the coordinates of the pupil center in the first target image in the pixel coordinate system, and R represents the distance between the pupil center and the iris edge in the first target image in the pixel coordinate system.
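For illustration, the computation of the two reference points can be sketched as follows in Python; the angle parameter for the preset line's direction is an assumption added for the general case, and 0 degrees reproduces the horizontal case discussed in the text:

```python
import math

def reference_points(pupil_center, R, line_deg=0.0):
    """Two intersections of a line through the pupil center with the iris edge.

    R: pupil-center-to-iris-edge distance in pixels; line_deg: direction of
    the preset straight line (0 = horizontal).
    """
    x0, y0 = pupil_center
    t = math.radians(line_deg)
    a = (x0 - R * math.cos(t), y0 - R * math.sin(t))  # first reference point
    b = (x0 + R * math.cos(t), y0 + R * math.sin(t))  # second reference point
    return a, b
```

By construction the pupil center is the midpoint of the two returned points, which is the positional relationship used later to locate the pupil center in three dimensions.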
Step S34, a second target image acquired by the monocular camera after moving a first distance is acquired, and the coordinates of the pupil center in the second target image under a pixel coordinate system are determined based on the second target image.
And step S35, determining the distance between the pupil center and the iris edge in the second target image under the pixel coordinate system based on the coordinates of the pupil center in the second target image under the pixel coordinate system.
The detailed process of this step may refer to the related description of step S32, which is not described herein.
And S36, determining the coordinates of the first reference point and the second reference point in the second target image based on the coordinates of the pupil center in the second target image in the pixel coordinate system, the distance between the pupil center and the iris edge in the second target image in the pixel coordinate system and the direction of a preset straight line.
Step S37: obtain, based on the camera parameters of the monocular camera, the first distance, and the coordinates of the first reference point and the second reference point in the first target image and the second target image respectively, a mapping relationship between the coordinates of the two reference points in the image and their actual three-dimensional coordinates; obtain the three-dimensional coordinates of the first reference point and the second reference point according to the mapping relationship; and obtain the three-dimensional coordinates located for the pupil center according to the positional relationship that the pupil center is the midpoint of the first reference point and the second reference point.
On the basis of obtaining the three-dimensional coordinates of the pupil center positioning, the three-dimensional coordinates of the pupil center positioning and the three-dimensional coordinates of the monocular camera can be compared to determine the relative position between the pupil center and the monocular camera.
Determining the relative position between the pupil center and the monocular camera may include, but is not limited to: a relative movement direction and a relative movement distance between the pupil center and the monocular camera are determined.
The detailed process of step S37 can be referred to the related description of step S13 in embodiment 1, and will not be repeated here.
The embodiment achieves the purpose of positioning the three-dimensional position of the pupil center on the basis of not adding additional light path design and hardware equipment.
In another embodiment of the present application, as shown in fig. 4, which is a flowchart of the pupil positioning method provided in embodiment 4 of the present application, the method may include the following steps:
and S41, acquiring an eye image acquired by the monocular camera.
Step S42, determining coordinates of a first reference point and a second reference point in a pixel coordinate system, wherein the first reference point and the second reference point are two intersection points where a preset straight line passing through the pupil center intersects with the iris edge, and the pupil center is a target point.
This step may include, but is not limited to:
s421, determining the coordinates of the pupil center under the pixel coordinate system from the eye image.
S422, determining the distance between the pupil center and the iris edge under the pixel coordinate system based on the coordinates of the pupil center under the pixel coordinate system.
The detailed process of this step can be referred to the related description of step S32 in embodiment 3, and will not be described here.
S423, determining coordinates of a first reference point and a second reference point under a pixel coordinate system based on the coordinates of the pupil center under the pixel coordinate system and the distance between the pupil center and the iris edge under the pixel coordinate system.
And S43, carrying out three-dimensional positioning on the target point according to a pre-established mapping relation between coordinates of the target point in the image acquired by the monocular camera and actual three-dimensional coordinates of the target point.
The mapping relation between the coordinates of the target point in the image acquired by the monocular camera and the actual three-dimensional coordinates of the target point can include, but is not limited to:
wherein each coordinate axis of the three-dimensional model is consistent with the world coordinate system calibrated by the monocular camera; (X, Y, Z) are the three-dimensional coordinates of the first reference point or the second reference point in the three-dimensional coordinate system; m_11, m_13, m_22, m_23 and m_33 are parameters of the model; C represents the actual distance between the first reference point and the second reference point; and θ is the included angle between the X-axis of the three-dimensional model and the straight line on which the first reference point and the second reference point lie.
This step may include, but is not limited to:
s431, inputting coordinates of a first reference point and a second reference point in a pixel coordinate system into the three-dimensional model to obtain three-dimensional coordinates of the first reference point or the second reference point in the three-dimensional coordinate system;
s432, performing three-dimensional positioning on the target point based on the position relation between the first reference point or the second reference point and the target point and the three-dimensional coordinates of the first reference point or the second reference point in a three-dimensional coordinate system.
The mapping relation is obtained based on the monocular camera image target point three-dimensional positioning method as described in any one of embodiments 1-3.
The embodiment achieves the purpose of three-dimensional positioning of the pupil center on the basis of no need of adding additional light path design and hardware equipment.
As another optional embodiment 5 of the present application, which is mainly an expansion of the pupil positioning method described in the foregoing embodiment 4, as shown in fig. 5, the method may include, but is not limited to, the following steps:
Step S51, acquiring an eye image acquired by a monocular camera;
step S52, determining coordinates of a first reference point and a second reference point in a pixel coordinate system, wherein the first reference point and the second reference point are two intersection points where a preset straight line passing through the pupil center intersects with the iris edge, and the pupil center is a target point;
step S53, three-dimensional positioning is carried out on the target point according to a pre-established mapping relation between coordinates of the target point in an image acquired by the monocular camera and actual three-dimensional coordinates of the target point;
the mapping relation is obtained based on the monocular camera image target point three-dimensional positioning method according to any one of the embodiments 1-3.
The detailed procedure of steps S51-S53 can be referred to the related description of steps S41-S43 in embodiment 4, and will not be repeated here.
Step S54: determine the radius, in the pixel coordinate system, of the pupil to which the pupil center belongs.
Step S55, determining a distance between the first reference point and the second reference point in the three-dimensional coordinate system based on the mapping relation.
Step S56, determining a diameter of the pupil in the three-dimensional coordinate system based on a radius of the pupil in the pixel coordinate system, a distance between the pupil center and the iris edge in the pixel coordinate system, and a distance between the first reference point and the second reference point in the three-dimensional coordinate system, and outputting the diameter.
This step may include, but is not limited to:
the diameter of the pupil in the three-dimensional coordinate system is determined by the following formula:
C_pupil = 2r · C / (2R_iris) = r · C / R_iris

where r represents the radius, in the pixel coordinate system, of the pupil to which the pupil center belongs; R_iris represents the distance between the pupil center and the iris edge in the pixel coordinate system; C represents the distance between the first reference point and the second reference point in the three-dimensional coordinate system; and C_pupil represents the diameter of the pupil in the three-dimensional coordinate system.
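For illustration, the pupil-diameter computation of step S56 can be sketched as follows, assuming the pixel-to-real-unit scale is C divided by twice the pupil-center-to-iris-edge distance (the function name is illustrative):

```python
def pupil_diameter_3d(r_px, r_iris_px, c_real):
    """Pupil diameter in the three-dimensional coordinate system.

    r_px: pupil radius in pixels; r_iris_px: pupil-center-to-iris-edge
    distance in pixels; c_real: distance between the two reference points
    in the three-dimensional coordinate system (the real length that
    corresponds to 2 * r_iris_px pixels).
    """
    scale = c_real / (2.0 * r_iris_px)  # real units per pixel at the eye's depth
    return 2.0 * r_px * scale           # C_pupil = r * C / R_iris
```

For example, a 30-pixel pupil radius with a 100-pixel iris radius and an 11.4 mm reference-point distance yields a pupil diameter of 3.42 mm, inside the normal 2.5-5 mm range cited earlier.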
By determining the radius of the pupil in the pixel coordinate system, determining the distance between the first reference point and the second reference point in the three-dimensional coordinate system based on the mapping relationship, determining the diameter of the pupil in the three-dimensional coordinate system from these quantities together with the distance between the pupil center and the iris edge in the pixel coordinate system, and outputting that diameter, real-time output of the pupil size is realized.
Next, the monocular camera image target point three-dimensional positioning device provided by the present application is described; the device described below and the monocular camera image target point three-dimensional positioning method described above may be referred to in correspondence with each other.
Referring to fig. 6, the monocular camera image target point three-dimensional positioning device includes: the first determination module 100, the second determination module 200, and the third determination module 300.
A first determining module 100, configured to acquire a first target image acquired by a monocular camera, and determine coordinates of a first reference point and a second reference point in the first target image;
a second determining module 200, configured to acquire a second target image acquired by the monocular camera after moving a first distance, and determine coordinates of the first reference point and the second reference point in the second target image; wherein the relative positions of the first reference point and the second reference point are kept unchanged before and after the monocular camera moves;
a third determining module 300, configured to obtain a mapping relationship between coordinates of two reference points in an image and actual three-dimensional coordinates of the reference points based on camera parameters of the monocular camera, the first distance, coordinates of the first reference point and coordinates of the second reference point in the first target image and coordinates of the second target image, and obtain three-dimensional coordinates of the target point location according to the mapping relationship; the target point is a first reference point or a second reference point or a point with a known position relation with the first reference point and the second reference point.
In this embodiment, the third determining module may specifically be configured to:
establishing a three-dimensional model based on camera parameters of the monocular camera, coordinates of a first reference point and a second reference point in an image, an actual distance between an unknown first reference point and the second reference point, and actual three-dimensional coordinates of the unknown first reference point and the second reference point;
substituting the first distance and the coordinates of the first reference point and the second reference point in the first target image and the second target image into the three-dimensional model and solving it, to obtain the actual three-dimensional coordinates of the first reference point or the second reference point as the three-dimensional coordinates located for the target point; or obtaining the three-dimensional coordinates located for the target point from the actual three-dimensional coordinates of the first reference point and the second reference point together with the positional relationships between the target point and the two reference points.
In this embodiment, when the straight line where the first reference point and the second reference point are located is parallel to the monocular camera imaging plane, the three-dimensional model may be set up as follows:
wherein each coordinate axis of the three-dimensional model is consistent with the world coordinate system calibrated by the monocular camera; (X, Y, Z) are the three-dimensional coordinates of the first reference point or the second reference point in the three-dimensional coordinate system; (u_1, v_1) are the pixel coordinates of the first reference point in the image acquired by the monocular camera; (u_2, v_2) are the pixel coordinates of the second reference point in the image acquired by the monocular camera; m_11, m_13, m_22, m_23 and m_33 are parameters of the model; C represents the actual distance between the first reference point and the second reference point; and θ is the included angle between the X-axis of the three-dimensional model and the straight line on which the first reference point and the second reference point lie.
In this embodiment, when the target point is a pupil center, and the first reference point and the second reference point are two intersection points where a preset straight line passing through the pupil center intersects with an iris edge, the first determining module may be specifically configured to:
determining coordinates of a pupil center in the first target image under a pixel coordinate system based on the first target image;
determining a distance between the pupil center and an iris edge in the pixel coordinate system based on coordinates of the pupil center in the pixel coordinate system;
and determining the coordinates of a first reference point and a second reference point in the first target image based on the coordinates of the pupil center in a pixel coordinate system, the distance between the pupil center and the iris edge in the pixel coordinate system and the direction of a preset straight line.
In this embodiment, when the target point is the pupil center, and the first reference point and the second reference point are two intersection points where a preset straight line passing through the pupil center intersects with the iris edge, the first determining module may be specifically configured to:
determining coordinates of the pupil center in the second target image under a pixel coordinate system based on the second target image;
determining a distance between the pupil center and an iris edge in the pixel coordinate system based on coordinates of the pupil center in the pixel coordinate system;
and determining the coordinates of a first reference point and a second reference point in the second target image based on the coordinates of the pupil center in a pixel coordinate system, the distance between the pupil center and the iris edge in the pixel coordinate system and the direction of a preset straight line.
In this embodiment, the process of determining the distance between the pupil center and the iris edge in the pixel coordinate system based on the coordinates of the pupil center in the pixel coordinate system may specifically include:
setting a searching radius range by taking the pupil center as a searching center, and dividing a searching area into N sub-areas according to a central angle;
Searching in the subarea according to a current searching radius, determining coordinates corresponding to a plurality of central angle angles under the current searching radius, and determining a pixel value of a pixel point corresponding to each coordinate;
determining the average value of pixel values of a plurality of central angle angles under the current searching radius as the pixel average value corresponding to the current searching radius;
determining an absolute value of a difference value of a pixel mean value corresponding to the current search radius and a pixel mean value corresponding to a previous search radius adjacent to the current search radius;
updating the current searching radius to be the sum of the current searching radius and a set searching step length, and repeatedly executing the step of searching in the subarea with the current searching radius until the updated current searching radius exceeds the set searching radius range;
determining the minimum search radius of two adjacent search radii corresponding to the maximum absolute value in the absolute values corresponding to the subareas;
and determining the distance between the pupil center and the iris edge under the pixel coordinate system based on the minimum search radius corresponding to each sub-region.
The determining the distance between the pupil center and the iris edge in the pixel coordinate system based on the minimum search radius corresponding to each sub-region specifically may include:
Determining the mode of the minimum search radius corresponding to each sub-region;
or determining an average of the minimum search radii corresponding to each sub-region;
or selecting, from the minimum search radii corresponding to the sub-regions, a group of candidate minimum search radii whose pairwise differences do not exceed a threshold, and determining the average of this group of candidate minimum search radii;
determining the mode, the average, or the average of the set of candidate minimum search radii as the distance between the pupil center and iris edge in the pixel coordinate system.
Next, the pupil positioning device provided in the present application will be described, and the pupil positioning device described below and the pupil positioning method described above may be referred to correspondingly.
Referring to fig. 7, the pupil positioning device includes: an acquisition module 400, a coordinate determination module 500, and a positioning module 600.
An acquisition module 400, configured to acquire an eye image acquired by a monocular camera;
the coordinate determining module 500 is configured to determine coordinates of a first reference point and a second reference point in a pixel coordinate system, where the first reference point and the second reference point are two intersection points where a preset straight line passing through a pupil center intersects an iris edge, and the pupil center is a target point;
The positioning module 600 is configured to perform three-dimensional positioning on the target point according to a pre-established mapping relationship between the coordinates of the target point in an image acquired by the monocular camera and the actual three-dimensional coordinates of the target point;
the mapping relationship is obtained based on the monocular camera image target point three-dimensional positioning method as described in embodiment 4 or 5.
In this embodiment, the pupil positioning device may further include:
a fourth determining module, configured to determine a radius of a pupil to which the pupil center belongs in the pixel coordinate system;
a fifth determining module, configured to determine a distance between the first reference point and the second reference point in the three-dimensional coordinate system based on the mapping relationship;
a sixth determining module, configured to determine a diameter of the pupil in the three-dimensional coordinate system based on a radius of the pupil in the pixel coordinate system, a distance between the center of the pupil and an iris edge in the pixel coordinate system, and a distance between the first reference point and the second reference point in the three-dimensional coordinate system;
and the output module is used for outputting the diameter.
It should be noted that each embodiment is described with emphasis on its differences from the other embodiments, and the same or similar parts of the embodiments may be referred to in each other. The apparatus embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant points, reference is made to the description of the method embodiments.
Finally, it is further noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
For convenience of description, the above devices are described as being divided into various units by function. Of course, when the present application is implemented, the functions of the units may be implemented in one or more pieces of software and/or hardware.
From the above description of the embodiments, it will be apparent to those skilled in the art that the present application may be implemented by software plus a necessary general-purpose hardware platform. Based on such understanding, the technical solutions of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product, which may be stored in a storage medium such as a ROM/RAM, a magnetic disk, or an optical disk, and which includes several instructions causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform the methods described in the embodiments, or in some parts of the embodiments, of the present application.
The monocular camera image target point three-dimensional positioning method, pupil positioning method, and device provided in the present application have been described in detail above, and specific examples have been applied herein to illustrate the principles and embodiments of the present application; the descriptions of these examples are only intended to help understand the method and core ideas of the present application. Meanwhile, since those skilled in the art may make modifications to the specific embodiments and the scope of application in accordance with the ideas of the present application, this description should not be construed as limiting the present application.

Claims (10)

1. A method for three-dimensionally locating an image target point of a monocular camera, comprising:
acquiring a first target image acquired by a monocular camera, and determining coordinates of a first reference point and a second reference point in the first target image;
acquiring a second target image acquired by the monocular camera after moving a first distance, and determining coordinates of the first reference point and the second reference point in the second target image; wherein the relative positions of the first reference point and the second reference point are kept unchanged before and after the monocular camera moves;
establishing, based on camera parameters of the monocular camera, a mapping formula from the pixel coordinate system of the monocular camera to a three-dimensional coordinate system, and establishing a three-dimensional model for the case in which the straight line on which the first reference point and the second reference point lie is parallel to the imaging plane of the monocular camera:
wherein each coordinate axis of the three-dimensional model is consistent with the world coordinate system calibrated by the monocular camera; (X, Y, Z) are the three-dimensional coordinates of the first reference point or the second reference point in the three-dimensional coordinate system; (u1, v1) are the pixel coordinates of the first reference point in the image acquired by the monocular camera; (u2, v2) are the pixel coordinates of the second reference point in the image acquired by the monocular camera; m11, m13, m22, m23, m33 are camera parameters of the monocular camera; C represents the actual distance between the first reference point and the second reference point; and θ is the included angle between the X-axis of the three-dimensional model and the straight line on which the first reference point and the second reference point lie;
substituting the first distance and the coordinates of the first reference point and the second reference point in the first target image and the second target image into the three-dimensional model to solve for the three-dimensional coordinates of the target point; wherein the target point is the first reference point, the second reference point, or a point having a known positional relationship with the first reference point and the second reference point.
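Claim 1's recovery of depth from two images separated by a known camera displacement can be sketched for the simplest case, in which the camera moves along its optical axis and the reference segment stays parallel to the imaging plane. This is an illustrative simplification (function name, focal length, and values are hypothetical); the claimed model additionally carries the full parameter set m11, m13, m22, m23, m33 and the angle θ:

```python
import numpy as np

def depth_from_axial_motion(p1_img1, p2_img1, p1_img2, p2_img2, move_dist, fx):
    """Estimate the depth of a segment of fixed but unknown length C from two
    images taken before and after the camera moves move_dist along its optical
    axis. Under a pinhole model, a segment parallel to the image plane has
    pixel length s = fx * C / Z, so measuring s at two depths separated by
    move_dist lets us solve for Z (and then C) without knowing C in advance.
    """
    s1 = np.hypot(*(np.subtract(p2_img1, p1_img1)))  # pixel length in image 1
    s2 = np.hypot(*(np.subtract(p2_img2, p1_img2)))  # pixel length in image 2
    # s1 = fx*C/Z and s2 = fx*C/(Z + move_dist)  =>  Z = move_dist*s2/(s1 - s2)
    z1 = move_dist * s2 / (s1 - s2)
    c = s1 * z1 / fx  # recovered actual segment length
    return z1, c
```

For example, a 12 mm segment imaged at 0.4 m with fx = 1000 spans 30 px; after the camera retreats 0.1 m it spans 24 px, and the sketch recovers both the 0.4 m depth and the 12 mm length.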
2. The method according to claim 1, wherein solving for the three-dimensional coordinates of the target point specifically comprises solving for the actual three-dimensional coordinates of the first reference point or the second reference point as the three-dimensional coordinates of the target point, or further obtaining the three-dimensional coordinates of the target point from the actual three-dimensional coordinates of the first reference point and the second reference point together with the positional relationships between the target point and the first reference point and the second reference point.
3. The method according to claim 2, wherein the actual distance between the first reference point and the second reference point is unknown, a relational expression exists between the actual three-dimensional coordinates of the first reference point and the second reference point, the relational expression is established based on a projection relationship of the actual distance, and the relational expression is substituted into the mapping formula to establish the three-dimensional model.
4. The method according to claim 1, wherein when the target point is a pupil center and the first reference point and the second reference point are two intersection points where a preset straight line passing through the pupil center intersects with an iris edge, the determining coordinates of the first reference point and the second reference point in the first target image or the second target image includes:
determining coordinates of pupil centers in the first target image and the second target image under a pixel coordinate system based on the first target image or the second target image;
determining a distance between the pupil center and an iris edge in the pixel coordinate system based on coordinates of the pupil center in the pixel coordinate system;
and determining coordinates of a first reference point and a second reference point in the first target image or the second target image based on the coordinates of the pupil center under a pixel coordinate system, the distance between the pupil center and the iris edge under the pixel coordinate system and the direction of a preset straight line.
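The last step of claim 4 is simple plane geometry: offset the pupil center along the preset line's direction by the center-to-edge pixel distance, in both senses. A minimal sketch (function name and arguments are hypothetical):

```python
import math

def reference_points(center, dist_center_to_edge, angle_rad):
    """Place the two reference points where a preset line through the pupil
    center meets the iris edge: move from the center along the line's
    direction by the center-to-edge pixel distance, in both directions."""
    cx, cy = center
    dx = dist_center_to_edge * math.cos(angle_rad)
    dy = dist_center_to_edge * math.sin(angle_rad)
    return (cx + dx, cy + dy), (cx - dx, cy - dy)
```

With a horizontal preset line (angle 0), a center at (100, 100), and a center-to-edge distance of 50 px, the two reference points land at (150, 100) and (50, 100).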
5. The method of claim 4, wherein the determining a distance between the pupil center and an iris edge in the pixel coordinate system based on coordinates of the pupil center in the pixel coordinate system comprises:
setting a search radius range by taking the pupil center as the search center, and dividing the search area into N sub-regions by central angle;
searching in a sub-region at a current search radius, determining the coordinates corresponding to a plurality of central angles at the current search radius, and determining the pixel value of the pixel point corresponding to each coordinate;
determining the mean of the pixel values over the plurality of central angles at the current search radius as the pixel mean corresponding to the current search radius;
determining the absolute value of the difference between the pixel mean corresponding to the current search radius and the pixel mean corresponding to the previous, adjacent search radius;
updating the current search radius to the sum of the current search radius and a set search step, and repeatedly executing the step of searching in the sub-region at the current search radius until the updated current search radius exceeds the set search radius range;
determining, for each sub-region, the smaller of the two adjacent search radii corresponding to the maximum among the absolute values obtained for that sub-region as the minimum search radius of the sub-region;
and determining the distance between the pupil center and the iris edge in the pixel coordinate system based on the minimum search radius corresponding to each sub-region.
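The radial search of claim 5 can be sketched as follows. This is an illustrative simplification, not the claimed implementation: function name, sector and angle counts are hypothetical, and the sketch aggregates the per-sector radii with a median, whereas claim 6 specifies a mode, an average, or a thresholded average:

```python
import numpy as np

def iris_edge_radius(img, center, r_min, r_max, step=1.0,
                     n_sectors=8, n_angles=16):
    """Sweep radii outward from the pupil center; in each angular sector,
    take the iris edge at the smaller of the two adjacent radii where the
    ring-mean intensity changes the most."""
    cx, cy = center
    edge_radii = []
    for s in range(n_sectors):
        a0 = 2 * np.pi * s / n_sectors
        a1 = 2 * np.pi * (s + 1) / n_sectors
        angles = np.linspace(a0, a1, n_angles, endpoint=False)
        radii = np.arange(r_min, r_max, step)
        means = []
        for r in radii:
            xs = np.clip((cx + r * np.cos(angles)).astype(int), 0, img.shape[1] - 1)
            ys = np.clip((cy + r * np.sin(angles)).astype(int), 0, img.shape[0] - 1)
            means.append(img[ys, xs].mean())  # ring mean for this radius
        diffs = np.abs(np.diff(means))        # |mean(r+step) - mean(r)|
        edge_radii.append(radii[int(np.argmax(diffs))])  # smaller of the pair
    return float(np.median(edge_radii))       # aggregate over sectors
```

On a synthetic image with a dark disk of radius 20 px on a bright background, the search locates an edge radius close to 20.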
6. The method of claim 5, wherein said determining a distance between the pupil center and iris edge in the pixel coordinate system based on the minimum search radius for each of the sub-regions comprises:
determining the mode of the minimum search radii corresponding to the sub-regions;
or determining the average of the minimum search radii corresponding to the sub-regions;
or selecting, from the minimum search radii corresponding to the sub-regions, a group of candidate minimum search radii whose pairwise differences do not exceed a threshold, and determining the average of the group of candidate minimum search radii;
and determining the mode, the average, or the average of the group of candidate minimum search radii as the distance between the pupil center and the iris edge in the pixel coordinate system.
7. A pupil positioning method, comprising:
acquiring an eye image acquired by a monocular camera;
determining coordinates of a first reference point and a second reference point under a pixel coordinate system, wherein the first reference point and the second reference point are two intersection points at which a preset straight line passing through the pupil center intersects with the iris edge, and the pupil center is a target point;
according to a pre-established three-dimensional model, performing three-dimensional positioning on the target point;
the three-dimensional model is obtained based on the monocular camera image target point three-dimensional positioning method according to any one of claims 1 to 6.
8. The method of claim 7, wherein the method further comprises:
determining the radius, in the pixel coordinate system, of the pupil to which the pupil center belongs;
determining a distance between the first reference point and the second reference point in the three-dimensional coordinate system based on the three-dimensional model;
and determining the diameter of the pupil under the three-dimensional coordinate system based on the radius of the pupil under the pixel coordinate system, the distance between the pupil center and the iris edge under the pixel coordinate system and the distance between the first reference point and the second reference point under the three-dimensional coordinate system.
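The scaling in claim 8 follows from the two reference points being opposite intersections of a line through the pupil center with the iris edge, so their pixel separation is twice the center-to-edge pixel distance. A minimal sketch (function name and values hypothetical):

```python
def pupil_diameter_3d(r_pupil_px, d_center_edge_px, ref_dist_3d):
    """Convert the pupil radius from pixels to a metric diameter.
    ref_dist_3d is the separation of the two iris-edge reference points in
    the three-dimensional coordinate system (e.g. millimetres); their pixel
    separation is 2 * d_center_edge_px, which fixes the metric scale.
    """
    mm_per_px = ref_dist_3d / (2.0 * d_center_edge_px)  # metric scale factor
    return 2.0 * r_pupil_px * mm_per_px                 # pupil diameter
```

For instance, a 30 px pupil radius with a 60 px center-to-iris-edge distance and a 12 mm reference separation yields a 6 mm pupil diameter.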
9. A monocular camera image target point three-dimensional positioning device, characterized by comprising:
the first determining module is used for acquiring a first target image acquired by the monocular camera and determining coordinates of a first reference point and a second reference point in the first target image;
the second determining module is used for acquiring a second target image acquired by the monocular camera after moving a first distance and determining coordinates of the first reference point and the second reference point in the second target image; wherein the relative positions of the first reference point and the second reference point are kept unchanged before and after the monocular camera moves;
the third determining module is configured to establish, based on camera parameters of the monocular camera, a mapping formula from the pixel coordinate system of the monocular camera to a three-dimensional coordinate system, and to establish a three-dimensional model for the case in which the straight line on which the first reference point and the second reference point lie is parallel to the imaging plane of the monocular camera:
wherein each coordinate axis of the three-dimensional model is consistent with the world coordinate system calibrated by the monocular camera; (X, Y, Z) are the three-dimensional coordinates of the first reference point or the second reference point in the three-dimensional coordinate system; (u1, v1) are the pixel coordinates of the first reference point in the image acquired by the monocular camera; (u2, v2) are the pixel coordinates of the second reference point in the image acquired by the monocular camera; m11, m13, m22, m23, m33 are camera parameters of the monocular camera; C represents the actual distance between the first reference point and the second reference point; and θ is the included angle between the X-axis of the three-dimensional model and the straight line on which the first reference point and the second reference point lie;
substituting the first distance and the coordinates of the first reference point and the second reference point in the first target image and the second target image into the three-dimensional model to solve for the three-dimensional coordinates of the target point; wherein the target point is the first reference point, the second reference point, or a point having a known positional relationship with the first reference point and the second reference point.
10. A pupil positioning device, comprising:
the acquisition module is used for acquiring the eye images acquired by the monocular camera;
the coordinate determining module is used for determining coordinates of a first reference point and a second reference point under a pixel coordinate system, wherein the first reference point and the second reference point are two intersection points where a preset straight line passing through the pupil center intersects with the iris edge, and the pupil center is a target point;
the positioning module is used for performing three-dimensional positioning on the target point according to a pre-established three-dimensional model;
the three-dimensional model is obtained based on the monocular camera image target point three-dimensional positioning method according to any one of claims 1 to 6.
CN202211156609.9A 2022-09-22 2022-09-22 Monocular camera image target point three-dimensional positioning method, pupil positioning method and device Active CN115471557B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211156609.9A CN115471557B (en) 2022-09-22 2022-09-22 Monocular camera image target point three-dimensional positioning method, pupil positioning method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211156609.9A CN115471557B (en) 2022-09-22 2022-09-22 Monocular camera image target point three-dimensional positioning method, pupil positioning method and device

Publications (2)

Publication Number Publication Date
CN115471557A CN115471557A (en) 2022-12-13
CN115471557B true CN115471557B (en) 2024-02-02

Family

ID=84335245

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211156609.9A Active CN115471557B (en) 2022-09-22 2022-09-22 Monocular camera image target point three-dimensional positioning method, pupil positioning method and device

Country Status (1)

Country Link
CN (1) CN115471557B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180072517A (en) * 2017-07-19 2018-06-29 주식회사 쓰리이 Method for detecting borderline between iris and sclera
CN108846390A (en) * 2013-09-16 2018-11-20 眼验股份有限公司 Feature extraction and matching and template renewal for biological identification
CN111028205A (en) * 2019-11-21 2020-04-17 佛山科学技术学院 Eye pupil positioning method and device based on binocular ranging
CN111854620A (en) * 2020-07-16 2020-10-30 科大讯飞股份有限公司 Monocular camera-based actual pupil distance measuring method, device and equipment
CN112308932A (en) * 2020-11-04 2021-02-02 中国科学院上海微系统与信息技术研究所 Gaze detection method, device, equipment and storage medium
WO2022142591A1 (en) * 2020-12-30 2022-07-07 北京眼神智能科技有限公司 Strabismic pupil localization method, apparatus, computer-readable storage medium, and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050025927A (en) * 2003-09-08 2005-03-14 유웅덕 The pupil detection method and shape descriptor extraction method for a iris recognition, iris feature extraction apparatus and method, and iris recognition system and method using its
KR100826876B1 (en) * 2006-09-18 2008-05-06 한국전자통신연구원 Iris recognition method and apparatus for thereof
JP6930223B2 (en) * 2017-05-31 2021-09-01 富士通株式会社 Pupil detection computer program, pupil detection device and pupil detection method

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108846390A (en) * 2013-09-16 2018-11-20 眼验股份有限公司 Feature extraction and matching and template renewal for biological identification
KR20180072517A (en) * 2017-07-19 2018-06-29 주식회사 쓰리이 Method for detecting borderline between iris and sclera
CN111028205A (en) * 2019-11-21 2020-04-17 佛山科学技术学院 Eye pupil positioning method and device based on binocular ranging
CN111854620A (en) * 2020-07-16 2020-10-30 科大讯飞股份有限公司 Monocular camera-based actual pupil distance measuring method, device and equipment
CN112308932A (en) * 2020-11-04 2021-02-02 中国科学院上海微系统与信息技术研究所 Gaze detection method, device, equipment and storage medium
WO2022142591A1 (en) * 2020-12-30 2022-07-07 北京眼神智能科技有限公司 Strabismic pupil localization method, apparatus, computer-readable storage medium, and device
CN114764943A (en) * 2020-12-30 2022-07-19 北京眼神智能科技有限公司 Method and device for positioning strabismus pupil, computer readable storage medium and equipment

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Mustafa A. Ghazi et al. Monocular vision-based motion capture system: A performance model. IEEE. 2018, 192-197. *
Li Xianhui et al. Monocular distance measurement system based on pupil positioning. Intelligent Computer and Applications. 2016, 73-76. *
He Liwen et al. A fast iris localization algorithm. Information Technology. 2007, 69-70+88. *

Also Published As

Publication number Publication date
CN115471557A (en) 2022-12-13

Similar Documents

Publication Publication Date Title
CN103649674B (en) Measuring equipment and messaging device
CN105069743B (en) Detector splices the method for real time image registration
Zhang et al. A robust and rapid camera calibration method by one captured image
CN106290256B (en) Quantitative background schlieren method based on video measuring
CN103776419B (en) A kind of binocular distance measurement method improving measurement range
EP3739545A1 (en) Image processing method and apparatus, vehicle-mounted head up display system, and vehicle
CN104165626B (en) Bionic compound eyes imageable target positioning system
CN111210468A (en) Image depth information acquisition method and device
CN104089628B (en) Self-adaption geometric calibration method of light field camera
CN103782232A (en) Projector and control method thereof
CN106558081B (en) The method for demarcating the circular cone catadioptric video camera of optical resonator system
CN110779491A (en) Method, device and equipment for measuring distance of target on horizontal plane and storage medium
CN110261069B (en) Detection method for optical lens
CN109238084A (en) A kind of Autonomous Seam Locating Method of miniature circular hole measurement
CN109035345A (en) The TOF camera range correction method returned based on Gaussian process
CN105261061B (en) A kind of method and device of identification redundant data
CN115471557B (en) Monocular camera image target point three-dimensional positioning method, pupil positioning method and device
CN113012238B (en) Method for quick calibration and data fusion of multi-depth camera
JP2623367B2 (en) Calibration method of three-dimensional shape measuring device
CN112629679B (en) High-precision measurement method suitable for background schlieren, electronic equipment and medium
JP2000205821A (en) Instrument and method for three-dimensional shape measurement
CN114062265B (en) Evaluation method for stability of support structure of vision system
CN108733913A (en) A kind of ophthalmology OCT equipment lateral resolution detection methods based on DWPSO algorithms
CN115471556B (en) Monocular camera image target point three-dimensional positioning method and device
CN112598736A (en) Map construction based visual positioning method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant