CN116631044A - Feature point position detection method and electronic device - Google Patents

Feature point position detection method and electronic device

Info

Publication number
CN116631044A
CN116631044A
Authority
CN
China
Prior art keywords
feature point
feature
distance
feature points
dimensional position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210129549.5A
Other languages
Chinese (zh)
Inventor
李彦贤
黄士挺
黄昭世
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Acer Inc
Original Assignee
Acer Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Acer Inc
Priority to CN202210129549.5A
Publication of CN116631044A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a feature point position detection method and an electronic device. The method comprises the following steps: obtaining a plurality of first relative positions of a plurality of feature points on a specific object relative to a first image capturing component; obtaining a plurality of second relative positions of the feature points on the specific object relative to a second image capturing component; and in response to determining that the first image capturing component is unreliable, estimating a current three-dimensional position of each feature point based on the historical three-dimensional position of each feature point and the plurality of second relative positions. As a result, a user in front of a 3D display will not be shown a three-dimensional image with severe 3D crosstalk merely because one of the image capturing components is unreliable.

Description

Feature point position detection method and electronic device
Technical Field
The present invention relates to an image processing mechanism, and more particularly, to a feature point position detection method and an electronic device.
Background
A current naked-eye 3D display first places the left-eye and right-eye pixels at the corresponding pixel positions of the display panel, and the liquid crystal in the 3D lens then controls the light path so that the left-eye and right-eye images are projected into the viewer's respective eyes. To focus light toward the left and right eyes, 3D lenses typically have an arcuate design so that the image intended for the left (right) eye is focused and projected into the left (right) eye. However, due to the refractive path, some light may be projected into the wrong eye; that is, the image intended for the left (right) eye enters the right (left) eye instead. This phenomenon is called 3D crosstalk.
Generally, a naked-eye 3D display is often equipped with an eye tracking system that obtains the positions of the user's eyes and then provides the corresponding image to each eye. Currently, most commonly used eye tracking methods perform face recognition with a dual-pupil camera and use triangulation to obtain the positions of the two eyes. However, in some cases, the face recognition performed by the dual-pupil camera may fail to measure the eye positions accurately because too few facial feature points are acquired, which in turn degrades the quality of the subsequent three-dimensional image presentation.
Disclosure of Invention
In view of the above, the present invention provides a feature point position detection method and an electronic device, which can be used to solve the above-mentioned technical problems.
The invention provides a feature point position detection method, adapted to an electronic device comprising a first image capturing component and a second image capturing component, the method comprising the following steps: obtaining a plurality of first relative positions of a plurality of feature points on a specific object relative to the first image capturing component; obtaining a plurality of second relative positions of the feature points on the specific object relative to the second image capturing component; and in response to determining that the first image capturing component is unreliable, estimating a current three-dimensional position of each feature point based on a historical three-dimensional position of each feature point and the plurality of second relative positions.
The invention provides an electronic device, which comprises a first image capturing component, a second image capturing component, and a processor. The processor is coupled to the first image capturing component and the second image capturing component and is configured to: obtain a plurality of first relative positions of a plurality of feature points on a specific object relative to the first image capturing component; obtain a plurality of second relative positions of the feature points on the specific object relative to the second image capturing component; and in response to determining that the first image capturing component is unreliable, estimate a current three-dimensional position of each feature point based on a historical three-dimensional position of each feature point and the plurality of second relative positions.
Drawings
The accompanying drawings are included to provide a further understanding of the invention, and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
FIG. 1 is a schematic diagram of an electronic device according to an embodiment of the invention;
FIG. 2 is a flow chart of a feature point location detection method according to an embodiment of the invention;
FIG. 3 is a schematic diagram of facial feature points shown in accordance with an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating estimating the current three-dimensional position of feature points according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating an application scenario for determining the current three-dimensional position of each feature point according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments of the present invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers will be used throughout the drawings and the description to refer to the same or like parts.
Referring to fig. 1, a schematic diagram of an electronic device according to an embodiment of the invention is shown. In various embodiments, the electronic device 100 may be implemented as various smart devices and/or computer devices. In some embodiments, the electronic device 100 may be implemented as an eye tracking device. In an embodiment, the electronic device 100 may be externally connected to a 3D display (e.g., a naked-eye 3D display) to provide the 3D display with relevant eye tracking information. In another embodiment, the electronic device 100 may itself be implemented as a 3D display with an eye tracking function.
After the eye tracking information is obtained, the electronic device 100 implemented as a 3D display can adjust the displayed content accordingly, so that a user viewing the 3D display perceives less 3D crosstalk while viewing the displayed content.
In fig. 1, an electronic device 100 includes image capturing components 101, 102 and a processor 104. In various embodiments, the electronic device 100 may further include more image capturing components coupled to the processor 104, which is not limited to the embodiment shown in fig. 1.
In various embodiments, the first image capturing component 101 and the second image capturing component 102 may each be, for example, any image capturing device having a charge-coupled device (CCD) lens or a complementary metal-oxide-semiconductor (CMOS) lens, but are not limited thereto. In some embodiments, the first image capturing component 101 and the second image capturing component 102 may together be implemented as a dual-pupil camera on the electronic device 100, but are not limited thereto.
The processor 104 is coupled to the first image capturing element 101 and the second image capturing element 102, and may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, a controller, a microcontroller, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array circuit (Field Programmable Gate Array, FPGA), any other type of integrated circuit, a state machine, an advanced reduced instruction set machine (Advanced RISC Machine, ARM) based processor, and the like.
In the embodiments of the present invention, the processor 104 accesses relevant modules and program codes to implement the feature point position detection method proposed by the present invention, the details of which are described below.
Referring to fig. 2, a flowchart of a feature point location detection method according to an embodiment of the invention is shown. The method of the present embodiment may be performed by the electronic device 100 of fig. 1, and details of the steps of fig. 2 are described below with respect to the components shown in fig. 1.
First, in step S210, the processor 104 obtains a plurality of first relative positions of a plurality of feature points on a specific object with respect to the first image capturing component 101. For convenience of explanation, it is assumed that the specific object under consideration is a face, and the plurality of feature points on the specific object are, for example, a plurality of facial feature points located on the face, but may not be limited thereto.
In one embodiment, the processor 104 may control the first imaging assembly 101 to capture a first image of the particular object under consideration. The processor 104 can then identify feature points on the specific object in the first image, and determine a plurality of first relative positions of the feature points with respect to the first imaging assembly 101 according to the feature points.
Referring to fig. 3, a schematic diagram of facial feature points is shown according to an embodiment of the invention. In fig. 3, it is assumed that the processor 104 finds a plurality of feature points as shown in fig. 3 on the first image after the first imaging component 101 captures the first image of the specific object under consideration (i.e., the face). In one embodiment, the processor 104 may find the illustrated plurality of feature points in the first image based on any known face recognition algorithm, and accordingly obtain a plurality of first relative positions of the feature points with respect to the first image capturing component 101.
In an embodiment, the first relative position corresponding to each feature point may be characterized, for example, as a unit vector corresponding to each feature point. Taking the feature point numbered 0 in fig. 3 (hereinafter referred to as feature point 0) as an example, the processor 104 can correspondingly generate a corresponding unit vector after finding the feature point 0, where the unit vector is a vector with a length of 1 pointing to the feature point 0 and a starting point being the three-dimensional position of the first image capturing device 101 (i.e. the position of the first image capturing device 101 in the three-dimensional space). Taking the feature point numbered 1 in fig. 3 (hereinafter referred to as feature point 1) as an example, the processor 104 can correspondingly generate a corresponding unit vector after finding the feature point 1, where the unit vector is a vector with a starting point of the three-dimensional position of the first image capturing element 101, a length of 1 and pointing to the feature point 1.
Based on the above principle, the processor 104 may find the unit vector corresponding to each feature point after obtaining each feature point in fig. 3.
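The embodiment does not spell out how such a unit vector is computed from a detected feature point, so the following is only a minimal sketch under the assumption of a calibrated pinhole camera model; the intrinsic matrix K and the function name are illustrative assumptions rather than elements of the embodiment.

```python
# Minimal sketch (an assumption, not the embodiment's stated procedure):
# deriving a unit vector toward a detected feature point from its pixel
# coordinates using a pinhole camera model with a known intrinsic matrix K.
import numpy as np

def unit_vector_to_feature(pixel_uv, K):
    """pixel_uv: (u, v) image coordinates of a feature point (e.g. feature
    point 0 in fig. 3); K: 3x3 intrinsic matrix of the image capturing
    component. Returns a length-1 direction vector in the camera frame."""
    u, v = pixel_uv
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # back-project the pixel
    return ray / np.linalg.norm(ray)                 # normalize to length 1
```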
In an embodiment, after the plurality of feature points in the first image are found, the processor 104 may further determine whether the first image capturing component 101 is reliable. In an embodiment, the processor 104 may determine whether the number of feature points in the first image is below a preset threshold. If so, there may be too few feature points in the first image, and the information obtained by the first image capturing component 101 may not be suitable for the subsequent determination. In this case, the processor 104 may determine that the first image capturing component 101 is unreliable.
On the other hand, if the number of feature points in the first image is not lower than the preset threshold, the first image contains enough feature points, and the information acquired by the first image capturing component 101 is suitable for the subsequent determination. In this case, the processor 104 may determine that the first image capturing component 101 is reliable, but the invention is not limited thereto.
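As a rough illustration of the reliability check described above, the following sketch simply counts the detected feature points against a preset threshold; the threshold value of 30 is an assumed placeholder, not a value given in this document.

```python
# Minimal sketch of the reliability check; PRESET_THRESHOLD is an assumed
# tuning value, not one specified by the embodiment.
PRESET_THRESHOLD = 30  # assumed minimum number of facial feature points

def imaging_component_is_reliable(feature_points) -> bool:
    """feature_points: list of feature points detected in the image captured
    by one image capturing component."""
    return len(feature_points) >= PRESET_THRESHOLD
```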
In addition, in step S220, the processor 104 obtains a plurality of second relative positions of the plurality of feature points on the specific object with respect to the second image capturing component 102. In one embodiment, the processor 104 may control the second imaging assembly 102 to capture a second image of the particular object under consideration. The processor 104 can then identify feature points on the particular object in the second image, and determine a plurality of second relative positions of the feature points with respect to the second imaging assembly 102.
Similar to the concept of fig. 3, the processor 104 may find the unit vector corresponding to each feature point as the second relative position corresponding to each feature point after finding the plurality of feature points based on the second image. Details relating to the description of fig. 3 are omitted here.
In addition, in an embodiment, after the plurality of feature points in the second image are found, the processor 104 may further determine whether the second image capturing component 102 is reliable. In an embodiment, the processor 104 may determine whether the number of feature points in the second image is below a preset threshold. If so, there may be too few feature points in the second image, and the information obtained by the second image capturing component 102 may not be suitable for the subsequent determination. In this case, the processor 104 may determine that the second image capturing component 102 is unreliable.
On the other hand, if the number of feature points in the second image is not lower than the preset threshold, the second image contains enough feature points, and the information acquired by the second image capturing component 102 is suitable for the subsequent determination. In this case, the processor 104 may determine that the second image capturing component 102 is reliable, but the invention is not limited thereto.
In some embodiments, if the processor 104 determines that both the first image capturing component 101 and the second image capturing component 102 are reliable at a certain time point, the processor 104 may perform feature matching and bundle adjustment based on the first relative positions of the feature points corresponding to the first image capturing component 101 and the second relative positions of the feature points corresponding to the second image capturing component 102. In this way, the current three-dimensional position of each feature point on the specific object can be found accordingly. For details, reference is made to the literature on bundle adjustment (e.g., Chen, Yu, Chen, Yisong & Wang, Guoping (2019), "Bundle Adjustment Revisited"), which is not repeated here. A simpler, illustrative stand-in is sketched below.
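The following sketch is not the bundle adjustment the embodiment refers to; it is a simpler two-view midpoint triangulation of a single matched feature point from the two unit vectors, assuming the three-dimensional positions O1 and O2 of the two image capturing components are known. The function name and least-squares formulation are illustrative assumptions.

```python
# Minimal sketch: midpoint triangulation of one matched feature point from
# the unit vectors of two image capturing components (not bundle adjustment).
import numpy as np

def triangulate_midpoint(O1, d1, O2, d2):
    """O1, O2: 3D positions of the two image capturing components;
    d1, d2: unit vectors from each component toward the same feature point.
    Returns the midpoint of the closest points on the two rays."""
    # Solve for ray parameters s, t minimizing |(O1 + s*d1) - (O2 + t*d2)|.
    A = np.stack([d1, -d2], axis=1)          # 3x2 system matrix
    b = O2 - O1
    (s, t), *_ = np.linalg.lstsq(A, b, rcond=None)
    p1 = O1 + s * d1
    p2 = O2 + t * d2
    return (p1 + p2) / 2.0                   # estimated 3D feature point
```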
In other embodiments, in response to determining that one of the first image capturing component 101 and the second image capturing component 102 is unreliable, the processor 104 may estimate the current three-dimensional position of each feature point based on the relative positions obtained by whichever of the two is determined to be reliable and on the historical three-dimensional position of each feature point on the specific object. For convenience of description, it is assumed below that the first image capturing component 101 is the one determined to be unreliable; this is merely an example and is not intended to limit the possible embodiments of the invention.
Accordingly, in step S230, in response to determining that the first image capturing device 101 is unreliable, the processor 104 estimates the current three-dimensional position of each feature point based on the historical three-dimensional position of each feature point and the plurality of second relative positions.
In some embodiments, the historical three-dimensional position of each feature point is, for example, the current three-dimensional position of that feature point estimated/detected at a previous time point. For example, assuming that the processor 104 determines that the first image capturing component 101 is unreliable at the t-th time point (t is an index value), the processor 104 may, for example, take the current three-dimensional position corresponding to each feature point at the (t-k)-th time point (k is a positive integer) as the historical three-dimensional position considered at the t-th time point, but the invention is not limited thereto.
In one embodiment, the processor 104 obtains the first distances of the feature points from each other based on the historical three-dimensional positions of the feature points. Then, the processor 104 estimates the second distance between the second image capturing component 102 and each feature point based on the unit vector corresponding to each feature point and the first distances of the feature points from each other. Next, the processor 104 estimates the current three-dimensional position of each feature point based on the three-dimensional position of the second image capturing component 102 and the second distance corresponding to each feature point. To make the above concepts easier to understand, a further description is given below with reference to fig. 4.
Fig. 4 is a schematic diagram illustrating the estimation of the current three-dimensional positions of the feature points according to an embodiment of the invention. In fig. 4, it is assumed that the second image capturing component 102 is located at a three-dimensional position O at the t-th time point, and that the processor 104 finds the feature points A, B, C based on the second image at the t-th time point. As mentioned above, after finding the feature points A, B, C, the processor 104 may find the unit vector corresponding to each of the feature points A, B, C as the second relative position corresponding to that feature point.
In FIG. 4, the second relative position between the feature point A and the second imaging assembly 102 can be characterized as a unit vector û_A, which is, for example, a vector having the three-dimensional position O as its starting point, a length of 1, and pointing toward the feature point A. The second relative position between the feature point B and the second imaging assembly 102 can be characterized as a unit vector û_B, which is, for example, a vector having the three-dimensional position O as its starting point, a length of 1, and pointing toward the feature point B. In addition, the second relative position between the feature point C and the second imaging assembly 102 can be characterized as a unit vector û_C, which is, for example, a vector having the three-dimensional position O as its starting point, a length of 1, and pointing toward the feature point C.
In the embodiment of the present invention, it is assumed that the relative positions of the feature points A, B, C with respect to one another are constant between the (t-k)-th time point and the t-th time point.
In this case, the processor 104 may, for example, obtain the first distance c between the feature points A, B based on the historical three-dimensional positions of the feature points A, B, obtain the first distance b between the feature points A, C based on the historical three-dimensional positions of the feature points A, C, and obtain the first distance a between the feature points B, C based on the historical three-dimensional positions of the feature points B, C.
Furthermore, in the context of FIG. 4, the processor 104 may know in which direction each of the feature points A, B, C lies as seen from the three-dimensional position O (this direction is given by the corresponding unit vector û_A, û_B, or û_C), but cannot yet know the second distance x between the three-dimensional position O and the feature point A, the second distance y between the three-dimensional position O and the feature point B, or the second distance z between the three-dimensional position O and the feature point C.
To obtain the second distances x, y, z, the processor 104 may establish a plurality of relationships that may be used to calculate the second distances x, y, z based on the geometric relationships shown in FIG. 4.
In an embodiment, the processor 104 may establish a plurality of relations based on the unit vectors û_A, û_B, û_C, the first distances a, b, c, and the second distances x, y, z, and then estimate the second distances x, y, z based on these relations.
In one embodiment, the processor 104 may establish the following relations based on the law of cosines:
a² = y² + z² − 2yz(û_B · û_C)
b² = x² + z² − 2xz(û_A · û_C)
c² = x² + y² − 2xy(û_A · û_B)
Since the unit vectors û_A, û_B, û_C and the first distances a, b, c are known, the processor 104 may obtain the second distances x, y, z by solving the above relations (which may be regarded as simultaneous equations), but is not limited thereto.
After the second distances x, y, z are obtained, the processor 104 can determine the current three-dimensional positions of the feature points A, B, C. In particular, the processor 104 may take the position located along the unit vector û_A at the second distance x from the three-dimensional position O as the current three-dimensional position of the feature point A at the t-th time point. In addition, the processor 104 may take the position located along the unit vector û_B at the second distance y from the three-dimensional position O as the current three-dimensional position of the feature point B at the t-th time point. Similarly, the processor 104 may take the position located along the unit vector û_C at the second distance z from the three-dimensional position O as the current three-dimensional position of the feature point C at the t-th time point.
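For illustration, the following sketch solves the three law-of-cosines relations numerically and then recovers the current three-dimensional positions of the feature points A, B, C. The use of a generic least-squares solver and the initial guess are assumptions; the embodiment does not prescribe a particular solving method.

```python
# Minimal sketch (not a prescribed implementation): solving the law-of-cosines
# relations for the second distances x, y, z and recovering the current 3D
# positions of the feature points A, B, C.
import numpy as np
from scipy.optimize import least_squares

def estimate_positions(O, u_A, u_B, u_C, a, b, c, init=1000.0):
    """O: 3D position of the reliable image capturing component; u_*: unit
    vectors toward the feature points; a, b, c: known first distances between
    the feature points (from their historical 3D positions)."""
    cos_BC = np.dot(u_B, u_C)
    cos_AC = np.dot(u_A, u_C)
    cos_AB = np.dot(u_A, u_B)

    def residuals(d):
        x, y, z = d
        return [
            y*y + z*z - 2*y*z*cos_BC - a*a,   # law of cosines in triangle OBC
            x*x + z*z - 2*x*z*cos_AC - b*b,   # triangle OAC
            x*x + y*y - 2*x*y*cos_AB - c*c,   # triangle OAB
        ]

    sol = least_squares(residuals, x0=[init] * 3, bounds=(0, np.inf))
    x, y, z = sol.x
    # Each current 3D position lies along its unit vector at the solved distance.
    return O + x * u_A, O + y * u_B, O + z * u_C
```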
In another embodiment, assuming that the processor 104 determines that the second image capturing component 102 is unreliable at the t-th time point, the processor 104 may, for example, take the current three-dimensional position corresponding to each feature point at the (t-k)-th time point as the historical three-dimensional position considered at the t-th time point. The processor 104 may then obtain the first distances of the feature points from each other based on their historical three-dimensional positions. Then, the processor 104 estimates the second distance between the first image capturing component 101 and each feature point based on the unit vector corresponding to each feature point and the first distances of the feature points from each other. Next, the processor 104 estimates the current three-dimensional position of each feature point based on the three-dimensional position of the first image capturing component 101 and the second distance corresponding to each feature point.
Specifically, the processor 104 may still estimate the current three-dimensional position of each feature point based on the relevant teachings of fig. 4, but in the previous embodiment, the three-dimensional position of the second imaging component 102 is taken as the three-dimensional position O in fig. 4. However, in the case that the second imaging component 102 is determined to be unreliable, the processor 104 needs to take the three-dimensional position of the first imaging component 101 as the three-dimensional position O in fig. 4, and perform the subsequent estimation actions accordingly. For details, reference is made to the teachings of the previous embodiments, and further description is omitted.
Fig. 5 is a diagram showing an application scenario for determining the current three-dimensional position of each feature point according to an embodiment of the invention. In the embodiment of the present invention, the operation by which the processor 104 obtains the current three-dimensional position of each feature point when both the first image capturing component 101 and the second image capturing component 102 are reliable is referred to as a first bundle adjustment mechanism. In addition, the operation by which the processor 104 obtains the current three-dimensional position of each feature point when the first image capturing component 101 is unreliable may be referred to as a second bundle adjustment mechanism, and the operation by which the processor 104 obtains the current three-dimensional position of each feature point when the second image capturing component 102 is unreliable may be referred to as a third bundle adjustment mechanism.
In the scenario of fig. 5, at each time point the processor 104 may execute the first bundle adjustment mechanism 511, the second bundle adjustment mechanism 521, and the third bundle adjustment mechanism 531 to obtain the current three-dimensional position of each feature point corresponding to the first bundle adjustment mechanism 511 (hereinafter referred to as the first result 512), the current three-dimensional position of each feature point corresponding to the second bundle adjustment mechanism 521 (hereinafter referred to as the second result 522), and the current three-dimensional position of each feature point corresponding to the third bundle adjustment mechanism 531 (hereinafter referred to as the third result 532), before determining whether the first image capturing component 101 and/or the second image capturing component 102 is reliable.
That is, before determining whether the first image capturing component 101 and/or the second image capturing component 102 is reliable, the processor 104 may perform feature matching and bundle adjustment based on the first relative positions of the feature points corresponding to the first image capturing component 101 and the second relative positions of the feature points corresponding to the second image capturing component 102, so as to find the current three-dimensional position of each feature point as the first result 512. In addition, taking the three-dimensional position of the second image capturing component 102 as the three-dimensional position O in fig. 4, the processor 104 may obtain the current three-dimensional position of each feature point based on the mechanism of fig. 4 as the second result 522. Further, taking the three-dimensional position of the first image capturing component 101 as the three-dimensional position O in fig. 4, the processor 104 may obtain the current three-dimensional position of each feature point based on the mechanism of fig. 4 as the third result 532.
Then, the processor 104 may adaptively select the first, second or third result as the final result in step S500 according to whether the first image capturing device 101 and/or the second image capturing device 102 are reliable.
In one embodiment, assuming that the processor 104 determines that both the first image capturing component 101 and the second image capturing component 102 are reliable at the t-th time point, the processor 104 may select the first result 512 to determine the current three-dimensional position of each feature point in step S501 (which may be understood as discarding the second result 522 and the third result 532). For another example, assuming that the processor 104 determines that the first image capturing component 101 is unreliable at the t-th time point, the processor 104 may select the second result 522 to determine the current three-dimensional position of each feature point in step S502 (which may be understood as discarding the first result 512 and the third result 532). In addition, assuming that the processor 104 determines that the second image capturing component 102 is unreliable at the t-th time point, the processor 104 may select the third result 532 to determine the current three-dimensional position of each feature point in step S503 (which may be understood as discarding the first result 512 and the second result 522).
In other words, the processor 104 can execute the first bundle adjustment mechanism 511, the second bundle adjustment mechanism 521, and the third bundle adjustment mechanism 531 at each time point, and then adaptively determine the current three-dimensional position of each feature point from the first result 512, the second result 522, or the third result 532, as sketched below.
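A minimal sketch of this selection logic follows; the function and variable names are illustrative placeholders for the three mechanisms and their results, not identifiers from the embodiment.

```python
# Minimal sketch of the selection in step S500: all three results are
# computed at each time point, then one is kept according to which image
# capturing components are reliable.
def select_result(first_ok, second_ok, result1, result2, result3):
    if first_ok and second_ok:
        return result1   # first bundle adjustment mechanism (both reliable)
    if not first_ok:
        return result2   # second mechanism (second component + history)
    return result3       # third mechanism (first component + history)
```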
In one embodiment, after determining the current three-dimensional position of each feature point at the t-th time point, the processor 104 may further process the current three-dimensional position of each feature point at the t-th time point based on the concept of a Kalman filter (e.g., a linear Kalman filter). For example, the processor 104 may input the current three-dimensional positions of the feature points obtained from the (t-m)-th time point to the t-th time point (m is a positive integer) into a Kalman filter (e.g., a linear Kalman filter) to correct the current three-dimensional position of each feature point at the t-th time point. For details, reference may be made to the literature on Kalman filters, which is not repeated here.
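As an illustration only, the following sketch applies a simple constant-position linear Kalman filter independently to each coordinate of a feature point; the process and measurement variances are assumed tuning values, not parameters specified in this document.

```python
# Minimal sketch (assumption, not the embodiment's filter): a constant-position
# linear Kalman filter applied per coordinate to smooth a feature point's
# sequence of current 3D positions over time.
import numpy as np

def kalman_correct(positions, process_var=1.0, meas_var=4.0):
    """positions: (T, 3) array of one feature point's 3D positions from the
    (t-m)-th to the t-th time point; returns the corrected position at time t."""
    x = positions[0].astype(float)   # state estimate
    P = np.ones(3)                   # per-axis estimate variance
    for z in positions[1:]:
        P = P + process_var          # predict (position assumed static)
        K = P / (P + meas_var)       # Kalman gain
        x = x + K * (z - x)          # update with the new measurement
        P = (1.0 - K) * P
    return x
```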
In one embodiment, after the processor 104 obtains a plurality of eye feature points of both eyes on the face according to the above teachings, the three-dimensional display content of the 3D display may be determined based on the eye feature points. For example, the processor 104 may turn on the lenticular lens of the 3D display and adjust the pixel positions on the 3D display; for details, reference may be made to prior-art documents related to 3D rendering, which are not repeated here. In this way, a user in front of the 3D display will not be shown a three-dimensional image with severe 3D crosstalk merely because one of the image capturing components is unreliable.
In the embodiments of the present invention, the above description takes 2 image capturing components (i.e., the first image capturing component 101 and the second image capturing component 102 in fig. 1) and 3 feature points (i.e., the feature points A, B, C in fig. 4) as an example, but in other embodiments the concept of the invention is applicable to situations with more image capturing components and more feature points, and is not limited to the above embodiments.
In addition, although the above embodiment is described taking a 3D display as an example, the concept of the embodiment of the present invention can be applied to any mechanism for detecting the three-dimensional position of a feature point, and is not limited to the 3D display.
In summary, the embodiments of the invention obtain the relative positions of a plurality of feature points on a specific object with respect to each image capturing component, and, upon determining that one image capturing component is unreliable, estimate the current three-dimensional position of each feature point based on the historical three-dimensional position of each feature point and the relative positions corresponding to another, reliable image capturing component. In this way, a user in front of the 3D display will not be shown a three-dimensional image with severe 3D crosstalk merely because one of the image capturing components is unreliable.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (11)

1. A feature point position detection method, adapted to an electronic device comprising a first image capturing component and a second image capturing component, the method comprising:
acquiring a plurality of first relative positions of a plurality of characteristic points on a specific object relative to the first image capturing component;
acquiring a plurality of second relative positions of the plurality of feature points on the specific object relative to the second imaging assembly; and
in response to determining that the first imaging assembly is unreliable, estimating a current three-dimensional position of each of the feature points based on the historical three-dimensional position of each of the feature points and the plurality of second relative positions.
2. The method of claim 1, wherein the step of obtaining the plurality of first relative positions of the plurality of feature points on the particular object with respect to the first imaging assembly comprises:
capturing, by the first imaging assembly, a first image of the particular object;
the plurality of feature points are identified in the first image, and the plurality of first relative positions of the plurality of feature points relative to the first imaging assembly are determined accordingly.
3. The method according to claim 2, further comprising:
in response to determining that the number of the plurality of feature points in the first image is below a preset threshold, determining that the first imaging assembly is unreliable; and
in response to determining that the number of the plurality of feature points in the first image is not lower than the preset threshold, determining that the first image capturing component is reliable.
4. The method of claim 1, further comprising:
and in response to determining that the first imaging assembly and the second imaging assembly are both reliable, estimating the current three-dimensional position of each feature point based on the plurality of first relative positions and the plurality of second relative positions.
5. The method of claim 1, wherein the plurality of second relative positions comprise unit vectors corresponding to each of the feature points, and estimating the current three-dimensional position of each of the feature points based on the historical three-dimensional position of each of the feature points and the plurality of second relative positions comprises:
acquiring a first distance between the plurality of feature points based on the historical three-dimensional positions of each feature point;
estimating a second distance between the second imaging element and each of the feature points based on the unit vector corresponding to each of the feature points and the first distance of the plurality of feature points from each other;
estimating the current three-dimensional position of each feature point based on the three-dimensional position of the second imaging assembly and the second distance corresponding to each feature point.
6. The method of claim 5, wherein the plurality of feature points includes a first feature point, a second feature point, and a third feature point, the second imaging component having first, second, and third unit vectors corresponding to the first, second, and third feature points, respectively, and estimating the second distance between the second imaging component and each of the feature points based on the unit vectors corresponding to each of the feature points and the first distance of the plurality of feature points from each other comprises:
establishing a plurality of relational expressions based on the first unit vector, the second unit vector, the third unit vector, the first distance between the first feature point and the second feature point, the first distance between the second feature point and the third feature point, the first distance between the first feature point and the third feature point, the second distance between the second imaging assembly and the first feature point, the second distance between the second imaging assembly and the second feature point, and the second distance between the second imaging assembly and the third feature point;
estimating the second distance between the second imaging component and the first feature point, the second distance between the second imaging component and the second feature point, and the second distance between the second imaging component and the third feature point based on the plurality of relationships.
7. The method of claim 6, wherein the plurality of relationships comprises:
a² = y² + z² − 2yz(û_B · û_C);
b² = x² + z² − 2xz(û_A · û_C); and
c² = x² + y² − 2xy(û_A · û_B);
wherein û_A is the first unit vector corresponding to the first feature point, û_B is the second unit vector corresponding to the second feature point, û_C is the third unit vector corresponding to the third feature point, a is the first distance between the second feature point and the third feature point, b is the first distance between the first feature point and the third feature point, c is the first distance between the first feature point and the second feature point, x is the second distance between the second imaging assembly and the first feature point, y is the second distance between the second imaging assembly and the second feature point, and z is the second distance between the second imaging assembly and the third feature point.
8. The method of claim 6, wherein the plurality of second relative positions are taken at a t-th time point, the historical three-dimensional position of each of the feature points is taken at a t-k-th time point, t is an index value, k is a positive integer, and the relative positions of the first feature point, the second feature point, and the third feature point with respect to each other are constant between the t-th time point and the t-k-th time point.
9. The method of claim 1, wherein the electronic device is a three-dimensional display and the first and second imaging components belong to a two-pupil camera on the three-dimensional display.
10. The method of claim 9, wherein the particular object is a human face, and after the step of estimating the current three-dimensional position of each feature point based on the historical three-dimensional position of each feature point and the plurality of second relative positions, further comprising:
acquiring a plurality of eye feature points corresponding to both eyes on the face; and
and determining three-dimensional display content of the three-dimensional display based on the plurality of eye feature points.
11. An electronic device, comprising:
a first image capturing component;
a second image capturing component; and
a processor coupled to the first and second imaging assemblies and configured to perform:
acquiring a plurality of first relative positions of a plurality of characteristic points on a specific object relative to the first image capturing component;
acquiring a plurality of second relative positions of the plurality of feature points on the specific object relative to the second imaging assembly; and
in response to determining that the first imaging assembly is unreliable, estimating a current three-dimensional position of each of the feature points based on the historical three-dimensional position of each of the feature points and the plurality of second relative positions.
CN202210129549.5A 2022-02-11 2022-02-11 Feature point position detection method and electronic device Pending CN116631044A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210129549.5A CN116631044A (en) 2022-02-11 2022-02-11 Feature point position detection method and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210129549.5A CN116631044A (en) 2022-02-11 2022-02-11 Feature point position detection method and electronic device

Publications (1)

Publication Number Publication Date
CN116631044A 2023-08-22

Family

ID=87590376

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210129549.5A Pending CN116631044A (en) 2022-02-11 2022-02-11 Feature point position detection method and electronic device

Country Status (1)

Country Link
CN (1) CN116631044A (en)

Similar Documents

Publication Publication Date Title
US10455141B2 (en) Auto-focus method and apparatus and electronic device
US9092875B2 (en) Motion estimation apparatus, depth estimation apparatus, and motion estimation method
JP4813517B2 (en) Image processing apparatus, image processing program, image processing method, and electronic apparatus
CN112529951A (en) Method and device for acquiring extended depth of field image and electronic equipment
US20100157135A1 (en) Passive distance estimation for imaging algorithms
JP2020511685A (en) Focusing method, terminal, and computer-readable storage medium
US9361704B2 (en) Image processing device, image processing method, image device, electronic equipment, and program
CN110661977B (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
US20150201182A1 (en) Auto focus method and auto focus apparatus
TW201439659A (en) Auto focus method and auto focus apparatus
WO2017043258A1 (en) Calculating device and calculating device control method
US20190297267A1 (en) Control apparatus, image capturing apparatus, control method, and storage medium
JP6395429B2 (en) Image processing apparatus, control method thereof, and storage medium
KR20160105322A (en) Focus position detection device, focus position detection method and a computer program for focus position detection
CN116631044A (en) Feature point position detection method and electronic device
US11875532B2 (en) Feature point position detection method and electronic device
CN116347056A (en) Image focusing method, device, computer equipment and storage medium
CN114286011B (en) Focusing method and device
CN112634298B (en) Image processing method and device, storage medium and terminal
TW201642008A (en) Image capturing device and dynamic focus method thereof
US20220270264A1 (en) Image processing device, control method thereof, imaging device, and recording medium
US11930157B2 (en) Eye tracking method and eye tracking device
CN116074488A (en) Eye tracking method and eye tracking device
KR101754517B1 (en) System and Method for Auto Focusing of Curved Serface Subject
CN116843739A (en) Parallax confidence obtaining method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination