CN113506338A - 3D point selection method and device in optical motion capture scene and storage medium - Google Patents

3D point selection method and device in optical motion capture scene and storage medium

Info

Publication number
CN113506338A
CN113506338A
Authority
CN
China
Prior art keywords
distance
points
rigid body
target
real
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110610920.5A
Other languages
Chinese (zh)
Inventor
洪智慧
许秋子
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Realis Multimedia Technology Co Ltd
Original Assignee
Shenzhen Realis Multimedia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Realis Multimedia Technology Co Ltd filed Critical Shenzhen Realis Multimedia Technology Co Ltd
Priority to CN202110610920.5A
Publication of CN113506338A
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention provides a method comprising: acquiring real-time 3D points of a rigid body object in an optical motion capture scene at a target moment; calculating the distance between any two of the real-time 3D points; comparing the distance between any two of the real-time 3D points with the distance between any two points of the rigid body object in a reference form to obtain a correspondence; and determining, according to the correspondence, the correspondence between the real-time 3D points and the points of the rigid body object in the reference form. By comparing the distances between the 3D points with the distances between the points of the rigid body object in the reference form, the correspondence between the real-time 3D points and the points of the rigid body object in the reference form can be obtained accurately, so that the corresponding rigid body pose can be conveniently calculated afterwards, with a reduced amount of computation.

Description

3D point selection method and device in optical motion capture scene and storage medium
Technical Field
The invention relates to the technical field of motion capture, in particular to a method and a device for selecting a 3D point in an optical motion capture scene and a storage medium.
Background
In existing marker-point motion capture systems, the installation of a rigid body and its marker points is subject to certain requirements, and the characteristics of the rigid body determine, to a certain extent, the accuracy of the system's motion capture. In general, an optical motion capture system captures 2D pixel data of rigid body objects by having cameras arranged around the scene under group control emit infrared light and then receive the infrared light reflected by the infrared reflective balls mounted on the rigid body objects.
The captured 2D data are then reconstructed in three dimensions, for example by triangulation, restoring the 3D positions of the rigid body points in the scene. However, a motion capture scene may contain more than one rigid body object, and it is not known which rigid body point on which rigid body object each preliminary triangulated 3D point corresponds to, so the corresponding rigid body pose cannot be calculated.
Disclosure of Invention
The invention mainly aims to solve the problem that the preliminary 3D points obtained by triangulation in a motion capture scene cannot be associated with the rigid body points on the rigid body objects.
In view of the above, a first aspect of the present invention provides a method for 3D point selection in an optical motion capture scene, the method comprising: acquiring real-time 3D points of a rigid body object in the optical motion capture scene at a target moment; calculating the distance between any two of the real-time 3D points; comparing the distance between any two of the real-time 3D points with the distance between any two points of the rigid body object in a reference form to obtain a correspondence; and determining, according to the correspondence, the correspondence between the real-time 3D points and the points of the rigid body object in the reference form. By comparing the distances between the 3D points with the distances between the points of the rigid body object in the reference form, the correspondence between the real-time 3D points and the points of the rigid body object in the reference form can be obtained accurately, so that the corresponding rigid body pose can be conveniently calculated afterwards, with a reduced amount of computation.
Optionally, with reference to the first aspect, in a possible implementation manner, the method further includes: and acquiring the distance between any two points of the rigid body object in the reference form.
Optionally, with reference to the first aspect, in a possible implementation manner, the comparing the distance between any two of the real-time 3D points with the distance between any two points of the rigid body object in a reference form to obtain the correspondence includes: acquiring a target distance of the rigid body object in the reference form, where the target distance is the distance between any two points of the rigid body object in the reference form; presetting an error range based on the target distance, selecting, from the distances between any two of the real-time 3D points, those falling within the error range of the target distance, and adding them to a candidate list of the target distance; and determining the distance value in the candidate list that is closest to the target distance.
Optionally, with reference to the first aspect, in a possible implementation manner, the determining a distance value closest to the target distance in the candidate list includes: calculating the score values for the target distance of all distance values in the candidate list; and taking the distance value with the highest score as the distance value closest to the target distance.
Optionally, with reference to the first aspect, in a possible implementation manner, the calculating score values of all distance values in the candidate list for the target distance includes: calculating score values of all distance values in the candidate list for the target distance according to the following formula:
f(dist) = (1/√(2π)) · exp(−(dist − Dist)²/2), for dist ∈ [Dist − MaxDiff, Dist + MaxDiff]; otherwise f(dist) = 0
wherein f() is the score value, dist is a distance value in the candidate list, Dist is the value of the target distance, and dist lies within the error range.
A second aspect of the invention provides an apparatus for 3D point selection in an optical motion capture scene, the apparatus comprising: an acquisition module, configured to acquire real-time 3D points of a rigid body object in the optical motion capture scene at a target moment; a calculation module, configured to calculate the distance between any two of the real-time 3D points; a comparison module, configured to compare the distance between any two of the real-time 3D points with the distance between any two points of the rigid body object in a reference form to obtain a correspondence; and a correspondence module, configured to determine, according to the correspondence, the correspondence between the real-time 3D points and the points of the rigid body object in the reference form.
Optionally, with reference to the second aspect, the obtaining module is further configured to obtain a distance between any two points of the rigid body object in the reference form.
Optionally, with reference to the second aspect, in a possible implementation manner, the comparing module is specifically configured to obtain a target distance of the rigid body object in a reference form, where the target distance is a distance between any two points of the rigid body object in the reference form; the comparison module is specifically configured to preset an error range based on the target distance, obtain a distance within the error range of the target distance from the distances between any two of the real-time 3D points, and add the distance within the error range of the target distance to the candidate list of the target distance; the comparison module is further configured to determine a distance value closest to the target distance in the candidate list.
Optionally, with reference to the second aspect, the comparison module is specifically configured to calculate the score values for the target distance of all distance values in the candidate list; the comparison module is further specifically configured to take the distance value with the highest score as the distance value closest to the target distance.
Optionally, with reference to the second aspect, the comparison module is specifically configured to calculate score values of all distance values in the candidate list for the target distance according to the following formula:
f(dist) = (1/√(2π)) · exp(−(dist − Dist)²/2), for dist ∈ [Dist − MaxDiff, Dist + MaxDiff]; otherwise f(dist) = 0
wherein f() is the score value, dist is a distance value in the candidate list, Dist is the value of the target distance, and dist lies within the error range.
A third aspect of the invention provides an apparatus for 3D point selection in an optical motion capture scene, the apparatus comprising: a memory having instructions stored therein and at least one processor, the memory and the at least one processor being interconnected by a line; the at least one processor invokes the instructions in the memory to cause the apparatus to perform the method of 3D point selection in an optical motion capture scene described in the first aspect and any one of its possible implementations.
A fourth aspect of the present invention provides a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when being executed by a processor, implements the method for 3D point selection in an optical motion capture scene as described above.
The invention provides a method and a device for selecting a 3D point in an optical motion capture scene, and a storage medium. The method comprises the following steps: acquiring real-time 3D points of a rigid body object in an optical motion capture scene at a target moment; calculating the distance between any two of the real-time 3D points; comparing the distance between any two of the real-time 3D points with the distance between any two points of the rigid body object in a reference form to obtain a correspondence; and determining, according to the correspondence, the correspondence between the real-time 3D points and the points of the rigid body object in the reference form. By comparing the distances between the 3D points with the distances between the points of the rigid body object in the reference form, the correspondence between the real-time 3D points and the points of the rigid body object in the reference form can be obtained accurately, so that the corresponding rigid body pose can be conveniently calculated afterwards, with a reduced amount of computation.
Drawings
FIG. 1 is a flowchart illustrating a method for 3D point selection in an optical motion capture scene according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a real-time 3D point according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of points of a rigid body object in a reference state according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the value curve corresponding to the calculation formula provided in an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a 3D point selection apparatus in an optical motion capture scene according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a 3D point selection apparatus in an optical motion capture scene according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The term "and/or" appearing in the present application describes an association between related objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" in this application generally indicates that the former and latter related objects are in an "or" relationship.
The terms "first," "second," and the like in the description and in the claims of the present application and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Moreover, the terms "comprises," "comprising," and any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or modules is not necessarily limited to those steps or modules explicitly listed, but may include other steps or modules not expressly listed or inherent to such process, method, article, or apparatus.
The optical motion capture system emits infrared light through cameras arranged around the scene under group control and then receives the infrared light reflected by the infrared reflective balls on the rigid body objects, completing the capture of 2D pixel data of the rigid body objects. The captured 2D data are then reconstructed in three dimensions, for example by triangulation, restoring the 3D positions of the rigid body points in the scene. However, a motion capture scene may contain more than one rigid body object, and it is not known which rigid body point on which rigid body object each preliminary triangulated 3D point corresponds to, so the corresponding rigid body pose cannot be calculated.
In an optical motion capture scene, the calculated 3D points are usually inexact: the error varies with the environment, and in an environment with large variations of light, for example, the offset is relatively large. The match between a 3D point and the reference rigid body therefore cannot be determined by an exact comparison of distances.
The present invention therefore provides a method of 3D point selection in an optical motion capture scene. For convenience of understanding, a specific flow of a method for selecting a 3D point in an optical motion capture scene according to an embodiment of the present invention is described below, and with reference to fig. 1, the method for selecting a 3D point in an optical motion capture scene according to an embodiment of the present invention includes:
s101, acquiring real-time 3D points of the rigid body object in the optical motion capture scene at the target moment.
Real-time 3D points of the rigid body object in the optical motion capture scene at a target moment are acquired. The target moment may be any moment of the rigid body object in the motion capture scene. For example, referring to fig. 2, the real-time 3D points may include seven points, 3D1 to 3D7.
Before the real-time 3D points of the rigid body object in the motion capture scene are acquired, the distance between any two points of the rigid body object in the reference form may be acquired. For example, referring to fig. 3, the reference rigid body includes five rigid body points, Base1_1 to Base1_5.
It should be noted that rigid body matching means matching the real-time 3D points appearing at the target moment to the corresponding points of the rigid body object in the reference form. Each real-time 3D point may be matched at most once, or not at all; likewise, each point of the rigid body object in the reference form may be matched at most once, or remain unmatched.
Illustratively, 3D1 in fig. 2 matches Base1_1 in fig. 3, 3D2 matches Base1_2, 3D3 matches Base1_5, and 3D4 matches Base1_4. 3D5 through 3D7 in fig. 2 fail to match any corresponding point of the rigid body object in the reference form, and Base1_3 on the reference rigid body likewise fails to match any 3D point.
And S102, calculating the distance between any two real-time 3D points.
The distance between any two of the real-time 3D points is calculated. In the same way, referring to fig. 3, the distance between Base1_1 and Base1_2, between Base1_1 and Base1_3, between Base1_1 and Base1_4, between Base1_1 and Base1_5, and so on, can be calculated.
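The pairwise distance computation of this step can be sketched as follows. This is a hypothetical illustration: the point names and coordinates are invented for demonstration and are not taken from the patent.

```python
import math
from itertools import combinations

def pairwise_distances(points):
    """Return {(i, j): Euclidean distance} for every unordered pair of points."""
    return {
        (i, j): math.dist(points[i], points[j])
        for i, j in combinations(sorted(points), 2)
    }

# Illustrative real-time 3D points (coordinates are made up).
p3d = {
    "3D1": (0.0, 0.0, 0.0),
    "3D2": (3.0, 4.0, 0.0),
    "3D3": (0.0, 0.0, 5.0),
}
dists = pairwise_distances(p3d)  # three pairs for three points
```

The same helper applies unchanged to the reference rigid body points, whose pairwise distances are computed once in advance.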
S103, comparing the distance between any two of the real-time 3D points with the distance between any two points of the rigid body object in the reference form to obtain a correspondence.
The distance between any two of the real-time 3D points is compared with the distance between any two points of the rigid body object in the reference form to obtain a correspondence. Specifically, the distance between any two points of the rigid body object in the reference form may be taken as a target distance and compared with the distance between any two of the real-time 3D points, and it is judged whether the difference between the two lies within a preset error range. If so, the distance between the two 3D points is added to a candidate list of the target distance, and a score is determined for it based on the magnitude of the error. After the candidate list of one target distance is completed, the distances between the other pairs of points of the rigid body object in the reference form are traversed as target distances in turn, yielding a candidate list for the distance between every pair of points of the rigid body.
Specifically, the target distance may be denoted as Dist and the distance between any two 3D points as dist; the error range is then set to [Dist − MaxDiff, Dist + MaxDiff]. If dist falls within the error range, a specific score is assigned to dist according to the difference between dist and Dist. The principle of this score is that the smaller the difference from Dist, the higher the score, and the larger the difference, the lower the score.
In one embodiment, the distance error is assumed to follow a Gaussian distribution whose mean is the target distance between the two points when the rigid body object is in the reference form and whose variance is 1. With the maximum allowed deviation from the center denoted MaxDiff, the error range is set to [Dist − MaxDiff, Dist + MaxDiff]. It is then judged whether the value of dist lies within this range: if dist exceeds the range, the score is set directly to 0; otherwise the score is calculated according to the Gaussian distribution formula.
Specifically, the calculation formula is as follows:
f(dist) = (1/√(2π)) · exp(−(dist − Dist)²/2)
wherein dist ∈ [Dist − MaxDiff, Dist + MaxDiff]; otherwise f(dist) = 0. Please refer to fig. 4 for the value curve corresponding to this calculation formula.
In this way, the dist with the highest score can be obtained; that is, for each target distance, the distance between the two 3D points that scores highest is found. The correspondence between the distances of any two of the real-time 3D points and the distances of any two points of the rigid body object in the reference form can thus be determined, the two forming a one-to-one correspondence.
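The candidate-list construction and Gaussian scoring described above can be sketched as follows, under the stated assumptions (mean Dist, variance 1, score 0 outside [Dist − MaxDiff, Dist + MaxDiff]). The function names and sample distances are invented for illustration.

```python
import math

def score(dist, target, max_diff):
    """Gaussian score with mean `target` and variance 1; 0 outside the error range."""
    if abs(dist - target) > max_diff:
        return 0.0
    return math.exp(-(dist - target) ** 2 / 2.0) / math.sqrt(2.0 * math.pi)

def best_candidate(target, distances, max_diff):
    """Build the candidate list for one target distance and return the 3D point
    pair whose distance scores highest (i.e., is closest to the target)."""
    candidates = {
        pair: score(d, target, max_diff)
        for pair, d in distances.items()
        if abs(d - target) <= max_diff
    }
    return max(candidates, key=candidates.get) if candidates else None

# Invented distances between real-time 3D point pairs.
realtime = {("3D1", "3D2"): 5.02, ("3D1", "3D3"): 7.90, ("3D2", "3D3"): 4.80}
match = best_candidate(5.0, realtime, max_diff=0.3)
```

Here the pair ("3D1", "3D2") wins because 5.02 deviates least from the target distance 5.0, while 7.90 falls outside the error range and receives no score at all.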
And S104, determining, according to the correspondence, the correspondence between the real-time 3D points and the points of the rigid body object in the reference form.
After the correspondence between the distances of the real-time 3D points and the distances of the points of the rigid body object in the reference form has been determined, the correspondence between the real-time 3D points and the points in the reference form can be determined from the correspondence between the distances. For example, referring to fig. 2 and fig. 3, suppose it is determined that the distance between 3D1 and 3D2 matches the distance between Base1_1 and Base1_2, and the distance between 3D2 and 3D3 matches the distance between Base1_2 and Base1_5. It can then be determined that 3D1 in fig. 2 matches Base1_1 in fig. 3, 3D2 matches Base1_2, and 3D3 matches Base1_5.
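One plausible way to turn matched distance pairs into the point-level correspondence described above is to let each matched pair vote for its possible endpoint assignments and then pick assignments greedily. This sketch is an interpretation for illustration, not the patent's prescribed algorithm; the helper and variable names are our own.

```python
from collections import Counter

def point_correspondence(matched_pairs):
    """matched_pairs: list of ((rt_a, rt_b), (ref_a, ref_b)) distance matches.
    Vote for every possible (real-time point, reference point) assignment,
    then greedily keep the most-supported consistent assignments."""
    votes = Counter()
    for (a, b), (x, y) in matched_pairs:
        # Either orientation of the pair is possible; count both.
        votes[(a, x)] += 1; votes[(b, y)] += 1
        votes[(a, y)] += 1; votes[(b, x)] += 1
    mapping, used_refs = {}, set()
    for (rt, ref), _ in votes.most_common():
        if rt not in mapping and ref not in used_refs:
            mapping[rt] = ref
            used_refs.add(ref)
    return mapping

# The two matched distances from the example in the text.
matches = [(("3D1", "3D2"), ("Base1_1", "Base1_2")),
           (("3D2", "3D3"), ("Base1_2", "Base1_5"))]
mapping = point_correspondence(matches)
```

The shared endpoint 3D2 collects two votes for Base1_2, which pins down the orientation of both pairs and reproduces the assignment given in the text.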
In an actual optical motion capture scene there are usually tens or even hundreds of rigid bodies, each composed of several rigid body points, so the number of rigid body points and of real-time three-dimensionally reconstructed 3D points can reach hundreds or thousands. The number of candidate 3D points for each reference rigid body point is then very large, and performing the matching calculation directly would be very expensive. One approach is a further screening operation that keeps only the top n (say 5) highest-scoring candidate points, which to a large extent still contain the truly matching 3D point. Experience shows that the higher-scoring candidate points retained in this way are valid in about 99% of cases, and only rarely is a correct point removed. In exchange, the matching computation is typically reduced by about 90%, a speed-up of nearly 10 times, which is usually very worthwhile. Even if a correct point is occasionally removed, only one match is missed at the current moment; subsequent moments will, with high probability, still screen correctly, so subsequent frame matching is not affected, while a large amount of computation on wrong matches is avoided.
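The screening operation described above — keeping only the top n (for example 5) highest-scoring candidate points per reference rigid body point — can be sketched as follows; the point names and scores are illustrative.

```python
import heapq

def keep_top_n(candidates, n=5):
    """candidates: {point_id: score}; return the ids of the n best-scoring points."""
    return heapq.nlargest(n, candidates, key=candidates.get)

# Illustrative candidate scores for one reference rigid body point.
scores = {"3D1": 0.39, "3D2": 0.12, "3D3": 0.31, "3D4": 0.02,
          "3D5": 0.25, "3D6": 0.18, "3D7": 0.07}
shortlist = keep_top_n(scores, n=5)  # drops the two lowest-scoring points
```

Only the shortlist enters the later fine matching, which is where the bulk of the computational saving comes from.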
Candidate 3D points are thus selected through the Gaussian-distribution error, and the top n highest-scoring candidate points are retained; points that cannot possibly match are thereby greatly reduced, which reduces the amount of computation in the later fine matching of the rigid body.
In the above description of the 3D point selection method in the optical motion capture scene according to the embodiment of the present invention, referring to fig. 5, a 3D point selection device in the optical motion capture scene according to the embodiment of the present invention is described below, and an embodiment of the 3D point selection device 20 in the optical motion capture scene according to the embodiment of the present invention includes:
an obtaining module 201, configured to obtain a real-time 3D point of a rigid object in an optical motion capture scene at a target moment;
a calculating module 202, configured to calculate a distance between any two of the real-time 3D points;
a comparison module 203, configured to compare the distance between any two of the real-time 3D points with the distance between any two points of the rigid body object in the reference form to obtain a correspondence;
a corresponding module 204, configured to determine, according to the corresponding relationship, a corresponding relationship between the real-time 3D point and each point of the rigid body object in a reference form.
The acquiring module 201 is further configured to acquire a distance between any two points of the rigid body object in the reference form.
The comparison module 203 is specifically configured to obtain a target distance of the rigid body object in the reference form, where the target distance is the distance between any two points of the rigid body object in the reference form;
the comparison module 203 is further specifically configured to preset an error range based on the target distance, obtain a distance within the error range of the target distance from the distances between any two of the real-time 3D points, and add the distance within the error range of the target distance to the candidate list of the target distance;
the comparing module 203 is further configured to determine a distance value closest to the target distance in the candidate list.
The comparing module 203 is specifically configured to calculate the score values for the target distance of all distance values in the candidate list; the comparison module is further specifically configured to take the distance value with the highest score as the distance value closest to the target distance.
The comparing module 203 is specifically configured to calculate score values of all distance values in the candidate list for the target distance according to the following formula:
f(dist) = (1/√(2π)) · exp(−(dist − Dist)²/2), for dist ∈ [Dist − MaxDiff, Dist + MaxDiff]; otherwise f(dist) = 0
wherein f() is the score value, dist is a distance value in the candidate list, Dist is the value of the target distance, and dist lies within the error range.
Fig. 5 above describes the 3D point selection device in the optical motion capture scene in the embodiment of the present invention in detail from the perspective of the modular functional entity, and the 3D point selection device in the optical motion capture scene in the embodiment of the present invention is described in detail in the following from the perspective of hardware processing.
Fig. 6 is a schematic structural diagram of a 3D point selection apparatus in an optical motion capture scene according to an embodiment of the present invention. The apparatus 300 may differ considerably in configuration or performance, and may include one or more processors (CPUs) 310, a memory 320, and one or more storage media 330 (e.g., one or more mass storage devices) storing applications 333 or data 332. The memory 320 and the storage media 330 may be transient or persistent storage. The program stored on the storage medium 330 may include one or more modules (not shown), each of which may include a series of instruction operations on the apparatus 300. Further, the processor 310 may be configured to communicate with the storage medium 330 to execute the series of instruction operations in the storage medium 330 on the apparatus 300.
The apparatus 300 may also include one or more power supplies 340, one or more wired or wireless network interfaces 350, one or more input-output interfaces 360, and/or one or more operating systems 331, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like. Those skilled in the art will appreciate that the 3D point selection apparatus configuration in the optical motion capture scene illustrated in fig. 6 does not constitute a limitation of the apparatus, which may include more or fewer components than those illustrated, some components in combination, or a different arrangement of components.
The invention also provides a computer readable storage medium, which may be a non-volatile computer readable storage medium, which may also be a volatile computer readable storage medium, having stored therein instructions, which, when run on a computer, cause the computer to perform the steps of the method for 3D point selection in an optical motion capture scene.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, and an optical disk.
In the examples provided herein, it is to be understood that the disclosed methods may be practiced otherwise than as specifically described without departing from the spirit and scope of the present application. The present embodiment is an exemplary example only, and should not be taken as limiting, and the specific disclosure should not be taken as limiting the purpose of the application. For example, some features may be omitted, or not performed.
The technical means disclosed in the invention scheme are not limited to the technical means disclosed in the above embodiments, but also include the technical scheme formed by any combination of the above technical features. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present invention, and such improvements and modifications are also considered to be within the scope of the present invention.
The above provides a detailed description of a method, an apparatus and a storage medium for 3D point selection in an optical motion capture scene according to embodiments of the present invention. Specific examples have been used herein to explain the principles and implementations of the invention; the description of the above embodiments is intended only to aid in understanding the method and its core idea. Meanwhile, a person skilled in the art may, following the idea of the present invention, vary the specific implementation and the scope of application; in summary, the content of this specification should not be construed as limiting the invention. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A method of 3D point selection in an optical motion capture scene, the method comprising:
acquiring real-time 3D points of a rigid body object in an optical motion capture scene at a target moment;
calculating the distance between any two of the real-time 3D points;
comparing the distance between any two of the real-time 3D points with the distance between any two points of the rigid body object in a reference form to obtain a comparison result;
and determining, according to the comparison result, the correspondence between the real-time 3D points and the points of the rigid body object in the reference form.
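The selection flow recited in claim 1 can be sketched in the following minimal Python illustration. The function names, the use of Euclidean distance, the fixed tolerance, and the greedy nearest-distance matching are assumptions made for illustration only, not the claimed implementation:

```python
from itertools import combinations
import math

def pairwise_distances(points):
    """Distance between every unordered pair of point indices."""
    return {(i, j): math.dist(points[i], points[j])
            for i, j in combinations(range(len(points)), 2)}

def match_pairs(live_points, reference_points, tolerance=0.005):
    """For each reference pair distance, find the live pair whose
    distance falls within +/- tolerance and lies closest to it
    (hypothetical greedy matching, not the patented procedure)."""
    live = pairwise_distances(live_points)
    ref = pairwise_distances(reference_points)
    correspondence = {}
    for ref_pair, target in ref.items():
        candidates = {pair: d for pair, d in live.items()
                      if abs(d - target) <= tolerance}
        if candidates:
            best = min(candidates, key=lambda p: abs(candidates[p] - target))
            correspondence[ref_pair] = best
    return correspondence
```

When the live points coincide with the reference points and all pairwise distances are distinct, each reference pair matches itself; real marker sets would additionally need to resolve ties and occlusions.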
2. The method of claim 1, further comprising:
and acquiring the distance between any two points of the rigid body object in the reference form.
3. The method according to claim 1 or 2, wherein the comparison of the distance between any two of the real-time 3D points with the distance between any two points of the rigid body object in the reference form comprises:
acquiring a target distance of the rigid body object in a reference form, wherein the target distance is the distance between any two points of the rigid body object in the reference form;
presetting an error range on the basis of the target distance, acquiring, from the distances between any two of the real-time 3D points, those distances that fall within the error range of the target distance, and adding each such distance to a candidate list for the target distance;
determining a distance value in the candidate list that is closest to the target distance.
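The steps of claim 3 — building a per-target candidate list within a preset error range and then picking the nearest distance value — can be sketched as follows; the helper names and the numeric tolerance are illustrative assumptions:

```python
def build_candidate_list(target, live_distances, err=0.01):
    """Keep only live pair distances within +/- err of the target
    (reference) distance; err stands in for the preset error range."""
    return [d for d in live_distances if abs(d - target) <= err]

def closest_in_candidates(target, candidates):
    """Return the candidate distance value nearest the target,
    or None when the candidate list is empty."""
    if not candidates:
        return None
    return min(candidates, key=lambda d: abs(d - target))
```

For example, `build_candidate_list(1.0, [0.991, 1.004, 1.5])` keeps `[0.991, 1.004]`, and `closest_in_candidates` then selects `1.004` as the value nearest the target distance.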
4. The method of claim 3, wherein determining the closest distance value to the target distance in the candidate list comprises:
calculating a score value for the target distance for each distance value in the candidate list;
and taking the distance value with the highest score for the target distance as the distance value closest to the target distance.
5. The method of claim 4, wherein the calculating the score value for the target distance for all distance values in the candidate list comprises:
calculating score values of all distance values in the candidate list for the target distance according to the following formula:
[Formula reproduced in the original publication only as image FDA0003095643320000021]
wherein, the f () is the score value, Dist is all distance values in the candidate list, Dist is the value of the target distance, and Dist is within the error range.
6. An apparatus for 3D point selection in an optical motion capture scene, the apparatus comprising:
the acquisition module is used for acquiring real-time 3D points of the rigid body object in the optical motion capture scene at the target moment;
the calculating module is used for calculating the distance between any two real-time 3D points;
the comparison module is used for comparing the distance between any two of the real-time 3D points with the distance between any two points of the rigid body object in the reference form to obtain a comparison result;
and the correspondence module is used for determining, according to the comparison result, the correspondence between the real-time 3D points and the points of the rigid body object in the reference form.
7. The apparatus of claim 6,
the comparison module is specifically configured to obtain a target distance of the rigid body object in the reference form, where the target distance is the distance between any two points of the rigid body object in the reference form;
the comparison module is specifically configured to preset an error range based on the target distance, obtain a distance within the error range of the target distance from the distances between any two of the real-time 3D points, and add the distance within the error range of the target distance to the candidate list of the target distance;
the comparison module is further configured to determine a distance value closest to the target distance in the candidate list.
8. The apparatus of claim 7,
the comparison module is specifically configured to calculate score values of all distance values in the candidate list for the target distance;
the comparison module is further specifically configured to take the distance value with the highest score for the target distance as the distance value closest to the target distance.
9. An apparatus for 3D point selection in an optical motion capture scene, the apparatus comprising: a memory having instructions stored therein and at least one processor, the memory and the at least one processor interconnected by a line;
the at least one processor invokes the instructions in the memory to cause the apparatus to perform the method of 3D point selection in an optical motion capture scene of any of claims 1-5.
10. A computer-readable storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the method for 3D point selection in an optical motion capture scene according to any of claims 1-5.
CN202110610920.5A 2021-06-01 2021-06-01 3D point selection method and device in optical motion capture scene and storage medium Pending CN113506338A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110610920.5A CN113506338A (en) 2021-06-01 2021-06-01 3D point selection method and device in optical motion capture scene and storage medium


Publications (1)

Publication Number Publication Date
CN113506338A true CN113506338A (en) 2021-10-15

Family

ID=78008818

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110610920.5A Pending CN113506338A (en) 2021-06-01 2021-06-01 3D point selection method and device in optical motion capture scene and storage medium

Country Status (1)

Country Link
CN (1) CN113506338A (en)

Similar Documents

Publication Publication Date Title
CN105806315B (en) Noncooperative target relative measurement system and measuring method based on active coding information
KR101791590B1 (en) Object pose recognition apparatus and method using the same
CN107481284A (en) Method, apparatus, terminal and the system of target tracking path accuracy measurement
US9058538B1 (en) Bundle adjustment based on image capture intervals
CN111445531B (en) Multi-view camera navigation method, device, equipment and storage medium
US20150235380A1 (en) Three-dimensional object recognition device and three-dimensional object recognition method
CN103854283A (en) Mobile augmented reality tracking registration method based on online study
CN105354841B (en) A kind of rapid remote sensing image matching method and system
CN111951326A (en) Target object skeleton key point positioning method and device based on multiple camera devices
CN114445506A (en) Camera calibration processing method, device, equipment and storage medium
CN111583342A (en) Target rapid positioning method and device based on binocular vision
WO2021016806A1 (en) High-precision map positioning method, system and platform, and computer-readable storage medium
WO2022001739A1 (en) Mark point identification method and apparatus, and device and storage medium
JP2016009391A (en) Information processor, feature point selection method, device and program of the same
CN113506338A (en) 3D point selection method and device in optical motion capture scene and storage medium
CN115457127A (en) Self-adaptive covariance method based on feature observation number and IMU pre-integration
CN113470101A (en) Rigid body matching method and device in optical motion capture scene and storage medium
CN114445591A (en) Map construction method, system, device and computer storage medium
CN109238243B (en) Measuring method, system, storage medium and equipment based on oblique photography
CN113920196A (en) Visual positioning method and device and computer equipment
CN111178366B (en) Mobile robot positioning method and mobile robot
CN109087338B (en) Method and device for extracting image sparse optical flow
CN111932628A (en) Pose determination method and device, electronic equipment and storage medium
CN113470100A (en) Rigid body matching method and device in optical motion capture scene and storage medium
Fu et al. Adaptability simulation and analysis of scene matching algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination