CN108830132B - Sphere point distribution method for optical motion capture, capture sphere and system - Google Patents


Info

Publication number
CN108830132B
CN108830132B
Authority
CN
China
Prior art keywords
sphere
ball
regular
points
capture
Prior art date
Legal status
Active
Application number
CN201810322184.1A
Other languages
Chinese (zh)
Other versions
CN108830132A (en)
Inventor
王越
长坂友裕
许秋子
Current Assignee
Shenzhen Realis Multimedia Technology Co Ltd
Original Assignee
Shenzhen Realis Multimedia Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Realis Multimedia Technology Co Ltd filed Critical Shenzhen Realis Multimedia Technology Co Ltd
Priority to CN201810322184.1A priority Critical patent/CN108830132B/en
Priority to PCT/CN2018/090784 priority patent/WO2019196192A1/en
Priority to US16/470,749 priority patent/US11430135B2/en
Publication of CN108830132A publication Critical patent/CN108830132A/en
Application granted granted Critical
Publication of CN108830132B publication Critical patent/CN108830132B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition

Abstract

A sphere point distribution method, a capture ball and a system for optical motion capture are provided. The surface of a sphere is divided according to a predetermined combination of geometric figures, so that the surface presents a uniform figure distribution, and the reflective marker points of the sphere are set at the vertices or the edge center points of the geometric figures to form a capture ball for optical capture. A capture ball obtained by this point distribution method has uniformly distributed points with small differences between the distances separating the reflective marker points. When the optical capture system recognizes the motion posture of the capture ball, this helps the system quickly capture the reflective marker points on the ball and recognize the ball's motion posture information from the displacement changes of those points. Moreover, for a capture ball obtained in this way, motion posture recognition can be performed even when only a few matching points on the ball are captured, which reduces the computational load of the system and brings convenience to practical application.

Description

Sphere point distribution method for optical motion capture, capture sphere and system
Technical Field
The invention relates to the technical field of optical motion capture, in particular to a sphere point distribution method, a capture sphere and a system for optical motion capture.
Background
In existing optical motion capture systems, motion posture recognition and trajectory tracking of a target object are generally achieved by capturing the motion trajectories of reflective markers (a marker is a point whose surface is covered with a special reflective material, commonly used for capturing moving objects; its shape may be spherical, hemispherical, and so on). In general, a plurality of reflective markers arranged on the capture object are combined into a capturable rigid body (a rigid body is an idealized model of an object whose shape and size do not change during motion or under applied force, and in which the relative positions of all internal points remain fixed), with each reflective marker corresponding to a certain target part of the capture object. When tracking the capture object, different target parts are distinguished mainly by identifying the different rigid bodies on it. This method recognizes the motion postures of most target objects well, such as the head and hands of a human body or various simulated props in games. However, it is not feasible for a target object such as a ball, whose motion pattern is not fixed: the object moves randomly in all directions in the space, its motion direction cannot be controlled, and any part of it may collide with a contact surface, which may damage the rigid body.
At present, reflective stickers are pasted on the surface of the ball to turn the capture ball into a capturable rigid body, so that the motion posture of the ball can be recognized and its trajectory tracked. This approach has the following disadvantages:
(1) When the capture ball is recognized as a rigid body, a large number of reflective markers must be configured by pasting many reflective stickers on the ball in order to recognize its motion posture comprehensively. In the subsequent posture recognition and trajectory tracking process, the three-dimensional coordinates of every reflective marker must be traversed; the computational load is huge, which greatly degrades real-time motion posture recognition and trajectory tracking of the capture ball.
(2) Capture balls differ in size, and a ball with a larger diameter causes unpredictable occlusion of the reflective stickers during motion, which interferes with the capture process and leads to poor motion posture recognition.
Disclosure of Invention
In view of this, the technical problem mainly solved by the present invention is how to configure the reflective marker points on the capture ball so as to enhance real-time motion posture recognition and trajectory tracking in the subsequent posture recognition and trajectory tracking processes.
According to a first aspect, the present application provides a sphere spotting method for optical motion capture, comprising the steps of:
dividing the sphere surface into a plurality of sub-regions;
arranging reflective marker points in the plurality of sub-areas so as to centralize the distribution of distance values between any two reflective marker points, wherein the reflective marker points are used for carrying out optical motion capture on the sphere.
The dividing the sphere surface into a plurality of sub-regions, comprising: the surface of the sphere is divided into a graphical combination of a first number of first geometric figures and a second number of second geometric figures according to the size of the sphere.
The first geometric figure is a regular hexagon, the second geometric figure is a regular pentagon, and the side lengths of the regular hexagon and the regular pentagon are equal.
The first number is twenty and the second number is twelve.
Dividing the surface of the sphere into a first number of first geometries and a second number of second geometries, according to the size of the sphere, comprising: calculating the surface area of the sphere according to the diameter or the radius of the sphere; and calculating the side lengths of the regular hexagon and the regular pentagon according to the fact that the total area of the graph combination is equal to the surface area of the sphere.
The arranging of the reflective marker points in the plurality of sub-areas comprises:
determining a reference point on the equator of the surface of the sphere, wherein the reference point is used as the central point of a first regular hexagon to determine the position area of the first regular hexagon;
dividing regions for regular pentagons by taking three non-adjacent sides of the first regular hexagon as references, and dividing regions for further regular hexagons by taking the other three non-adjacent sides as references, so as to form a pattern combination of three regular hexagons and three regular pentagons distributed alternately on the six sides of one regular hexagon; the division is repeated until the whole surface of the sphere is covered, forming twenty regular hexagons and twelve regular pentagons.
The arranging of the reflective marker points in the plurality of sub-areas comprises: and arranging the reflective mark points of the sphere at the vertex or the center point of the edge of each geometric figure.
The arranging of the reflective marker points of the sphere at the vertices or edge center points of each geometric figure comprises: arranging the reflective marker points of the sphere at the vertices of the regular hexagons, or arranging them at the midpoints of alternate (non-adjacent) edges of the regular hexagons.
According to a second aspect, the application discloses a capturing ball for optical motion capture, wherein a plurality of reflective marker points are distributed on the capturing ball, and for each reflective marker point, the capturing ball is obtained by the ball point distribution method disclosed by the first aspect.
According to a third aspect, the application discloses an optical motion capture system comprising:
the catching ball disclosed in the second aspect above;
a plurality of infrared cameras for taking infrared images of the capture ball in a plurality of directions during motion in a motion space;
and the motion gesture recognition device is in communication connection with the plurality of infrared cameras and is used for carrying out motion gesture recognition on the capture ball according to the infrared images.
The application has the advantages that:
according to the sphere point distribution method for optical motion capture, the capture sphere and the system, due to the fact that the surface of the sphere is divided by the aid of the predetermined graph combination, the surface of the sphere is enabled to be in a uniform graph distribution state, and the light reflecting mark points of the sphere are set at the top points or the center points of the edges of the geometric images, so that the capture sphere for optical capture can be formed. The catching ball obtained by the ball point distribution method has the advantages of uniform point distribution and small distance difference between the reflective mark points, so that when the optical catching system is used for carrying out motion posture recognition on the catching ball, the system is favorable for rapidly recognizing the reflective mark points on the catching ball, and the system is favorable for recognizing the motion posture information of the catching ball according to the displacement change of the reflective mark points. 
Moreover, when the catching ball is recognized in the form of a rigid body in the past, at least 30 reflective mark points are required to form the rigid body of the catching ball to ensure the stability of the motion posture recognition process, so that in the process of capturing and recognizing the catching ball at the later stage, a system needs to capture and track each reflective mark point respectively, which is a very large calculation amount, long in time consumption and impractical for the capturing and tracking process of the rigid body; when the motion gesture recognition method and the motion gesture recognition device disclosed by the application are adopted, the motion gesture recognition can be carried out on the capture ball only by finding 10 or even less matching points on the capture ball, so that the calculation amount of a system is greatly reduced, the calculation speed in the capturing and tracking process is improved, and the real-time gesture recognition effect of the capture ball is enhanced, and meanwhile, the convenience is brought to practical application.
Drawings
FIG. 1 is a schematic diagram of a motion gesture recognition system;
FIG. 2 is a flow chart of a ball spotting method for the capture ball;
FIG. 3 is a flow chart of dividing a sphere surface into sub-regions;
FIG. 4 is one of the geometrical distribution diagrams of the catching balls;
FIG. 5 is a second schematic view showing the geometrical distribution of the catching balls;
FIG. 6 is a flow chart of a motion gesture recognition method;
FIG. 7 is a flow chart of obtaining three-dimensional coordinates of matching points;
FIG. 8 is a flow chart for obtaining the coordinates of the center of sphere;
FIG. 9 is a flow chart of obtaining motion gesture information;
FIG. 10 is a schematic structural diagram of the motion gesture recognition apparatus;
FIG. 11 is a schematic structural diagram of another motion gesture recognition apparatus.
Detailed Description
The present invention will be described in further detail below with reference to the detailed description and the accompanying drawings, in which like elements in different embodiments share like reference numerals. In the following description, numerous details are set forth to provide a better understanding of the present application. However, those skilled in the art will readily recognize that some of these features may, in different instances, be omitted or replaced by other elements, materials, or methods. In some instances, certain operations related to the present application are not shown or described in detail, to avoid obscuring the core of the application with excessive description; a detailed description of these operations is unnecessary for those skilled in the art, who can fully understand them from the specification and the general knowledge in the art.
Furthermore, the features, operations, or characteristics described in the specification may be combined in any suitable manner to form various embodiments. Likewise, the steps or actions in the method descriptions may be reordered or interchanged in ways that will be apparent to one of ordinary skill in the art. Thus, the various sequences in the specification and drawings are for the purpose of describing particular embodiments only and do not imply a required order unless it is otherwise indicated that a certain order must be followed.
Component numbering such as "first" and "second" is used herein only to distinguish the objects described, and carries no ordinal or technical meaning. Unless otherwise indicated, the terms "connected" and "coupled" as used in this application include both direct and indirect connections (couplings).
The embodiment of the invention provides a motion gesture recognition method and device for a capture ball, and aims to solve the problem that the computation amount of the motion gesture recognition process of the capture ball is huge in the prior art. The key point for solving the problem is to uniformly arrange the reflective mark points on the surface of the catching ball to form the catching ball with uniformly distributed points on one hand, and to improve the motion gesture recognition mechanism to form the motion recognition device with rapid and accurate recognition of the motion gesture of the catching ball on the other hand. Therefore, the invention also provides a motion gesture recognition system, which is used for recognizing the motion gesture of the capture ball by adopting the motion gesture recognition device.
Referring to fig. 1, the motion gesture recognition system includes a motion gesture recognition device 11, a plurality of infrared cameras 12, and a capture ball 13. The number of infrared cameras 12 can be adjusted to the field size and the actual application. The infrared cameras 12 can be distributed at multiple angles in the motion space to detect the capture ball from all directions, and each camera can shoot infrared images of the capture ball 13 in the motion space at high speed (the infrared images typically contain pixels corresponding to the reflective marker points). The motion gesture recognition device 11 is communicatively connected to each infrared camera 12 and performs motion gesture recognition on the capture ball 13 from the infrared images shot by the cameras. The capture ball 13 should be a sphere with a plurality of reflective marker points distributed uniformly on its surface; preferably, the marker points are distributed at the vertex positions of the regular hexagon figures of a soccer-ball pattern on the surface, or at the center points of the edges where pairs of regular hexagons meet.
In order to accurately understand the distribution pattern of the reflective marker points on the surface of the capture ball 13, a ball spotting method for the capture ball will be described, referring to fig. 2, which includes steps S01-S02.
In step S01, the sphere surface is divided into a plurality of sub-regions. The specific concept is illustrated as follows:
When reflective marker points are arranged on the surface of the sphere, the following conditions should be met so that the points are easy to identify and not occluded: (1) the reflective marker points are distributed uniformly; (2) the number of reflective marker points is as small as possible; (3) there are as few distinct point-to-point distances as possible, i.e. the distance values between any two reflective marker points are concentrated. To satisfy these conditions, the spherical surface is divided into a plurality of sub-areas of similar area, and the reflective marker points are arranged within these sub-areas so that the distribution of distance values between any two marker points is concentrated. Whether the distance values are concentrated can be measured by counting how often each point-to-point distance occurs; for example, if the occurrence count of a distance value reaches a preset value, the arrangement of the reflective marker points on the ball is considered to satisfy the condition.
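Condition (3) can be checked numerically. The sketch below (plain Python; the helper name and tolerance are illustrative assumptions, not part of the patent) bins all pairwise distances and reports how concentrated the distribution is:

```python
from itertools import combinations
import math

def distance_histogram(points, tol=1e-3):
    """Group pairwise distances that agree within `tol` and count each group.
    A concentrated distribution has few groups, each with a high count."""
    bins = []  # each entry is [representative distance, count]
    for a, b in combinations(points, 2):
        d = math.dist(a, b)
        for entry in bins:
            if abs(entry[0] - d) < tol:
                entry[1] += 1
                break
        else:
            bins.append([d, 1])
    return sorted(bins, key=lambda e: -e[1])

# The 4 vertices of a unit square have only two distinct distances
# (side and diagonal), i.e. a maximally concentrated distribution.
square = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(distance_histogram(square))
```

For the square, the histogram contains exactly two bins: the side length (count 4) and the diagonal (count 2).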
Comparing different geometric figures reveals that the surface of a soccer ball always consists of 20 regular hexagons and 12 regular pentagons, and that these hexagons and pentagons are uniformly distributed. Therefore, the surface of the capture ball 13 can be divided according to the geometric pattern of a soccer ball's surface; afterwards, points are placed at the vertex positions of the regular hexagons on the surface of the capture ball 13, or at the edge center points of the non-adjacent sides of the regular hexagons. In this way, the reflective marker points on the surface of the capture ball 13 achieve uniform distribution, few distinct point-to-point distances, and a concentrated distribution of distance values.
Therefore, the soccer-ball pattern can serve as the reference for the plurality of sub-areas; that is, the surface of the ball is divided, according to its size, into a pattern combination of a first number of first geometric figures and a second number of second geometric figures, where the first geometric figure is a regular hexagon, the second geometric figure is a regular pentagon, the side lengths of the regular hexagon and the regular pentagon are equal, the first number is twenty, and the second number is twelve. The sphere surface may then be divided into a plurality of sub-regions based on this pattern combination; referring to fig. 3, step S01 may include steps S011-S015, described in detail below.
Step S011: to divide the surface of the sphere into sub-regions, the surface area of the sphere must be known, so that the sub-regions can be laid out reasonably according to the determined pattern combination. The surface area of the sphere is given by the formula 4πR², where R is the radius of the sphere.
Step S012: for the regular hexagons and regular pentagons of the determined pattern combination to completely cover the sphere that will become the capture ball 13, the size of the geometric figures must be calculated from the size of the sphere. Here, strictly following the soccer-ball pattern, the numbers of regular hexagons and regular pentagons of equal side length are set to 20 and 12 respectively, and the size of the figures (i.e., their side length) is determined by the following formula:
4πR² = 20 × 2.598x² + 12 × 1.720x²
where R is the radius of the sphere and x is the side length of a regular hexagon or regular pentagon. For example, when a sphere with a radius of 15 cm is selected as the capture ball 13, the side length of the regular hexagons and regular pentagons laid on its surface should be approximately 6.24 cm.
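The 15 cm example can be verified directly from the formula. The sketch below uses the exact unit-side areas of the regular hexagon (3√3/2 ≈ 2.598) and regular pentagon (≈ 1.720); the function name is an illustrative choice:

```python
import math

def side_length(radius):
    """Side length x of the 20 regular hexagons and 12 regular pentagons
    tiling a sphere of the given radius, solving
    4*pi*R^2 = 20*2.598*x^2 + 12*1.720*x^2 for x."""
    hexagon_area = 3 * math.sqrt(3) / 2                      # ≈ 2.598 per unit side
    pentagon_area = math.sqrt(25 + 10 * math.sqrt(5)) / 4    # ≈ 1.720 per unit side
    total = 20 * hexagon_area + 12 * pentagon_area
    return math.sqrt(4 * math.pi * radius ** 2 / total)

print(round(side_length(15), 2))  # side length for a 15 cm radius ball
```

For a radius of 15 cm this yields approximately 6.24 cm, matching the example in the text; the side length scales linearly with the radius.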
In step S013, a position region of the first regular hexagon on the surface of the sphere is determined. In one embodiment, referring to FIG. 4, an "equator" is defined on the surface of the capture ball 13, and a reference point is defined on the equator, the reference point being the center point of the first regular hexagon 130 to determine the location area of the first regular hexagon 130.
Step S014: form a pattern combination of three regular hexagons and three regular pentagons distributed alternately on the six sides of the first regular hexagon. In a specific embodiment, referring to figs. 4 and 5, the regions of the regular pentagons 132 are divided by taking three non-adjacent sides of the first regular hexagon 130 from step S013 as references, and the regions of the regular hexagons 131 are divided by taking the other three non-adjacent sides as references, so as to form a pattern combination of three regular hexagons 131 and three regular pentagons 132 distributed alternately on the six sides of the regular hexagon 130. Note that the first regular hexagon 130 and the regular hexagons 131 have equal side lengths.
In step S015, the entire surface of the sphere is divided according to the pattern combination in step S014 until the surface of the sphere is divided, forming twenty regular hexagons 131 and twelve regular pentagons 132.
It should be noted that the process of dividing the geometric figure includes printing, projecting, pasting, virtually dividing, and other technical means, and the specific selected technical means is not limited herein.
Step S02: arrange reflective marker points in the divided sub-regions so that the distribution of distances between any two reflective marker points used for optical motion capture of the sphere is concentrated. In a specific embodiment, the reflective marker points of the sphere are arranged at the vertices of the geometric figures in each sub-region, or at the edge center points of their non-adjacent edges. Step S02 then includes the following processes:
(1) Mark the vertices of the divided regular hexagons 131 on the surface of the capture ball 13 (it should be understood that the vertices of the regular pentagons 132 could equally be selected, with the same marking effect); alternatively, mark the edge centers of the non-adjacent edges of the divided regular hexagons 131, preferably the center points of the edges where two regular hexagons 131 meet.
(2) Set each marked vertex or edge center as a reflective marker point, and treat the resulting reflective marker points as a point set for optical motion capture of the capture ball 13 (the vertex point set contains 60 reflective marker points; the edge-center point set contains 30). Here, the reflective marker points may be formed by attaching reflective stickers at the marked vertices or edge centers.
It will be appreciated by those skilled in the art that the present application also contemplates a capture ball for optical motion capture having 60 or 30 uniformly distributed reflective marker points. When the capture ball has 60 reflective marker points, they are distributed at the vertices of the regular hexagons formed in steps S01-S02; when it has 30 reflective marker points, they are distributed at the edge midpoints of the non-adjacent sides of the regular hexagons formed in steps S01-S02.
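The counts of 60 and 30 follow from the combinatorics of the soccer-ball (truncated icosahedron) tiling; a short sanity check in Python:

```python
hexagons, pentagons = 20, 12
faces = hexagons + pentagons

# Every vertex of this tiling is shared by exactly three faces
# (two hexagons and one pentagon); every edge is shared by two faces.
vertices = (hexagons * 6 + pentagons * 5) // 3
edges = (hexagons * 6 + pentagons * 5) // 2

# Euler's formula V - E + F = 2 must hold for a convex polyhedron.
assert vertices - edges + faces == 2

# Each hexagon has three hexagon neighbours (its non-adjacent sides),
# and each hexagon-hexagon edge is counted once by each of its two hexagons.
hex_hex_edges = hexagons * 3 // 2

print(vertices, hex_hex_edges)  # 60 vertex markers, 30 edge-midpoint markers
```

Marking all vertices therefore gives the 60-point set, and marking the midpoints of the edges shared by two hexagons gives the 30-point set.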
In another embodiment, the sphere point distribution method is not required: a soccer ball, or a ball resembling one, is used directly as the capture ball. In that case, the reflective marker points on the surface of the capture ball are distributed at the vertex positions of the regular hexagon pattern on the soccer ball's surface, or at the center points of the edges where pairs of regular hexagons meet, so that the point distribution process of steps S01-S02 is omitted, which is economical in practical application.
In addition, the skilled person should understand that, because of the distribution of the reflective marker points on its surface, the capture ball 13 obtained by the above point distribution method cannot simply be regarded as a rigid body, but rather as a capture body made of "miscellaneous points" (i.e. the reflective marker points). The capture and recognition process for the ball therefore does not need to compute a whole rigid body; it only needs to compute enough "miscellaneous points" to determine the ball's spatial position and motion posture. However, introducing "miscellaneous points" raises a corresponding problem: in the traditional sense, miscellaneous points are points that are incorrect or should not exist in the system and are excluded in practice, whereas the "miscellaneous points" of the capture ball 13 must be captured. A new judgment mechanism is therefore needed to decide, when recognizing the capture ball, which "miscellaneous points" belong to it and should be retained.
Here, the "miscellaneous points" on the surface of the capture ball 13 are uniformly distributed, and the distances between adjacent "miscellaneous points" take only a few values. For example, when there are 60 "miscellaneous points" on the capture ball 13, the closest point-to-point distance equals the side length and the points are densely distributed; when the ball has 30 reflective marker points, the closest point-to-point distance is half the sum of the diagonal and the side length of the regular hexagon, and the points are likewise densely distributed. It is therefore possible to determine, from the closest distances between "miscellaneous points", which points are the "miscellaneous points" of the capture ball to be recognized, and the problem raised in this paragraph is thereby solved.
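The nearest-neighbour criterion described here can be sketched as a simple filter; the function name and the tolerance are assumed for illustration rather than taken from the patent:

```python
import math

def points_on_capture_ball(candidates, expected_nn, tol=0.01):
    """Keep candidate points whose nearest-neighbour distance matches the
    expected spacing of the capture ball's marker points."""
    kept = []
    for p in candidates:
        others = [math.dist(p, q) for q in candidates if q is not p]
        if others and abs(min(others) - expected_nn) <= tol:
            kept.append(p)
    return kept

# Three markers spaced 1 unit apart belong to the ball; the distant point
# is a stray reflection and is rejected.
candidates = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (10, 10, 10)]
print(points_on_capture_ball(candidates, expected_nn=1.0))
```

A production system would also have to handle occluded markers and multiple capture balls, but the core judgment is this nearest-neighbour comparison.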
Referring to fig. 6, the present application discloses a motion gesture recognition method for a capture ball, which includes steps S10-S30.
Step S10: acquire the three-dimensional coordinates of all reflective marker points in the motion space of the capture ball, and identify a plurality of matching points of the capture ball. In one embodiment, see FIG. 7, step S10 may include steps S11-S16, described in more detail below.
Step S11: obtain the three-dimensional coordinates of all reflective marker points from their two-dimensional coordinates in the motion space, where the motion space contains identifiable objects other than the capture ball (i.e., besides the reflective marker points on the capture ball, the motion space also contains reflective marker points on other identifiable objects).
Referring to fig. 1, as the capture ball 13 moves freely in the motion space, the infrared cameras 12 continuously capture images, preferably at a rate of 120 frames per second or more per camera. Each infrared camera 12 processes its captured infrared images to obtain the two-dimensional coordinates of the reflective marker points and outputs them to the motion gesture recognition device 11. The device 11 obtains, for the same instant, the two-dimensional coordinates of the marker points in the multiple infrared images and performs three-dimensional reconstruction on them to obtain the three-dimensional coordinates of all reflective marker points in the motion space. Three-dimensional reconstruction is a conventional image-processing technique and belongs to the prior art, so it is not described in detail here.
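Three-dimensional reconstruction from two synchronized views is conventionally done by linear (DLT) triangulation. The NumPy sketch below uses generic toy projection matrices and is a standard two-view method, not the patent's specific camera model:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a marker's 3D position from its 2D coordinates in two
    calibrated views. P1, P2 are 3x4 projection matrices; x1, x2 are (u, v)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # least-squares solution of A @ X = 0
    X = Vt[-1]
    return X[:3] / X[3]           # de-homogenize

# Two toy cameras: identity pose, and a 1-unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.2, 0.3, 2.0])
u1 = point[:2] / point[2]                         # projection in camera 1
u2 = (point + [-1.0, 0.0, 0.0])[:2] / point[2]    # projection in camera 2
print(triangulate(P1, P2, u1, u2))
```

With noise-free projections, the recovered point matches the original exactly; with more cameras, additional row pairs are simply appended to A.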
It will be appreciated by those skilled in the art that other reflective marker points are also present in the motion space and inevitably affect the recognition of the capture ball 13. Therefore, all recognizable reflective marker points in the motion space must be matched one by one to determine which of them belong to the capture ball (i.e., the process of recognizing the matching points on the capture ball), so that the motion trajectory of the capture ball is correctly recognized. For the matching process, see steps S12-S15.
Step S12: compare the three-dimensional coordinates of any two reflective marker points identified in step S11 to obtain the distance between them.
Step S13: determine whether the distance obtained in step S12 is equal or close to the distance standard value (i.e., whether the difference between the obtained distance and the standard value is within an error range). If so, go to step S14; otherwise, return to step S12 and select the next pair of reflective marker points for judgment.
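Step S13 reduces to a tolerance comparison against the distance standard value(s). A minimal sketch, in which the tolerance value is an assumed figure rather than one specified by the patent:

```python
def matches_standard(distance, standard_values, tol=0.002):
    """Return True if `distance` (in metres) lies within `tol` of any
    distance standard value; `tol` models the error range of step S13."""
    return any(abs(distance - s) <= tol for s in standard_values)

# Using the 6.24 cm side length from the earlier example as the standard:
print(matches_standard(0.0624, [0.0624]))  # a pair on the capture ball
print(matches_standard(0.1100, [0.0624]))  # an unrelated pair, rejected
```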
Those skilled in the art will also appreciate that, conventionally, the distance between two points in space can be obtained in three ways: (1) derivation from a formula, which is theoretical and error-prone in practice; (2) direct measurement with a measuring tool, which is accurate but impractical for rapid ranging; (3) distance statistics in an actual optical motion capture environment, which combines the advantages of the first two methods and suits practical application. Therefore, to achieve a better ranging effect, the third method is adopted: the spatial distances between all reflective marker points of the capture ball in the motion space are collected, and the distance values whose occurrences are concentrated in the statistics are used as the distance standard values.
It should be noted that, when the third method is used to obtain the distance standard values, the process is specifically as follows: 1) in a test stage, obtain the three-dimensional coordinates of each reflective marker point on the surface of the capture ball 13, where the capture ball 13 is the only recognizable object in the motion space (there are no other recognizable objects and hence no reflective marker points belonging to them); 2) calculate the distance between any two reflective marker points from the three-dimensional coordinates of all reflective marker points on the capture ball; 3) collect all distance values and set those that are concentrated in the distance distribution as the distance standard values. Note that, to ensure that concentrated distance values exist, the reflective marker points on the surface of the capture ball should be uniformly distributed; for the specific distribution, refer to the point distribution scheme described above. Note also that there may be one or more distance values in a concentrated cluster, i.e., there may be one or more distance standard values.
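As an illustration of this statistical approach, the concentrated distance values can be estimated by histogramming all pairwise marker distances and keeping the crowded bins. The sketch below is ours, not from the patent: the function name, the bin width, and the `min_share` threshold are illustrative assumptions, and NumPy is assumed to be available.

```python
import numpy as np

def distance_standard_values(points, bin_width=2.0, min_share=0.1):
    """Estimate the 'distance standard values' of a capture ball:
    pairwise marker distances whose values occur with concentrated
    frequency across all point pairs (test-stage data, with only the
    capture ball visible in the motion space).

    points    : (n, 3) array of 3-D marker coordinates
    bin_width : histogram bin width, same units as the coordinates
    min_share : a bin counts as 'concentrated' if it holds at least
                this fraction of all pairwise distances
    """
    pts = np.asarray(points, dtype=float)
    # All pairwise distances between distinct marker points.
    diffs = pts[:, None, :] - pts[None, :, :]
    d = np.sqrt((diffs ** 2).sum(-1))
    dists = d[np.triu_indices(len(pts), k=1)]

    # Histogram the distances and keep the centers of the crowded bins.
    bins = np.arange(dists.min(), dists.max() + 2 * bin_width, bin_width)
    counts, edges = np.histogram(dists, bins=bins)
    crowded = counts >= min_share * len(dists)
    centers = (edges[:-1] + edges[1:]) / 2
    return centers[crowded]
```

For example, for markers at the eight corners of a unit cube, the edge length and the face-diagonal length dominate the pairwise distances, so two concentrated distance values would be returned.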
In step S14, the two reflective marker points compared in step S13 are determined to be reflective marker points on the capture ball 13 and are referred to as matching points of the capture ball 13.
Step S15, it is determined whether the number of matching points on the capture ball 13 has reached a predetermined number (when the reflective marker points are located at the vertices of the capture ball 13, the predetermined number may be set to any value from 20 to 60, preferably 20; when they are located at the side midpoints of the capture ball, the predetermined number may be set to any value from 10 to 40, preferably 10). If yes, the process proceeds to step S16; otherwise, it returns to step S12, and the next pair of reflective marker points is selected for determination.
In step S20, the center coordinates of the capture ball 13 are obtained from the three-dimensional coordinates of the matching points determined in step S10. In one embodiment, see FIG. 8, the three-dimensional coordinates of each matching point are fed into an iterative algorithm to obtain the center coordinates of the capture ball 13; the iterative algorithm is illustrated by substeps S21-S25 of step S20, described in detail below.
Step S21, the average of the three-dimensional coordinates of the matching points is taken to obtain the theoretical center coordinates of the capture ball 13. In one embodiment, let the three-dimensional coordinates of the matching points be P1(X1, Y1, Z1), P2(X2, Y2, Z2), …, Pn(Xn, Yn, Zn), and let the theoretical center coordinate of the capture ball 13 be Q0(K01, K02, K03); Q0 is then obtained by the following formulas.
K01 = (X1 + X2 + … + Xn)/n
K02 = (Y1 + Y2 + … + Yn)/n
K03 = (Z1 + Z2 + … + Zn)/n
where n represents the number of matching points recognized on the capture ball 13.
Furthermore, a theoretical radius R0 of the capture ball 13 is obtained; in the first iteration, this theoretical radius R0 can be taken as the actual radius of the capture ball 13 obtained by measurement.
In step S22, the corrected radius and corrected center coordinates of the capture ball 13 are calculated from the theoretical center coordinate Q0 of the capture ball 13; the specific process includes steps S221-S225, described below.
Step S221, the distance from each matching point to the theoretical center coordinate Q0 is obtained from the three-dimensional coordinates of the matching points in step S10 and the theoretical center coordinate Q0, and the corrected radius R1 of the capture ball is obtained by averaging these distances. In one embodiment, if the distances from P1, P2, …, Pi, …, Pn to Q0 are D1, D2, …, Di, …, Dn respectively (i is any integer from 1 to n), then R1 = (D1 + D2 + … + Dn)/n.
In step S222, for each matching point, the difference between the three-dimensional coordinate of the matching point and the theoretical center coordinate Q0 is taken to obtain the coordinate offset Mi of that matching point (i is any integer from 1 to n). In one embodiment, the coordinate offsets may be expressed as M1(X1-K01, Y1-K02, Z1-K03), M2(X2-K01, Y2-K02, Z2-K03), …, Mi(Xi-K01, Yi-K02, Zi-K03), …, Mn(Xn-K01, Yn-K02, Zn-K03).
In step S223, for each matching point, the distance Di from the matching point to the theoretical center coordinate Q0 is subtracted from a preset radius to obtain the distance offset Wi of that matching point (i is any integer from 1 to n), where the preset radius is either the theoretical radius R0 or the corrected radius R1 of the capture ball 13. In one embodiment, the distance offsets may be expressed as W1 = R1 - D1, W2 = R1 - D2, …, Wi = R1 - Di, …, Wn = R1 - Dn; in another embodiment, they may be expressed as W1 = R0 - D1, W2 = R0 - D2, …, Wi = R0 - Di, …, Wn = R0 - Dn.
Here, the theoretical radius R0 is an actual measured value of the capture ball 13 recorded in the system in advance; this value is updated in step S25.
In step S224, for each matching point, the coordinate offset Mi of the matching point Pi is multiplied by the corresponding distance offset Wi to obtain the corrected coordinate offset Pi′ of that matching point, and the average of the corrected coordinate offsets of all matching points is taken as the coordinate offset C of the theoretical center coordinate Q0 of the capture ball 13. In one embodiment, the corrected coordinate offsets may be expressed as P1′ = M1 × W1, P2′ = M2 × W2, …, Pi′ = Mi × Wi, …, Pn′ = Mn × Wn, and C is given by
C = (P1′ + P2′ + … + Pi′ + … + Pn′)/n
Step S225, the theoretical center coordinate Q0 of the capture ball 13 is summed with its coordinate offset C to obtain the corrected center coordinate Q1 of the capture ball 13, specifically:
Q1 = Q0 + C
Step S23, the corrected radius R1 of the capture ball 13 is compared with the theoretical radius R0 to determine whether the difference between them is within the radius error range set by the user; likewise, the corrected center coordinate Q1 of the capture ball 13 is compared with the theoretical center coordinate Q0 to determine whether the difference between them is within the coordinate error range set by the user. If both determinations are yes, the process proceeds to step S24; otherwise, it proceeds to step S25.
In step S24, the corrected center coordinate Q1 of the capture ball 13 is taken as the center coordinate of the capture ball 13; at this point, the system is considered to have completed the center-point optimization of the capture ball 13.
In step S25, the value of the theoretical radius R0 of the capture ball 13 is updated to the corrected radius R1, and the value of the theoretical center coordinate Q0 is updated to the corrected center coordinate Q1. The process then returns to step S21, and the corrected radius and corrected center coordinate of the capture ball are recalculated, until the newly obtained corrected radius differs from the updated theoretical radius R0, and the newly obtained corrected center coordinate differs from the updated theoretical center coordinate Q0, by amounts within the corresponding error ranges (i.e., until the process enters step S24).
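The iterative refinement of substeps S21-S25 can be sketched roughly as follows. This is a minimal reading of the patent text, not its authoritative implementation: the function name, the tolerance parameters, and the iteration cap are our assumptions, and NumPy is assumed to be available.

```python
import numpy as np

def refine_center(points, r0, radius_tol=1e-6, center_tol=1e-6, max_iter=100):
    """Iteratively refine the sphere-center estimate of the capture ball
    (a sketch of substeps S21-S25).

    points : (n, 3) array of matching-point coordinates P1..Pn
    r0     : measured (theoretical) radius R0 recorded in the system
    Returns the refined center coordinate and refined radius.
    """
    pts = np.asarray(points, dtype=float)
    q0 = pts.mean(axis=0)                    # S21: theoretical center Q0
    for _ in range(max_iter):
        m = pts - q0                         # S222: coordinate offsets Mi
        d = np.linalg.norm(m, axis=1)        # distances Di to Q0
        r1 = d.mean()                        # S221: corrected radius R1
        w = r1 - d                           # S223: distance offsets Wi
        c = (m * w[:, None]).mean(axis=0)    # S224: center offset C
        q1 = q0 + c                          # S225: corrected center Q1
        # S23: stop once both radius and center corrections are small enough
        if abs(r1 - r0) <= radius_tol and np.linalg.norm(q1 - q0) <= center_tol:
            return q1, r1                    # S24: accept Q1 as the center
        r0, q0 = r1, q1                      # S25: update R0, Q0 and iterate
    return q0, r0
```

In each pass, S221-S225 produce the corrected radius R1 and corrected center Q1 from the current estimate, and S23 stops the loop once both corrections fall within the user-set error ranges.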
Step S30, the motion posture information of the capture ball in the motion space is obtained from the three-dimensional coordinates of the matching points and the center coordinate of the capture ball. In one embodiment, see FIG. 9, step S30 may include steps S31-S33, described in detail below.
In step S31, the center coordinate Q1 of the capture ball 13 obtained from the current frame (obtained in step S24) is compared with the center coordinate of the capture ball 13 obtained from the previous frame to obtain the displacement information of the capture ball 13. In one embodiment, the center coordinate of the capture ball 13 in the previous frame is obtained as in step S20, which is not repeated here.
The center coordinate Q1 indicates the position of the capture ball 13 in the motion space: when Q1 changes, the position of the capture ball 13 has changed, and the displacement distance and displacement direction of the capture ball 13 in the motion space are indicated by the change of Q1 in the X, Y and Z directions.
In step S32, the three-dimensional coordinates Pi of the matching points obtained from the current frame are compared with the three-dimensional coordinates of the corresponding matching points in the previous frame to obtain the posture information of the capture ball 13. In one embodiment, the three-dimensional coordinates of the corresponding matching points in the previous frame are obtained as in step S10, which is not repeated here.
It should be noted that the time interval between the current frame and the previous frame is very small, even on the order of 1 ms, so the posture change of the capture ball 13 within this interval can be considered very small, and the position change of each matching point can likewise be considered very small. Therefore, the matching point whose three-dimensional coordinates change least at the current time can be regarded as the matching point corresponding to the previous frame.
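Under this small-motion assumption, associating the matching points of the current frame with those of the previous frame reduces to a nearest-neighbor match. The sketch below illustrates that idea; the function name and the `max_move` threshold are our assumptions, not from the patent, and NumPy is assumed to be available.

```python
import numpy as np

def match_to_previous(curr, prev, max_move=5.0):
    """Associate each current-frame matching point with the previous-frame
    point whose position changed least.

    curr, prev : (n, 3) arrays of matching-point coordinates
    Returns, for each current point, the index of its previous-frame mate,
    or -1 if no previous point lies within max_move.
    """
    curr = np.asarray(curr, dtype=float)
    prev = np.asarray(prev, dtype=float)
    # Distance from every current point to every previous point.
    d = np.linalg.norm(curr[:, None, :] - prev[None, :, :], axis=-1)
    nearest = d.argmin(axis=1)
    ok = d[np.arange(len(curr)), nearest] <= max_move
    return np.where(ok, nearest, -1)
```

The per-point coordinate change between a matched pair then gives the posture-change contribution of that marker, while unmatched points (index -1) are treated as not belonging to the previous frame.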
Further, the three-dimensional coordinate Pi of any matching point indicates the position of that matching point: when Pi changes, the posture of the capture ball 13 has changed, and the posture-change distance and posture-change direction of the capture ball 13 in the motion space are indicated by the change of Pi in the X, Y and Z directions.
In step S33, the displacement information and posture information of the capture ball 13 are taken together as the motion posture information of the capture ball 13 in the motion space, which includes the displacement change and posture change of the capture ball 13.
In view of this, the motion posture recognition method for the capture ball disclosed in steps S10-S30 of the present application has the following advantages: (1) inspired by the surface structure of a football, the point distribution on the spherical target can be ensured, so that the reflective marker points are uniform and practical, which facilitates capturing the target and recognizing its motion posture; (2) the concept of "miscellaneous point" tracking is introduced, which removes the need to compute the whole rigid body before the target can be captured and tracked; the motion state of the target can be determined from a suitable number of "matching points" alone, which greatly reduces the computation load and improves the operating efficiency of the system.
It will be understood by those skilled in the art that the present application also protects a motion gesture recognition apparatus for capturing a ball, the motion gesture recognition apparatus 4 being shown in fig. 10 and comprising:
a matching point coordinate obtaining unit 41 for obtaining three-dimensional coordinates of the respective matching points on the capturing ball 13 in the motion space. The process of the matching point coordinate acquiring unit 41 acquiring the three-dimensional coordinates of the matching point on the capture ball 13 in the motion space may refer to step S10, and will not be described in detail here.
And a spherical center coordinate acquiring unit 42 for acquiring the spherical center coordinates of the captured ball 13 from the three-dimensional coordinates of the respective matching points. The sphere center coordinate obtaining unit 42 inputs the three-dimensional coordinates of each matching point into an iterative algorithm to obtain the sphere center coordinates of the captured ball, and the specific process may refer to step S20, which is not described in detail here.
The posture information obtaining unit 43 is configured to obtain the motion posture information of the capture ball 13 in the motion space according to the three-dimensional coordinates of the respective matching points and the coordinates of the center of sphere of the capture ball, and the specific process may refer to step S30.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In addition, each functional unit in the motion gesture recognition apparatus 4 claimed in the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be realized in the form of hardware or in the form of a software functional unit. The present application therefore also protects another motion gesture recognition apparatus 4, see fig. 11, comprising a memory 401, a processor 402 and a computer program 403, the memory 401 being adapted to store the computer program 403, and the computer program 403 being adapted to implement the method shown in steps S10-S30 above when executed by the processor 402.
Those skilled in the art will appreciate that all or part of the functions of the methods in the above embodiments may be implemented by hardware or by a computer program. When implemented by a computer program, the program may be stored in a computer-readable storage medium, which may include a read-only memory, a random access memory, a magnetic disk, an optical disk, a hard disk, and the like; the program is executed by a computer to realize the above functions. For example, the program may be stored in a memory of the device, and when the program in the memory is executed by the processor, all or part of the functions described above are implemented. The program may also be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disk, a flash disk, or a removable hard disk, and downloaded or copied into the memory of the local device, or installed as a version update in the system of the local device; when the program in the memory is then executed by the processor, all or part of the functions in the above embodiments are implemented.
The present invention has been described in terms of specific examples, which are provided to aid understanding of the invention and are not intended to be limiting. For a person skilled in the art to which the invention pertains, several simple deductions, modifications or substitutions may be made according to the idea of the invention.

Claims (7)

1. A sphere point distribution method for optical motion capture, comprising:
dividing the sphere surface into a plurality of sub-regions, specifically comprising: dividing the surface of the sphere into geometric figure combinations of a first number of regular hexagons and a second number of regular pentagons according to the size of the sphere;
arranging a plurality of uniformly distributed reflective marker points in the plurality of sub-areas so that the distance values between any two reflective marker points are concentrated in distribution and reach a preset value, wherein there are one or more concentrated distance values, specifically comprising: marking, on the surface of the sphere, the vertices or the midpoints of non-adjacent sides of each divided regular hexagon and/or regular pentagon; setting reflective marker points at the marked vertices or side midpoints of each regular hexagon and/or regular pentagon; and taking the resulting reflective marker points as a point set, wherein the point set formed by the vertices of the regular hexagons and/or regular pentagons comprises 60 reflective marker points, the point set formed by the midpoints of non-adjacent sides of the regular hexagons and/or regular pentagons comprises 30 reflective marker points, and the point set is used for optical motion capture of the sphere; the reflective marker points have a predetermined number: when the reflective marker points are distributed at the vertices of the sphere, the predetermined number is set to any value from 20 to 60; when the reflective marker points are distributed at the side midpoints of the sphere, the predetermined number is set to any value from 10 to 40.
2. The sphere point distribution method of claim 1, wherein the sides of the regular hexagons and regular pentagons are equal in length.
3. The sphere point distribution method of claim 1, wherein the first number is twenty and the second number is twelve.
4. The sphere point distribution method of claim 2, wherein dividing the sphere surface into a first number of regular hexagons and a second number of regular pentagons according to the size of the sphere comprises:
calculating the surface area of the sphere according to the diameter or the radius of the sphere;
and calculating the side lengths of the regular hexagons and regular pentagons from the condition that the total area of the figure combination equals the surface area of the sphere.
5. The sphere point distribution method of claim 2, wherein dividing the sphere surface into a first number of regular hexagons and a second number of regular pentagons according to the size of the sphere further comprises:
determining a reference point on the equator of the sphere surface, the reference point serving as the center point of a first regular hexagon to determine the position of the first regular hexagon;
dividing regular pentagons with three non-adjacent sides of the first regular hexagon as references, dividing further regular hexagons with the other three non-adjacent sides as references, so that the six sides of each regular hexagon adjoin three regular hexagons and three regular pentagons in alternation, until the whole sphere surface is divided into twenty regular hexagons and twelve regular pentagons.
6. A capture ball for optical motion capture, wherein a plurality of reflective marker points are distributed on the capture ball according to the sphere point distribution method of any one of claims 1-5.
7. An optical motion capture system, comprising:
a capture ball as in claim 6;
a plurality of infrared cameras for taking infrared images of the capture ball from a plurality of directions in a motion space;
and the motion gesture recognition device is in communication connection with the plurality of infrared cameras and is used for carrying out motion gesture recognition on the capture ball according to the infrared images.

Publications (2)

Publication Number	Publication Date
CN108830132A (en)	2018-11-16
CN108830132B (en)	2022-01-11


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114820689A (en) * 2019-10-21 2022-07-29 深圳市瑞立视多媒体科技有限公司 Mark point identification method, device, equipment and storage medium
CN111553944B (en) * 2020-03-23 2023-11-28 深圳市瑞立视多媒体科技有限公司 Method, device, terminal equipment and storage medium for determining camera layout position

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105742811A (en) * 2016-03-31 2016-07-06 哈尔滨工程大学 Truncated spherical radar dome and splicing method thereof
CN105832342A (en) * 2016-03-14 2016-08-10 深圳清华大学研究院 Kinematics parameter capturing method based on visible spatial expansion of optical motion capturing system
CN108081258A (en) * 2016-11-22 2018-05-29 广州映博智能科技有限公司 Robot remote control system and method based on optics motion capture

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI234105B (en) * 2002-08-30 2005-06-11 Ren-Guang King Pointing device, and scanner, robot, mobile communication device and electronic dictionary using the same
US8016688B2 (en) * 2005-08-15 2011-09-13 Acushnet Company Method and apparatus for measuring ball launch conditions
CN105169644A (en) * 2014-06-23 2015-12-23 南京专创知识产权服务有限公司 Night match type luminous football and night match type luminous football woodwork
CN108765457B (en) * 2018-04-11 2021-09-07 深圳市瑞立视多媒体科技有限公司 Motion gesture recognition method and device for catching ball


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Relationship between the side lengths of the regular pentagons and regular hexagons of a football; lishaowei3 et al.; https://wenku.baidu.com/view/b6fc7d353968011ca30091d1.html; 2012-09-05; p. 1 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant