WO2021000327A1 - Hand model generation method, apparatus, terminal device and hand motion capture method - Google Patents
Hand model generation method, apparatus, terminal device and hand motion capture method
- Publication number
- WO2021000327A1 (PCT/CN2019/094725)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- knuckle
- finger
- virtual
- hand model
- axis
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/014—Hand-worn input/output arrangements, e.g. data gloves
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T90/00—Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation
Definitions
- The invention relates to motion capture technology, and in particular to a hand model generation method, apparatus, terminal device, and hand motion capture method.
- Motion capture technology refers to placing trackers on key parts of a moving object; the positions of the trackers are captured by a motion capture system and then processed by a computer to obtain three-dimensional space coordinate data.
- Once the three-dimensional coordinate data is recognized by the computer, it can be used in animation production, gait analysis, biomechanics, ergonomics, and other fields.
- In a typical application, real-person footage captured by multiple cameras is replaced by digital models: the actors' actions are captured and recorded and then synchronized to virtual characters in the computer, so that the movements of the virtual characters are indistinguishable from those of the real person, achieving realistic and natural results.
- Different applications focus on different body parts of the user, and the placement of the trackers differs accordingly.
- If an application needs to follow the movement of the human hand, it must capture the motion of the hand, for example an application in which the user plays a piano with the fingers or operates an airplane cockpit. Therefore, how to design an algorithm that can accurately capture hand movements has become a very important research and development topic.
- When performing hand positioning and tracking, the user wears gloves fitted with reflective marking points and moves within the capture range of the cameras.
- The cameras collect the user's hand movement data, which is combined with a hand model for inverse kinematics (IK) calculation to determine the user's hand movement trajectory and obtain hand movement information.
- The common practice is to set up an initial hand model in advance, compare the current user's hand with the initial hand model, scale the initial model by a certain proportion, and use the scaled hand model as the hand model for the final IK calculation.
- However, hands differ from person to person: hands may be large or small, thick or thin, fingers long or short, and the proportions of the hand also vary. There are thus many factors that affect the hand model. If the initial hand model is merely scaled by a fixed proportion during the solution process, the scaled model cannot accurately reflect the user's current hand shape, which ultimately affects the accuracy of hand positioning.
- the present application provides a hand model generation method, device, and hand motion capture method to solve the problem of inaccurate hand models generated by simple equal scaling of the initial hand model in the prior art.
- the first aspect of the present application provides a method for generating a hand model, including:
- according to the received hand movement data of the current user and the initial hand model, calculating the current user's metacarpal expansion range and finger knuckle lengths to generate an intermediate hand model;
- the metacarpal expansion range is the expansion range of the first knuckle of each finger relative to the third axis of the three-dimensional coordinate system;
- the hand motion data is obtained after a motion capture camera captures the hand motion of a user wearing motion capture gloves;
- the final hand model is generated according to the expansion range of the metacarpal bones, the length of the finger knuckles, and the knuckle rotation center of each finger knuckle.
- the second aspect of the present application provides a hand model generating device, including:
- the initial hand model establishment unit is used to establish a three-dimensional coordinate system, and project a preset hand model onto a plane formed by a first axis and a second axis in the three-dimensional coordinate system to obtain an initial hand model;
- the coincidence unit is configured to coincide the plane where the palm of the current user is located with the plane formed by the first axis and the second axis in the three-dimensional coordinate system, and all the fingers of the user initially face the first axis;
- the intermediate hand model generating unit is configured to calculate the current user's metacarpal expansion range and finger knuckle length according to the received hand movement data of the current user and the initial hand model to generate the intermediate hand model;
- the expansion range of the metacarpal bone is the expansion range of the first knuckle of the finger relative to the third axis of the three-dimensional coordinate system;
- the hand motion data is obtained after the motion capture camera photographs the hand motion of the user wearing the motion capture glove;
- the final hand model generation unit is used to generate the final hand model according to the metacarpal bone expansion range, the length of the finger knuckles, and the knuckle rotation center of each finger knuckle.
- a third aspect of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor;
- when the processor executes the computer program, it implements the hand model generation method described in any possible implementation manner of the first aspect.
- a fourth aspect of the present application provides a hand motion capture method, the method including: generating a final hand model by using the hand model generation method of the first aspect, and performing IK calculation using the generated final hand model to obtain hand movement data.
- In the above aspects, when generating the hand model for the inverse kinematics (IK) operation, the initial hand model is not simply scaled; instead, the metacarpal expansion range, the finger knuckle lengths, and the knuckle rotation centers are calculated from the hand motion data obtained by the motion capture glove worn by the user and from the initial hand model, to generate a final hand model that better reflects the user's specific hand conditions (hand size, hand thickness, and the specific lengths of the fingers).
- FIG. 1 is a schematic diagram of the layout of reflective marking points on motion capture gloves in the prior art
- FIG. 2 is a schematic diagram of the layout of reflective marking points on a motion capture glove provided by an embodiment of the present invention
- FIG. 3 is a schematic flowchart of a first embodiment of a method for generating a hand model according to an embodiment of the present invention
- FIG. 4 is a schematic flowchart of a second embodiment of a method for generating a hand model according to an embodiment of the present invention
- FIG. 5 is a schematic flowchart of an embodiment of S404 in FIG. 4;
- FIG. 6 is a schematic diagram of the virtual mark point and the real reflective mark point created in FIG. 5;
- FIG. 7 is a schematic flowchart of a third embodiment of a method for generating a hand model according to an embodiment of the present invention.
- Fig. 8 is a schematic diagram of the rotation centers created in FIG. 7;
- FIG. 9 is a schematic diagram of a terminal device provided by an embodiment of the present application.
- FIG. 10 is a schematic flowchart of a hand motion capture method provided by an embodiment of the present application.
- Depending on the context, the term "if" can be interpreted as "when", "once", "in response to determining", or "in response to detecting".
- Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" can be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
- the layout of the reflective marking points on the motion capture glove is shown in FIG. 1.
- The layout of the reflective marking points on existing motion capture gloves is as follows: a rigid body (carrying 3 or more reflective marking points) is placed on the back of the hand (that is, at the position of the first knuckle of each finger), and a reflective marking point is then arranged at a specific position directly above each remaining knuckle of each finger (on the middle axis of the knuckle along the extending direction of the finger).
- the motion capture camera collects the hand motion trajectory image and transmits it to the server.
- the server analyzes the image to identify the movement data of each reflective marking point. Then, according to the movement data of each reflective marking point, the preset initial hand model is scaled in equal proportion, and the scaled hand model is subjected to inverse kinematics IK calculation.
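- To make the contrast with the later embodiments concrete, the following is a minimal sketch of the prior-art approach just described: a single global scale factor derived from the captured data is applied to every joint of the preset model. This is an illustration only; the function and variable names are not from the patent.

```python
import numpy as np

def scale_initial_model(model_joints, measured_span, model_span):
    """Prior-art style: scale the preset hand model in equal proportion.

    model_joints:  (N, 3) array of joint positions of the preset hand model.
    measured_span: a reference distance measured from the captured markers
                   (e.g. the width of the back of the hand).
    model_span:    the same reference distance in the preset model.
    """
    s = measured_span / model_span   # one global factor for the whole hand
    return model_joints * s          # every joint is scaled equally
```

- Because a single factor cannot capture independent variations in palm width, finger lengths, and hand thickness, the scaled model may not match the user's actual hand, which is the problem the embodiments below address.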
- The inventor of the present application found in practice that when the existing glove layout is used for motion capture, if the user's hand is too small, the distance between two reflective marking points during hand motion capture becomes too small for the algorithm to identify the correct reflective marking points. To ensure that the algorithm can correctly identify each reflective marking point during hand movement, the inventor of the present application proposes a new layout for the reflective marking points of the motion capture glove.
- To avoid the problem that reflective marking points on the glove are placed too close together and therefore cannot be accurately identified, the new layout adjusts the marking points on the second, third, and fourth knuckles of the fingers.
- As shown in Figure 2, not all marking points are laid out directly above the knuckles (on the middle axis of the knuckle along the direction of finger extension); instead, according to the actual situation, some reflective marking points are flexibly placed on the sides of the knuckles, which avoids the problem that reflective marking points too close together cannot be accurately identified.
- the preset initial hand model is usually scaled in an equal proportion based on the collected hand motion capture data.
- As noted above, hands may be large or small, thick or thin, with fingers long or short, and hand proportions also vary, so many factors affect the hand model; a hand model obtained by merely scaling the initial model by a fixed proportion cannot accurately reflect the user's current hand shape, which ultimately affects the accuracy of hand positioning. Therefore, this application provides a new hand model generation method for hand motion capture.
- the hand model generation method includes:
- S300 Establish a three-dimensional coordinate system, and project a preset hand model onto a plane formed by a first axis and a second axis in the three-dimensional coordinate system to obtain an initial hand model.
- a three-dimensional coordinate system is established, and a preset hand model (three-dimensional model) is projected onto a plane formed by the first axis and the second axis of the three-dimensional coordinate system to obtain an initial hand model.
- the first axis and the second axis can be any two of the x, y, and z axes.
- the corresponding initial hand model is a two-dimensional image without thickness.
- In the following description, the first axis is taken as the x-axis, the second axis as the z-axis, and the third axis as the y-axis as an example.
- S301 The plane of the current user's palm is made to coincide with the plane formed by the first axis and the second axis in the three-dimensional coordinate system, and all of the user's fingers initially point along the first axis; that is, the plane of the current user's palm coincides with the x-z plane and all fingers point along the x-axis.
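- As a hedged illustration of steps S300 and S301, the sketch below projects a three-dimensional hand model onto the x-z plane by zeroing the y component; the data layout is an assumption, not specified by the patent.

```python
import numpy as np

def project_to_xz(joints_3d):
    """Project a preset 3-D hand model onto the x-z plane (step S300).

    joints_3d: (N, 3) array of (x, y, z) joint positions.
    Zeroing y yields the initial hand model, a two-dimensional
    image without thickness lying in the palm plane.
    """
    joints = np.asarray(joints_3d, dtype=float).copy()
    joints[:, 1] = 0.0   # y := 0, so the palm plane coincides with x-z
    return joints
```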
- S302 According to the received hand movement data of the current user and the initial hand model, calculate the expansion range of the metacarpal bones and the length of the finger knuckles of the current user to generate an intermediate hand model.
- Since the metacarpal bones spread in the x-z plane relative to the y-axis, the metacarpal expansion range needs to be calculated based on the received hand movement data of the current user and the initial hand model.
- the expansion range of the metacarpal bone is the expansion range of the first knuckle of the finger relative to the y axis in the three-dimensional coordinate system.
- For the thumb, the expansion range of the metacarpal bones also includes the expansion range of the first knuckle relative to the x-axis and the z-axis in the three-dimensional coordinate system.
- the hand motion data is obtained after the motion capture camera photographs the hand motion of the user wearing the motion capture glove.
- the user wears motion capture gloves and moves in the capture area.
- multiple motion capture cameras capture multiple frames of hand motion images and transmit them to the server.
- One hand includes 5 fingers, and each finger includes 4 knuckles.
- The knuckle nearest the root of the hand is defined as the first knuckle, and the remaining knuckles (the second, third, and fourth knuckles) are defined in order along the bone structure up to the fingertip. It is therefore necessary to calculate the knuckle length of each finger knuckle. The intermediate hand model is then generated from the calculated metacarpal expansion range and finger knuckle lengths; the generated intermediate hand model is a two-dimensional image in the x-z plane.
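- The indexing convention just described can be pictured with a small sketch; the dictionary layout below is an assumption for illustration only, not the patent's data structure.

```python
# 5 fingers (i = 1..5) with 4 knuckles each (j = 1..4, root to fingertip).
# The intermediate hand model then amounts to one spread angle per finger
# plus one length per knuckle (the zeros are placeholders to be fitted).
intermediate_model = {
    "spread_angle": {i: 0.0 for i in range(1, 6)},
    "knuckle_length": {(i, j): 0.0 for i in range(1, 6) for j in range(1, 5)},
}
```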
- the intermediate hand model can actually be understood as the projection of the final hand model on the x-z plane.
- the final hand model needs to be deduced based on the projection of the final hand model on the x-z plane, that is, the middle hand model.
- The specific operation method is, for example, to first determine the knuckle rotation center of each corresponding knuckle according to the end-knuckle positions of each finger knuckle of the generated intermediate hand model and the captured position information of the real reflective marking points. After the knuckle rotation centers are calculated, the thickness of the hand model is correspondingly obtained. Then step S304 is performed.
- In this embodiment, when generating the hand model for the IK operation, the initial hand model is not simply scaled; instead, the metacarpal expansion range, the finger knuckle lengths, and the knuckle rotation centers are calculated from the hand motion data obtained by the user's motion capture glove and from the initial hand model, to generate a final hand model that better reflects the user's specific hand conditions (hand size, hand thickness, and the specific lengths of the fingers).
- FIG. 4 is a schematic flowchart of the second embodiment of the hand model generation method provided by the present invention.
- the hand model generation method of this embodiment includes:
- S400 Establish a three-dimensional coordinate system, and project a preset hand model onto a plane formed by a first axis and a second axis in the three-dimensional coordinate system to obtain an initial hand model.
- For the operation of this step, reference may be made to step S300 of the second embodiment, which will not be described in detail here.
- S401 The plane where the palm of the current user is located is made to coincide with the plane formed by the first axis and the second axis in the three-dimensional coordinate system, and all of the user's fingers initially point along the first axis; that is, the plane of the current user's palm coincides with the x-z plane and all fingers point along the x-axis.
- For the operation of this step, reference may be made to step S301 of the second embodiment, which will not be described in detail here.
- This embodiment describes step S302 of the second embodiment in detail.
- the operations include:
- S402 According to the received hand motion data of the current user, obtain the position information of each real reflective marking point on the motion capturing glove worn by the current user in the three-dimensional coordinate system.
- When performing motion capture, the user wears motion capture gloves and moves in the capture area, and multiple motion capture cameras capture multiple frames of hand motion images and transmit them to the server.
- the server calculates the position information of each real reflective marking point on the motion capture glove in the created three-dimensional coordinate system according to the captured moving image information and the relative position relationship between the multiple motion capture cameras.
- the reflective marking points laid out on the motion capture gloves are referred to as real reflective marking points to better distinguish them from the virtual marking points that will appear later.
- S403 Determine the knuckle length of the first knuckle of each finger according to the position information of the real reflective marking points on the rigid body on the back of the motion capture glove and the position information of the real reflective marking points on the second knuckles of each finger.
- Specifically, the end-knuckle position of each finger's first knuckle can be determined according to a certain ratio, and the knuckle length of the first knuckle of each finger is then calculated from the position information of the end knuckle of the first knuckle and the position information of the root node of the first knuckle (which is preset).
- S404 According to the position information of the real reflective marking points on the other knuckles of each finger and the initial hand model, calculate the knuckle length corresponding to each other knuckle of each finger and the current user's metacarpal expansion range.
- For example, the length of the second knuckle of the thumb is obtained specifically from the position information of the real reflective marking point on the second knuckle of the thumb and the initial hand model; the length of the third knuckle of the middle finger is obtained specifically from the position information of the real reflective marking point on the third knuckle of the middle finger and the initial hand model; and so on.
- The current metacarpal expansion range of the palm can be specifically embodied as the rotation angle θ_y of the first knuckle of each finger on the x-z plane relative to the y-axis.
- The specific calculation in this step is therefore to calculate the rotation angle θ_y of the first knuckle of each finger on the x-z plane relative to the y-axis. It should be noted that for the thumb, when calculating the metacarpal expansion range of the palm, the rotation angles of the first knuckle relative to the x-axis and the z-axis on the x-z plane are also calculated.
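- A minimal sketch of this angle computation, assuming the root and end positions of a first knuckle in the x-z plane are known (the names are illustrative):

```python
import numpy as np

def spread_angle_y(root_xz, end_xz):
    """Rotation angle theta_y of a first knuckle in the x-z plane.

    The angle is measured about the y-axis from the +x direction
    (the fingers' initial orientation), using the direction from
    the root of the first knuckle to its end knuckle.
    """
    dx, dz = np.asarray(end_xz, dtype=float) - np.asarray(root_xz, dtype=float)
    return np.arctan2(dz, dx)   # radians; the sign gives the spread direction
```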
- S501 Create virtual mark points corresponding to real reflective mark points on each other knuckle of each finger, and determine the position expression of each virtual mark point according to the initial hand model.
- If axis-angle optimization is not considered, the first method below can be used to determine the first position expression of each virtual marking point;
- if axis-angle optimization is considered, the second method below can be used to determine the position expression of each virtual marking point. Both methods are described in detail below.
- The first method is as follows:
- the knuckle position is the projection of the finger knuckle in the x-z plane.
- the end knuckle position is the projection of the end knuckle in the x-z plane.
- Figure 6 is a schematic diagram of the hand after the virtual marking points are created.
- The solid squares represent the end-knuckle positions of each knuckle of each finger except the first knuckle, and the open circles represent the positions of the created virtual marking points. It is important to note that the end-knuckle position of the fourth knuckle coincides with the position of its virtual marking point (the solid square of the fourth knuckle is therefore not shown in FIG. 6). What needs to be obtained in this step is the position information of each solid square.
- Since the initial orientation of each finger in the initial hand model is along the x-axis, the first position expression of each virtual marking point on the other knuckles of each finger can be determined from the offset amplitude of each virtual marking point on the x-axis, the offset amplitude of each virtual marking point on the y-axis, and the end-knuckle position corresponding to each other knuckle of each finger.
- The first position of a certain virtual marking point p_{i,j} is specifically expressed as:
- loc_{i,j} = a_{i,j}·u_1 + b_{i,j}·u_2 + BC_{i,j}, where:
- p_{i,j} represents the virtual marking point on the j-th knuckle of the i-th finger;
- loc_{i,j} represents the first position expression of the virtual marking point p_{i,j};
- u_1 represents the unit vector on the y-axis;
- a_{i,j} represents the offset amplitude of the virtual marking point p_{i,j} on the y-axis;
- u_2 represents the unit vector on the x-axis;
- b_{i,j} represents the offset amplitude of the virtual marking point p_{i,j} on the x-axis;
- BC_{i,j} represents the end-knuckle position of the knuckle where the virtual marking point p_{i,j} is located;
- BL_{i,j} represents the knuckle length of the knuckle where the virtual marking point p_{i,j} is located;
- i and j are integers with 1 ≤ i ≤ 5 and 2 ≤ j ≤ 4.
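- The first position expression translates directly into code; the sketch below assumes the axis convention used in this description (x along the fingers' initial direction, y normal to the palm plane).

```python
import numpy as np

U1 = np.array([0.0, 1.0, 0.0])   # u_1: unit vector on the y-axis
U2 = np.array([1.0, 0.0, 0.0])   # u_2: unit vector on the x-axis

def virtual_marker_position(a_ij, b_ij, bc_ij):
    """First position expression: loc_{i,j} = a_{i,j}*u_1 + b_{i,j}*u_2 + BC_{i,j}."""
    return a_ij * U1 + b_ij * U2 + np.asarray(bc_ij, dtype=float)
```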
- In the second method, the specific acquisition process may be: acquire the information, input by the user, of the real reflective marking points placed on the sides of the finger cuffs; that is, the user informs the server in advance which real reflective marking points are arranged on the sides of the glove's finger cuffs. After this information is obtained, according to the one-to-one correspondence between the real reflective marking points and the virtual marking points, the virtual marking points that need axis-angle optimization can be determined.
- Then, the position expression of each virtual marking point on the other knuckles of each finger is determined from the offset amplitude of each virtual marking point on the x-axis, the offset amplitude of each virtual marking point on the y-axis, the end-knuckle positions of the other knuckles of each finger, and the information on the virtual marking points that need axis-angle optimization.
- In this case, the position expression of a certain virtual marking point p_{i,j} is specifically:
- loc_{i,j} = R_{i,j}(a_{i,j}·u_1 + b_{i,j}·u_2 + BC_{i,j}), where:
- p_{i,j} represents the virtual marking point on the j-th knuckle of the i-th finger;
- loc_{i,j} represents the position expression of the virtual marking point p_{i,j};
- u_1 represents the unit vector on the y-axis;
- a_{i,j} represents the offset amplitude of the virtual marking point p_{i,j} on the y-axis;
- u_2 represents the unit vector on the x-axis;
- b_{i,j} represents the offset amplitude of the virtual marking point p_{i,j} on the x-axis;
- BC_{i,j} represents the end-knuckle position of the knuckle where the virtual marking point p_{i,j} is located;
- BL_{i,j} represents the knuckle length of the knuckle where the virtual marking point p_{i,j} is located;
- R_{i,j} represents the rotation matrix of the virtual marking point p_{i,j} relative to the x-axis: if p_{i,j} needs axis-angle optimization, R_{i,j} is the 3×3 rotation matrix defined by the rotation angle of p_{i,j} relative to the x-axis; if p_{i,j} does not need axis-angle optimization, R_{i,j} is the 3×3 identity matrix I. Here, i and j are integers with 1 ≤ i ≤ 5 and 2 ≤ j ≤ 4.
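- The second position expression only adds the rotation matrix R_{i,j}; a sketch under the same assumptions as above:

```python
import numpy as np

def rot_x(angle):
    """3x3 rotation matrix about the x-axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,  -s],
                     [0.0,   s,   c]])

def virtual_marker_position_axis(a_ij, b_ij, bc_ij, angle=None):
    """Second position expression: loc_{i,j} = R_{i,j}(a*u_1 + b*u_2 + BC).

    angle is the axis-angle rotation of p_{i,j} about the x-axis; when the
    point needs no axis-angle optimization, R_{i,j} is the identity matrix.
    """
    u1 = np.array([0.0, 1.0, 0.0])   # unit vector on the y-axis
    u2 = np.array([1.0, 0.0, 0.0])   # unit vector on the x-axis
    r = rot_x(angle) if angle is not None else np.eye(3)
    return r @ (a_ij * u1 + b_ij * u2 + np.asarray(bc_ij, dtype=float))
```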
- S502 Construct a first cost function from the position information of the real reflective marking points on the other knuckles of each finger and the first position expressions of the virtual marking points corresponding to those real reflective marking points, to calculate the sum of the squares of the first Euclidean distances between each real reflective marking point on the other knuckles and its corresponding virtual marking point.
- the operation mode may be, for example:
- For each finger, a first cost function is constructed from the Euclidean distances between the real reflective marking points and the corresponding virtual marking points p_{i,j}, where X_i is the optimization parameter of the i-th finger.
- The optimization parameters include the current user's metacarpal expansion range for the i-th finger and the knuckle lengths of the other knuckles of the i-th finger; if axis-angle optimization is considered and the finger needs it, the optimization parameters also include the rotation angles of the finger's marking points that participate in the axis-angle optimization.
- When constructing the first cost function of each finger from these Euclidean distances, different cost weights w_{i,j} can also be set for the second, third, and fourth knuckles of each finger; in that case, an updated first cost function of each finger is obtained.
- Then step S405 is entered.
- When minimizing, the first cost functions of the user's five fingers can be optimized simultaneously, or the first cost function of each finger can be minimized separately.
- The preferred method is to minimize them separately: each sub-problem then has few optimization parameters and the sub-problems can be solved in parallel, which improves optimization speed to a certain extent.
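- The exact cost function expressions are not reproduced in this text, so the following is only a hedged per-finger sketch of the minimization in step S405: the residuals are the weighted distances between the real reflective marking points and the predicted virtual marking points, and a least squares solver drives their squared sum to a minimum.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_finger(x0, real_points, predict_virtual, weights):
    """Minimize one finger's first cost function (illustrative sketch).

    x0:              initial guess for X_i (spread angle, knuckle lengths,
                     and, if used, the axis-angle rotation angles).
    real_points:     (K, 3) captured real reflective marking points.
    predict_virtual: callable mapping X_i to the (K, 3) virtual marking
                     points loc_{i,j}(X_i) via the position expressions.
    weights:         (K,) per-knuckle cost weights w_{i,j}.
    """
    def residuals(x):
        diff = predict_virtual(x) - real_points   # per-marker error vectors
        return (np.sqrt(weights)[:, None] * diff).ravel()

    # least_squares minimizes sum(residuals**2), i.e. the weighted sum of
    # squared Euclidean distances described above.
    return least_squares(residuals, x0).x
```

- Since each finger's parameters are independent in this formulation, the five per-finger problems can be solved in parallel, consistent with the preferred separate minimization.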
- In the initial hand model, the ratio between the x-axis offset of a real reflective marking point's position on its knuckle and the knuckle length BL_{i,j} of that knuckle is known; this ratio is taken as the ratio between the offset amplitude b_{i,j} of the corresponding virtual marking point p_{i,j} on the x-axis and the knuckle length BL_{i,j} of the knuckle where the virtual marking point is located.
- For example, if this ratio is 1/2, the real reflective marking point is arranged at the middle of the phalanx on which it is located; the initial hand model records this proportion information.
- S406 Generate an intermediate hand model by using the obtained optimal solution of the current user's metacarpal expansion range and the current user's finger knuckle length.
- S407 Determine the knuckle rotation center of the corresponding knuckle according to the position of the end knuckle of each finger knuckle of the generated middle hand model.
- The operations of step S407 and step S408 are similar to those of step S303 and step S304 in the second embodiment and will not be repeated here.
- FIG. 7 is a schematic flowchart of the third embodiment of the hand model generation method applied to hand motion capture provided by the present invention.
- the hand model generation method includes:
- In step S700, a three-dimensional coordinate system is established, and the preset hand model is projected onto a plane formed by the first axis and the second axis in the three-dimensional coordinate system to obtain an initial hand model.
- S701 The plane where the palm of the current user is located is made to coincide with the plane formed by the first axis and the second axis in the three-dimensional coordinate system, and all of the user's fingers initially point along the first axis; that is, the plane of the current user's palm coincides with the x-z plane and all fingers point along the x-axis.
- S702 According to the received hand movement data of the current user, obtain the position information of each real reflective marking point on the motion capture glove worn by the current user in the three-dimensional coordinate system.
- S703 Determine the knuckle length of the first knuckle of each finger according to the position information of the real reflective marking points on the rigid body on the back of the motion capture glove and the real reflective marking points on the second knuckles of each finger.
- S704 According to the position information of the real reflective marking points on the other knuckles of each finger and the initial hand model, calculate the length of the knuckles corresponding to each other knuckle of each finger and the current user's metacarpal expansion range.
- S705 Minimize the first cost function by using the least squares method to obtain the optimal solution of the expansion range of the metacarpal bone of the current user, and obtain the optimal solution of the length of the other phalanx of each finger of the current user.
- S706 Generate an intermediate hand model by using the obtained optimal solution of the current user's metacarpal expansion range and the current user's finger knuckle length.
- The operations of step S700 to step S706 are the same as those of step S400 to step S406 in the third embodiment and will not be repeated here.
- this embodiment describes step S407 in the third embodiment in detail.
- After the intermediate hand model is generated, the knuckle rotation centers corresponding to all the knuckles of each finger can be calculated.
- Since the rotation center of the first knuckle is set at the center of the wrist root, in actual operation only the rotation centers of the second, third, and fourth knuckles of each finger need to be calculated, which increases calculation speed.
- Through the above optimization, the optimal solutions of the metacarpal expansion range and the finger knuckle lengths, as well as of a_{i,j} and b_{i,j}, can be obtained.
- To determine the rotation centers, the fingers need to move so that subsequent multi-frame data representing finger movement can be obtained; the received hand motion capture data therefore includes subsequent multi-frame finger movement data. For this subsequent multi-frame data, determining the knuckle rotation center of each finger knuckle of the intermediate hand model may include:
- S707 Based on the intermediate hand model, create the knuckle rotation center cor_{i,j} of the knuckle where each virtual marking point p_{i,j} on the other knuckles of each finger is located.
- S708 Determine the third position expression of each knuckle rotation center cor_{i,j} in the intermediate hand model.
- In Fig. 8, the open circles represent the positions of the created virtual marking points p_{i,j}, the solid circles represent the positions of the knuckle rotation centers cor_{i,j} of the knuckles where the virtual marking points p_{i,j} are located, and the open squares represent the end-knuckle positions bc_{i,j} of those knuckles. That is, the virtual marking points p_{i,j}, the knuckle rotation centers cor_{i,j}, and the end-knuckle positions bc_{i,j} correspond one-to-one.
- The position of the knuckle rotation center cor_{i,j} of the knuckle where the virtual marking point p_{i,j} is located is expressed in terms of the parameters that determine the rotation center, namely its offset amplitude m_{i,j} on the x-axis and its offset n_{i,j} on the y-axis.
- S709 Determine the position expression of each virtual marking point p_{i,j} in the subsequent multi-frame data according to the position expression of the knuckle rotation center cor_{i,j} in the subsequent multi-frame data and the position information of the virtual marking point p_{i,j}.
- From step S707, the position expression of the virtual marking points p_{i,j} in the subsequent multi-frame data can be obtained.
- The second cost function of each finger is then constructed, where Y_i is the optimization parameter of the f-th frame of data, including the knuckle rotation center of each finger knuckle in the f-th frame; specifically, these are the parameters that determine the rotation center, namely the offset amplitude m_{i,j} of the rotation center on the x-axis and the offset n_{i,j} of the rotation center on the y-axis.
- Over the subsequent multi-frame data, the sum of the squares of the second Euclidean distances between the virtual marking points on the other knuckles of all fingers and the corresponding real reflective marking points is accumulated, where F represents the number of data frames.
- Y is the optimization parameter for the subsequent multi-frame data, including the knuckle rotation centers of all finger knuckles in the subsequent multi-frame data; specifically, these are the parameters that determine the rotation centers of all finger knuckles, namely the offset amplitude m_{i,j} of each rotation center on the x-axis and its offset n_{i,j} on the y-axis.
- S711 Use the least square method to minimize the second cost function to obtain the optimal solution of the knuckle rotation center corresponding to each other knuckle of each finger of the current user.
- When the second cost function of each finger is minimized by using the least squares method, different weights q_{i,j} may also be set for the second, third, and fourth knuckles of each finger; in that case, an updated second cost function is obtained.
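- Analogously, a hedged sketch of the minimization in step S711: the rotation-center parameters (the offsets m_{i,j} and n_{i,j}) are fitted over all F frames by weighted least squares. The prediction function is assumed to implement the position expressions of steps S708 and S709; it is not spelled out in this text.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_rotation_centers(y0, frames, predict_markers, q_weights):
    """Minimize the second cost function over F frames (illustrative sketch).

    y0:              initial guess for Y (the offsets m_{i,j}, n_{i,j} of
                     every knuckle rotation center).
    frames:          list of (K, 3) real reflective marker positions,
                     one array per captured frame.
    predict_markers: callable (Y, frame_index) -> (K, 3) predicted virtual
                     marking points derived from the rotation centers (S709).
    q_weights:       (K,) per-knuckle weights q_{i,j}.
    """
    def residuals(y):
        parts = []
        for f, real in enumerate(frames):
            diff = predict_markers(y, f) - real
            parts.append((np.sqrt(q_weights)[:, None] * diff).ravel())
        return np.concatenate(parts)   # residuals stacked over all F frames

    return least_squares(residuals, y0).x
```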
- the embodiment of the present application also provides a hand model generating device, which includes:
- the initial hand model establishment unit is used to establish a three-dimensional coordinate system, and project a preset hand model onto a plane formed by a first axis and a second axis in the three-dimensional coordinate system to obtain an initial hand model;
- the coincidence unit is configured to coincide the plane where the palm of the current user is located with the plane formed by the first axis and the second axis in the three-dimensional coordinate system, and all the fingers of the user initially face the first axis;
- the intermediate hand model generating unit is configured to calculate the current user's metacarpal expansion range and finger knuckle length according to the received hand movement data of the current user and the initial hand model to generate the intermediate hand model;
- the expansion range of the metacarpal bone is the expansion range of the first knuckle of the finger relative to the third axis of the three-dimensional coordinate system;
- the hand motion data is obtained after the motion capture camera photographs the hand motion of the user wearing the motion capture glove;
- the final hand model generation unit is used to generate the final hand model according to the metacarpal bone expansion range, the length of the finger knuckles, and the knuckle rotation center of each finger knuckle.
- When generating a hand model, the hand model generating device may adopt the hand model generation method disclosed in any one of the above Embodiment 2 to Embodiment 4, and details are not described here again.
- When the hand model generating device of this embodiment generates a hand model for inverse kinematics (IK) calculation, it does not simply scale the initial hand model; instead, it calculates the metacarpal expansion range, the finger knuckle lengths, and the knuckle rotation centers from the hand motion data obtained by the user's motion capture glove and from the initial hand model, to generate a final hand model that better reflects the user's specific hand conditions (hand size, hand thickness, and finger lengths).
- FIG. 9 is a schematic diagram of a terminal device provided by an embodiment of the present application.
- the terminal device 9 of this embodiment includes: a processor 90, a memory 91, and a computer program 92 stored in the memory 91 and running on the processor 90, such as a hand model generation program .
- When the processor 90 executes the computer program 92, the steps in the foregoing embodiments of the hand model generation method are implemented.
- Alternatively, when the processor 90 executes the computer program 92, the functions of the modules/units in the foregoing device embodiments are realized.
- The computer program 92 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 91 and executed by the processor 90 to complete this application.
- the one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 92 in the terminal device 9.
- the computer program 92 may be divided into multiple units such as an initial hand model building unit and a coincidence unit, and the specific functions of each unit are as follows:
- the initial hand model establishment unit is used to establish a three-dimensional coordinate system and project the preset hand model onto the plane formed by the first axis and the second axis in the three-dimensional coordinate system to obtain the initial hand model;
- the coincidence unit is used to make the plane of the current user's palm coincide with the plane formed by the first axis and the second axis in the three-dimensional coordinate system, with all of the user's fingers initially oriented along the first axis;
- the intermediate hand model generation unit is used to calculate, according to the received hand motion data of the current user and the initial hand model, the current user's metacarpal expansion range and finger knuckle lengths to generate an intermediate hand model;
- the metacarpal expansion range is the expansion range of the first knuckle of each finger relative to the third axis of the three-dimensional coordinate system;
- the hand motion data is obtained after the motion capture camera photographs the hand motion of the user wearing motion capture gloves;
- the determining unit is used to determine the knuckle rotation center of each finger knuckle of the intermediate hand model.
- the terminal device 9 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
- the terminal device may include, but is not limited to, a processor 90 and a memory 91.
- FIG. 9 is only an example of the terminal device 9 and does not constitute a limitation on the terminal device 9; it may include more or fewer components than shown in the figure, or combine certain components, or include different components.
- the terminal device may also include input and output devices, network access devices, buses, etc.
- the so-called processor 90 may be a central processing unit (Central Processing Unit, CPU), other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components, etc.
- the general-purpose processor may be a microprocessor or the processor may also be any conventional processor or the like.
- the memory 91 may be an internal storage unit of the terminal device 9, such as a hard disk or memory of the terminal device 9.
- The memory 91 may also be an external storage device of the terminal device 9, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card equipped on the terminal device 9. Further, the memory 91 may also include both an internal storage unit of the terminal device 9 and an external storage device.
- the memory 91 is used to store the computer program and other programs and data required by the terminal device.
- the memory 91 can also be used to temporarily store data that has been output or will be output.
- FIG. 10 is a schematic flowchart of a hand motion capture method provided by an embodiment of the present application, and the method includes:
- Step 1001: Use any of the hand model generation methods described above to generate a final hand model.
- Step 1002: Perform IK calculation using the generated final hand model to obtain hand movement data.
- the disclosed apparatus/terminal device and method may be implemented in other ways.
- the device/terminal device embodiments described above are only illustrative.
- The division of the modules or units is only a logical function division; in actual implementation there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
- the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- each unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
- the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
- If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
- Based on this understanding, all or part of the processes in the methods of the above embodiments of this application can also be completed by instructing relevant hardware through a computer program.
- the computer program can be stored in a computer-readable storage medium. When the program is executed by the processor, the steps of the foregoing method embodiments can be implemented.
- the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file, or some intermediate forms.
- The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on.
- It should be noted that the content contained in the computer-readable medium can be appropriately added or deleted in accordance with the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunications signals.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Social Psychology (AREA)
- Psychiatry (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- User Interface Of Digital Computer (AREA)
- Image Analysis (AREA)
Abstract
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201980005240.1A CN111433783B (zh) | 2019-07-04 | 2019-07-04 | Hand model generation method, apparatus, terminal device and hand motion capture method |
PCT/CN2019/094725 WO2021000327A1 (fr) | 2019-07-04 | 2019-07-04 | Hand model generation method, apparatus, terminal device and hand motion capture method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/094725 WO2021000327A1 (fr) | 2019-07-04 | 2019-07-04 | Hand model generation method, apparatus, terminal device and hand motion capture method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021000327A1 true WO2021000327A1 (fr) | 2021-01-07 |
Family
ID=71547541
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/094725 WO2021000327A1 (fr) | 2019-07-04 | 2019-07-04 | Procédé de génération de modèle de main, appareil, dispositif de terminal et procédé de capture de mouvement de main |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111433783B (fr) |
WO (1) | WO2021000327A1 (fr) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113569775A (zh) * | 2021-08-02 | 2021-10-29 | 杭州相芯科技有限公司 | 一种基于单目rgb输入的移动端实时3d人体动作捕捉方法及系统、电子设备、存储介质 |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112115799B (zh) * | 2020-08-24 | 2023-12-26 | 青岛小鸟看看科技有限公司 | 基于标记点的三维手势的识别方法、装置及设备 |
CN112416133B (zh) * | 2020-11-30 | 2021-10-15 | 魔珐(上海)信息科技有限公司 | 手部动作捕捉方法、装置、电子设备及存储介质 |
CN112515661B (zh) * | 2020-11-30 | 2021-09-14 | 魔珐(上海)信息科技有限公司 | 姿态捕捉方法、装置、电子设备及存储介质 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160335790A1 (en) * | 2015-05-13 | 2016-11-17 | Intel Corporation | Iterative closest point technique based on a solution of inverse kinematics problem |
CN106846403A (zh) * | 2017-01-04 | 2017-06-13 | 北京未动科技有限公司 | 一种三维空间中手部定位的方法、装置及智能设备 |
US9721383B1 (en) * | 2013-08-29 | 2017-08-01 | Leap Motion, Inc. | Predictive information for free space gesture control and communication |
CN108369478A (zh) * | 2015-12-29 | 2018-08-03 | 微软技术许可有限责任公司 | 用于交互反馈的手部跟踪 |
US20180350105A1 (en) * | 2017-05-31 | 2018-12-06 | Google Llc | Hand tracking based on articulated distance field |
CN109409236A (zh) * | 2018-09-28 | 2019-03-01 | 江苏理工学院 | 三维静态手势识别方法和装置 |
CN109816773A (zh) * | 2018-12-29 | 2019-05-28 | 深圳市瑞立视多媒体科技有限公司 | 一种虚拟人物的骨骼模型的驱动方法、插件及终端设备 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105653044A (zh) * | 2016-03-14 | 2016-06-08 | 北京诺亦腾科技有限公司 | 一种用于虚拟现实系统的动作捕捉手套及虚拟现实系统 |
CN106346485B (zh) * | 2016-09-21 | 2018-12-18 | 大连理工大学 | 基于人手运动姿态学习的仿生机械手的非接触式控制方法 |
CN108693958B (zh) * | 2017-04-12 | 2020-05-22 | 南方科技大学 | 一种手势识别方法、装置及系统 |
CN109191593A (zh) * | 2018-08-27 | 2019-01-11 | 百度在线网络技术(北京)有限公司 | 虚拟三维模型的运动控制方法、装置及设备 |
-
2019
- 2019-07-04 WO PCT/CN2019/094725 patent/WO2021000327A1/fr active Application Filing
- 2019-07-04 CN CN201980005240.1A patent/CN111433783B/zh active Active
Also Published As
Publication number | Publication date |
---|---|
CN111433783B (zh) | 2023-06-06 |
CN111433783A (zh) | 2020-07-17 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19936019; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19936019; Country of ref document: EP; Kind code of ref document: A1
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01.03.2022)
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19936019; Country of ref document: EP; Kind code of ref document: A1