CN114283447B - Motion capturing system and method - Google Patents

Info

Publication number: CN114283447B
Application number: CN202111520595.XA
Other versions: CN114283447A (Chinese)
Inventors: 蒋再毅, 杜华, 姚毅, 杨艺
Assignee: Beijing Yuanke Fangzhou Technology Co., Ltd.
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classification: Image Analysis

Abstract

The application discloses a motion capture system and method comprising: at least two image collectors for synchronously collecting images of the same target object from different viewing angles; a 2D feature detection calculator for extracting the 2D feature points in each image; a system self-calibration calculator for calculating a first pose relationship from the 2D feature point set; a 3D feature positioning calculator for generating a 3D feature point set from the first pose relationship and all the 2D feature points; and a human motion solver for solving the human motion information of each person in the target object from the 3D feature point set. Because the posture features of the human body are analyzed directly from the raw images of captured natural video, performers do not need to wear any equipment. Compared with traditional motion capture technology, this not only reduces the cost of the hardware, but also, since no special suit is worn, makes the capture process more efficient and unconstrained.

Description

Motion capturing system and method
Technical Field
The application belongs to the technical field of vision processing, and particularly relates to a motion capture system and a motion capture method.
Background
In recent years, related technologies such as film and television special-effects production, animation and games, and virtual reality have developed rapidly, and motion capture is a key technology among them. Motion capture systems fall broadly into two categories: systems based on optical images and systems based on inertial sensors.
Currently, optical motion capture systems acquire human motion information by tracking marker points attached to the body, so the user has to wear professional clothing with markers attached during capture, which constrains the user to some extent.
How to provide a more convenient motion capture method is therefore a technical problem to be solved.
Disclosure of Invention
In order to solve the above technical problems, the present application provides a motion capture system and method.
In a first aspect, the present application provides a motion capture system, including at least two image collectors for synchronously collecting images of a same target object corresponding to different viewing angles;
a 2D feature detection calculator for extracting 2D feature points in each of the images;
a system self-calibration calculator for calculating a first pose relationship from a 2D feature point set, where the 2D feature point set is the set of 2D feature points in all the images within a preset time period, and the first pose relationship is the pose relationship between the at least two image collectors;
the 3D feature positioning calculator is used for generating a 3D feature point set according to the first pose relation and all the 2D feature points, wherein the 3D feature point set is a 3D coordinate set of feature points on the target object;
and the human body motion solver is used for solving the human body motion information of each person in the target object according to the 3D characteristic point set.
In one implementation, the 2D feature points are the 2D coordinates of human body joint points in the target object.
In one implementation, the 2D feature points are the 2D coordinates of marker points placed on the target object.
In one implementation, the system further comprises a synchronizer for synchronously triggering the at least two image collectors.
In one implementation, the system further includes a calibration module comprising an acquisition module, a calculation module, and an update module;
the acquisition module is used for acquiring the 2D feature point set according to a preset rule;
the calculation module is used for computing a second pose relationship by taking the 2D feature point set as the input of an SFM algorithm, where the second pose relationship is the current pose relationship between the at least two image collectors;
the update module is used for updating the first pose relationship to the second pose relationship.
In one implementation, the 3D feature positioning calculator is configured to generate the 3D feature point set by triangulation according to the first pose relationship and all the 2D feature points.
In one implementation, the human motion solver is configured to solve the human motion information of each person in the target object by using an inverse kinematics (IK) algorithm according to the 3D feature point set.
In a second aspect, the present application further provides a motion capture method, the method including:
synchronously acquiring images corresponding to the same target object at different visual angles through at least two image collectors;
extracting 2D characteristic points in each image;
calculating a first pose relation according to a 2D feature point set, wherein the 2D feature point set is a set of 2D feature points in all the images in a preset time period, and the first pose relation is a pose relation between the at least two image collectors;
generating a 3D feature point set according to the first pose relation and all the 2D feature points, wherein the 3D feature point set is a 3D coordinate set of feature points on the target object;
and according to the 3D characteristic point set, solving the human body motion information of each person in the target object.
In one implementation, the method further includes:
acquiring the 2D feature point set according to a preset rule;
taking the 2D feature point set as input of an SFM algorithm, and calculating a second pose relationship, wherein the second pose relationship is the current pose relationship between the at least two image collectors;
and updating the first pose relationship to a second pose relationship.
In one implementation, the 3D feature point set is generated by triangulation according to the first pose relationship and all the 2D feature points.
In summary, the motion capture system and method provided by the application can perform real-time camera calibration and automatic recalibration during capture based on features such as human joint points, avoiding the loss of accuracy, or even capture failure, that results when environmental vibration or accidental contact changes the parameters of the camera system formed by the at least two image collectors. Second, because the posture features of the human body are analyzed directly from the raw images of captured natural video, performers do not need to wear any equipment; compared with traditional motion capture technology, this not only reduces the cost of hardware but also, since no special suit is worn, makes the capture process more efficient and unconstrained. In addition, because human posture is analyzed from multi-view natural video whose fields of view cover the whole site and the whole body, posture misidentification, skeleton dislocation, and similar phenomena caused by occlusion of the human body are greatly reduced, which improves the stability and accuracy of multi-person interaction and reduces occlusion-induced mismatches.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a motion capture scene graph provided in an embodiment of the present application;
FIG. 2 is a schematic workflow diagram of a motion capture method according to an embodiment of the present disclosure;
fig. 3 is an illustration of a human body joint according to an embodiment of the present disclosure;
fig. 4 is a block diagram of a motion capture system according to an embodiment of the present application.
Detailed Description
The embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on these embodiments without inventive effort fall within the scope of the present application.
For the convenience of understanding the technical scheme of the application, an application scene is first introduced.
As shown in fig. 1, a multi-view image acquisition system built from at least two image collectors 100 captures the motion of a target object in the scene, where the target object may be one person or several persons. Images of the same target object are thereby acquired from different viewing angles by the at least two image collectors, and the computing platform 200 applies the motion capture method provided by the application to the acquired images to calculate the motion information of each person in the scene. The method is therefore applicable to technical fields such as film and television production and virtual reality.
The following describes a motion capture method provided in the embodiments of the present application.
As shown in fig. 2, the motion capturing method provided in the embodiment of the application includes the following steps:
step 100, synchronously acquiring images corresponding to the same target object at different viewing angles through at least two image collectors.
Because at least two image collectors synchronously acquire images of the same target object from different angles, posture misidentification, skeleton dislocation, and similar phenomena caused by occlusion of the human body are greatly reduced, the stability and accuracy of multi-person interaction are improved, and occlusion-induced mismatches are reduced.
The image collector is not limited; an industrial camera and lens can be chosen to match the site size of the actual application scene. Each image collector can transmit data to the processor over a USB-to-optical-fiber cable, which removes the length limit of a plain USB cable. Image acquisition and data transmission are independent for each image collector; to ensure that the images acquired by all collectors are consistent in the time domain, each image collector can communicate with the synchronizer through a trigger line, and the synchronizer sends a synchronizing trigger signal to every collector so that all of them acquire images synchronously.
And 200, extracting 2D characteristic points in each image.
In the present application, the target object may be provided with a mark point, or may not be provided with a mark point, which is not limited in the present application.
If there are no marker points on the target object, the 2D feature points may be the 2D coordinates of human body joint points in the target object, for example the 2D coordinates of joints such as the head, shoulders, hands, waist, knees, and feet extracted from the acquired image.
If marker points are provided on the target object, the 2D feature points may be the 2D coordinates of those marker points. The marker points can be coded or non-coded: coded marker points can take the form of two-dimensional codes or digital numbers, while non-coded marker points can be fluorescent dots painted on the target object.
It should be noted that if the target object is provided with marker points, the 2D feature points may include both the 2D coordinates of the marker points and the 2D coordinates of the human body joint points; the application does not limit this.
Correspondingly, a 2D feature detection calculator may be pre-trained to convergence and used to extract the 2D feature points in each image.
The method for extracting the 2D feature points is not limited; in one implementation, the 2D feature points may be extracted with the open-source library OpenPose.
When OpenPose is used to extract the 2D feature points, the images acquired by the image collectors are taken as input, yielding the 2D coordinates of the human body joint points captured in each image (shown in fig. 3), and images annotated with the 2D feature points can be displayed.
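As an illustration of the data this step produces, the sketch below (hypothetical names, not OpenPose's actual API) shows one way per-view 2D feature points might be organized. Because each keypoint carries a joint ID, points from different views can later be paired by ID without a separate matching step.

```python
# A minimal sketch (hypothetical names) of per-view 2D feature points.
# Each keypoint carries a joint ID, so points from different views can be
# paired by ID rather than by a separate feature-matching step.

from dataclasses import dataclass

# Subset of a 25-joint body model; the IDs here are illustrative only.
JOINT_NAMES = {0: "head", 2: "left_shoulder", 5: "right_shoulder"}

@dataclass
class Keypoint2D:
    joint_id: int   # identity of the joint (acts as a pre-matched feature ID)
    x: float        # pixel column
    y: float        # pixel row
    score: float    # detector confidence

def pair_by_joint_id(view_a, view_b):
    """Pair keypoints from two views that share the same joint ID."""
    by_id = {kp.joint_id: kp for kp in view_b}
    return [(kp, by_id[kp.joint_id]) for kp in view_a if kp.joint_id in by_id]

view_a = [Keypoint2D(0, 320.0, 110.0, 0.9), Keypoint2D(2, 280.0, 200.0, 0.8)]
view_b = [Keypoint2D(0, 300.0, 115.0, 0.9), Keypoint2D(5, 350.0, 205.0, 0.7)]

pairs = pair_by_joint_id(view_a, view_b)  # only joint 0 appears in both views
```

This ID-based pairing is what later lets the extracted 2D feature points serve directly as SFM input, as the calibration section explains.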
Step 300, calculating a first pose relationship according to a 2D feature point set, wherein the 2D feature point set refers to a set of 2D feature points in all images in a preset time period, and the first pose relationship is a pose relationship between the at least two image collectors.
It should be noted that step 300 effectively calibrates the cameras of the at least two image collectors using the 2D feature points extracted during motion capture, thereby determining the pose relationship between the image collectors.
The pose of an image collector comprises its position information and attitude information, where the attitude information represents the parameters of the collector's rotation axes, such as the positions, directions, and angles of its three rotation axes.
Once an image collector is fixed, its internal parameters, such as focal length and optical center, can be obtained, and the pose relationship between the image collectors can then be calculated from the 2D feature point set and these internal parameters.
Therefore, the camera calibration method provided by the application does not need the calibration plate used in traditional camera calibration; moreover, calibration can be carried out with dynamic feature points during motion capture, which reflects the pose relationship between the image collectors more accurately.
The method for calculating the first pose relationship by using the 2D feature point set is not limited, and in one implementation manner, the first pose relationship can be calculated by combining with an SFM algorithm.
A method for calculating the first pose relationship using the 2D feature point set in combination with the SFM algorithm is described below.
The conventional SFM pipeline first selects calibration plate images, then extracts feature points from them, and then matches the extracted feature points to obtain mutually matched feature point pairs; finally, the feature point pairs are taken as input to the SFM algorithm, which calculates the internal parameters, external parameters, and so on of the image collectors.
The difference from the conventional SFM pipeline is that, whereas feature extraction and matching there are performed on calibration plate images, the present application directly uses the 2D feature points on the target object captured during motion as the input to the SFM algorithm.
The 2D feature points extracted in the present application are human body joint points; for example, each human body may be set to include 25 joint points, so each extracted 2D feature point carries its own ID: 2D feature point A is the left shoulder, B is the right shoulder, C is the left eye, D is the right eye, and so on, not listed one by one here. The extracted 2D feature points are therefore effectively feature points for which matching is already complete, and they can be fed directly into the SFM algorithm, reducing calibration time and computational complexity.
Taking the 2D feature points as input, the SFM algorithm may include the following steps: estimating the fundamental matrix; estimating the essential matrix; decomposing the essential matrix into R and T (where the relative rotation between the world coordinate system and the camera coordinate system is the matrix R and the relative displacement is the vector T); calculating the three-dimensional point cloud; reprojection; calculating the transformation matrix from a third camera to the world coordinate system; calculating the transformation matrices of further cameras; and optimizing the positions of the three-dimensional point cloud and the image collectors by bundle adjustment. Any suitable existing method may be used for these steps, which are not described further here.
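The essential-matrix step above rests on the relation E = [t]x R between the essential matrix and the relative pose (R, T) of two cameras. As a minimal sketch under illustrative values (not the patent's implementation), the code below builds E from a known relative pose and checks the epipolar constraint x2^T E x1 = 0 for a point seen in both views.

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Ground-truth relative pose between two cameras: rotation R, translation t.
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([1.0, 0.2, 0.0])

# Essential matrix relating normalized image coordinates of the two views.
E = skew(t) @ R

# A 3D point observed by both cameras (camera 1 at the origin, camera 2 at R, t).
X = np.array([0.3, -0.1, 4.0])
x1 = X / X[2]                  # normalized image coordinates in camera 1
X2 = R @ X + t
x2 = X2 / X2[2]                # normalized image coordinates in camera 2

residual = float(x2 @ E @ x1)  # epipolar constraint: should be ~0
```

In practice E is estimated the other way around, from many such point pairs, and then decomposed into R and T; the identity verified here is what makes that decomposition possible.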
Although the camera calibration method provided by the application can accurately determine the pose relationship between the image collectors for a certain time, the earlier calibration result may become inaccurate as time passes or as the system hardware shifts or shakes. The application therefore also provides a method for recalibrating the pose relationship between the image collectors.
The method for calibrating the pose relation between the image collectors, provided by the application, can comprise the following steps:
step 310, acquiring a 2D feature point set according to a preset rule.
The preset rule is not limited; for example, the 2D feature points corresponding to all image collectors over a period of time can be acquired automatically at regular intervals, or they can be acquired automatically in real time. Specifically, a dedicated background thread can be set up to acquire the 2D feature point set according to the preset rule.
Thus, step 310 can acquire the latest 2D feature point set in a timely manner; that is, the 2D feature point set acquired in step 310 reflects the current actual situation of each image collector.
And 320, taking the 2D characteristic point set as input of an SFM algorithm, and calculating a second pose relationship, wherein the second pose relationship is the current pose relationship between the at least two image collectors.
To distinguish it from the first pose relationship, the pose relationship between the image collectors most recently calculated from the latest acquired 2D feature point set is called the second pose relationship.
Thus, the pose relationship between the image collectors can be automatically recalculated at intervals or in real time.
Step 330, updating the first pose relationship to a second pose relationship.
The pose relationship updating method is not limited. For example, in step 330 the newly calculated pose relationship can directly replace the previous one; alternatively, the newly calculated pose relationship can be compared with the previous one, and if the difference between the two is within the allowed range the previous pose relationship is kept, while if it is not, the pose relationship is updated.
In summary, the application can perform online camera calibration and automatic recalibration of the pose relationship between the image collectors during motion capture, ensuring that the pose relationship remains accurate throughout the capture process and thus ensuring the accuracy of the final motion capture result.
If the previously determined pose relationship of the image collectors has been updated, the calculations in step 400 and step 500 are performed based on the updated pose relationship.
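The tolerance-based variant of the update in step 330 can be sketched as follows; the tolerance values, function names, and pose representation are illustrative assumptions, not taken from the application.

```python
import numpy as np

def rotation_angle_deg(R1, R2):
    """Angle (degrees) of the relative rotation between two rotation matrices."""
    Rrel = R1.T @ R2
    cos_a = np.clip((np.trace(Rrel) - 1.0) / 2.0, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_a)))

def maybe_update_pose(first, second, rot_tol_deg=0.5, trans_tol=0.01):
    """Keep the first pose if the second is within tolerance; otherwise adopt it.

    `first` and `second` are (R, t) tuples; the tolerances are illustrative.
    """
    R1, t1 = first
    R2, t2 = second
    if (rotation_angle_deg(R1, R2) <= rot_tol_deg
            and np.linalg.norm(np.asarray(t1) - np.asarray(t2)) <= trans_tol):
        return first      # difference within the allowed range: keep old calibration
    return second         # drift detected: update to the newly computed pose

I3 = np.eye(3)
small = (I3, np.array([0.0, 0.0, 0.001]))           # negligible drift
c, s = np.cos(np.deg2rad(2.0)), np.sin(np.deg2rad(2.0))
Rz2 = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
large = (Rz2, np.array([0.0, 0.0, 0.05]))           # 2-degree rotation drift

kept = maybe_update_pose((I3, np.zeros(3)), small)      # keeps the first pose
updated = maybe_update_pose((I3, np.zeros(3)), large)   # adopts the second pose
```

A hysteresis check of this kind avoids churning the calibration on every recomputation while still catching genuine drift from vibration or contact.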
And 400, generating a 3D feature point set according to the first pose relation and all the 2D feature points, wherein the 3D feature point set refers to a 3D coordinate set of the feature points on the target object.
Based on the first pose relationship between the image collectors, the application performs stereo matching of the 2D feature points across the images from different viewing angles, determines the 3D coordinates of the feature points on the target object, and thereby generates the 3D feature point set.
If the 2D feature points extracted in step 200 are human body joint points in the target object, the corresponding 3D coordinate set of the human body joint points is generated in step 400; if the 2D feature points extracted in step 200 are marker points on the target object, the corresponding 3D coordinate set of the marker points is generated in step 400.
Taking the feature points as human body joint points as an example: since the pose relationship between the image collectors is known, the correspondence between the 2D feature points in the images from different viewing angles can be matched, so the 3D coordinates of each person's joint points can be generated. The 3D coordinates of the joint points of all persons in the target object then form the 3D feature point set.
The method for calculating the 3D coordinates of the feature points on the target object is not limited; in one implementation, they can be calculated by triangulation.
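As a hedged sketch of one common triangulation technique (linear DLT triangulation; the application does not specify which variant it uses), the code below recovers a 3D point from its projections in two views with known projection matrices. The camera setup and point are illustrative.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2: 3x4 projection matrices; x1, x2: (u, v) image observations.
    Returns the 3D point minimizing the algebraic error via SVD.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    Xh = Vt[-1]
    return Xh[:3] / Xh[3]          # dehomogenize

# Illustrative setup: identity intrinsics, camera 2 translated along x.
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, -0.2, 5.0])

def project(P, X):
    """Project a 3D point with projection matrix P and dehomogenize."""
    xh = P @ np.append(X, 1.0)
    return xh[:2] / xh[2]

X_est = triangulate_dlt(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free observations the DLT solution is exact; with real detections it is typically refined by bundle adjustment, consistent with the SFM steps described earlier.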
And 500, calculating the human body motion information of each person in the target object according to the 3D characteristic point set.
Human motion information refers to the motion relationships between joints, for example between the forearm and the lower leg, and specifically includes positional relationships, rotational relationships, and so on.
Since the human joints obey certain geometric relationships and constraints, the human motion information can be solved from the 3D features of each joint obtained by three-dimensional motion capture of the target object, using an inverse kinematics (IK) algorithm or software tools such as Unity or Maya.
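As a minimal, hedged illustration of inverse kinematics (a planar two-link chain, far simpler than a full-body solver; all segment lengths and targets are illustrative), the code below recovers shoulder and elbow angles that place the end of an upper-arm/forearm chain at a target point, then verifies them by forward kinematics.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Analytic IK for a planar two-link chain (e.g. upper arm + forearm).

    Returns (shoulder, elbow) angles in radians for the elbow-down solution
    reaching the target (x, y); raises if the target is out of reach.
    """
    d2 = x * x + y * y
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

def forward(shoulder, elbow, l1, l2):
    """Forward kinematics: end-effector position for the given joint angles."""
    x = l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow)
    y = l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow)
    return x, y

# Recover joint angles that place the wrist at (0.3, 0.4) given segment lengths.
s, e = two_link_ik(0.3, 0.4, l1=0.30, l2=0.25)
wx, wy = forward(s, e, 0.30, 0.25)   # should land back on the target
```

A full-body solver applies the same idea joint by joint under the skeleton's geometric constraints, usually with numerical optimization rather than a closed form.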
Furthermore, the method can combine the data of all characteristic joint points of the same human body, associate them according to the structural characteristics of the human body, and connect the joint points to form the skeleton action of the human body in its current state, establishing a human body model structure that records the skeleton data.
The method can also convert the human motion information calculated in step 500 into 3D features in the coordinate system of the application end in real time and broadcast them in real time, so that the application end receives the captured motion information in real time.
The application may further comprise a display module that shows the solved human motion information and the acquired two-dimensional image information in the system interface in real time for live observation; it may specifically comprise a 2D multi-view image display module and a 3D stereoscopic space display module.
In summary, the motion capture method provided by the application can, first, perform real-time camera calibration and automatic recalibration during capture based on features such as human joint points, avoiding the loss of accuracy, or even capture failure, that results when environmental vibration or accidental contact changes the parameters of the camera system formed by the at least two image collectors. Second, because the posture features of the human body are analyzed directly from the raw images of captured natural video, performers do not need to wear any equipment; compared with traditional motion capture technology, this not only reduces the cost of hardware but also, since no special suit is worn, makes the capture process more efficient and unconstrained. In addition, because human posture is analyzed from multi-view natural video whose fields of view cover the whole site and the whole body, posture misidentification, skeleton dislocation, and similar phenomena caused by occlusion of the human body are greatly reduced, which improves the stability and accuracy of multi-person interaction and reduces occlusion-induced mismatches.
The motion capture system further provided herein, as shown in fig. 4, includes at least two image collectors 100 and a computing platform 200, wherein the computing platform 200 includes a 2D feature detection calculator 210, a system self-calibration calculator 220, a 3D feature location calculator 230, and a human motion solver 240.
At least two image collectors 100 for synchronously collecting images of the same target object corresponding to different viewing angles;
a 2D feature detection calculator 210 for extracting 2D feature points in each of the images;
the system self-calibration calculator 220 is configured to calculate a first pose relationship according to a 2D feature point set, where the 2D feature point set is a set of 2D feature points in all the images in a preset time period, and the first pose relationship is a pose relationship between the at least two image collectors;
a 3D feature positioning calculator 230, configured to generate a 3D feature point set according to the first pose relationship and all the 2D feature points, where the 3D feature point set is a 3D coordinate set of feature points on the target object;
and a human motion solver 240 for solving human motion information of each person in the target object according to the 3D feature point set.
Further, the 2D feature points are the 2D coordinates of human body joint points in the target object.
Further, the 2D feature points are the 2D coordinates of marker points placed on the target object.
Further, the system further comprises a synchronizer 300, wherein the synchronizer 300 is used for synchronously triggering the at least two image collectors 100.
Further, the system also comprises a calibration module, which comprises an acquisition module, a calculation module, and an update module;
the acquisition module is used for acquiring the 2D feature point set according to a preset rule;
the calculation module is used for computing a second pose relationship by taking the 2D feature point set as the input of an SFM algorithm, where the second pose relationship is the current pose relationship between the at least two image collectors;
the update module is used for updating the first pose relationship to the second pose relationship.
Further, the 3D feature positioning calculator 230 is configured to generate the 3D feature point set by triangulation according to the first pose relationship and all the 2D feature points.
Further, the human motion solver 240 is configured to solve the human motion information of each person in the target object by using an inverse kinematics (IK) algorithm according to the 3D feature point set.
Identical or similar parts of the various embodiments in this specification may be referred to between them. In particular, the system embodiments, being substantially similar to the method embodiments, are described relatively briefly; for relevant details, refer to the description in the method embodiments.
In a specific implementation, an embodiment of the present application further provides a computer readable storage medium, where the computer readable storage medium may store a program, where the program may include some or all of the steps in each embodiment of the motion capture method provided in the present application when executed. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random-access memory (random access memory, RAM), or the like.
It will be apparent to those skilled in the art that the techniques in the embodiments of the present application may be implemented in software plus the necessary general hardware platform. Based on such understanding, the technical solutions in the embodiments of the present application may be embodied in essence or what contributes to the prior art in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments of the present application.
Furthermore, the present application may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed.
Some of the flows described in the specification, claims, and foregoing figures of the present invention include a plurality of operations that appear in a particular order, but it should be understood that these operations may be executed out of the order in which they appear herein, or in parallel. Sequence numbers such as 100 and 200 are merely used to distinguish different operations and do not by themselves represent any order of execution. In addition, the flows may include more or fewer operations, which may be executed sequentially or in parallel.
It should also be noted that, in this document, descriptions such as "first" and "second" are used only to distinguish between different messages, devices, modules, and the like; they neither denote a sequential order nor require that "first" and "second" be of different types. Moreover, the terms "comprise" and "include", and any variation thereof, are intended to cover a non-exclusive inclusion.
The foregoing detailed description has been provided for the purposes of illustration in connection with specific embodiments and exemplary examples, but such description is not to be construed as limiting the application. Those skilled in the art will appreciate that various equivalent substitutions, modifications and improvements may be made to the technical solution of the present application and its embodiments without departing from the spirit and scope of the present application, and these all fall within the scope of the present application. The scope of the application is defined by the appended claims.
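The self-calibration update described in this application keeps the first pose relationship between the image collectors unless a newly computed one (e.g. from a structure-from-motion pass over a later time period) differs by more than a pose change threshold. A minimal sketch of that comparison logic follows; the threshold values and function names are assumptions, and the poses themselves would come from an SFM pipeline not shown here:

```python
import numpy as np

def pose_change(R_old, t_old, R_new, t_new):
    """Rotation angle (radians) and translation distance between two
    relative poses, each given as a 3x3 rotation and a 3-vector translation."""
    dR = R_new @ R_old.T
    angle = np.arccos(np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0))
    dist = float(np.linalg.norm(np.asarray(t_new) - np.asarray(t_old)))
    return angle, dist

def maybe_update(pose_old, pose_new, rot_thresh=0.01, trans_thresh=0.02):
    """Keep the first pose relationship unless the change exceeds a threshold;
    thresholds are illustrative, not values from the patent."""
    angle, dist = pose_change(*pose_old, *pose_new)
    return pose_new if (angle > rot_thresh or dist > trans_thresh) else pose_old
```

Comparing rotations through the relative rotation `R_new @ R_old.T` gives a single geodesic angle, which is a common way to express "pose change" as one scalar per component.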

Claims (7)

1. A motion capture system, comprising:
at least two image collectors for synchronously collecting images of the same target object corresponding to different visual angles;
a 2D feature detection calculator configured to extract 2D feature points in each of the images, where the 2D feature points are 2D coordinates of human body joint points in the target object;
the system self-calibration calculator is used for calculating a first pose relation according to a 2D characteristic point set, wherein the 2D characteristic point set is a set of 2D characteristic points in all the images in a first preset time period, and the first pose relation is a pose relation between the at least two image collectors;
the 3D feature positioning calculator is used for generating a 3D feature point set according to the first pose relation and all the 2D feature points, wherein the 3D feature point set is a 3D coordinate set of feature points on the target object;
a human body motion solver for solving human body motion information of each person in the target object according to the 3D feature point set;
the system also comprises a calibration module, wherein the calibration module comprises an acquisition module, a calculation module and an updating module;
the acquisition module is used for acquiring the 2D characteristic point set according to a preset rule;
the computing module is configured to compute the first pose relationship by using the 2D feature point set corresponding to the first preset time period as an input of a structure-from-motion (SFM) algorithm, and to compute a second pose relationship by using the 2D feature point set corresponding to a second preset time period as an input of the SFM algorithm; wherein the second preset time period is located after the first preset time period;
the updating module is used for updating the first pose relation to the second pose relation when the pose change between the first pose relation and the second pose relation is larger than a pose change threshold value, so that the 3D feature positioning calculator generates a 3D feature point set according to the second pose relation and all the 2D feature points; and when the pose change between the first pose relation and the second pose relation is smaller than or equal to a pose change threshold, the first pose relation is maintained.
2. The motion capture system of claim 1, wherein the 2D feature points further comprise 2D coordinates of marker points marked on the target object.
3. The motion capture system of claim 1, further comprising a synchronizer for synchronously triggering the at least two image collectors.
4. The motion capture system of claim 1, wherein the 3D feature location calculator is configured to generate a set of 3D feature points using triangulation based on the first pose relationship and all of the 2D feature points.
5. The motion capture system of claim 1, wherein the human motion solver is configured to solve human motion information for each person in the target object using an inverse dynamics algorithm based on the set of 3D feature points.
6. A method of motion capture, the method comprising:
synchronously acquiring images corresponding to the same target object at different visual angles through at least two image collectors;
extracting 2D feature points in each of the images, wherein the 2D feature points are 2D coordinates of human body joint points in the target object;
taking a 2D feature point set corresponding to a first preset time period as an input of an SFM algorithm, and calculating a first pose relation, wherein the 2D feature point set is a set of the 2D feature points in all the images within the first preset time period, and the first pose relation is a pose relation between the at least two image collectors;
generating a 3D feature point set according to the first pose relation and all the 2D feature points, wherein the 3D feature point set is a 3D coordinate set of feature points on the target object;
according to the 3D feature point set, solving human motion information of each person in the target object;
taking a 2D feature point set corresponding to a second preset time period as an input of the SFM algorithm, and calculating a second pose relation, wherein the second preset time period is located after the first preset time period;
when the pose change between the first pose relation and the second pose relation is larger than a pose change threshold, updating the first pose relation to the second pose relation to generate a 3D feature point set according to the second pose relation and all the 2D feature points; and when the pose change between the first pose relation and the second pose relation is smaller than or equal to a pose change threshold, the first pose relation is maintained.
7. The motion capture method of claim 6, wherein the 3D feature point set is generated by triangulation according to the first pose relation and all the 2D feature points.
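Claims 4 and 7 generate the 3D feature point set by triangulation from the pose relationship and the 2D feature points. The claims do not specify the exact method; as an illustration of the general technique only, a standard linear (DLT) triangulation of one point from two calibrated views can be sketched as:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2D pixel coordinates.
    Each image measurement contributes two rows of the homogeneous system
    (x * P[2] - P[0]) . X = 0, (y * P[2] - P[1]) . X = 0."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The least-squares solution is the right singular vector for the
    # smallest singular value of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

With the pose relation between the image collectors known (from the self-calibration step) and their intrinsics, the projection matrices P1 and P2 are fixed, and each matched pair of 2D joint detections yields one 3D feature point this way.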
CN202111520595.XA 2021-12-13 2021-12-13 Motion capturing system and method Active CN114283447B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111520595.XA CN114283447B (en) 2021-12-13 2021-12-13 Motion capturing system and method


Publications (2)

Publication Number Publication Date
CN114283447A CN114283447A (en) 2022-04-05
CN114283447B true CN114283447B (en) 2024-03-26

Family

ID=80871829


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110321754A (en) * 2018-03-28 2019-10-11 西安铭宇信息科技有限公司 A kind of human motion posture correcting method based on computer vision and system
CN110977985A (en) * 2019-12-23 2020-04-10 中国银联股份有限公司 Positioning method and device
CN111199576A (en) * 2019-12-25 2020-05-26 中国人民解放军军事科学院国防科技创新研究院 Outdoor large-range human body posture reconstruction method based on mobile platform
CN112639883A (en) * 2020-03-17 2021-04-09 华为技术有限公司 Relative attitude calibration method and related device
CN111447340A (en) * 2020-05-29 2020-07-24 深圳市瑞立视多媒体科技有限公司 Mixed reality virtual preview shooting system
WO2021238804A1 (en) * 2020-05-29 2021-12-02 深圳市瑞立视多媒体科技有限公司 Mixed reality virtual preview photographing system
CN113421286A (en) * 2021-07-12 2021-09-21 北京未来天远科技开发有限公司 Motion capture system and method
CN113487674A (en) * 2021-07-12 2021-10-08 北京未来天远科技开发有限公司 Human body pose estimation system and method



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230423

Address after: 418-436, 4th Floor, Building 1, Jinanqiao, No. 68 Shijingshan Road, Shijingshan District, Beijing, 100041

Applicant after: Beijing Yuanke Fangzhou Technology Co.,Ltd.

Address before: 100094 701, 7 floor, 7 building, 13 Cui Hunan Ring Road, Haidian District, Beijing.

Applicant before: Lingyunguang Technology Co.,Ltd.

Applicant before: Shenzhen Lingyun Shixun Technology Co.,Ltd.

GR01 Patent grant