CN114283447A - Motion capture system and method - Google Patents
- Publication number
- CN114283447A (application CN202111520595.XA)
- Authority
- CN
- China
- Prior art keywords
- feature
- point set
- feature point
- target object
- motion capture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The application discloses a motion capture system and method. The system comprises at least two image collectors for synchronously collecting images of the same target object from different viewing angles; a 2D feature detection calculator for extracting the 2D feature points from each image; a system self-calibration calculator for calculating a first pose relationship from the 2D feature point set; a 3D feature positioning calculator for generating a 3D feature point set from the first pose relationship and all the 2D feature points; and a human motion solver for solving the motion information of each person in the target object from the 3D feature point set. The application extracts human posture features by analyzing and processing the raw images of captured natural video, so the performers need not wear any equipment. Compared with traditional motion capture techniques, this not only reduces hardware cost but, without the constraint of special suits, also makes the capture process more efficient and unencumbered.
Description
Technical Field
The application belongs to the technical field of visual processing, and particularly relates to a motion capture system and method.
Background
In recent years, related technologies such as movie and television special effect production, cartoon games, virtual reality and the like have been developed rapidly, wherein a motion capture technology is a key technology. Motion capture systems are largely divided into optical image-based motion capture systems and inertial sensor-based motion capture systems.
At present, motion capture systems based on optical images acquire human motion information by tracking marker points attached to the body. The user must therefore wear a professional suit with markers attached during capture, which constrains the user to a certain extent.
Therefore, how to provide a more convenient motion capture method is a technical problem that needs to be solved urgently at present.
Disclosure of Invention
To solve the above technical problems, the present application provides a motion capture system and method.
In a first aspect, the present application provides a motion capture system, including at least two image collectors, configured to synchronously collect images corresponding to a same target object at different viewing angles;
a 2D feature detection calculator for extracting 2D feature points in each of the images;
the system self-calibration calculator is used for calculating a first position and posture relation according to a 2D feature point set, wherein the 2D feature point set is a set of 2D feature points in all the images within a preset time period, and the first position and posture relation is a position and posture relation between the at least two image collectors;
a 3D feature location calculator, configured to generate a 3D feature point set according to the first pose relationship and all the 2D feature points, where the 3D feature point set is a 3D coordinate set of feature points on the target object;
and the human motion solver is used for solving the human motion information of each person in the target object according to the 3D feature point set.
In one implementation, the 2D feature points are 2D coordinates of human joint points in the target object.
In one implementation, the 2D feature points are 2D coordinates of marker points marked on the target object.
In one implementation, the system further includes a synchronizer for synchronously triggering the at least two image collectors.
In one implementation, the system further comprises a calibration module, wherein the calibration module comprises an acquisition module, a calculation module and an updating module;
the acquisition module is used for acquiring the 2D feature point set according to a preset rule;
the calculation module is configured to calculate a second pose relationship by using the 2D feature point set as input to an SFM algorithm, where the second pose relationship is the current pose relationship between the at least two image collectors;
the updating module is used for updating the first pose relationship to the second pose relationship.
In one implementation, the 3D feature location calculator is configured to generate the 3D feature point set by triangulation according to the first pose relationship and all the 2D feature points.
In one implementation, the human motion solver is configured to solve the human motion information of each person in the target object by using an inverse kinematics algorithm according to the 3D feature point set.
In a second aspect, the present application further provides a motion capture method, the method comprising:
synchronously acquiring images of the same target object corresponding to different visual angles through at least two image collectors;
extracting 2D feature points in each image;
calculating a first pose relationship according to the 2D feature point set, wherein the 2D feature point set is the set of 2D feature points in all the images within a preset time period, and the first pose relationship is the pose relationship between the at least two image collectors;
generating a 3D feature point set according to the first pose relationship and all the 2D feature points, wherein the 3D feature point set is a 3D coordinate set of feature points on the target object;
and resolving the human motion information of each person in the target object according to the 3D feature point set.
In one implementation, the method further comprises:
acquiring the 2D feature point set according to a preset rule;
calculating a second pose relationship by taking the 2D feature point set as input to an SFM algorithm, wherein the second pose relationship is the current pose relationship between the at least two image collectors;
and updating the first pose relationship to the second pose relationship.
In one implementation, the 3D feature point set is generated by triangulation according to the first pose relationship and all the 2D feature points.
In summary, the motion capture system and method provided by the present application can perform real-time camera calibration and automatic recalibration using features such as human joint points while motion is being captured, thereby avoiding the loss of accuracy or even capture failure caused when the parameters of the camera system formed by the at least two image collectors change due to environmental vibration, accidental contact, and the like. Secondly, human posture features are extracted by analyzing and processing the raw images of the captured natural video, so the performers need not wear any equipment; compared with traditional motion capture techniques, this not only reduces hardware cost but, without the constraint of special suits, also makes the capture process more efficient and unencumbered. In addition, because human posture information is analyzed from multi-view natural video whose fields of view cover the whole scene and the whole body, misrecognized postures and dislocated skeletons caused by occlusion are greatly reduced, improving the stability and accuracy of multi-person interaction and reducing mismatches caused by occlusion.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a diagram of a motion capture scene provided by an embodiment of the present application;
fig. 2 is a schematic workflow diagram of a motion capture method according to an embodiment of the present application;
FIG. 3 is a diagram illustrating a joint point of a human body according to an embodiment of the present disclosure;
fig. 4 is a block diagram of a motion capture system according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
To facilitate understanding of the technical solution of the present application, an application scenario is first introduced.
As shown in fig. 1, the present application uses a multi-view image acquisition system built from at least two image collectors 100 to capture the motion of a target object in a scene, where the target object may be one person or several. Images of the same target object are thus collected from different viewing angles by the at least two image collectors, and the computing platform 200 then applies the motion capture method provided by the present application to the collected images to solve the motion information of each person in the scene. The present application can therefore be applied to technical fields such as film and television production and virtual reality.
A motion capture method provided in an embodiment of the present application is described below.
As shown in fig. 2, a motion capture method provided in an embodiment of the present application includes the following steps:
Step 100: synchronously acquire, through at least two image collectors, images of the same target object from different angles. This greatly reduces misrecognized human postures and dislocated skeletons caused by occlusion of the body, improves the stability and accuracy of multi-person interaction, and reduces mismatches caused by occlusion.
The choice of image collector is not limited; for example, an appropriate industrial camera and lens can be selected according to the size of the field in the actual application scenario. Each image collector can transmit data to the processor over a USB-to-optical-fiber link, which removes the length limitation of USB cables. Image acquisition and data transmission are independent for each collector, so to ensure that the images collected by all collectors are consistent in the time domain, each collector can be connected to a synchronizer through a trigger line; the synchronizer then sends a synchronization trigger signal to every collector, ensuring that all of them capture images synchronously.
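The effect of the trigger line can be mimicked in software for illustration. The sketch below is an assumption for illustration only (real systems use a hardware trigger, and `camera_worker` merely records a timestamp instead of grabbing a frame); it shows how one shared signal releases every collector at once:

```python
import threading
import time

def camera_worker(cam_id, trigger, frames):
    """Stand-in for one image collector: block until the shared trigger
    fires, then record the moment of 'capture'."""
    trigger.wait()
    frames[cam_id] = time.monotonic()

trigger = threading.Event()
frames = {}
threads = [threading.Thread(target=camera_worker, args=(i, trigger, frames))
           for i in range(3)]
for th in threads:
    th.start()
trigger.set()   # one synchronization signal releases all collectors
for th in threads:
    th.join()
```

All three workers wake on the same event, so their timestamps cluster tightly together, which is the property the hardware trigger line provides for real cameras.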
Step 200: extract the 2D feature points in each image.
In the present application, the target object may or may not be provided with marker points; this is not limited here.
If there are no marker points on the target object, the 2D feature points may be the 2D coordinates of human joint points in the target object, for example the 2D coordinates of joint points such as the head, shoulders, hands, waist, knees and feet captured in the image.
If the target object is provided with marker points, the 2D feature points may be the 2D coordinates of the marker points on the target object. The marker points can be coded or uncoded: coded markers may take the form of two-dimensional codes or printed numerals, while uncoded markers may be fluorescent dots marked on the target object.
It should be noted that, if the target object is provided with the mark point, the 2D feature point may include both the 2D coordinate of the mark point on the target object and the 2D coordinate of the human body joint point in the target object, which is not limited in this application.
Correspondingly, a 2D feature detection calculator may be trained to convergence in advance and then used to extract the 2D feature points in each of the images.
The method for extracting the 2D feature points in each image is not limited in the present application; in one implementation, the extraction can be based on the open-source library OpenPose.
When OpenPose is used to extract 2D feature points, the images acquired by the image collectors are used as input, the 2D coordinates of the human joint points captured in each image (as shown in fig. 3) are obtained, and images annotated with the 2D feature points can be displayed.
Step 300: calculate a first pose relationship according to the 2D feature point set.

It should be noted that step 300 amounts to calibrating the cameras of the at least two image collectors using the 2D feature points extracted during motion capture, so that the pose relationship between the image collectors can be determined.
The pose of an image collector comprises its position information and its attitude information, where the attitude information represents the parameters of the collector's three rotation axes, such as their positions, directions and angles.
Once an image collector is selected, its intrinsic parameters, such as the focal length and the optical center, can be obtained; the pose relationship between the image collectors can then be calculated from the 2D feature point set and these intrinsic parameters.
Therefore, the camera calibration method provided by the application, on the one hand, needs no calibration plate as in traditional camera calibration and, on the other hand, calibrates the cameras using dynamic feature points gathered during motion capture, which reflects the pose relationship between the image collectors more accurately.
The method for calculating the first pose relationship from the 2D feature point set is not limited; in one implementation, it may be calculated with an SFM (structure-from-motion) algorithm.
The following describes a method for calculating the first pose relationship by using the 2D feature point set in combination with the SFM algorithm.
In the conventional SFM pipeline, a calibration plate image is first selected, feature points are extracted from it, and the extracted feature points are matched to obtain mutually matched feature point pairs; finally, the matched pairs are used as input to the SFM algorithm to compute the intrinsic and extrinsic parameters of the image collector.
The difference between the present application and the conventional SFM algorithm is that the conventional SFM algorithm performs feature extraction and matching based on a calibration plate image, whereas the present application directly performs the SFM algorithm using 2D feature points on a target object as input in a motion capture process.
The 2D feature points extracted in the present application are human joint points; for example, each human body may be represented by 25 joint points, so each extracted 2D feature point carries its own ID: feature point A is the left shoulder, B the right shoulder, C the left eye, D the right eye, and so on. The extracted 2D feature points are therefore effectively already matched, and can be fed directly into the SFM algorithm, which reduces calibration time and computational complexity.
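Because every detected joint carries an index, cross-view matching reduces to pairing points by that index. A minimal sketch (the `(u, v, confidence)` triple layout and the 0.3 threshold are illustrative assumptions, not specified by the application):

```python
def correspondences(kps_a, kps_b, conf_thresh=0.3):
    """Pair 2D joints across two views purely by joint index; keep only
    joints confidently detected in both views."""
    pairs = []
    for joint_id, (pa, pb) in enumerate(zip(kps_a, kps_b)):
        if pa[2] >= conf_thresh and pb[2] >= conf_thresh:
            pairs.append((joint_id, pa[:2], pb[:2]))
    return pairs

# Three joints seen from two views; joint 1 is weakly detected in view B
view_a = [(100.0, 50.0, 0.9), (110.0, 80.0, 0.8), (120.0, 90.0, 0.7)]
view_b = [(140.0, 52.0, 0.9), (150.0, 81.0, 0.1), (160.0, 92.0, 0.6)]
matched = correspondences(view_a, view_b)
```

Only joints 0 and 2 survive the confidence filter, so the joint with the poor detection simply drops out of the matched pairs instead of contaminating the calibration.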
With the 2D feature points as input, executing the SFM algorithm may include: estimating the fundamental matrix; estimating the essential matrix; decomposing the essential matrix into R and T (where R is the relative rotation between the world coordinate system and the camera coordinate system, and T is the relative translation vector); computing the three-dimensional point cloud; re-projecting; computing the transformation matrix from a third camera to the world coordinate system, and likewise for further cameras; and optimizing the three-dimensional point-cloud positions and image-collector poses with bundle adjustment. Any feasible prior-art implementation of these steps may be used, and they are not described again here.
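The essential-matrix decomposition step can be sketched with plain linear algebra. The following is a minimal illustration of the standard four-solution decomposition E → (R, ±t) (a textbook construction on synthetic data, not code from the application); in practice the correct solution among the four is then chosen by a cheirality check, i.e. triangulated points must lie in front of both cameras:

```python
import numpy as np

def skew(t):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def decompose_essential(E):
    """Standard SVD decomposition of an essential matrix into the four
    candidate (R, t) pairs."""
    U, _, Vt = np.linalg.svd(E)
    # Force proper rotations (det = +1); E is only defined up to sign anyway
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

# Synthetic ground truth: small rotation about z, unit-norm baseline
theta = 0.2
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta), np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 0.2, 0.1])
t_true /= np.linalg.norm(t_true)
E = skew(t_true) @ R_true          # E = [t]x R by definition
candidates = decompose_essential(E)
```

One of the four candidates recovers the true (R, t) exactly, up to the inherent scale of the baseline.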
Although the camera calibration method provided by the application can accurately determine the pose relationship between the image collectors within a certain period, the previous calibration result may no longer be accurate as time passes or the system hardware is jolted; the application therefore also provides a method for recalibrating the pose relationship between the image collectors.
The method for calibrating the pose relationship between the image collectors can comprise the following steps:
and 310, acquiring a 2D feature point set according to a preset rule.
The preset rule is not limited: for example, the 2D feature points corresponding to all the image collectors over a period of time can be acquired automatically at regular intervals, or they can be acquired automatically in real time. Specifically, a separate background thread may be set up to acquire the 2D feature point set according to the preset rule.
Step 310 can therefore acquire the latest 2D feature point set in time; that is, the set it acquires reflects the current, real state of each image collector.
Step 320: calculate a second pose relationship using the 2D feature point set as input to the SFM algorithm, where the second pose relationship is the current pose relationship between the at least two image collectors.
In order to facilitate distinguishing from the first pose relationship, the pose relationship between the current image collectors obtained by the latest calculation according to the latest acquired 2D feature point set is referred to as a second pose relationship.
Therefore, the pose relationship between the image collectors can be automatically recalculated at intervals or in real time.
Step 330: update the first pose relationship to the second pose relationship.
The method of updating the pose relationship is not limited. For example, in step 330, the newly computed pose relationship can directly replace the previous one; alternatively, the newly computed pose relationship can first be compared with the previous one, and if the difference between the two is within an allowable range the previous pose relationship is kept, while if it is not, the pose relationship is updated.
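The compare-then-update variant can be sketched as follows (the rotation and translation thresholds are hypothetical values for illustration; the application does not specify the allowable range):

```python
import numpy as np

def maybe_update_pose(R_old, t_old, R_new, t_new,
                      rot_tol_deg=0.5, trans_tol=0.01):
    """Adopt the freshly calibrated extrinsics only when they differ from
    the current ones by more than the allowed tolerance."""
    dR = R_new @ R_old.T                      # relative rotation old -> new
    cos_a = np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos_a))  # rotation-difference angle
    shift = np.linalg.norm(np.asarray(t_new) - np.asarray(t_old))
    if angle_deg <= rot_tol_deg and shift <= trans_tol:
        return R_old, t_old                   # within allowance: keep calibration
    return R_new, t_new                       # outside allowance: update

I = np.eye(3)
t0 = np.zeros(3)
# A 5-degree drift about the z axis clearly exceeds the 0.5-degree tolerance
c, s = np.cos(np.radians(5)), np.sin(np.radians(5))
R_drift = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
kept_R, kept_t = maybe_update_pose(I, t0, I, t0)       # identical pose: keep
new_R, new_t = maybe_update_pose(I, t0, R_drift, t0)   # drifted pose: update
```

Keeping the old calibration for negligible differences avoids jitter in the downstream 3D reconstruction, while genuine drift from vibration or contact still triggers an update.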
In conclusion, while motion capture is being executed, online camera calibration and automatic recalibration of the pose relationship between the image collectors can be realized, so that the pose relationship remains accurate throughout the capture process and the accuracy of the final capture result is ensured.
It should be noted that if the previously determined pose relationship between the image collectors has been updated, steps 400 and 500 perform their calculations based on the updated pose relationship.
Step 400: generate a 3D feature point set according to the first pose relationship and all the 2D feature points, where the 3D feature point set is the 3D coordinate set of the feature points on the target object.
Based on the first pose relationship between the image collectors, the present application can stereo-match the 2D feature points corresponding to the images from different viewing angles, determine the 3D coordinates of the feature points on the target object, and thereby generate the 3D feature point set.
If the 2D feature points extracted in step 200 are human body joint points in the target object, the 3D coordinate set of the human body joint points in the target object is generated in step 400; if the 2D feature points extracted in step 200 are the marker points on the target object, the 3D coordinate set of the marker points on the target object is generated in step 400.
Taking human joint points as the feature points of the target object as an example: because the pose relationship between the image collectors is known, the correspondence between 2D feature points in images from different viewing angles can be matched, and the 3D coordinates of each person's joint points can be generated. The 3D coordinates of the joint points of all persons in the target object then form the 3D feature point set.
The method for calculating the 3D coordinates of the feature points on the target object is not limited in the present application; in one implementation, triangulation may be used.
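Triangulation from two calibrated views can be illustrated with the standard linear (DLT) construction. A minimal sketch with synthetic cameras (the projection matrices and the 3D point below are made-up values for illustration, not data from the application):

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: matched 2D points (u, v)."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector = homogeneous 3D point
    return X[:3] / X[3]

# Two synthetic cameras with a unit baseline along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.5, 0.2, 4.0])   # e.g. a shoulder joint, in metres
X_est = triangulate_point(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free observations the DLT null vector recovers the 3D point exactly; with real detections the same least-squares formulation gives the best linear estimate.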
Step 500: solve the human motion information of each person in the target object according to the 3D feature point set.
The human motion information refers to the motion relationships between joints, for example between the forearm and the lower leg, and specifically includes positional relationships, rotational relationships, and the like.
Because the joints of the human body satisfy certain geometric relationships and constraint conditions, the human motion information can be solved from the 3D features of each joint point by using an inverse kinematics (IK) algorithm or software tools such as Unity or Maya, thereby realizing three-dimensional motion capture of the target object.
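As a small illustration of solving motion from 3D joint coordinates, the angle at a joint can be computed from the two adjacent segments. The elbow example below is an illustrative sketch of one such geometric relation, not the application's full solver:

```python
import numpy as np

def joint_angle_deg(parent, joint, child):
    """Angle at `joint` between the segments joint->parent and joint->child,
    in degrees (e.g. the elbow flexion angle from shoulder/elbow/wrist)."""
    u = np.asarray(parent, float) - np.asarray(joint, float)
    v = np.asarray(child, float) - np.asarray(joint, float)
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Upper arm vertical, forearm horizontal: a right-angled elbow
shoulder = [0.0, 0.3, 0.0]
elbow = [0.0, 0.0, 0.0]
wrist = [0.25, 0.0, 0.0]
angle = joint_angle_deg(shoulder, elbow, wrist)
```

A full solver applies such constraints across the whole kinematic chain, but each joint ultimately reduces to geometric relations of this kind between triangulated 3D points.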
Furthermore, the application can combine the computed data of the feature joint points of the same human body, associate them according to the structural characteristics of the body, connect the feature joint points to form the skeletal pose of the body in its current state, and build a human-body model structure to record the skeleton data.
The human motion information obtained in step 500 can be converted in real time into 3D features in the coordinate system of the application terminal and broadcast in real time, so that the application terminal receives the captured motion information in real time.
The system can further comprise a display module that shows the computed human motion information and the collected two-dimensional image information in the system interface in real time for live observation; it can specifically comprise a 2D multi-view image display module and a 3D stereoscopic-space display module.
In summary, the motion capture method provided by the present application can perform real-time camera calibration and automatic recalibration using features such as human joint points while motion is being captured, thereby avoiding the loss of accuracy or even capture failure caused when the parameters of the camera system formed by the at least two image collectors change due to environmental vibration, accidental contact, and the like. Secondly, human posture features are extracted by analyzing and processing the raw images of the captured natural video, so the performers need not wear any equipment; compared with traditional motion capture techniques, this not only reduces hardware cost but, without the constraint of special suits, also makes the capture process more efficient and unencumbered. In addition, because human posture information is analyzed from multi-view natural video whose fields of view cover the whole scene and the whole body, misrecognized postures and dislocated skeletons caused by occlusion are greatly reduced, improving the stability and accuracy of multi-person interaction and reducing mismatches caused by occlusion.
The motion capture system provided by the present application, as shown in fig. 4, includes at least two image collectors 100 and a computing platform 200, wherein the computing platform 200 includes a 2D feature detection calculator 210, a system self-calibration calculator 220, a 3D feature localization calculator 230, and a human motion solver 240.
At least two image collectors 100 for synchronously collecting images corresponding to the same target object at different viewing angles;
a 2D feature detection calculator 210 for extracting 2D feature points in each of the images;
the system self-calibration calculator 220 is configured to calculate a first pose relationship according to a 2D feature point set, where the 2D feature point set is a set of 2D feature points in all the images within a preset time period, and the first pose relationship is a pose relationship between the at least two image collectors;
a 3D feature location calculator 230, configured to generate a 3D feature point set according to the first pose relationship and all the 2D feature points, where the 3D feature point set is a 3D coordinate set of feature points on the target object;
and the human motion solver 240 is used for solving the human motion information of each person in the target object according to the 3D feature point set.
Further, the 2D feature points are 2D coordinates of human body joint points in the target object.
Further, the 2D feature point is a 2D coordinate of a mark point marked on the target object.
Further, the system further comprises a synchronizer 300, and the synchronizer 300 is configured to synchronously trigger the at least two image collectors 100.
Further, the system also comprises a calibration module, wherein the calibration module comprises an acquisition module, a calculation module and an updating module;
the acquisition module is used for acquiring the 2D feature point set according to a preset rule;
the calculation module is configured to calculate a second pose relationship by using the 2D feature point set as input to an SFM algorithm, where the second pose relationship is the current pose relationship between the at least two image collectors;
the updating module is used for updating the first pose relationship to the second pose relationship.
Further, the 3D feature location calculator 230 is configured to generate a 3D feature point set by using triangulation calculation according to the first pose relationship and all the 2D feature points.
Further, the human motion solver 240 is configured to solve the human motion information of each person in the target object by using an inverse kinematics algorithm according to the 3D feature point set.
The same and similar parts in the various embodiments in this specification may be referred to each other. In particular, for the embodiments of the system, since they are substantially similar to the method embodiments, the description is simple, and for the relevant points, reference may be made to the description of the method embodiments.
In specific implementation, the present application also provides a computer-readable storage medium, where the computer-readable storage medium may store a program, and the program may include some or all of the steps in the embodiments of the motion capture method provided in the present application when executed. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a Random Access Memory (RAM), or the like.
Those skilled in the art will clearly understand that the techniques in the embodiments of the present application may be implemented by software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solutions in the embodiments of the present application, or the part of them that contributes to the prior art, may be embodied in the form of a software product. The software product may be stored in a storage medium such as a ROM/RAM, a magnetic disk, or an optical disk, and includes several instructions that enable a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments, or in some parts of the embodiments, of the present application.
Further, the present application may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed.
Some of the flows described in this specification, the claims, and the figures above contain operations that occur in a particular order. It should be clearly understood, however, that these operations may be executed out of the order in which they appear herein, or in parallel; reference numbers such as 100 and 200 merely distinguish the operations from one another and do not by themselves dictate any execution order. In addition, the flows may include more or fewer operations, and these operations may be executed sequentially or in parallel.
It should also be noted that, in this document, terms such as "first" and "second" are used to distinguish different messages, devices, modules, and the like; they do not denote a sequential order, nor do they restrict "first" and "second" to different types. Likewise, the terms "comprises", "comprising", and any variation thereof are intended to cover a non-exclusive inclusion.
The present application has been described in detail with reference to specific embodiments and illustrative examples, but the description is not intended to limit the application. Those skilled in the art will appreciate that various equivalent substitutions, modifications or improvements may be made to the presently disclosed embodiments and implementations thereof without departing from the spirit and scope of the present disclosure, and these fall within the scope of the present disclosure. The protection scope of this application is subject to the appended claims.
Claims (10)
1. A motion capture system, comprising:
at least two image collectors, configured to synchronously collect images of the same target object from different viewing angles;
a 2D feature detection calculator for extracting 2D feature points in each of the images;
a system self-calibration calculator, configured to calculate a first pose relationship according to a 2D feature point set, wherein the 2D feature point set is the set of 2D feature points in all of the images within a preset time period, and the first pose relationship is the pose relationship between the at least two image collectors;
a 3D feature location calculator, configured to generate a 3D feature point set according to the first pose relationship and all the 2D feature points, where the 3D feature point set is a 3D coordinate set of feature points on the target object;
and a human motion solver, configured to solve the human motion information of each person in the target object according to the 3D feature point set.
2. The motion capture system of claim 1, wherein the 2D feature points are 2D coordinates of human joint points in the target object.
3. The motion capture system of claim 1, wherein the 2D feature points are 2D coordinates of marker points marked on the target object.
4. The motion capture system of claim 1, further comprising a synchronizer for synchronously triggering the at least two image collectors.
5. The motion capture system of claim 1, further comprising a calibration module comprising an acquisition module, a calculation module, and an update module;
the acquisition module is used for acquiring the 2D feature point set according to a preset rule;
the calculation module is configured to calculate a second pose relationship by using the 2D feature point set as input to an SFM algorithm, wherein the second pose relationship is the current pose relationship between the at least two image collectors;
the update module is configured to replace the first pose relationship with the second pose relationship.
6. The motion capture system of claim 1, wherein the 3D feature location calculator is configured to generate a set of 3D feature points by triangulation based on the first pose relationship and all of the 2D feature points.
7. The motion capture system of claim 1, wherein the human motion solver is configured to solve the human motion information for each person in the target object using an inverse kinematics algorithm based on the set of 3D feature points.
8. A method of motion capture, the method comprising:
synchronously acquiring, by at least two image collectors, images of the same target object from different viewing angles;
extracting 2D feature points in each image;
calculating a first pose relationship according to a 2D feature point set, wherein the 2D feature point set is the set of 2D feature points in all of the images within a preset time period, and the first pose relationship is the pose relationship between the at least two image collectors;
generating a 3D feature point set according to the first pose relationship and all of the 2D feature points, wherein the 3D feature point set is a 3D coordinate set of feature points on the target object;
and solving the human motion information of each person in the target object according to the 3D feature point set.
9. The motion capture method of claim 8, further comprising:
acquiring the 2D feature point set according to a preset rule;
calculating a second pose relationship by using the 2D feature point set as input to an SFM algorithm, wherein the second pose relationship is the current pose relationship between the at least two image collectors;
and replacing the first pose relationship with the second pose relationship.
10. The motion capture method of claim 8, wherein a 3D feature point set is generated from the first pose relationship and all of the 2D feature points using triangulation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111520595.XA CN114283447B (en) | 2021-12-13 | 2021-12-13 | Motion capturing system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114283447A true CN114283447A (en) | 2022-04-05 |
CN114283447B CN114283447B (en) | 2024-03-26 |
Family
ID=80871829
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111520595.XA Active CN114283447B (en) | 2021-12-13 | 2021-12-13 | Motion capturing system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114283447B (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110321754A (en) * | 2018-03-28 | 2019-10-11 | 西安铭宇信息科技有限公司 | A kind of human motion posture correcting method based on computer vision and system |
CN110977985A (en) * | 2019-12-23 | 2020-04-10 | 中国银联股份有限公司 | Positioning method and device |
CN111199576A (en) * | 2019-12-25 | 2020-05-26 | 中国人民解放军军事科学院国防科技创新研究院 | Outdoor large-range human body posture reconstruction method based on mobile platform |
CN112639883A (en) * | 2020-03-17 | 2021-04-09 | 华为技术有限公司 | Relative attitude calibration method and related device |
CN111447340A (en) * | 2020-05-29 | 2020-07-24 | 深圳市瑞立视多媒体科技有限公司 | Mixed reality virtual preview shooting system |
WO2021238804A1 (en) * | 2020-05-29 | 2021-12-02 | 深圳市瑞立视多媒体科技有限公司 | Mixed reality virtual preview photographing system |
CN113421286A (en) * | 2021-07-12 | 2021-09-21 | 北京未来天远科技开发有限公司 | Motion capture system and method |
CN113487674A (en) * | 2021-07-12 | 2021-10-08 | 北京未来天远科技开发有限公司 | Human body pose estimation system and method |
Also Published As
Publication number | Publication date |
---|---|
CN114283447B (en) | 2024-03-26 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
20230423 | TA01 | Transfer of patent application right | Address after: 418-436, 4th Floor, Building 1, Jinanqiao, No. 68 Shijingshan Road, Shijingshan District, Beijing, 100041. Applicant after: Beijing Yuanke Fangzhou Technology Co.,Ltd. Address before: 100094 701, 7 floor, 7 building, 13 Cui Hunan Ring Road, Haidian District, Beijing. Applicants before: Lingyunguang Technology Co.,Ltd.; Shenzhen Lingyun Shixun Technology Co.,Ltd. |
| GR01 | Patent grant | |