WO2016183812A1 - Hybrid motion capture system and method (一种混合运动捕捉系统及方法) - Google Patents

Hybrid motion capture system and method (一种混合运动捕捉系统及方法)

Info

Publication number
WO2016183812A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
position information
inertial
optical
motion capture
Prior art date
Application number
PCT/CN2015/079346
Other languages
English (en)
French (fr)
Inventor
戴若犁
刘昊扬
李龙威
陈金舟
桂宝佳
陈号天
Original Assignee
北京诺亦腾科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京诺亦腾科技有限公司
Priority to PCT/CN2015/079346
Publication of WO2016183812A1
Priority to US15/817,373 (published as US10679360B2)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G06T 7/277 - Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01P - MEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
    • G01P 15/00 - Measuring acceleration; Measuring deceleration; Measuring shock, i.e. sudden change of acceleration
    • G01P 15/18 - Measuring acceleration; Measuring deceleration; Measuring shock, i.e. sudden change of acceleration in two or more dimensions
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01D - MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D 21/00 - Measuring or testing not otherwise provided for
    • G01D 21/02 - Measuring two or more variables by means not covered by a single other subclass
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01P - MEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
    • G01P 15/00 - Measuring acceleration; Measuring deceleration; Measuring shock, i.e. sudden change of acceleration
    • G01P 15/02 - Measuring acceleration; Measuring deceleration; Measuring shock, i.e. sudden change of acceleration by making use of inertia forces using solid seismic masses
    • G01P 15/08 - Measuring acceleration; Measuring deceleration; Measuring shock, i.e. sudden change of acceleration by making use of inertia forces using solid seismic masses with conversion into electric or magnetic values
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01P - MEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
    • G01P 15/00 - Measuring acceleration; Measuring deceleration; Measuring shock, i.e. sudden change of acceleration
    • G01P 15/14 - Measuring acceleration; Measuring deceleration; Measuring shock, i.e. sudden change of acceleration by making use of gyroscopes
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01P - MEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
    • G01P 3/00 - Measuring linear or angular speed; Measuring differences of linear or angular speeds
    • G01P 3/36 - Devices characterised by the use of optical means, e.g. using infrared, visible, or ultraviolet light
    • G01P 3/38 - Devices characterised by the use of optical means, e.g. using infrared, visible, or ultraviolet light using photographic means
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20024 - Filtering details
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30204 - Marker

Definitions

  • This invention relates to motion capture techniques, and more particularly to a hybrid motion capture system and method.
  • Motion capture technology records the motion of objects digitally. The motion capture technologies in common use today are mainly optical motion capture and motion capture based on inertial sensors.
  • An optical motion capture system includes a plurality of cameras arranged around the object to be measured, whose range of motion lies within the overlapping fields of view of the cameras.
  • Distinctive reflective or luminous points are affixed to key parts of the object as markers for visual recognition and processing.
  • The cameras continuously film the object's motion and save the image sequences for analysis and processing, computing the spatial position of each marker point at each instant and thereby obtaining its accurate motion trajectory.
  • The advantage of optical motion capture is that there are no restrictions from mechanical devices or cables, the object is allowed a large range of motion, and the sampling frequency is high. However, such a system can only capture motion within the overlapping fields of view of the cameras, and when the motion is complex the markers are easily confused or occluded, producing erroneous results.
  • Motion capture based on inertial sensors attaches an inertial measurement unit (IMU) to the object to be measured so that it moves together with the object.
  • An IMU usually includes a micro-accelerometer (measuring an acceleration signal) and a micro-gyroscope (measuring an angular velocity signal).
  • Thanks to MEMS technology, the size and weight of the IMU can be made very small, so its influence on the motion of the measured object is slight; the requirements on the venue are low, it is unaffected by lighting and occlusion, and the permitted range of motion is large.
  • However, inertia-based motion capture suffers from integration drift, which reduces capture accuracy.
  • U.S. Patent No. 8,203,487 discloses a motion capture system and method combining ultra-wideband (UWB) measurements with MEMS inertial measurements.
  • The system comprises: 1) sensor units, each containing one or more UWB pulse transmitters and a set of inertial sensors; 2) a set of UWB receiver units that remotely receive the pulse signals sent by the UWB transmitters to obtain each sensor's time of arrival (TOA), the UWB transmitters being hardware-synchronized with the inertial sensors; and 3) a receiving processor that receives the time-of-arrival information and the inertial information and integrates them to obtain the position and posture of the object.
  • That system uses UWB in combination with inertial sensors for positioning. Because the positioning accuracy of UWB is poor, the combination with inertial sensors and certain algorithmic processing makes the captured trajectory smoother but does little to improve positioning accuracy. Moreover, UWB positioning works only in the horizontal plane and cannot position in the vertical direction.
  • The above solution uses a pressure gauge to overcome this, but the positioning accuracy of the pressure gauge itself is also poor. In addition, the solution requires multiple receivers to be set up, so re-erecting and debugging the equipment is time-consuming for motion capture that must change venues.
  • U.S. Patent Publication No. 2013/0028469 combines optical marker points with an inertial sensor to capture the position and posture of an object: an identification determining unit determines the position of the optical marker point in a 2D image, and a depth determining unit determines the depth of the optical marker point in a depth image.
  • An optical-marker-based estimator then combines the marker point's position in the 2D image with its depth in the depth map to obtain the 3D position of the marker point.
  • Meanwhile, an inertia-based posture and position are obtained by an inertial sensor unit.
  • Finally, an integration estimator assigns different weights to the marker-based position and the inertia-based position according to the object's speed and position and the condition of the marker-point signal, to obtain the final integrated position and posture. Because this solution uses single-node motion capture, it cannot capture the complex motion of a multi-joint object.
  • The present invention provides a hybrid motion capture system and method that combine the advantages of optical motion capture and inertia-based motion capture while avoiding the respective disadvantages of the two capture modes.
  • To this end, an embodiment of the present invention provides a hybrid motion capture system comprising at least one inertial sensor module, at least one optical marker, at least two optical cameras, and a receiving processor; the inertial sensor module is wirelessly connected to the receiving processor, the optical cameras are connected to the receiving processor in a wired or wireless manner, and the inertial sensor modules and optical markers are mounted on the object to be measured.
  • The inertial sensor module is configured to measure its own inertial information and spatial attitude information.
  • The optical cameras are configured to acquire image information of the optical markers.
  • The receiving processor receives the inertial information, spatial attitude information, and image information; generates inertia-based position information from the inertial information and spatial attitude information; generates optical-marker-based position information from the image information; and integrates the inertia-based position information with the optical-marker-based position information to generate the position information of the object to be measured.
  • In an embodiment, the inertial sensor module includes:
  • a three-axis MEMS accelerometer for measuring the acceleration information of the inertial sensor module itself;
  • a three-axis MEMS gyroscope for measuring the angular velocity information of the inertial sensor module itself;
  • a three-axis MEMS magnetometer for measuring the geomagnetic vector of the inertial sensor module itself;
  • a CPU, connected to the three-axis MEMS accelerometer, three-axis MEMS gyroscope, and three-axis MEMS magnetometer, which integrates the angular velocity information to generate a dynamic spatial orientation, generates a static absolute spatial orientation from the acceleration information and the geomagnetic vector, and corrects the dynamic spatial orientation with the static absolute spatial orientation to generate the spatial attitude information (a sketch of one possible form of this correction follows this list);
  • an RF transceiver, connected to the CPU, which sends the spatial attitude information and the inertial information to the receiving processor, the inertial information including the acceleration information and the angular velocity information.
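The patent states only that the static absolute orientation (from accelerometer and magnetometer) corrects the gyroscope-integrated dynamic orientation, without specifying the algorithm. The following Python sketch assumes a simple complementary-filter correction; the function names, the gain alpha, and the frame conventions are illustrative assumptions, not details taken from the patent.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def static_orientation(accel, mag):
    """Absolute orientation from gravity and the geomagnetic vector.
    Assumes a static accelerometer reading the gravity reaction (pointing up)."""
    down = -accel / np.linalg.norm(accel)
    north = mag - np.dot(mag, down) * down        # horizontal part of the field
    north /= np.linalg.norm(north)
    east = np.cross(down, north)
    # Rows are the NED world axes expressed in sensor coordinates, so this
    # matrix rotates sensor-frame vectors into the world frame.
    return R.from_matrix(np.vstack([north, east, down]))

def attitude_update(q, gyro, accel, mag, dt, alpha=0.02):
    """One fusion step: integrate angular velocity for the dynamic orientation,
    then blend a small fraction alpha toward the static absolute orientation."""
    q_dyn = q * R.from_rotvec(np.asarray(gyro) * dt)   # gyro integration (body frame)
    q_abs = static_orientation(np.asarray(accel), np.asarray(mag))
    qa, qb = q_dyn.as_quat(), q_abs.as_quat()
    if np.dot(qa, qb) < 0:                             # keep quaternions in one hemisphere
        qb = -qb
    return R.from_quat((1 - alpha) * qa + alpha * qb)  # from_quat renormalizes
```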
  • In an embodiment, some of the inertial sensor modules and some of the optical markers are integrated in pairs to form at least one inertial marker point.
  • In an embodiment, the receiving processor is specifically configured to double-integrate the acceleration to generate the inertia-based position information, and then to correct the inertia-based position information on the basis of biomechanical constraints and external constraints, generating corrected inertia-based position information.
  • In an embodiment, when integrating the inertia-based position information with the optical-marker-based position information, the receiving processor is specifically configured to: when the optical markers are occluded or overlap, generate the position information of the object to be measured from the corrected inertia-based position information; and when the optical-marker-based position is regained, compute the optical-marker-based measurement error a and the inertia-based measurement error b, and according to the measurement errors a and b assign weight A to the optical-marker-based position information and weight B to the corrected inertia-based position information, generating the position information of the object to be measured.
  • The weight A and the weight B are calculated by formulas (1) and (2), which appear only as images in the original publication.
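The formulas themselves are not recoverable verbatim from the images. Given the surrounding text (the two weights blend the two position estimates, and the smaller measurement error receives the larger weight), an inverse-error weighting of the following form would be consistent with the description; this is an assumption, not the patent's confirmed formula:

  A = b / (a + b)    (1)
  B = a / (a + b)    (2)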
  • In an embodiment, the optical marker is a reflective passive optical marker.
  • In an embodiment, the optical marker is a luminous active optical marker.
  • In an embodiment, the optical cameras are a plurality of discrete monocular cameras, mounted in a given area either fixedly or on tripods.
  • In an embodiment, the optical cameras are at least one set of binocular or multi-view cameras, mounted in a given area either on tripods or fixedly.
  • An embodiment of the present invention further provides a hybrid motion capture method, applied to the hybrid motion capture system described above, comprising:
  • receiving the inertial information and spatial attitude information measured by the inertial sensor module, and generating inertia-based position information from them;
  • receiving the image information of the optical markers captured by the cameras, and generating optical-marker-based position information from it;
  • integrating the inertia-based position information with the optical-marker-based position information to generate the position information of the object to be measured.
  • In an embodiment, generating the inertia-based position information from the inertial information and spatial attitude information comprises: double-integrating the acceleration to generate the inertia-based position information, and then correcting it on the basis of biomechanical constraints and external constraints to generate corrected inertia-based position information.
  • In an embodiment, integrating the two kinds of position information to generate the position information of the object to be measured comprises: when the optical markers are occluded or overlap, generating the position information of the object from the corrected inertia-based position information; and when the optical-marker-based position is regained, computing the optical-marker-based measurement error a and the inertia-based measurement error b, and according to them assigning weight A to the optical-marker-based position information and weight B to the corrected inertia-based position information, generating the position information of the object to be measured.
  • The weight A and the weight B are calculated by formulas (1) and (2) above.
  • Through the present invention, the advantages of optical motion capture and inertia-based motion capture can be combined while the shortcomings of the two capture modes are avoided, achieving accurate capture of fine human motion.
  • FIG. 1 is a first schematic structural diagram of a hybrid motion capture system according to an embodiment of the present invention.
  • FIG. 2 is a schematic structural diagram of the inertial sensor module 101 according to an embodiment of the present invention.
  • FIG. 3 is a structural block diagram of the receiving processor 104 according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of an inertial marker point according to an embodiment of the present invention.
  • FIG. 5 is a second schematic structural diagram of a hybrid motion capture system according to an embodiment of the present invention.
  • FIG. 6 is a flowchart of a hybrid motion capture method according to an embodiment of the present invention.
  • FIG. 7 is a third schematic structural diagram of a hybrid motion capture system according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of the capture and data-processing flow of a hybrid motion capture system for human motion according to an embodiment of the present invention.
  • As shown in FIG. 1, an embodiment of the present invention provides a hybrid motion capture system comprising at least one inertial sensor module 101, at least one optical marker 102, at least two optical cameras 103, and a receiving processor 104.
  • The inertial sensor module 101 is wirelessly coupled to the receiving processor, for example through a radio-frequency (RF) transceiver.
  • The optical cameras 103 are connected to the receiving processor 104 in a wired or wireless manner.
  • The inertial sensor modules 101 and the optical markers 102 are respectively mounted on the object to be measured.
  • There may be multiple inertial sensor modules 101, installed at different positions on the object to be measured; each measures its own inertial information and spatial attitude information and transmits them to the receiving processor 104.
  • There may likewise be multiple optical markers 102, installed at different positions on the object, and at least two optical cameras 103 are fixed in an area near the object.
  • The optical cameras 103 capture image information of the optical markers 102 mounted at the different positions and transmit it to the receiving processor 104.
  • The receiving processor 104 receives the inertial information, spatial attitude information, and image information; it generates inertia-based position information from the inertial information and spatial attitude information, generates optical-marker-based position information from the image information, and integrates the two to generate the position information of the object to be measured.
  • In this way, the inertial sensor modules 101 provide the inertial and spatial attitude information, the optical cameras 103 provide the image information of the optical markers 102 installed at different positions on the object, and the receiving processor 104 first derives the object's inertia-based and optical-marker-based position information and then integrates them into the final position information of the object.
  • As shown in FIG. 2, the inertial sensor module 101 includes a three-axis MEMS accelerometer 201, a three-axis MEMS gyroscope 202, a three-axis MEMS magnetometer 203, a CPU 204, and an RF transceiver 205.
  • The three-axis MEMS accelerometer 201 measures the acceleration information of the inertial sensor module 101 itself;
  • the three-axis MEMS gyroscope 202 measures the angular velocity information of the module itself;
  • the three-axis MEMS magnetometer 203 measures the geomagnetic vector of the module itself.
  • The CPU 204 is connected to the three-axis MEMS accelerometer 201, three-axis MEMS gyroscope 202, and three-axis MEMS magnetometer 203, and integrates the angular velocity information measured by the gyroscope 202 to generate a dynamic spatial orientation.
  • The integration formula, which appears as an image in the original, amounts to θ_T = θ_0 + ∫_0^T ω_t dt, where θ_T and θ_0 are spatial orientations and ω_t is the angular velocity; from this formula, the dynamic spatial orientation information is obtained.
  • The CPU 204 can also generate a static absolute spatial orientation from the acceleration information measured by the three-axis MEMS accelerometer 201 and the geomagnetic vector, and then correct the dynamic spatial orientation with the static absolute spatial orientation to generate the spatial attitude information of the inertial sensor module 101, i.e., the spatial attitude information of the part of the measured object on which the inertial sensor is mounted.
  • The RF transceiver 205 is connected to the CPU 204 and transmits the spatial attitude information of the measured object and the inertial information of the inertial sensor module 101 to the receiving processor 104; in an embodiment, the inertial information includes the acceleration information and the angular velocity information.
  • The inertial sensor module 101 can also include a battery 206 and a DC converter 207.
  • As shown in FIG. 3, the receiving processor 104 includes a processor 301 and an RF transceiver 302 connected by wire; the RF transceiver 302 connects wirelessly to the RF transceiver 205, receives the inertial information, spatial attitude information, and image information, and passes them to the processor 301.
  • After receiving the inertial information and spatial attitude information, the processor 301 first double-integrates the acceleration in the inertial information to generate the inertia-based position information.
  • To make the inertia-based position information more accurate, it is corrected according to biomechanical constraints (such as the constraint that bones are connected at joints) and external constraints (such as the constraint of contact with the ground), generating corrected inertia-based position information.
  • The biomechanical constraint correction formula is P = P_a + K(P_θ - P_a), where P_a is the displacement of a given bone computed by double integration of acceleration, P_θ is the displacement of the same bone computed from the bone-connection relationships, the spatial orientation of each bone, and the spatial position of the base point, and K is a scale factor computed by Kalman filtering or other methods, its magnitude depending on the relative magnitudes of the errors of P_a and P_θ. Only the displacement correction for the bone-connection constraint is given here; other biomechanical constraints, such as the permitted degrees of freedom of each joint and the permitted range of relative motion between bones, are not repeated. As this correction shows, it requires the spatial attitude information, which includes the spatial orientation of each bone and the spatial position of the base point.
  • The external constraint correction formula is P' = P + (P_o - P_c), where P' is the corrected displacement of a body part, P is the computed displacement of that part before correction, P_c is the computed pre-correction displacement of the body part at the contact point, and P_o is the displacement of the external contact point. For example, when the human body is judged to be standing on one foot in contact with the ground, the computed displacement of the contacting sole is subtracted from the displacement of the ground at the contact point, and this displacement difference is added to the computed displacements of all body parts, yielding the corrected whole-body displacement. The same method also applies to correcting whole-body velocity and to other types of contact. Both corrections are sketched in code below.
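As a concrete illustration of the two corrections just described, here is a minimal Python sketch. The skeleton representation, the fixed gain K, and the contact handling are simplifications assumed for the example; the patent gives only the two formulas and names Kalman filtering as one way to obtain K.

```python
import numpy as np

def biomechanical_correction(P_a, P_theta, K=0.5):
    """Blend the acceleration-integrated bone displacement P_a toward the
    displacement P_theta implied by the connected skeleton: P = P_a + K(P_theta - P_a).
    K would normally come from Kalman filtering of the two error magnitudes;
    a fixed value is assumed here purely for illustration."""
    return P_a + K * (P_theta - P_a)

def contact_correction(P_parts, P_contact_part, P_contact_world):
    """Shift every body part by the offset between the external contact point
    and the computed position of the contacting part: P' = P + (P_o - P_c)."""
    offset = np.asarray(P_contact_world) - np.asarray(P_contact_part)
    return {name: np.asarray(P) + offset for name, P in P_parts.items()}

# Example: single-foot ground contact pins the computed left sole to the ground point.
parts = {"pelvis": np.array([0.0, 0.0, 1.00]), "left_sole": np.array([0.10, 0.0, 0.05])}
corrected = contact_correction(parts, parts["left_sole"], np.array([0.10, 0.0, 0.0]))
print(corrected["pelvis"])   # every part shifts down by the 0.05 m contact error
```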
  • After receiving the image information of the optical markers, the processor 301 generates optical-marker-based position information from it: by filming each optical marker from different angles with multiple optical cameras, the spatial coordinates of the marker at each instant, and hence the position information of the optical marker points, can be obtained.
  • The processor 301 then integrates the inertia-based position information with the optical-marker-based position information to generate the position information of the object to be measured.
  • In an embodiment, the integration method is as follows. When the optical markers are occluded or overlap, the position information of the object is generated from the corrected inertia-based position information.
  • When the optical-marker-based position is regained, the motion of the object is estimated from the optical-marker-based and inertia-based position information simultaneously. The estimation method is: assign weight A to the optical-marker-based position information according to the optical-marker-based measurement error a, and weight B to the inertia-based position information according to the inertia-based measurement error b. Position information with a small measurement error is given a large weight, and position information with a large measurement error is given a small weight.
  • In one embodiment, the relative error magnitudes a and b of the optical-marker-based and inertia-based measurements may be evaluated in advance, and fixed weights A and B assigned to the optical-marker-based and inertia-based position information respectively, computed by formulas (1) and (2) above.
  • The calculation of the weights A and B is not limited to this; other weight calculation methods known to those skilled in the art may also be applied to the present invention and are not repeated here.
  • In another embodiment, the two measurement errors are evaluated online with a filtering algorithm (such as a Kalman filter), and the corresponding weights are assigned to the optical-marker-based and inertia-based position information in real time; the weights may likewise be computed by formulas (1) and (2).
  • It should be noted that the error of optical measurement is usually small, so the weight of the optical-based position information is relatively large: while the optical markers are visible, the displacement of the object gradually approaches the optical-marker-based position information. Only when the optical-marker-based position information cannot be obtained, because of occlusion, overlap, or the like, is its weight set to 0, and the motion of the object then follows the inertia-based position information.
  • Particular positions of the optical markers may be occluded, or markers may overlap; if only optical-marker-based position information were used, the position of the object could not always be obtained accurately. Obtaining inertia-based position information as a supplement to the optical-marker-based position information compensates for this shortcoming of purely optical positioning (a code sketch of this fusion policy follows).
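A minimal Python sketch of the fusion policy, assuming the inverse-error weights suggested above for formulas (1) and (2); the error estimates a and b are taken as given, and None stands for an occluded or overlapping marker reading.

```python
import numpy as np

def fuse(p_optical, p_inertial, a, b):
    """Blend the two position estimates. When the optical marker is occluded
    or overlapping (p_optical is None), its weight is 0 and the corrected
    inertia-based position is used alone."""
    if p_optical is None:
        return np.asarray(p_inertial)
    A = b / (a + b)   # assumed formula (1): smaller error, larger weight
    B = a / (a + b)   # assumed formula (2)
    return A * np.asarray(p_optical) + B * np.asarray(p_inertial)

print(fuse([1.02, 0.00, 1.00], [1.05, 0.01, 0.98], a=0.005, b=0.02))  # optical dominates
print(fuse(None, [1.05, 0.01, 0.98], a=0.005, b=0.02))                # inertial fallback
```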
  • The hybrid motion capture system of FIG. 1 may include multiple inertial sensor modules 101 and multiple optical markers 102. One inertial sensor module 101 and one optical marker 102 may be integrated to form an inertial marker point, and there may be several inertial marker points, as shown in FIG. 4 and FIG. 5.
  • In specific implementations, the hybrid motion capture system may include at least one inertial marker point together with multiple inertial sensor modules 101 and multiple optical markers 102; or at least one inertial marker point and multiple inertial sensor modules 101; or at least one inertial marker point and multiple optical markers 102; or multiple inertial marker points; or multiple inertial sensor modules 101 and at least one optical marker 102.
  • The optical marker may be a reflective passive optical marker, such as a light-reflecting tag, or a luminous active optical marker, such as an infrared emitter.
  • The optical cameras 103 are set up in an area away from the object to be measured. In one embodiment, the optical cameras 103 are a plurality of discrete monocular cameras mounted in a given area, either fixedly or on tripods. In another embodiment, the optical cameras 103 may be at least one set of binocular or multi-view cameras, likewise mounted on tripods or fixedly in a given area.
  • As shown in FIG. 6, an embodiment of the present invention provides a hybrid motion capture method, applicable to the hybrid motion capture system described above, comprising:
  • Step 601: receive the inertial information and spatial attitude information measured by the inertial sensor module, and generate inertia-based position information from them.
  • Step 602: receive the image information of the optical markers captured by the cameras, and generate optical-marker-based position information from it.
  • Step 603: integrate the inertia-based position information with the optical-marker-based position information to obtain the final position information of the object to be measured.
  • The executing body of the hybrid motion capture method may be the receiving processor 104 of the hybrid motion capture system, although the invention is not limited to this.
  • In this method, inertia-based position information of the object is generated from the inertial information and spatial attitude information, optical-marker-based position information is generated from the image information, and finally the two are integrated to generate the position information of the object to be measured.
  • In the specific implementation of step 601, after the inertial information and spatial attitude information are received, the acceleration in the inertial information is first double-integrated to generate the inertia-based position information.
  • To make this inertia-based position information more accurate, it is corrected according to biomechanical constraints (such as the constraint that bones are connected at joints) and external constraints (such as the constraint of contact with the ground), generating corrected inertia-based position information.
  • As before, the biomechanical constraint correction formula is P = P_a + K(P_θ - P_a), where P_a is the bone displacement from double integration of acceleration, P_θ is the displacement of the same bone computed from the bone-connection relationships, the spatial orientation of each bone, and the spatial position of the base point, and K is a scale factor computed by Kalman filtering or other methods, depending on the relative errors of P_a and P_θ; other biomechanical constraints, such as the permitted degrees of freedom of each joint and the permitted range of relative motion between bones, are not repeated, and the correction requires the spatial attitude information.
  • The external constraint correction formula is P' = P + (P_o - P_c), where P' is the corrected displacement of a body part, P is its computed displacement before correction, P_c is the computed pre-correction displacement of the body part at the contact point, and P_o is the displacement of the external contact point.
  • In step 602, after the image information of the optical markers is received, optical-marker-based position information is generated from it: by filming each optical marker from different angles with multiple optical cameras, the spatial coordinates of the marker at each instant, and hence the position information of the object to be measured, can be obtained.
  • In step 603, the inertia-based position information and the optical-marker-based position information are integrated to generate the position information of the object. As above, when the optical markers are occluded or overlap, the position information of the object is generated from the corrected inertia-based position information.
  • When the optical-marker-based position is regained, the motion of the object is estimated from the optical-marker-based and inertia-based position information simultaneously: weight A is assigned to the optical-marker-based position information according to the optical-marker-based measurement error a, and weight B to the inertia-based position information according to the inertia-based measurement error b, position information with a small measurement error receiving a large weight and position information with a large measurement error a small weight.
  • The relative error magnitudes a and b may be evaluated in advance and fixed weights A and B assigned, computed by formulas (1) and (2) above; the calculation of the weights is not limited to this, and other methods known to those skilled in the art may also be applied.
  • Alternatively, the two measurement errors may be evaluated online with a filtering algorithm (such as a Kalman filter) and the corresponding weights assigned in real time, again using formulas (1) and (2).
  • As noted above, the error of optical measurement is usually small, so the optical-based weight is relatively large: while the optical markers are visible the object's displacement gradually approaches the optical-marker-based position information, and only when that information cannot be obtained, because of occlusion or overlap, is its weight set to 0, the motion of the object then following the inertia-based position information. Obtaining inertia-based position information as a supplement thus compensates for the shortcoming of positioning the object with optical marker points alone.
  • For a better understanding of the embodiments of the present invention, two concrete examples are described below. The object to be measured may be a human body or another moving object; the invention is described here taking a human body as the example.
  • In one embodiment, shown in FIG. 7, the hybrid motion capture system relies primarily on inertia-based position information, supplemented by the position information of the optical marker points. The system includes: 16 inertial sensor modules 701 bound to the human body, 1 inertial marker point 702, one set of binocular cameras 703, and a receiving processor.
  • The inertial marker point is an inertial sensor module integrated with an active optical marker. The inertial sensor modules and the inertial marker point are mounted on the human body by means of a sensor suit and straps.
  • Each inertial sensor module includes a three-axis MEMS accelerometer, a three-axis MEMS gyroscope, a three-axis MEMS magnetometer, a CPU, an RF transceiver module, a battery, and a DC converter, as in FIG. 2.
  • The CPU processes the acceleration information, angular velocity information, and geomagnetic vector of the inertial sensors to obtain the spatial attitude information of the inertial sensor module 701, and then sends the inertial information and spatial attitude information to the receiving processor via the RF transceiver module.
  • The receiving processor includes an RF transceiver 704 and a PC connected to it. After receiving the attitude and inertial information of the sensor modules, the receiving processor obtains the spatial position information of the body part carrying each module by double integration of the acceleration signal; through human biomechanical constraints and the constraints of contact between the person and the environment, the integration error is corrected to obtain the final inertia-based position information (including spatial position and orientation information).
  • The active optical marker in the inertial marker point 702 is an infrared light-emitting diode, powered by the battery and DC conversion circuit, which emits infrared light once the inertial sensor module is switched on.
  • The binocular cameras 703 are fixed beside the activity venue on tripods, and the captured images are transmitted to the receiving processor via USB. After receiving the images, the receiving processor determines the spatial position of the optical marker according to the binocular positioning principle; a sketch of this triangulation follows.
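The patent does not detail the binocular positioning computation. The following Python sketch shows the standard linear (DLT) triangulation such a step typically uses; the two 3x4 camera projection matrices are assumed to be known from calibration, and all names are illustrative.

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear triangulation of one marker from two calibrated views.
    P1, P2: 3x4 projection matrices; uv1, uv2: pixel coordinates of the
    marker in each image. Returns the 3D point in world coordinates."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # homogeneous least-squares solution
    X = Vt[-1]
    return X[:3] / X[3]
```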
  • The receiving processor then integrates the inertia-based position with the optical-marker-based position to obtain the final spatial position of the human body. A specific implementation process follows.
  • First, the binocular cameras 703 are fixed on tripods beside the activity venue; the cameras are placed so that their effective image-acquisition area covers the person's field of activity as fully as possible, and their height and angle are chosen to minimize occlusion. For example, the cameras can be placed higher than the person's head and inclined downward at an angle.
  • The 16 inertial sensor modules 701 and the inertial marker point 702 are mounted on the body with the sensor suit and straps: the inertial marker point 702 is bound to the head with a strap, and the 16 inertial sensor modules 701 are attached to the torso and limbs through the sensor suit and straps. With the marker point on the head and the cameras above head height, the marker point is relatively rarely occluded.
  • After the equipment is worn, the wearer performs several calibration actions, such as a T-pose and a natural standing posture, to correct the installation error of the sensor modules. The calibration operation is a technique well known to those skilled in the art and is not described in detail here.
  • During capture, the inertial sensors (including the one in the inertial marker point) measure the inertial information (acceleration and angular velocity) and geomagnetic information of the body parts on which they are mounted. The CPU of each inertial sensor module processes the measured information to obtain the module's spatial attitude information, and then sends the inertial information and the computed spatial attitude information to the receiving processor beside the venue through the RF module.
  • The receiving processor obtains the inertia-based position information of each body part by double integration of the acceleration information, corrected through biomechanical constraints (such as the joint-connection constraint) and external constraints (such as the ground-contact constraint). Because of the camera's field of view, the camera can track the position of the marker point only within a certain area of the venue, as shown in FIG. 7.
  • The binocular cameras 703 send the captured images to the PC via USB; when the PC detects that both cameras of the binocular pair 703 see the marker point, it determines the spatial position of the marker point according to the binocular positioning principle.
  • The PC takes the spatial position of the marker point as an observation and integrates the marker-based spatial position with the inertia-based position through a filtering algorithm (such as a Kalman filter), so that the wearer's model smoothly approaches the marker-based position. If the PC cannot obtain the marker-based spatial position, because the wearer has left the camera positioning area or occlusion has occurred, the hybrid motion capture system generates the wearer's position information from the inertia-based position information.
  • The advantage of this embodiment is that the equipment is easy to set up and install, the venue can be changed easily, capture is not affected by occlusion, and at the same time the spatial position of the human body is positioned accurately, avoiding the integration drift of a purely inertial motion capture system.
  • In another embodiment, the hybrid motion capture system relies primarily on the position information of the optical marker points, supplemented by inertia-based position information.
  • The system includes multiple optical markers bound to the human body, multiple inertial sensor modules, two inertial marker points, multiple cameras fixed around the activity venue, and a receiving processor beside the venue.
  • The optical markers are fixed to parts of the body, such as the head, the trunk, and the limbs, that are not easily occluded and do not slide with the muscles.
  • The inertial sensor modules are highly miniaturized sensor modules attached to the finger joints and the backs of both hands.
  • The two inertial marker points are fixed at the wrists.
  • Each inertial marker point includes a three-axis MEMS accelerometer, a three-axis MEMS gyroscope, a three-axis MEMS magnetometer, a serial communication module, an RF transceiver module, a CPU module, a battery, a DC conversion module, and an optical tag.
  • Each inertial sensor module includes a three-axis MEMS accelerometer, a three-axis MEMS gyroscope, a three-axis MEMS magnetometer, a CPU module, and a serial communication module.
  • The inertial sensor modules on the two hands transmit the collected inertial information, and the module attitude information computed by their CPUs, to the inertial marker points through serial communication.
  • Each inertial marker point then transmits the spatial attitude (orientation) information and inertial information of every module of the whole hand to the receiving processor through its RF transceiver.
  • From the received inertial information and orientation information of each inertial sensor module, the receiving processor can calculate the position and orientation of the hand, including the fingers; a kinematic-chain sketch of this computation follows.
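As a rough illustration of such a computation, the following Python sketch chains per-segment orientations and known bone lengths into finger positions. The one-finger chain, the segment lengths, and the convention that each segment extends along its own x-axis are all hypothetical; the patent states only that hand posture is computed from the modules' orientation information under biomechanical constraints.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

BONE_LENGTHS = [0.08, 0.04, 0.025, 0.02]   # metres; illustrative values only

def chain_positions(base_pos, segment_rotations):
    """Accumulate joint positions along a wrist-to-fingertip chain: each segment
    extends along its own x-axis, rotated into the world frame by its orientation."""
    positions = [np.asarray(base_pos, dtype=float)]
    for rot, length in zip(segment_rotations, BONE_LENGTHS):
        positions.append(positions[-1] + rot.apply([length, 0.0, 0.0]))
    return positions

# Example: a slightly curled finger, each segment pitched 15 degrees further.
rots = [R.from_euler("y", 15 * i, degrees=True) for i in range(4)]
for p in chain_positions([0.0, 0.0, 0.0], rots):
    print(np.round(p, 4))
```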
  • The multiple cameras around the venue film the optical markers on the various parts of the body; from the received image information of the markers from the respective cameras, the receiving processor obtains the motion information of the body as a whole.
  • The receiving processor integrates the overall body motion information obtained from the optical markers with the hand motion information obtained from the inertial sensors, yielding whole-body motion information of the human body that includes the hand motion. A specific implementation process follows.
  • First, multiple optical cameras are installed around the designated venue and calibrated after installation. Optical markers, inertial sensor modules, and inertial marker points are then mounted on the designated parts of the body.
  • The system is switched on and the inertial sensors are calibrated: the hands wearing the inertial sensors are placed in a special mold at a specified position, and the calibration button on the receiving processor's operator interface is pressed. The processor calibrates the mounting orientation error of each inertial sensor module from the known pose of each hand bone and the orientation measured by the module. The calibration operation is a technique well known to those skilled in the art and is not described in detail here.
  • During capture, the optical cameras film the optical markers on the various parts of the body and transmit the images to the receiving processor in real time, and the receiving processor processes the images from the respective cameras to obtain the optical-marker-based motion information of the body.
  • The inertial sensor modules transmit the measured inertial information and the computed orientation information to the receiving processor in real time, and the receiving processor calculates the posture and position information of the hands according to the biomechanical constraints of the hand (such as joint-connection constraints).
  • Each inertial marker point is a point where an inertial sensor and an optical marker coincide; using this known condition, the integration error of the inertial sensors can be corrected to obtain accurate hand motion information. Combined with the optical-marker-based motion information obtained above, accurate motion information of the whole body, including both hands, is obtained; a simple form of the coincident-point correction is sketched below.
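The patent does not give the correction rule applied at the coincident point. One simple possibility, assumed here purely for illustration, is to re-anchor the inertially integrated hand chain on the optical fix of the wrist marker whenever that fix is available:

```python
import numpy as np

def correct_drift(hand_positions, wrist_inertial, wrist_optical):
    """Subtract the drift offset between the inertially integrated wrist position
    and its optically measured position from every point of the hand chain."""
    if wrist_optical is None:                # marker occluded: keep inertial estimate
        return [np.asarray(p) for p in hand_positions]
    offset = np.asarray(wrist_optical) - np.asarray(wrist_inertial)
    return [np.asarray(p) + offset for p in hand_positions]
```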
  • In this embodiment, inertial sensors are used for motion capture on the hands, where occlusion easily occurs, the other body parts use the relatively accurate optical method, and the inertial marker points are used for data integration at the junction of the two, so that the advantages of optical and inertial motion capture are combined and the fine movements of the human body are captured precisely.
  • Those skilled in the art will appreciate that embodiments of the present invention can be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
  • The computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means that implement the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, the instructions executed on the computer or other programmable device providing steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.

Abstract

A hybrid motion capture system and method. The hybrid motion capture system includes at least one inertial sensor module (101), at least one optical marker (102), at least two optical cameras (103), and a receiving processor (104); the inertial sensor module (101) is connected to the receiving processor (104), the optical cameras (103) are connected to the receiving processor (104), and the inertial sensor modules (101) and optical markers (102) are mounted on the object to be measured. The inertial sensor module (101) measures inertial information and spatial attitude information; the optical cameras (103) acquire image information of the optical markers (102); the receiving processor (104) generates inertia-based position information from the inertial information and spatial attitude information, generates optical-marker-based position information from the image information, and integrates the inertia-based position information with the optical-marker-based position information to generate the position information of the object to be measured. The hybrid motion capture system combines the advantages of optical motion capture and inertia-based motion capture while avoiding the respective disadvantages of the two capture modes.

Description

Hybrid motion capture system and method

TECHNICAL FIELD

The present invention relates to motion capture technology, and in particular to a hybrid motion capture system and method.

BACKGROUND

In recent years, motion capture technology has come into wide use for capturing and analyzing movement in sports. Motion capture technology records the motion of an object digitally; the technologies in common use today are mainly optical motion capture and motion capture based on inertial sensors.

An optical motion capture system includes a plurality of cameras arranged around the object to be measured, whose range of motion lies within the overlapping fields of view of the cameras. Distinctive reflective or luminous points are affixed to key parts of the object as markers for visual recognition and processing. After the system is calibrated, the cameras continuously film the object's motion and save the image sequences for analysis and processing, computing the spatial position of each marker point at each instant and thereby obtaining its accurate motion trajectory. The advantage of optical motion capture is that there are no restrictions from mechanical devices or cables, the object is allowed a large range of motion, and the sampling frequency is high. However, such a system can only capture motion within the overlapping fields of view of the cameras, and when the motion is complex the markers are easily confused or occluded, producing erroneous results.

Traditional mechanical inertial sensors have long been used for aircraft and ship navigation. With the rapid development of micro-electro-mechanical systems (MEMS) technology, micro inertial sensors have matured, and in recent years motion capture based on micro inertial sensors has been attempted. The basic method is to attach an inertial measurement unit (IMU) to the object to be measured so that it moves together with the object. An IMU usually includes a micro-accelerometer (measuring an acceleration signal) and a micro-gyroscope (measuring an angular velocity signal); by double integration of the acceleration signal and integration of the gyroscope signal, the position and orientation of the object can be obtained. Thanks to MEMS technology, the size and weight of the IMU can be made very small, so its influence on the motion of the measured object is slight; the requirements on the venue are low, it is unaffected by lighting and occlusion, and the permitted range of motion is large. However, inertia-based motion capture suffers from integration drift, which lowers capture accuracy.
U.S. Patent No. 8,203,487 discloses a motion capture system and method combining ultra-wideband (UWB) measurements with MEMS inertial measurements. The system comprises: 1) sensor units, each containing one or more UWB pulse transmitters and a set of inertial sensors; 2) a set of UWB receiver units that remotely receive the pulse signals sent by the UWB transmitters to obtain each sensor's time of arrival (TOA), the UWB transmitters being hardware-synchronized with the inertial sensors; and 3) a receiving processor that receives the time-of-arrival information and the inertial information and integrates them to obtain the position and posture of the object.

That system uses UWB in combination with inertial sensors for positioning. Because the positioning accuracy of UWB is poor, the combination with inertial sensors and certain algorithmic processing makes the captured trajectory smoother but does little to improve positioning accuracy. Moreover, UWB positioning works only in the horizontal plane and cannot position in the vertical direction; the above solution uses a pressure gauge to overcome this, but the accuracy of the pressure gauge itself is also poor. In addition, the solution requires multiple receivers to be set up, so re-erecting and debugging the equipment is time-consuming for motion capture that must change venues.

U.S. Patent Publication No. 2013/0028469 combines optical marker points with an inertial sensor to capture the position and posture of an object. An identification determining unit determines the position of the optical marker point in a 2D image, a depth determining unit determines its depth in a depth image, and an optical-marker-based estimator combines the 2D position with the depth from the depth map to obtain the 3D position of the marker point. Meanwhile, an inertia-based posture and position are obtained by an inertial sensor unit. Finally, an integration estimator assigns different weights to the marker-based position and the inertia-based position according to the object's speed and position and the condition of the marker-point signal, to obtain the final integrated position and posture.

Because this solution uses single-node motion capture, it cannot capture the complex motion of a multi-joint object.
SUMMARY OF THE INVENTION

The present invention provides a hybrid motion capture system and method that combine the advantages of optical motion capture and inertia-based motion capture while avoiding the respective disadvantages of the two capture modes.

To achieve the above object, an embodiment of the present invention provides a hybrid motion capture system comprising at least one inertial sensor module, at least one optical marker, at least two optical cameras, and a receiving processor. The inertial sensor module is wirelessly connected to the receiving processor, the optical cameras are connected to the receiving processor in a wired or wireless manner, and the inertial sensor modules and optical markers are mounted on the object to be measured, wherein:

the inertial sensor module is configured to measure its own inertial information and spatial attitude information;

the optical cameras are configured to acquire image information of the optical markers; and

the receiving processor receives the inertial information, spatial attitude information, and image information; generates inertia-based position information from the inertial information and spatial attitude information; generates optical-marker-based position information from the image information; and integrates the inertia-based position information with the optical-marker-based position information to generate the position information of the object to be measured.
In an embodiment, the inertial sensor module includes:

a three-axis MEMS accelerometer for measuring the acceleration information of the inertial sensor module itself;

a three-axis MEMS gyroscope for measuring the angular velocity information of the inertial sensor module itself;

a three-axis MEMS magnetometer for measuring the geomagnetic vector of the inertial sensor module itself;

a CPU, connected to the three-axis MEMS accelerometer, three-axis MEMS gyroscope, and three-axis MEMS magnetometer, which integrates the angular velocity information to generate a dynamic spatial orientation, generates a static absolute spatial orientation from the acceleration information and the geomagnetic vector, and corrects the dynamic spatial orientation with the static absolute spatial orientation to generate the spatial attitude information; and

an RF transceiver, connected to the CPU, which sends the spatial attitude information and the inertial information to the receiving processor, the inertial information including the acceleration information and the angular velocity information.

In an embodiment, some of the inertial sensor modules and some of the optical markers are integrated in pairs to form at least one inertial marker point.

In an embodiment, the receiving processor is specifically configured to double-integrate the acceleration to generate the inertia-based position information, and then to correct the inertia-based position information on the basis of biomechanical constraints and external constraints, generating corrected inertia-based position information.

In an embodiment, when integrating the inertia-based position information with the optical-marker-based position information, the receiving processor is specifically configured to: when the optical markers are occluded or overlap, generate the position information of the object to be measured from the corrected inertia-based position information; and when the optical-marker-based position is regained, compute the optical-marker-based measurement error a and the inertia-based measurement error b, and according to the measurement errors a and b assign weight A to the optical-marker-based position information and weight B to the corrected inertia-based position information, generating the position information of the object to be measured.

In an embodiment, the weight A and the weight B are calculated by the following formulas:
[Formulas (1) and (2) appear as images in the original publication.]
In an embodiment, the optical marker is a reflective passive optical marker.

In an embodiment, the optical marker is a luminous active optical marker.

In an embodiment, the optical cameras are a plurality of discrete monocular cameras, mounted in a given area either fixedly or on tripods.

In an embodiment, the optical cameras are at least one set of binocular or multi-view cameras, mounted in a given area either on tripods or fixedly.

To achieve the above object, an embodiment of the present invention further provides a hybrid motion capture method, applied to the hybrid motion capture system described above, the method comprising:

receiving the inertial information and spatial attitude information measured by the inertial sensor module, and generating inertia-based position information from the inertial information and spatial attitude information;

receiving the image information of the optical markers captured by the cameras, and generating optical-marker-based position information from the image information; and

integrating the inertia-based position information with the optical-marker-based position information to generate the position information of the object to be measured.

In an embodiment, generating the inertia-based position information from the inertial information and spatial attitude information comprises: double-integrating the acceleration to generate the inertia-based position information, and then correcting the inertia-based position information on the basis of biomechanical constraints and external constraints to generate corrected inertia-based position information.

In an embodiment, integrating the inertia-based position information with the optical-marker-based position information to generate the position information of the object to be measured comprises: when the optical markers are occluded or overlap, generating the position information of the object from the corrected inertia-based position information; and when the optical-marker-based position is regained, computing the optical-marker-based measurement error a and the inertia-based measurement error b, and according to the measurement errors a and b assigning weight A to the optical-marker-based position information and weight B to the corrected inertia-based position information, generating the position information of the object to be measured.

In an embodiment, the weight A and the weight B are calculated by the following formulas:
[Formulas (1) and (2) appear as images in the original publication.]
Through the present invention, the advantages of optical motion capture and inertia-based motion capture can be combined while the shortcomings of the two capture modes are avoided, achieving accurate capture of fine human motion.

BRIEF DESCRIPTION OF THE DRAWINGS

To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a first schematic structural diagram of a hybrid motion capture system according to an embodiment of the present invention;

FIG. 2 is a schematic structural diagram of the inertial sensor module 101 according to an embodiment of the present invention;

FIG. 3 is a structural block diagram of the receiving processor 104 according to an embodiment of the present invention;

FIG. 4 is a schematic structural diagram of an inertial marker point according to an embodiment of the present invention;

FIG. 5 is a second schematic structural diagram of a hybrid motion capture system according to an embodiment of the present invention;

FIG. 6 is a flowchart of a hybrid motion capture method according to an embodiment of the present invention;

FIG. 7 is a third schematic structural diagram of a hybrid motion capture system according to an embodiment of the present invention;

FIG. 8 is a schematic diagram of the capture and data-processing flow of a hybrid motion capture system for human motion according to an embodiment of the present invention.
DETAILED DESCRIPTION

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

As shown in FIG. 1, an embodiment of the present invention provides a hybrid motion capture system comprising at least one inertial sensor module 101, at least one optical marker 102, at least two optical cameras 103, and a receiving processor 104.

The inertial sensor module 101 is wirelessly connected to the receiving processor, for example through a radio-frequency (RF) transceiver. The optical cameras 103 are connected to the receiving processor 104 in a wired or wireless manner. The inertial sensor modules 101 and the optical markers 102 are respectively mounted on the object to be measured.

There may be multiple inertial sensor modules 101, installed at different positions on the object to be measured; each measures its own inertial information and spatial attitude information and transmits them to the receiving processor 104.

There may be multiple optical markers 102, installed at different positions on the object, and at least two optical cameras 103 are fixed in an area near the object. The optical cameras 103 film the image information of the optical markers 102 mounted at the different positions and transmit it to the receiving processor 104.

The receiving processor 104 receives the inertial information, spatial attitude information, and image information; it can generate inertia-based position information from the inertial information and spatial attitude information, generate optical-marker-based position information from the image information, and integrate the two to generate the position information of the object to be measured.

In the present invention, the inertial information and spatial attitude information are obtained through the inertial sensor modules 101, and the image information of the optical markers 102 installed at different positions on the object is obtained through the optical cameras 103; the receiving processor 104 then generates the object's inertia-based position information from the inertial information and spatial attitude information and the optical-marker-based position information from the image information, and finally integrates the two to generate the position information of the object to be measured. Through this hybrid motion capture system, the advantages of optical motion capture and inertia-based motion capture are combined while the respective disadvantages of the two capture modes are avoided.
In an embodiment, as shown in FIG. 2, the inertial sensor module 101 includes a three-axis MEMS accelerometer 201, a three-axis MEMS gyroscope 202, a three-axis MEMS magnetometer 203, a CPU 204, and an RF transceiver 205.

The three-axis MEMS accelerometer 201 measures the acceleration information of the inertial sensor module 101 itself, the three-axis MEMS gyroscope 202 measures the angular velocity information of the module itself, and the three-axis MEMS magnetometer 203 measures the geomagnetic vector of the module itself.

The CPU 204 is connected to the three-axis MEMS accelerometer 201, three-axis MEMS gyroscope 202, and three-axis MEMS magnetometer 203, and integrates the angular velocity information measured by the gyroscope 202 to generate a dynamic spatial orientation. The integration formula is:
θ_T = θ_0 + ∫_0^T ω_t dt  (shown as an image in the original),

where θ_T and θ_0 are spatial orientations and ω_t is the angular velocity; from this integration formula, the dynamic spatial orientation information is obtained.
The CPU 204 can also generate a static absolute spatial orientation from the acceleration information measured by the three-axis MEMS accelerometer 201 and the geomagnetic vector, and then correct the dynamic spatial orientation with the static absolute spatial orientation to generate the spatial attitude information of the inertial sensor module 101, that is, the spatial attitude information of the part of the measured object on which the inertial sensor is mounted.

The RF transceiver 205 is connected to the CPU 204; through it, the spatial attitude information of the measured object and the inertial information of the inertial sensor module 101 are sent to the receiving processor 104. In an embodiment, the inertial information includes the acceleration information and the angular velocity information.

The inertial sensor module 101 may also include a battery 206 and a DC converter 207.

As shown in FIG. 3, the receiving processor 104 includes a processor 301 and an RF transceiver 302 connected to it by wire; the RF transceiver 302 connects wirelessly to the RF transceiver 205, receives the inertial information, spatial attitude information, and image information from it, and sends them to the processor 301.

After receiving the inertial information and spatial attitude information, the processor 301 first double-integrates the acceleration in the inertial information to generate the inertia-based position information.

The double-integration formula is:
P = ∫_0^T v_t dt, with v_t = v_0 + ∫_0^t a_τ dτ  (shown as an image in the original),

where P denotes displacement, v velocity, a acceleration, T the end instant, 0 the initial instant, and t an intermediate instant.
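A minimal Python sketch of this double integration over sampled data, using rectangular sums; the sampling rate and names are illustrative, and the drift this step accumulates is exactly what the corrections described next are meant to remove.

```python
import numpy as np

def double_integrate(accel, dt, v0=None, p0=None):
    """Position from sampled acceleration: v_t = v_0 + cumsum(a)*dt, P = cumsum(v)*dt.
    accel is an (N, 3) array of world-frame acceleration with gravity removed."""
    accel = np.asarray(accel, dtype=float)
    v0 = np.zeros(3) if v0 is None else np.asarray(v0, dtype=float)
    p0 = np.zeros(3) if p0 is None else np.asarray(p0, dtype=float)
    velocity = v0 + np.cumsum(accel, axis=0) * dt      # first integration
    position = p0 + np.cumsum(velocity, axis=0) * dt   # second integration
    return position

# 1 s of constant 1 m/s^2 acceleration along x at 100 Hz gives roughly 0.5 m.
print(double_integrate(np.tile([1.0, 0.0, 0.0], (100, 1)), dt=0.01)[-1])
```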
In specific implementations, to make the inertia-based position information obtained above more accurate, it is corrected according to biomechanical constraints (such as the constraint that bones are connected at joints) and external constraints (such as the constraint of contact with the ground), generating corrected inertia-based position information.

The biomechanical constraint correction formula is P = P_a + K(P_θ - P_a), where P_a is the displacement of a given bone computed by double integration of acceleration, P_θ is the displacement of the same bone computed from the bone-connection relationships, the spatial orientation of each bone, and the spatial position of the base point, and K is a scale factor computed by Kalman filtering or other methods, its magnitude depending on the relative magnitudes of the errors of P_a and P_θ. Only the displacement correction for the bone-connection constraint is given here; other biomechanical constraints, such as the permitted degrees of freedom of each joint and the permitted range of relative motion between bones, are not repeated. As this correction shows, it requires the spatial attitude information, which includes the spatial orientation of each bone and the spatial position of the base point.

The external constraint correction formula is P' = P + (P_o - P_c), where P' is the corrected displacement of a body part, P is the computed displacement of that part before correction, P_c is the computed pre-correction displacement of the body part at the contact point, and P_o is the displacement of the external contact point. For example, when the human body is judged to be standing on one foot in contact with the ground, the computed displacement of the contacting sole is subtracted from the displacement of the ground at the contact point, and this displacement difference is added to the computed displacements of all body parts, yielding the corrected whole-body displacement. The same method also applies to correcting whole-body velocity and to other types of contact correction.
After receiving the image information of the optical markers, the processor 301 can generate optical-marker-based position information from it: by filming the image information of each optical marker from different angles with multiple optical cameras, the spatial coordinates of the marker at each instant, and hence the position information of the optical marker points, can be obtained.

The processor 301 further integrates the inertia-based position information with the optical-marker-based position information to generate the position information of the object to be measured. In an embodiment, the integration method is: when the optical markers are occluded or overlap, the position information of the object is generated from the corrected inertia-based position information.

When the optical-marker-based position is regained, the motion of the object is estimated from the optical-marker-based position information and the inertia-based position information simultaneously. The estimation method is: assign weight A to the optical-marker-based position information according to the optical-marker-based measurement error a, and weight B to the inertia-based position information according to the inertia-based measurement error b; position information with a small measurement error is given a large weight, and position information with a large measurement error is given a small weight.

In an embodiment, the relative error magnitudes a and b of the optical-marker-based and inertia-based measurements may be evaluated in advance, and the optical-marker-based position information given a fixed weight A and the inertia-based position information a fixed weight B; the weights A and B can be computed by the following formulas:
\( A = \frac{b}{a+b} \)    (1)
\( B = \frac{a}{a+b} \)    (2)
where a is the optical measurement error and b is the inertial measurement error described above.
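An illustrative sketch of this fixed-weight fusion, including the occlusion fallback described elsewhere in this document (function and variable names are assumptions of the sketch):

```python
import numpy as np

def fuse_positions(p_optical, p_inertial, err_optical, err_inertial):
    """Weighted fusion per formulas (1) and (2): A = b/(a+b), B = a/(a+b),
    so the lower-error source receives the higher weight."""
    if p_optical is None:              # marker occluded or overlapped
        return np.asarray(p_inertial)  # weight A is effectively 0
    a, b = err_optical, err_inertial
    A, B = b / (a + b), a / (a + b)
    return A * np.asarray(p_optical) + B * np.asarray(p_inertial)
```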
The computation of weights A and B is not limited to these formulas; other weight computation methods known to those skilled in the art are also applicable to the present invention and are not repeated here.
In another embodiment, a filtering algorithm (such as a Kalman filter) is used to evaluate the two measurement errors online and assign corresponding weights to the optical-marker-based and inertia-based position information in real time; the weights may likewise be computed with formulas (1) and (2).
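For the online variant, a one-dimensional Kalman-style measurement update illustrates the connection: fusing two estimates according to their variances reproduces the inverse-error weighting of formulas (1) and (2). The sketch below is illustrative only, not the system's actual filter:

```python
def kalman_fuse(x_inertial, var_inertial, z_optical, var_optical):
    """One scalar measurement update: treat the inertial position as the
    prediction and the optical position as the measurement. The gain
    K = var_inertial / (var_inertial + var_optical) mirrors weight A."""
    K = var_inertial / (var_inertial + var_optical)
    x = x_inertial + K * (z_optical - x_inertial)
    var = (1.0 - K) * var_inertial
    return x, var
```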
It should be noted that the optical measurement error is usually small, so the optical position information receives a relatively large weight; while the optical markers are visible, the object's displacement gradually approaches the optical-marker-based position. Only when the optical-marker-based position cannot be obtained, because of occlusion or overlap, is its weight set to 0, and the object's motion then follows the inertia-based position information.
A particular optical marker may be occluded, or marker positions may overlap; if only the optical-marker-based position information were used to locate the object, its position could not be obtained accurately. Obtaining inertia-based position information as a supplement to the optical-marker-based position information compensates for this deficiency of purely optical positioning.
The mixed motion capture system of Fig. 1 may include multiple inertial sensor modules 101 and multiple optical markers 102. An inertial sensor module 101 and an optical marker 102 may be integrated together to form an inertial marker point, and there may be multiple inertial marker points, as shown in Figs. 4 and 5.
In specific implementations, the mixed motion capture system may include at least one inertial marker point, multiple inertial sensor modules 101, and multiple optical markers 102; or at least one inertial marker point and multiple inertial sensor modules 101; or at least one inertial marker point and multiple optical markers 102; or multiple inertial marker points; or multiple inertial sensor modules 101 and at least one optical marker 102.
The optical markers may be reflective passive optical markers, such as retroreflective tags, or light-emitting active optical markers, such as infrared emitters.
The optical cameras 103 are placed in an area away from the object. In one embodiment, the optical cameras 103 are multiple discrete monocular cameras, installed in a given area by fixed mounting or on tripods. In another embodiment, the optical cameras 103 may be at least one set of binocular or multi-view cameras, installed in a given area on tripods or by fixed mounting.
As shown in Fig. 6, an embodiment of the present invention provides a mixed motion capture method, applicable to the mixed motion capture system described above. The mixed motion capture method includes:
Step 601: receiving the inertial information and spatial attitude information measured by the inertial sensor modules, and generating inertia-based position information from the inertial information and spatial attitude information;
Step 602: receiving the image information of the optical markers captured by the cameras, and generating optical-marker-based position information from the image information;
Step 603: integrating the inertia-based position information with the optical-marker-based position information to obtain the final position information of the object under measurement.
The mixed motion capture method may be executed by the receiving processor 104 of the mixed motion capture system, although the present invention is not limited thereto.
The present invention generates the object's inertia-based position information from the inertial information and spatial attitude information, generates optical-marker-based position information from the image information, and finally integrates the two to generate the object's position information. This mixed motion capture method combines the advantages of optical and inertial motion capture while avoiding the respective drawbacks of the two approaches.
In a specific implementation of step 601, after the inertial information and spatial attitude information are received, the acceleration in the inertial information is first double-integrated to generate the inertia-based position information.
The double-integration formula is:
\( P = \int_0^T v_t \, dt, \qquad v_t = v_0 + \int_0^t a_\tau \, d\tau \)
where P denotes displacement, v velocity, and a acceleration; T is the end time, 0 the initial time, and t an intermediate time.
In a specific implementation, to make the inertia-based position information more accurate, it must be corrected according to biomechanical constraints (e.g., constraints of connected joints) and external constraints (e.g., contact with the ground), generating corrected inertia-based position information.
The biomechanical constraint correction formula is P = Pa + K(Pθ - Pa), where Pa is the displacement of a bone computed by double-integrating acceleration, Pθ is the displacement of the same bone computed from the bone connectivity, the spatial orientation of each bone, and the spatial position of the base point, and K is a scale factor computed by Kalman filtering or another method, its magnitude depending on the relative sizes of the errors of Pa and Pθ. Only the displacement correction for the connected-bones biomechanical constraint is given here; other biomechanical constraints, such as the allowed degrees of freedom of each joint and the allowed range of relative motion between bones, are not repeated. As this correction shows, it requires the spatial attitude information.
The external constraint correction formula is P′ = P + (Po - Pc), where P′ is the corrected displacement of a body part, P is the computed displacement of that part before correction, Pc is the computed pre-correction displacement of the body part at the contact point, and Po is the displacement of the external contact point. For example, when the body is judged to be standing on one foot on the ground, the computed displacement of the grounded sole is subtracted from the displacement of the ground at the contact point, and this displacement difference is added to the computed displacements of all body parts, yielding the corrected whole-body displacement. The same correction method applies to whole-body velocity and to other types of contact.
In one embodiment, in a specific implementation of step 602, after the image information of the optical markers is received, the optical-marker-based position information is generated from it: multiple optical cameras photograph each optical marker from different angles, giving the marker's spatial coordinates at each instant and hence the position information of the object.
In one embodiment, in step 603 the inertia-based position information and the optical-marker-based position information are integrated to generate the object's position information. The integration method is: when an optical marker is occluded or overlapped, the object's position information is generated from the corrected inertia-based position information described above.
When the optical-marker-based position becomes available again, the motion of the object is estimated from both the optical-marker-based and the inertia-based position information. The estimation method is: weight A is assigned to the optical-marker-based position information according to the optical measurement error a, and weight B is assigned to the inertia-based position information according to the inertial measurement error b, such that position information with a smaller measurement error receives a larger weight and position information with a larger measurement error receives a smaller weight.
In one embodiment, the relative error magnitudes a and b of the optical and inertial measurements can be evaluated in advance, and a fixed weight A assigned to the optical-marker-based position information and a fixed weight B to the inertia-based position information; weights A and B can be computed by formulas (1) and (2) above.
The computation of weights A and B is not limited to these formulas; other weight computation methods known to those skilled in the art are also applicable to the present invention and are not repeated here.
In another embodiment, a filtering algorithm (such as a Kalman filter) is used to evaluate the two measurement errors online and assign corresponding weights to the optical-marker-based and inertia-based position information in real time; the weights may likewise be computed with formulas (1) and (2) above.
It should be noted that the optical measurement error is usually small, so the optical position information receives a relatively large weight; while the optical markers are visible, the object's displacement gradually approaches the optical-marker-based position. Only when the optical-marker-based position cannot be obtained, because of occlusion or overlap, is its weight set to 0, and the object's motion then follows the inertia-based position information.
A particular optical marker may be occluded, or marker positions may overlap; if only the optical-marker-based position information were used to locate the object, its position could not be obtained accurately. Obtaining inertia-based position information as a supplement to the optical-marker-based position information compensates for this deficiency of purely optical positioning.
For a better understanding of the embodiments of the present invention, two specific examples are described below. The object under measurement may be a human body or another moving object; the present invention is described using a human body only as an example.
In one embodiment, as shown in Fig. 7, the mixed motion capture system is primarily based on inertial position information, supplemented by optical-marker-based position information. The system includes: 16 inertial sensor modules 701 bound to the human body, 1 inertial marker point 702, a set of binocular cameras 703, and a receiving processor. The inertial marker point is an inertial sensor module integrated with an active optical marker. The inertial sensor modules and the inertial marker point are attached to the body with a sensor suit and straps.
Each inertial sensor module includes a three-axis MEMS accelerometer, a three-axis MEMS gyroscope, a three-axis MEMS magnetometer, a CPU, an RF transceiver module, a battery, and a DC converter, as shown in Fig. 2. The CPU processes the accelerometer, gyroscope, and magnetometer measurements to obtain the spatial attitude information of the inertial sensor module 701, and then sends the inertial information and spatial attitude information to the receiving processor via the RF transceiver module. The receiving processor includes an RF transceiver 704 and a PC connected to it. After receiving the attitude and inertial information of the sensor modules, the receiving processor double-integrates the acceleration signals to obtain the spatial position of the body parts where the modules are mounted, and corrects the integration error through human biomechanical constraints and contact constraints between the human and the environment, obtaining the final inertia-based position information of the body (including spatial position and orientation).
The active optical marker in the inertial marker point 702 is an infrared LED, powered by the battery and DC conversion circuit, which emits infrared light once the inertial sensor module is powered on. The binocular cameras 703 are fixed on a tripod at the edge of the activity area and send the captured video to the receiving processor via USB. On receiving the video, the receiving processor determines the spatial position of the optical marker according to the binocular positioning principle, and then integrates the inertia-based position with the optical-marker-based position to obtain the final spatial position of the body. The specific implementation process is as follows.
When using the mixed motion capture system of Fig. 7, the binocular cameras 703 are first mounted on a tripod at one side of the activity area. The cameras are placed so that their effective image capture region covers the activity area as fully as possible, and their height and angle are chosen to minimize occlusion; when the inertial marker point 702 is mounted on the head, the cameras can be placed higher than the head and tilted downward. The 16 inertial sensor modules 701 and the inertial marker point 702 are attached to the body with a sensor suit and straps: the inertial marker point 702 is strapped to the head, and the 16 inertial sensor modules 701 are attached to the torso and limbs. With the marker point on the head and the cameras above head height, the probability of the head marker being occluded is relatively small. After installation, the system is powered on and the connections between the parts are established. The wearer then performs several calibration poses as instructed, such as a T-pose and a natural standing pose, to correct the mounting errors of the sensor modules; these calibration routines are well known to those skilled in the art and are not repeated here.
The wearer can then move freely, and the mixed motion capture system captures the body's motion. The capture and data processing flow is shown in Fig. 8: the inertial sensors (including the one in the inertial marker point) measure the inertial information (acceleration and angular velocity) and geomagnetic information at the body parts where they are mounted; the CPU of each inertial sensor module processes the measurements to obtain the module's spatial attitude information, and then sends the inertial information and the computed attitude information via the RF module to the receiving processor beside the area. After receiving the orientation and inertial information of each sensor, the receiving processor double-integrates the acceleration to obtain the inertia-based position of the mounted body parts, and corrects it with biomechanical constraints (e.g., connected joints) and external constraints (e.g., ground contact). Because of the cameras' field of view, the marker point can only be tracked within a certain region of the area, as shown in Fig. 7. The binocular cameras 703 send the captured images to the PC via USB; if the PC detects the marker point in both camera images, it determines the marker's spatial position according to the binocular positioning principle. The PC treats this spatial position as an observation and, through a filtering algorithm (such as a Kalman filter), integrates the marker-based position with the inertia-based position so that the wearer's model smoothly approaches the marker-based position. If the PC cannot obtain the marker-based position because the wearer has walked out of the camera positioning region or an occlusion has occurred, the mixed motion capture system generates the wearer's position from the inertia-based position information.
The advantages of this embodiment are: the equipment is easy to set up and install, the venue can be changed conveniently, and the system is not affected by occlusion, while it can locate the body's spatial position fairly accurately, avoiding the integration drift of a purely inertial motion capture system.
In another embodiment, the mixed motion capture system is primarily based on optical-marker position information, supplemented by inertia-based position information.
The mixed motion capture system includes multiple optical markers, multiple inertial sensor modules, and two inertial marker points bound to the human body, multiple cameras fixed around the activity area, and a receiving processor beside the area. The optical markers are fixed on body parts such as the head, torso, and limbs that are not easily occluded and do not slide with muscle movement. The inertial sensor modules are highly miniaturized and fixed on the finger joints of both hands and on the backs of the hands. The two inertial marker points are fixed at the two wrists. Each inertial marker point includes a three-axis MEMS accelerometer, a three-axis MEMS gyroscope, a three-axis MEMS magnetometer, a serial communication module, an RF transceiver module, a CPU module, a battery, a DC conversion module, and an optical marker. Each inertial sensor module includes a three-axis MEMS accelerometer, a three-axis MEMS gyroscope, a three-axis MEMS magnetometer, a CPU module, and a serial communication module. The inertial sensor modules on the hands send their collected inertial information and the CPU-computed spatial attitude information to the inertial marker points by serial communication; the inertial marker points send the spatial attitude (orientation) and inertial information of all modules on the hand to the receiving processor via the RF transceiver. From the received inertial information and module orientations, the receiving processor computes the position and orientation of the hands (including the fingers). The cameras around the area photograph the optical markers on the body, and from the marker images received from the cameras the receiving processor obtains the overall motion information of the body. The receiving processor integrates the overall motion obtained from the optical markers with the hand motion obtained from the inertial sensors to obtain full-body motion information that includes hand motion. The specific implementation process is as follows.
First, multiple optical cameras are set up around the designated area and calibrated after installation. The optical markers, inertial sensor modules, and inertial marker points are then mounted at the designated body positions. After installation, the system is powered on and the inertial sensors are calibrated: the hand wearing the inertial sensors is placed into a special mold arranged at a designated position, and the calibration button on the receiving processor's interface is pressed; the processor calibrates the mounting orientation errors of the inertial sensor modules from the known positions of the hand bones and the orientations measured by the modules. This calibration routine is well known to those skilled in the art and is not repeated here.
The user can then perform various activities in the area, such as performances or simulated manipulation training in a training scenario. The optical cameras photograph the optical markers on the body parts and transmit the images to the receiving processor in real time; the receiving processor processes the images from the cameras to obtain optical-marker-based motion information of the body. The inertial sensor modules send their measured inertial information and computed orientation information to the receiving processor in real time, and the receiving processor computes the attitude and position of the hands from the biomechanical constraints of the hand (e.g., connected joints). The inertial marker point is the coincident point of an inertial sensor and an optical marker; with this known condition, the integration error of the inertial sensors can be corrected, yielding accurate hand motion. Combined with the optical-marker-based motion information obtained before, precise full-body motion information (including both hands) is obtained.
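As an illustrative sketch of that correction at the coincident point (segment names and coordinates are assumptions of the sketch), the optically measured wrist position can pin the inertially integrated hand chain, with the drift offset propagated to the fingers:

```python
import numpy as np

def pin_hand_to_marker(hand_positions, wrist_optical):
    """The inertial marker point makes the wrist position known optically;
    shifting the whole inertially integrated hand chain by the wrist's drift
    removes the accumulated integration error at the coincident point."""
    drift = np.asarray(wrist_optical) - np.asarray(hand_positions["wrist"])
    return {seg: np.asarray(p) + drift for seg, p in hand_positions.items()}

# e.g. snap a drifted hand chain back onto the optically measured wrist:
hand = {"wrist": [0.52, 0.11, 0.93], "index_tip": [0.61, 0.13, 0.91]}
hand = pin_hand_to_marker(hand, [0.50, 0.10, 0.92])
```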
In this embodiment, the easily occluded hands are captured with inertial sensors while the other body parts are captured with the more precise optical method, and the inertial marker points at the junction of the two assist in data integration, combining the advantages of optical and inertial motion capture and achieving accurate capture of fine human motion.
Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device create means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce a computer-implemented process, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Specific examples have been used herein to explain the principles and implementations of the present invention; the above descriptions of the embodiments are intended only to help understand the method and core idea of the present invention. A person of ordinary skill in the art may, in accordance with the idea of the present invention, make changes to the specific implementations and the scope of application. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (14)

  1. A mixed motion capture system, characterized in that the mixed motion capture system comprises: at least one inertial sensor module, at least one optical marker, at least two optical cameras, and a receiving processor; the inertial sensor module is connected to the receiving processor wirelessly, the optical cameras are connected to the receiving processor by wire or wirelessly, and the inertial sensor module and the optical marker are mounted on an object under measurement; wherein,
    the inertial sensor module is configured to measure its own inertial information and spatial attitude information;
    the optical cameras are configured to acquire image information of the optical marker;
    the receiving processor receives the inertial information, spatial attitude information, and image information, generates inertia-based position information from the inertial information and spatial attitude information, generates optical-marker-based position information from the image information, and integrates the inertia-based position information with the optical-marker-based position information to generate position information of the object under measurement.
  2. The mixed motion capture system according to claim 1, characterized in that the inertial sensor module comprises:
    a three-axis MEMS accelerometer, configured to measure acceleration information of the inertial sensor module itself;
    a three-axis MEMS gyroscope, configured to measure angular velocity information of the inertial sensor module itself;
    a three-axis MEMS magnetometer, configured to measure the geomagnetic vector at the inertial sensor module itself;
    a CPU, connected to the three-axis MEMS accelerometer, three-axis MEMS gyroscope, and three-axis MEMS magnetometer, configured to integrate the angular velocity information to generate a dynamic spatial orientation, generate a static absolute spatial orientation from the acceleration information and the geomagnetic vector, and correct the dynamic spatial orientation with the static absolute spatial orientation to generate the spatial attitude information;
    an RF transceiver, connected to the CPU, configured to send the spatial attitude information and the inertial information to the receiving processor, the inertial information comprising the acceleration information and the angular velocity information.
  3. The mixed motion capture system according to claim 2, characterized in that some of the inertial sensor modules are integrated pairwise with some of the optical markers to form at least one inertial marker point.
  4. The mixed motion capture system according to claim 3, characterized in that the receiving processor is specifically configured to: double-integrate the acceleration to generate the inertia-based position information, and then correct the inertia-based position information according to biomechanical constraints and external constraints to generate corrected inertia-based position information.
  5. The mixed motion capture system according to claim 4, characterized in that, when integrating the inertia-based position information with the optical-marker-based position information, the receiving processor is specifically configured to: when an optical marker is occluded or overlapped, generate the position information of the object from the corrected inertia-based position information; when the optical-marker-based position becomes available again, compute an optical measurement error a and an inertial measurement error b, assign weight A to the optical-marker-based position information and weight B to the corrected inertia-based position information according to errors a and b, and generate the position information of the object under measurement.
  6. The mixed motion capture system according to claim 5, characterized in that weight A and weight B are computed as follows:
    \( A = \frac{b}{a+b} \)
    \( B = \frac{a}{a+b} \)
  7. The mixed motion capture system according to any one of claims 1-5, characterized in that the optical marker is a reflective passive optical marker.
  8. The mixed motion capture system according to any one of claims 1-5, characterized in that the optical marker is a light-emitting active optical marker.
  9. The mixed motion capture system according to any one of claims 1-5, characterized in that the optical cameras are multiple discrete monocular cameras, installed in a given area by fixed mounting or on tripods.
  10. The mixed motion capture system according to any one of claims 1-5, characterized in that the optical cameras are at least one set of binocular or multi-view cameras, installed in a given area on tripods or by fixed mounting.
  11. A mixed motion capture method, applied to the mixed motion capture system according to any one of claims 1-3, characterized in that the mixed motion capture method comprises:
    receiving the inertial information and spatial attitude information measured by the inertial sensor module, and generating inertia-based position information from the inertial information and spatial attitude information;
    receiving the image information of the optical marker captured by the cameras, and generating optical-marker-based position information from the image information;
    integrating the inertia-based position information with the optical-marker-based position information to generate the position information of the object under measurement.
  12. The mixed motion capture method according to claim 11, characterized in that generating inertia-based position information from the inertial information and spatial attitude information comprises: double-integrating the acceleration to generate the inertia-based position information, and then correcting the inertia-based position information according to biomechanical constraints and external constraints to generate corrected inertia-based position information.
  13. The mixed motion capture method according to claim 12, characterized in that integrating the inertia-based position information with the optical-marker-based position information to generate the position information of the object under measurement comprises: when an optical marker is occluded or overlapped, generating the position information of the object from the corrected inertia-based position information; when the optical-marker-based position becomes available again, computing an optical measurement error a and an inertial measurement error b, assigning weight A to the optical-marker-based position information and weight B to the corrected inertia-based position information according to errors a and b, and generating the position information of the object under measurement.
  14. The mixed motion capture method according to claim 13, characterized in that weight A and weight B are computed as follows:
    \( A = \frac{b}{a+b} \)
    \( B = \frac{a}{a+b} \)
PCT/CN2015/079346 2015-05-20 2015-05-20 Mixed motion capture system and method WO2016183812A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2015/079346 WO2016183812A1 (zh) 2015-05-20 2015-05-20 Mixed motion capture system and method
US15/817,373 US10679360B2 (en) 2015-05-20 2017-11-20 Mixed motion capture system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/079346 WO2016183812A1 (zh) 2015-05-20 2015-05-20 Mixed motion capture system and method

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/817,373 Continuation-In-Part US10679360B2 (en) 2015-05-20 2017-11-20 Mixed motion capture system and method

Publications (1)

Publication Number Publication Date
WO2016183812A1 (zh)

Family

ID=57319137

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/079346 WO2016183812A1 (zh) 2015-05-20 2015-05-20 Mixed motion capture system and method

Country Status (2)

Country Link
US (1) US10679360B2 (zh)
WO (1) WO2016183812A1 (zh)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10295651B2 (en) * 2016-09-21 2019-05-21 Pinhas Ben-Tzvi Linear optical sensor arrays (LOSA) tracking system for active marker based 3D motion tracking
US11331151B2 (en) * 2017-06-19 2022-05-17 Techmah Medical Llc Surgical navigation of the hip using fluoroscopy and tracking sensors
WO2019087658A1 (ja) * 2017-11-01 2019-05-09 Sony Corporation Information processing device, information processing method, and program
US10949992B2 (en) * 2018-04-12 2021-03-16 Francis Bretaudeau Localization system with a cooperative optronic beacon
JP2020187623A (ja) * 2019-05-16 2020-11-19 Sony Interactive Entertainment Inc. Posture estimation system, posture estimation device, error correction method, and error correction program
CN111947650A (zh) * 2020-07-14 2020-11-17 Hangzhou Ruisheng Ocean Instrument Co., Ltd. Fusion positioning system and method based on optical tracking and inertial tracking
CN112073909B (zh) * 2020-08-20 2022-05-24 Harbin Engineering University UWB base-station position error compensation method based on UWB/MEMS integration
CN114697690A (zh) * 2020-12-30 2022-07-01 光阵三维科技有限公司 System and method for extracting a specific stream from multiple combined transmitted streams for playback
CN112729346B (zh) * 2021-01-05 2022-02-11 Beijing Noitom Technology Ltd. Status prompting method and device for an inertial motion capture sensor

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8203487B2 (en) * 2009-08-03 2012-06-19 Xsens Holding, B.V. Tightly coupled UWB/IMU pose estimation system and method
CN102706336A (zh) * 2012-05-14 2012-10-03 Shanghai Maritime University Portable motion capture device and method of use thereof
US20130028469A1 (en) * 2011-07-27 2013-01-31 Samsung Electronics Co., Ltd Method and apparatus for estimating three-dimensional position and orientation through sensor fusion
CN103279186A (zh) * 2013-05-07 2013-09-04 Lanzhou Jiaotong University Multi-target motion capture system fusing optical positioning and inertial sensing
CN104267815A (zh) * 2014-09-25 2015-01-07 黑龙江节点动画有限公司 Motion capture system and method based on inertial sensing technology

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101689304A (zh) * 2007-07-10 2010-03-31 Koninklijke Philips Electronics N.V. Object motion capturing system and method
US9341704B2 (en) * 2010-04-13 2016-05-17 Frederic Picard Methods and systems for object tracking
US9396385B2 (en) * 2010-08-26 2016-07-19 Blast Motion Inc. Integrated sensor and video motion analysis method

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10509469B2 (en) 2016-04-21 2019-12-17 Finch Technologies Ltd. Devices for controlling computers based on motions and positions of hands
US10838495B2 (en) 2016-04-21 2020-11-17 Finch Technologies Ltd. Devices for controlling computers based on motions and positions of hands
US10705113B2 (en) 2017-04-28 2020-07-07 Finch Technologies Ltd. Calibration of inertial measurement units attached to arms of a user to generate inputs for computer systems
US11093036B2 (en) 2017-05-16 2021-08-17 Finch Technologies Ltd. Tracking arm movements to generate inputs for computer systems
US10379613B2 (en) 2017-05-16 2019-08-13 Finch Technologies Ltd. Tracking arm movements to generate inputs for computer systems
US10534431B2 (en) 2017-05-16 2020-01-14 Finch Technologies Ltd. Tracking finger movements to generate inputs for computer systems
US10540006B2 (en) 2017-05-16 2020-01-21 Finch Technologies Ltd. Tracking torso orientation to generate inputs for computer systems
US10521011B2 (en) 2017-12-19 2019-12-31 Finch Technologies Ltd. Calibration of inertial measurement units attached to arms of a user and to a head mounted device
US20220178692A1 (en) * 2017-12-21 2022-06-09 Mindmaze Holding Sa System, method and apparatus of a motion sensing stack with a camera system
US10509464B2 (en) 2018-01-08 2019-12-17 Finch Technologies Ltd. Tracking torso leaning to generate inputs for computer systems
US11016116B2 (en) 2018-01-11 2021-05-25 Finch Technologies Ltd. Correction of accumulated errors in inertial measurement units attached to a user
US11474593B2 (en) 2018-05-07 2022-10-18 Finch Technologies Ltd. Tracking user movements to control a skeleton model in a computer system
US10635166B2 (en) 2018-06-01 2020-04-28 Finch Technologies Ltd. Motion predictions of overlapping kinematic chains of a skeleton model used to control a computer system
US10860091B2 (en) 2018-06-01 2020-12-08 Finch Technologies Ltd. Motion predictions of overlapping kinematic chains of a skeleton model used to control a computer system
US10416755B1 (en) 2018-06-01 2019-09-17 Finch Technologies Ltd. Motion predictions of overlapping kinematic chains of a skeleton model used to control a computer system
US11009941B2 (en) 2018-07-25 2021-05-18 Finch Technologies Ltd. Calibration of measurement units in alignment with a skeleton model to control a computer system
US11531392B2 (en) 2019-12-02 2022-12-20 Finchxr Ltd. Tracking upper arm movements using sensor modules attached to the hand and forearm
CN112256125A (zh) * 2020-10-19 2021-01-22 The 28th Research Institute of China Electronics Technology Group Corporation Motion capture system and method based on laser large-space positioning and optical-inertial complementarity
CN112256125B (zh) * 2020-10-19 2022-09-13 The 28th Research Institute of China Electronics Technology Group Corporation Motion capture system and method based on laser large-space positioning and optical-inertial complementarity

Also Published As

Publication number Publication date
US10679360B2 (en) 2020-06-09
US20180089841A1 (en) 2018-03-29

Similar Documents

Publication Publication Date Title
WO2016183812A1 (zh) Mixed motion capture system and method
CN104834917A (zh) Mixed motion capture system and method
US8165844B2 (en) Motion tracking system
Roetenberg et al. Xsens MVN: Full 6DOF human motion tracking using miniature inertial sensors
JP6852673B2 (ja) Sensor device, sensor system, and information processing device
KR101751760B1 (ko) Method for estimating gait factors using lower-limb joint angles
KR101214227B1 (ko) Motion tracking method
JP6145072B2 (ja) Method and device for acquiring the position of a sensor module, and motion measurement method and device
CN111091587B (zh) Low-cost motion capture method based on visual markers
US20150375108A1 (en) Position sensing apparatus and method
CN110609621B (zh) Attitude calibration method and micro-sensor-based human motion capture system
KR101080078B1 (ko) Integrated-sensor-based motion capture system
JP6288858B2 (ja) Method and device for estimating the positions of optical markers in optical motion capture
CN109284006B (zh) Human motion capture device and method
JP5807290B2 (ja) Autonomous system and method for determining information representing the motion of a joint chain
CN108413965A (zh) Integrated indoor-outdoor inspection robot system and inspection robot navigation method
RU121947U1 (ru) Motion capture system
WO2016033717A1 (zh) Combined motion capture system
WO2015109442A1 (zh) Multi-node motion measurement and analysis system
KR20120059824A (ko) Method and system for acquiring real-time motion information using composite sensors
CN109453505B (zh) Multi-joint tracking method based on a wearable device
PL241476B1 (pl) Method for determining the position of an object, in particular a human
KR102253298B1 (ko) Golf putting line measuring device
CN111158482B (zh) Human motion and posture capture method and system
WO2017005591A1 (en) Apparatus and method for motion tracking of at least a portion of a limb

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15892191

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15892191

Country of ref document: EP

Kind code of ref document: A1