WO2018054338A1 - Motion capture apparatus and system

Info

Publication number
WO2018054338A1
Authority
WO
WIPO (PCT)
Prior art keywords
motion capture
motion
optical
signals
capture apparatus
Application number
PCT/CN2017/102793
Other languages
French (fr)
Inventor
Longwei LI
Gangning CHENG
Yu Qiao
Qiaoyu SONG
Qiang Li
Yuanhui HE
Original Assignee
Shanghai Noitom Motion Picture Technology Ltd
Beijing Noitom Technology Ltd.
Application filed by Shanghai Noitom Motion Picture Technology Ltd and Beijing Noitom Technology Ltd.
Publication of WO2018054338A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/0304 Detection arrangements using opto-electronic means
    • G06F3/0325 Detection arrangements using opto-electronic means using a plurality of light emitters or reflectors or a plurality of detectors forming a reference frame from which to derive the orientation of the object, e.g. by triangulation or on the basis of reference deformation in the picked up image
    • G06F3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors

Definitions

  • the present disclosure relates generally to the field of tracking technology, and specifically to a motion capture apparatus and a motion capture system containing the motion capture apparatus.
  • the motion capture technology typically involves data that can be directly calculated and processed by computer processors, such as data related to dimensional measurement, and measurement data for the position and/or orientation of an object in a physical space.
  • a typical motion capture process based on a conventional motion capture technology is as follows.
  • one or more tracking markers are arranged at key position(s)/location(s) on a moving object; then a motion capture system captures the position information of each of the one or more tracking markers; finally, data of three-dimensional coordinates can be further calculated after the computer processors process the position information of the one or more tracking markers.
  • the motion data as described above can be widely applied during film and television production, and can also be utilized in other fields such as gait analysis, biomechanics, ergonomics, and so on.
  • the present disclosure provides a motion capture system, which comprises a motion capture apparatus, a plurality of optical sensors, and a computing device.
  • the motion capture apparatus comprises an optical marker portion and an inertial sensor portion, configured to respectively transmit lights and motion signals.
  • Each of the plurality of optical sensors has a predetermined position and orientation, and is configured to detect the lights from the motion capture apparatus, to obtain optical signals based on the lights, and to send the optical signals to the computing device.
  • the computing device is configured to receive the motion signals from the motion capture apparatus and the optical signals from each of the plurality of optical sensors, to calculate a first set of motion parameters based on the optical signals and a second set of motion parameters based on the motion signals, and to integrate the first set of motion parameters and the second set of motion parameters to thereby obtain a third set of motion parameters having an improved accuracy.
  • the optical marker portion comprises at least three optical markers, which are arranged such that any three of the at least three optical markers are not aligned on a same straight line.
  • the at least three optical markers comprise at least one passive optical marker.
  • Each of the at least one passive optical marker is configured to have a unique feature.
  • the plurality of optical sensors include at least two first optical sensors.
  • Each of the at least two first optical sensors is configured to be able to detect an optical signal from each of the at least one passive optical marker upon receiving a reflected light therefrom.
  • the at least three optical markers comprise at least one active optical marker.
  • Each of the at least one active optical marker is configured to emit a unique light.
  • the plurality of optical sensors include at least two second optical sensors. Each of the at least two second optical sensors is configured to be able to detect an optical signal from each of the at least one active optical marker upon receiving a light emitted therefrom.
  • each of the at least two second optical sensors can comprise a CMOS (complementary metal-oxide semiconductor) camera or a CCD (charge-coupled device) camera.
  • each of the at least one active optical marker is configured to emit a light having a unique feature, and the unique feature is selected from one of a unique wavelength, a unique flickering frequency, or a unique flickering pattern.
  • each of the at least one active optical marker is configured to emit a light having a unique flickering frequency, or having a unique flickering pattern.
  • the motion capture apparatus further comprises a control circuit, configured to respectively control each of the at least one active optical marker to emit a unique light.
  • each of the at least three optical markers is an active optical marker, and the active optical marker can be an LED lamp. Accordingly, the number of the at least three optical markers can be 4 or 5.
  • the motion capture apparatus can further include a main body and a plurality of connecting rods.
  • the inertial sensor portion is disposed inside the main body.
  • a first end of each of the plurality of connecting rods is attached onto the main body, and a second end of each of the plurality of connecting rods is configured to stick out away from the main body, and to have one of the at least three optical markers disposed thereat.
  • each of the at least three optical markers is an active optical marker, and each of the at least three optical markers is electrically connected to the main body through a wiring inside one of the plurality of connecting rods.
  • the motion capture apparatus further comprises a first power supply, which is configured to provide power to the motion capture apparatus.
  • the first power supply includes at least one of a battery or a power adaptor.
  • the power adaptor is configured to connect to an external power source to thereby input a power therefrom to the motion capture apparatus.
  • the inertial sensor portion includes a gyroscope, which is configured to measure angular velocity sub-signals of the motion capture apparatus. Accordingly, the motion signals comprise the angular velocity sub-signals.
  • the inertial sensor portion includes a gyroscope and a magnetometer.
  • the magnetometer is configured to measure geomagnetic direction sub-signals of the motion capture apparatus. Accordingly, the motion signals comprise the angular velocity sub-signals and the geomagnetic direction sub-signals.
  • the inertial sensor portion includes a gyroscope and an accelerometer.
  • the accelerometer is configured to measure acceleration sub-signals of the motion capture apparatus.
  • the motion signals comprise the angular velocity sub-signals and the acceleration sub-signals.
  • the inertial sensor portion includes a gyroscope, a magnetometer, and an accelerometer. Accordingly, the motion signals comprise the angular velocity sub-signals, the geomagnetic direction sub-signals, and the acceleration sub-signals.
  • the inertial sensor portion can further include a communication sub-portion, which is coupled to the inertial sensor portion, and is configured to receive the motion signals from the inertial sensor portion, and then to transmit the motion signals.
  • the communication sub-portion can be configured to output the motion signals in a wired manner or in a wireless manner.
  • the motion capture apparatus can further include an external device, which is fixedly attached with the motion capture apparatus.
  • the motion capture apparatus can further include a mounting portion, and the external device is fixedly attached with the motion capture apparatus through the mounting portion.
  • the external device can include a display panel.
  • the computing device can be further configured to calculate virtual scene data based on the third set of motion parameters.
  • the motion capture apparatus can thus be further configured to receive the virtual scene data from the computing device; and the display panel is configured to display an image based on the virtual scene data.
  • the motion capture system substantially forms a virtual camera system.
  • the external device can further include a camera, which is configured to acquire filming data.
  • the motion capture apparatus can be further configured to transmit the filming data to the computing device, and the computing device can be further configured to calculate the virtual scene data based on the third set of motion parameters and on the filming data.
  • the motion capture system substantially forms a cooperative camera system.
  • FIG. 1 illustrates a diagram of a motion capture apparatus according to some embodiments of the present disclosure
  • FIG. 2 illustrates a diagram of the optical marker portion in the motion capture apparatus as shown in FIG. 1 according to some embodiments of the present disclosure
  • FIG. 3A illustrates a diagram of the optical marker portion as shown in FIG. 2 according to some embodiments where each optical marker is a passive optical marker;
  • FIG. 3B illustrates a diagram of the optical marker portion as shown in FIG. 2 according to some embodiments where each optical marker is an active optical marker;
  • FIG. 4A illustrates a diagram of the optical marker portion as shown in FIG. 3B according to some embodiments of the disclosure
  • FIG. 4B illustrates a diagram of the optical marker portion as shown in FIG. 3B according to some embodiments of the disclosure
  • FIG. 5 illustrates a structural diagram of the inertial sensor portion in the motion capture apparatus as shown in FIG. 1 according to some embodiments of the present disclosure
  • FIG. 6A illustrates a diagram of the inertial sensor portion as shown in FIG. 5 according to some embodiments of the disclosure
  • FIG. 6B illustrates a diagram of the inertial sensor portion as shown in FIG. 5 according to some embodiments of the disclosure
  • FIG. 7 illustrates a diagram of a motion capture apparatus according to some embodiments of the present disclosure
  • FIG. 8A illustrates a structural diagram of a motion capture apparatus according to one specific embodiment of the disclosure
  • FIG. 8B illustrates a planar view of an inside of the motion capture apparatus as shown in FIG. 8A;
  • FIG. 9A illustrates a motion capture apparatus according to some other embodiments of the disclosure, which is attached with a display panel
  • FIG. 9B illustrates a mounting panel of the motion capture apparatus as shown in FIG. 9A;
  • FIG. 10A illustrates a motion capture system according to some embodiments of the disclosure
  • FIG. 10B illustrates one specific embodiment of the motion capture system as shown in FIG. 10A;
  • FIG. 11 illustrates a virtual camera system based on the motion capture system as shown in FIG. 10A according to some embodiments of the disclosure.
  • FIG. 12 illustrates a cooperative camera system based on the motion capture system as shown in FIG. 10A according to some embodiments of the disclosure.
  • the present disclosure provides a motion capture apparatus.
  • FIG. 1 is a structural diagram of a motion capture apparatus according to some embodiments of the present disclosure. As shown in FIG. 1, the motion capture apparatus 001 includes an optical marker portion 100 and an inertial sensor portion 200.
  • the optical marker portion 100 is configured to transmit optical signals.
  • upon receiving the optical signals, a computing device (e.g. a computer) can determine, after computation, a first set of motion parameters of the motion capture apparatus 001 based on the optical signals.
  • the first set of motion parameters can include data regarding at least one of a position (or location), a posture, a displacement, a velocity, a rotation, an angular velocity, an acceleration, etc. of the motion capture apparatus 001, and can cover situations where the motion capture apparatus 001 is still and where the motion capture apparatus 001 is in motion (including linear movement and rotation).
  • the inertial sensor portion 200 is configured to transmit motion signals.
  • upon receiving the motion signals, the computing device (e.g. a computer) can determine, after computation, a second set of motion parameters of the motion capture apparatus 001 based on the motion signals.
  • the first set of motion parameters, obtained based on the optical signals transmitted from the optical marker portion 100, and the second set of motion parameters, obtained based on the motion signals transmitted from the inertial sensor portion 200, can then be integrated to obtain a third set of motion parameters of the motion capture apparatus 001 with improved accuracy.
  • the advantages of these two portions are thereby substantially combined to provide motion parameters of the motion capture apparatus 001 that have an improved accuracy.
  • an optical tracking system generally has a more accurate measurement of a position than an inertial tracking system, due to the fact that the inertial tracking system typically suffers from integration drift, where small errors in the measurement of acceleration are integrated into progressively larger errors in velocity, which are compounded into still greater errors in position.
  • the inertial tracking system generally has a more accurate measurement of a posture of an object than the optical tracking system, due to the fact that the optical tracking system is weaker in angular calculation, and this disadvantage worsens as the distance between the object and the detecting camera increases.
  • the advantages and disadvantages of the two tracking systems can thus complement each other, thereby allowing the generation of a third set of motion parameters of an improved accuracy.
  • different weights can be given to the position data and the posture data in the first set of motion parameters and in the second set of motion parameters to thereby obtain the position data and the posture data in the third set of motion parameters, as illustrated in the sketch below.
  • a higher weight is given to the position data from the first set of motion parameters, and a lower weight to the position data from the second set of motion parameters, in order to obtain an integrated position data in the third set of motion parameters.
  • a lower weight is given to the posture data from the first set of motion parameters, and a higher weight to the posture data from the second set of motion parameters, in order to obtain an integrated posture data in the third set of motion parameters.
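The weighted integration just described can be illustrated with a short sketch. The snippet below is an editorial reconstruction, not the patent's algorithm: the function name, the weight values, and the quaternion representation of posture are all assumptions. It blends positions linearly (favoring the optical estimate) and blends postures by spherical interpolation (favoring the inertial estimate).

```python
import numpy as np

def fuse_parameters(pos_opt, pos_imu, quat_opt, quat_imu,
                    w_pos_opt=0.9, w_quat_opt=0.2):
    """Blend the first (optical) and second (inertial) sets of motion
    parameters into a third set, weighting each source where it is
    most accurate. Weight values are illustrative assumptions only."""
    # Position: optical tracking is drift-free, so it gets the high weight.
    pos_fused = w_pos_opt * pos_opt + (1.0 - w_pos_opt) * pos_imu

    # Posture: inertial sensing resolves angles better, so the optical
    # quaternion gets the low weight; blend with spherical interpolation.
    q_opt = quat_opt / np.linalg.norm(quat_opt)
    q_imu = quat_imu / np.linalg.norm(quat_imu)
    dot = np.clip(np.dot(q_imu, q_opt), -1.0, 1.0)
    if dot < 0.0:                      # take the shorter rotational arc
        q_opt, dot = -q_opt, -dot
    theta = np.arccos(dot)
    if theta < 1e-6:                   # orientations nearly identical
        q_fused = q_imu
    else:                              # slerp from IMU toward optical
        t = w_quat_opt
        q_fused = (np.sin((1.0 - t) * theta) * q_imu
                   + np.sin(t * theta) * q_opt) / np.sin(theta)
    return pos_fused, q_fused / np.linalg.norm(q_fused)
```

In practice a Kalman-style filter would be the more common way to realize this complementarity; the fixed weights above simply mirror the qualitative scheme the text describes.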
  • FIG. 2 illustrates a structural diagram of the optical marker portion 100 as shown in FIG. 1.
  • the optical marker portion 100 comprises at least three optical markers 110, each configured to transmit an optical signal.
  • the at least three optical markers 110 are also configured such that any three of the at least three optical markers 110 are not aligned on a same straight line.
  • each individual optical marker 110 in the optical marker portion 100 is different from the others, which substantially allows each individual optical marker 110 and the optical signal transmitted thereby to form a one-to-one corresponding relationship.
  • each optical marker 110 in the optical marker portion 100 is configured to transmit a distinct optical signal corresponding thereto, thereby allowing each optical marker 110 in the optical marker portion 100 of the motion capture apparatus 001 to be uniquely identified by the corresponding optical signal.
  • each optical marker 110 in the optical marker portion 100 can be a passive optical marker 110a, for example a bright dot that reflects an environmental light, which is then detected by a first optical sensor 120a (such as a camera) disposed at a distance from the motion capture apparatus 001 to thereby allow the corresponding optical signal to be obtained.
  • each of the at least three optical markers 110 can have a unique feature; for example, it can take a different shape (e.g. a dot or a cross), or take a different color (e.g. red, blue, or green), to thereby allow the camera to differentiate among different optical markers 110 in the optical marker portion 100.
  • each optical marker 110 in the optical marker portion 100 can be an active optical marker 110b, such as a light-emitting diode (LED) lamp, which is configured to actively emit a light, which can then be detected by a second optical sensor 120b disposed at a distance from the motion capture apparatus 001 to thereby allow the corresponding optical signal to be obtained.
  • the second optical sensor 120b can be a CMOS (complementary metal-oxide semiconductor) camera or a CCD (charge-coupled device) camera.
  • each of the at least three active optical markers 110b is configured to emit a different light to thereby allow the second optical sensor 120b to differentiate among different active optical markers 110b in the optical marker portion 100. For example, the light emitted from each of the at least three active optical markers 110b can be configured to have a different wavelength, a different flickering frequency, or a different flickering pattern.
  • the optical marker portion 100 in the motion capture apparatus 001 consists of three active optical markers 110b, each configured to emit a light of one unique wavelength, for example, a red light, a green light, and a blue light, to thereby allow the three active optical markers 110b to be differentiated from one another.
  • the optical marker portion 100 in the motion capture apparatus 001 consists of three active optical markers 110b, each configured to emit a light flickering at a different frequency, as shown in Table 1.
  • the active optical marker #1 is configured to be on at every image frame (i.e. on at image frames #1, 2, 3, ...); the active optical marker #2 is configured to be on once per every two image frames (i.e. on at image frames #2, 4, 6, ...); and the active optical marker #3 is configured to be on once per every three image frames (i.e. on at image frames #3, 6, 9, ...).
  • the three active optical markers 110b can be differentiated from one another.
  • the optical marker portion 100 in the motion capture apparatus 001 consists of four active optical markers 110b, each of which is configured to flicker in a different pattern, as shown in Table 2.
  • the active optical marker #1 is configured to be on at image frames #1, 2, 3, 4, 5, 6, 7, 8, 9, ...; the active optical marker #2 is configured to be on at image frames #1, 3, 5, 7, 9, ...; the active optical marker #3 is configured to be on at image frames #1, 2, 4, 5, 7, 8, ...; and the active optical marker #4 is configured to be on at image frames #1, 2, 3, 5, 6, 7, 9, 10, 11, ... (image frames #10, 11 not shown).
  • in this way, the different active optical markers can be differentiated from one another, as illustrated in the identification sketch below.
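To make the flicker-based identification concrete, here is a minimal decoding sketch. It is not from the patent: the signature tuples paraphrase the Table 1 frequency scheme as described above (marker #1 lit at every frame, #2 at every second frame, #3 at every third frame), since the table itself is not reproduced here, and the function name is hypothetical.

```python
from itertools import cycle, islice

# Per-marker on/off signature over one repeating cycle of image frames
# (a reconstruction of the Table 1 scheme described in the text).
SIGNATURES = {
    1: (True,),                # lit at image frames 1, 2, 3, ...
    2: (False, True),          # lit at image frames 2, 4, 6, ...
    3: (False, False, True),   # lit at image frames 3, 6, 9, ...
}

def identify_marker(observed):
    """Match the per-frame visibility sequence of one tracked blob
    against the known flicker signatures; return the marker id, or
    None if no signature fits the observation window."""
    n = len(observed)
    for marker_id, signature in SIGNATURES.items():
        if list(observed) == list(islice(cycle(signature), n)):
            return marker_id
    return None

# A blob seen only at frames 3 and 6 of a six-frame window is marker #3.
assert identify_marker([False, False, True, False, False, True]) == 3
```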
  • each active optical marker 110b in the optical marker portion 100 can comprise an active light-emitting device, configured to actively emit a light.
  • the light-emitting device in each of the at least three active optical markers 110b can be a light-emitting diode (LED) light-emitting device (termed an LED lamp), which is configured to emit an infrared light, a visible light, or a light having a wavelength in a different range.
  • the LED lamp in each active optical marker 110b in the optical marker portion is configured to emit an infrared light, so as to allow the optical signal emitted therefrom to be transmitted with a relatively low level of interference from any object disposed close to the motion capture apparatus 001, or from structural limitations thereof.
  • each of the at least three active optical markers 110b can also be an active light source of another type, such as a fluorescent lamp. There are no limitations herein.
  • the optical marker portion 100 in the motion capture apparatus 001 further includes a first power supply 130, configured to provide power to each of the at least three active optical markers 110b in the optical marker portion 100 to thereby allow each active optical marker 110b to emit a light, as illustrated in FIG. 4A.
  • the optical marker portion 100 in the motion capture apparatus 001 can further include a first controller 140, which is electrically connected to the first power supply 130 and is further coupled to, and thereby configured to respectively control, each of the at least three active optical markers 110b in the optical marker portion 100 to respectively emit a different light, as illustrated in FIG. 4B.
  • the first controller 140 may control each active optical marker 110b to emit a light flickering at a different frequency, or flickering in a different pattern.
  • the first power supply 130 can comprise a battery that is disposed in the motion capture apparatus 001, which can directly (as shown in FIG. 4A) or indirectly (via the first controller 140 as shown in FIG. 4B) supply power to each of the at least three active optical markers 110b in the optical marker portion 100.
  • the first power supply 130 can comprise a power adaptor (such as a plug) whose input terminal is electrically connected to an input power supply (e.g. an external power source) to input a power therefrom, and whose output terminal is electrically connected to, and thereby outputs power to, each of the at least three active optical markers 110b in the optical marker portion 100 directly (as shown in FIG. 4A) or indirectly (via the first controller 140 as shown in FIG. 4B).
  • in some embodiments, each of the at least three optical markers 110 in the optical marker portion 100 is a passive optical marker 110a; in some other embodiments, each of the at least three optical markers 110 in the optical marker portion 100 is an active optical marker 110b; and the at least three optical markers 110 in the optical marker portion 100 can also include both a passive optical marker 110a and an active optical marker 110b according to some embodiments of the disclosure.
  • accordingly, both a first optical sensor 120a, configured to detect a reflected light from each passive optical marker 110a to thereby obtain a first optical signal corresponding thereto (as illustrated in FIG. 3A), and a second optical sensor 120b, configured to detect a light emitted from each active optical marker 110b to thereby obtain a second optical signal corresponding thereto (as illustrated in FIG. 3B), or alternatively a single optical sensor having both the functionality of the first optical sensor 120a and the functionality of the second optical sensor 120b, can be disposed at a distance from the motion capture apparatus 001.
  • in order to determine the first set of motion parameters of the motion capture apparatus 001 based on the optical signals transmitted from the at least three optical markers 110 in the optical marker portion 100, typically at least two optical sensors 120, each having a pre-determined location in the space, are needed to determine a location of each individual optical marker 110 through the principle of triangulation (see the sketch below).
  • at least three optical markers 110 in the optical marker portion 100, where any three of the at least three optical markers 110 are disposed in a non-linear manner in the space (i.e. not on a straight line), are typically required, such that the determined location information of each of the at least three optical markers 110 can be combined to calculate the first set of motion parameters, such as a position and a posture, of the motion capture apparatus 001.
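The triangulation principle referenced above can be sketched as follows. This is an illustrative midpoint solver under invented camera positions, not necessarily the method the patent contemplates.

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Estimate a marker's 3-D location from two optical sensors with
    pre-determined positions c1, c2 and sighting directions d1, d2,
    by taking the midpoint of the closest points on the two rays."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve [d1 -d2] @ [t1, t2]^T ~= c2 - c1 in the least-squares sense.
    A = np.stack([d1, -d2], axis=1)
    (t1, t2), *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
    return (c1 + t1 * d1 + c2 + t2 * d2) / 2.0

# Example: two cameras 2 m apart, both sighting a marker at (1, 0, 5).
c1, c2 = np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])
marker = np.array([1.0, 0.0, 5.0])
print(triangulate(c1, marker - c1, c2, marker - c2))  # ~[1. 0. 5.]
```

Once three or more non-collinear marker locations have been triangulated, the position and posture of the apparatus can be recovered by fitting the markers' known rigid layout to the triangulated points (for example with the Kabsch algorithm).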
  • FIG. 5 illustrates a structural diagram of the inertial sensor portion 200 as shown in FIG. 1.
  • the inertial sensor portion 200 includes a gyroscope 210, and optionally can also include a magnetometer 220 and an accelerometer 230 (as indicated by the box with a dotted line in FIG. 5) .
  • the gyroscope 210 is configured to measure an angular velocity sub-signal of the inertial sensor portion 200, which substantially also indicates an angular velocity of the motion capture apparatus 001.
  • the gyroscope 210 can be a gyroscope based on a microelectromechanical system (MEMS) , and can be, more specifically, a three-axis MEMS gyroscope.
  • the magnetometer 220 is configured to measure a geomagnetic direction sub-signal of the inertial sensor portion 200, which substantially also indicates a geomagnetic direction of the motion capture apparatus 001.
  • the magnetometer 220 can be a magnetometer based on a microelectromechanical system (MEMS) , and can be, more specifically, a three-axis MEMS magnetometer.
  • the accelerometer 230 is configured to measure an acceleration sub-signal of the inertial sensor portion 200, which substantially also indicates an acceleration of the motion capture apparatus 001.
  • the accelerometer 230 can be an accelerometer based on a microelectromechanical system (MEMS) , and can be, more specifically, a three-axis MEMS accelerometer.
  • the inertial sensor portion 200 can measure an angular velocity sub-signal of the motion capture apparatus 001, and the angular velocity sub-signal can be used to complement the first set of motion parameters of the motion capture apparatus 001 obtained based on the optical signals transmitted from the optical marker portion 100.
  • the above-mentioned disadvantage in angular calculation that intrinsically exists in the optical tracking system can thus be compensated, thereby leading to motion parameters of the motion capture apparatus 001 having an improved accuracy.
  • the inertial sensor portion 200 can also respectively measure a geomagnetic direction sub-signal and an acceleration sub-signal of the motion capture apparatus 001.
  • the geomagnetic direction sub-signal and the acceleration sub-signal can accompany the angular velocity sub-signal detected by the gyroscope 210 to provide a better complement to the first set of motion parameters of the motion capture apparatus 001 obtained based on the optical signals transmitted from the optical marker portion 100.
  • the angular velocity sub-signal, and optionally the geomagnetic direction sub-signal and the acceleration sub-signal, thus together form the motion signals that can be further transmitted to a computing device, which further calculates a second set of motion parameters of the motion capture apparatus 001 based on the motion signals measured by the inertial sensor portion 200.
  • the inertial sensor portion 200 can include merely a three-axis MEMS microgyroscope, which forms a three-axis inertial sensor in the motion capture apparatus 001 as disclosed herein. Accordingly, the motion signal transmitted to the computing device includes only an angular velocity sub-signal.
  • the inertial sensor portion 200 can, in addition to a three-axis MEMS microgyroscope, also include a three-axis magnetometer or a three-axis MEMS microaccelerometer, which together form a six-axis inertial sensor in the motion capture apparatus 001 as disclosed herein.
  • the motion signal transmitted to the computing device includes an angular velocity sub-signal, and one of a geomagnetic direction sub-signal and an acceleration sub-signal.
  • the inertial sensor portion 200 can include a three-axis MEMS microgyroscope, a three-axis MEMS magnetometer, and a three-axis MEMS microaccelerometer, which together form a nine-axis inertial sensor in the motion capture apparatus 001 as disclosed herein.
  • the motion signals transmitted to the computing device include a geomagnetic direction sub-signal, an angular velocity sub-signal, and an acceleration sub-signal (see the sketch below for an illustrative signal layout).
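The sketch below illustrates the three-, six-, and nine-axis signal compositions described above, together with the textbook first-order step by which the angular velocity sub-signal is integrated into a posture estimate. The container fields and function names are assumptions for illustration; the patent does not specify a data format.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class MotionSignals:
    """One sample of the motion signals sent to the computing device.
    Only the angular velocity sub-signal is always present (three-axis
    configuration); the geomagnetic and acceleration sub-signals appear
    in the six- and nine-axis configurations."""
    angular_velocity: np.ndarray               # rad/s, from the gyroscope
    geomagnetic: Optional[np.ndarray] = None   # unit vector, magnetometer
    acceleration: Optional[np.ndarray] = None  # m/s^2, accelerometer

def integrate_gyro(q, omega, dt):
    """Propagate an orientation quaternion q = [w, x, y, z] by one
    gyroscope sample omega (rad/s) over time step dt, using the
    first-order strapdown update q' = q + 0.5 * dt * Omega(omega) @ q."""
    wx, wy, wz = omega
    Omega = np.array([[0.0, -wx, -wy, -wz],
                      [ wx, 0.0,  wz, -wy],
                      [ wy, -wz, 0.0,  wx],
                      [ wz,  wy, -wx, 0.0]])
    q = q + 0.5 * dt * Omega @ q
    return q / np.linalg.norm(q)
```

In the nine-axis configuration, the geomagnetic and acceleration sub-signals provide absolute heading and gravity references that bound the drift of this integration, which is why richer inertial portions yield a better complement to the optical data.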
  • upon receiving the motion signals transmitted from the inertial sensor portion 200 of the motion capture apparatus 001, the computing device, such as a computer, can determine, after computation, the second set of motion parameters (such as a position, a posture, etc.) of the motion capture apparatus 001 based on the motion signals.
  • the inertial sensor portion 200 further comprises a communication sub-portion 240, which is coupled to the gyroscope 210, and optionally the magnetometer 220 and the accelerometer 230 in the inertial sensor portion 200.
  • the communication sub-portion 240 is configured to receive an angular velocity sub-signal from the gyroscope 210, and optionally a geomagnetic direction sub-signal from the magnetometer 220 and an acceleration sub-signal from the accelerometer 230, and to then output a motion signal after integration of the angular velocity sub-signal, and optionally the geomagnetic direction sub-signal and the acceleration sub-signal.
  • the communication sub-portion 240 can comprise a wired communication circuit, configured to transmit the motion signal in a wired manner through, for example, a signal adaptor.
  • the communication sub-portion 240 can comprise a wireless communication circuit, configured to transmit the motion signal in a wireless manner.
  • the inertial sensor portion 200 further comprises a second power supply 250, configured to provide power to each of the gyroscope 210, and optionally the magnetometer 220 and the accelerometer 230.
  • the inertial sensor portion 200 can further include a second controller 260, which is electrically connected to the second power supply 250 and is further coupled to, and thereby configured to respectively control, each of the gyroscope 210, and optionally the magnetometer 220 and the accelerometer 230 in the inertial sensor portion 200.
  • the second power supply 250 can comprise a battery that is disposed in the motion capture apparatus 001, which can directly (as shown in FIG. 6A) or indirectly (via the second controller 260 as shown in FIG. 6B) supply power to each of the gyroscope 210, and optionally the magnetometer 220 and the accelerometer 230 in the inertial sensor portion 200.
  • the second power supply 250 can comprise a power adaptor (such as a plug) whose input terminal is electrically connected to an input power supply (e.g. an external power source) to input a power therefrom, and whose output terminal is electrically connected to, and thereby outputs power to, each of the gyroscope 210, and optionally the magnetometer 220 and the accelerometer 230 in the inertial sensor portion 200, directly (as shown in FIG. 6A) or indirectly (via the second controller 260 as shown in FIG. 6B).
  • the first power supply 130 in the optical marker portion 100 and the second power supply 250 in the inertial sensor portion 200 together substantially form a power supply of the motion capture apparatus 001.
  • the first power supply 130 and the second power supply 250 can be separate sub-portions in, and configured to respectively power, the optical marker portion 100 and the inertial sensor portion 200 in the motion capture apparatus 001.
  • the first power supply 130 and the second power supply 250 can form an integrated power supply in the motion capture apparatus 001 which is individually connected to, and configured to respectively power, the optical marker portion 100 and the inertial sensor portion 200 in the motion capture apparatus 001.
  • the first controller 140 in the optical marker portion 100 and the second controller 260 in the inertial sensor portion 200 substantially form a controlling sub-portion of the motion capture apparatus 001, which can be separated or integrated in the motion capture apparatus 001.
  • the motion capture apparatus 001 further comprises a mounting portion 300, configured to provide a means for mounting, or attaching, the motion capture apparatus 001 to an external device 900.
  • the motion parameters (such as a position, a posture, a velocity, a displacement, an angular velocity, etc.) of the external device 900 can be substantially obtained by detecting the optical signals and the motion signals transmitted from the motion capture apparatus 001.
  • FIG. 8A illustrates a structural diagram of a motion capture apparatus according to one specific embodiment of the present disclosure.
  • the motion capture apparatus 001a includes a main body 400.
  • a total of four optical markers 110 in the optical marker portion 100 are disposed outside, and arranged to be fixedly connected to, the main body 400 through four connecting rods 500.
  • each connecting rod 500 is configured to have a first end attached onto the main body 400, and to have a second end sticking out away from the main body 400, on which one of the four optical markers 110 is disposed.
  • each of the four optical markers 110 can be a passive optical marker 110a or an active optical marker 110b.
  • each optical marker 110 is an LED lamp, and is electrically connected to the main body 400 via a wiring 111 (shown in FIG. 8B) disposed inside each connecting rod 500.
  • FIG. 8B illustrates a planar view of an inside of the motion capture apparatus as shown in FIG. 8A.
  • the motion capture apparatus comprises four optical markers 110b, which are each connected to a control circuit 112 (shown in the box in dotted lines) through a wiring 111.
  • the control circuit 112 is arranged onto a motherboard 600.
  • An inertial sensor portion 200 (also shown in the box in dotted lines) and a plug 700 are also arranged onto the motherboard 600.
  • the control circuit 112 is substantially the first controller 140 as shown in FIG. 4B, and the plug 700 is substantially the first power supply 130, the second power supply 250, or an integrated power supply as described above.
  • FIG. 9A illustrates a motion capture apparatus 001b according to some other embodiment of the disclosure.
  • a total of four optical markers 110 are fixedly connected to a main body 400 through four connecting rods 500.
  • the motion capture apparatus 001b further includes a mounting portion 300, which includes a mounting panel 310, a round column 320, a pair of handles 330, and a pair of mounting clamps 340.
  • the mounting panel 310 is fixedly attached onto the motion capture apparatus 001b, and is configured to provide a mounting means for an external device.
  • a display panel 800 is fixedly attached with the motion capture apparatus 001b as an external device through the mounting panel 310.
  • the round column 320 is also fixedly attached onto the mounting panel 310, and the pair of handles 330 are each attached onto the round column 320 through one mounting clamp 340.
  • the pair of handles 330 are configured to allow the motion capture apparatus 001b to rotate along an axis of the round column 320. As such, the round column 320, the pair of handles 330, and the pair of mounting clamps 340 substantially form a position-adjusting sub-portion within the mounting portion 300.
  • the mounting panel 310 is configured to include a plurality of mounting slots 310a and a plurality of mounting holes 310b, arranged to have different locations, sizes, and orientations, as illustrated in FIG. 9B.
  • the present disclosure further provides a motion capture system.
  • FIG. 10A illustrates a motion capture system according to some embodiments of the disclosure.
  • the motion capture system comprises a motion capture apparatus 001, a plurality of optical sensors 002, and a computing device 003.
  • the motion capture apparatus 001 can be a motion capture apparatus according to any of the embodiments as described above.
  • the optical marker portion 100 of the motion capture apparatus 001 is configured to transmit lights, either passively (as illustrated in FIG. 3A) or actively (as illustrated in FIG. 3B).
  • the inertial sensor portion 200 of the motion capture apparatus 001 is configured to transmit motion signals.
  • Each of the plurality of optical sensors 002 is configured to detect the lights transmitted from the optical marker portion 100 of the motion capture apparatus 001, and then to send optical signals based on the detected lights to the computing device 003.
  • the computing device 003 is configured to receive the optical signals from each of the plurality of optical sensors 002 and the motion signals from the inertial sensor portion 200 of the motion capture apparatus 001, to calculate a first set of motion parameters of the motion capture apparatus 001 based on the optical signals and a second set of motion parameters of the motion capture apparatus 001 based on the motion signals, and to integrate the first set of motion parameters and the second set of motion parameters to thereby obtain a third set of motion parameters of the motion capture apparatus 001 having an improved accuracy.
  • any of the first, second, and third sets of motion parameters of the motion capture apparatus 001 can include data regarding a position (or location), a posture, a displacement, a velocity, a rotation, an angular velocity, an acceleration, etc., and can cover situations where the motion capture apparatus 001 is still and where the motion capture apparatus 001 is in motion (including linear movement and rotation).
  • the optical marker portion 100 of the motion capture apparatus 001 is configured to comprise at least three optical markers 110, where any three of the at least three optical markers 110 are disposed in a non-linear manner in the physical space (i.e. not on a straight line).
  • the plurality of optical sensors 002 are configured to include at least two optical sensors 002, each with a pre-determined location and orientation. As such, any two of the at least two optical sensors can be employed to calculate a location of each optical marker 110 through a principle of triangulation.
  • each of the plurality of optical sensors 002 can be a camera, and can be, for example, a CMOS (complementary metal-oxide semiconductor) camera or a CCD (charge-coupled device) camera.
  • the computing device 003 can include a processor and a memory, wherein the memory is configured to store the optical signals and the motion signals as described above, and the processor is configured to calculate the motion parameters of the motion capture apparatus 001 based on the optical signals and the motion signals stored in the memory.
  • the computing device 003 can be a stand-alone computer, or can be on an intranet or an internet, or can be in a cloud.
  • the advantages of these two portions are substantially integrated to thereby be able to provide motion parameters of the motion capture apparatus 001 that have an improved accuracy.
  • FIG. 10B illustrates one specific embodiment of the motion capture system as shown in FIG. 10A.
  • the motion capture system includes the motion capture apparatus as described above and illustrated in FIG. 8A.
  • the motion capture system also includes a total of n cameras 002a, which are substantially one specific embodiment of the optical sensors 002 as shown in FIG. 10A, where n is greater than or equal to 2.
  • the motion capture system further includes a computer 003a, which can calculate the first set of motion parameters of the motion capture apparatus 001a and the second set of motion parameters of the motion capture apparatus 001a, which are respectively based on the optical signals and the motion signals received.
  • the computer 003a can further integrate the first set of motion parameters and the second set of motion parameters to thereby obtain a third set of motion parameters of an improved accuracy.
  • the motion capture system as described above can be employed to track the motion of an external device that is attached onto the motion capture apparatus 001. As such, by detecting the optical signals (through the optical sensors 002) and the motion signals transmitted from the motion capture apparatus 001, the motion parameters of the motion capture apparatus 001 can be obtained based on the calculation of the computing device 003.
  • the motion parameters of the motion capture apparatus 001 are substantially the motion parameters of the external device, thereby realizing the tracking of the motion capture apparatus 001 and the external device that is attached therewith.
  • the present disclosure further provides a virtual camera system, as illustrated in FIG. 11.
  • the virtual camera system includes a motion capture apparatus 001, a plurality of optical sensors 002, and a computing device 003, which are substantially the same as, and perform substantially the same functions as, those in the motion capture system as described above and illustrated in FIG. 10A.
  • the virtual camera system further includes a display panel 004, which is attached onto the motion capture apparatus 001 to thereby form a virtual camera assembly 005.
  • the optical marker portion 100 of the motion capture apparatus 001 is configured to transmit lights, which are then detected by the plurality of optical sensors 002 as optical signals.
  • the plurality of optical sensors 002 are each configured to send the optical signals to the computing device 003.
  • the inertial sensor portion 200 of the motion capture apparatus 001 is configured to transmit motion signals to the computing device 003.
  • the computing device 003 is configured to respectively calculate a first set of motion parameters and a second set of motion parameters of the motion capture apparatus 001, and of the virtual camera assembly 005 as well, and then to integrate the first set of motion parameters and the second set of motion parameters to thereby obtain a third set of motion parameters of the virtual camera assembly 005 having an improved accuracy.
  • the computing device 003 is further configured to calculate virtual scene data based on the calculated third set of motion parameters of the virtual camera assembly 005, as illustrated in the sketch following this passage.
  • the virtual scene data is then transmitted to the display panel 004 in the virtual camera assembly 005, thereby allowing the display panel 004 to display virtual scenes based on the virtual scene data that has been received.
  • the above virtual camera assembly 005 can be a virtual reality (VR) device, such as a VR goggle, which can realize a virtual camera function to display virtual scenes by means of the plurality of optical sensors 002 and the computing device 003.
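As an illustration of how the fused pose can drive the virtual scene, the sketch below builds a world-to-camera view matrix from a tracked position and orientation quaternion. This is a standard graphics construction supplied here as an assumption; the patent does not specify how the computing device 003 derives the virtual scene data.

```python
import numpy as np

def view_matrix(position, quat):
    """Build a 4x4 world-to-camera view matrix from the virtual camera
    assembly's fused position vector and orientation quaternion
    [w, x, y, z] (the third set of motion parameters)."""
    w, x, y, z = quat / np.linalg.norm(quat)
    # Rotation matrix of the camera expressed in the world frame.
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])
    V = np.eye(4)
    V[:3, :3] = R.T                  # world-to-camera rotation
    V[:3, 3] = -R.T @ position       # bring the world origin into view
    return V
```

Rendering the virtual scene through this matrix yields the image that the display panel 004 shows for the tracked viewpoint.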
  • the present disclosure further provides a cooperative camera system, as illustrated in FIG. 12.
  • the cooperative camera system is substantially based on the virtual camera system as described above, and includes a motion capture apparatus 001, a plurality of optical sensors 002, a computing device 003, and a display panel 004, whose configurations and functions are substantially the same as those in the virtual camera system as described above.
  • the cooperative camera system further comprises a camera 006, which is fixedly attached with the motion capture apparatus 001 and the display panel 004 to form a cooperative camera assembly 007.
  • the optical marker portion 100 of the motion capture apparatus 001 is configured to transmit lights, which are then detected by the plurality of optical sensors 002 as optical signals.
  • the plurality of optical sensors 002 are each configured to send the optical signals to the computing device 003.
  • the inertial sensor portion 200 of the motion capture apparatus 001 is configured to transmit motion signals to the computing device 003.
  • the computing device 003 is configured to respectively calculate a first set of motion parameters and a second set of motion parameters of the motion capture apparatus 001, and of the cooperative camera assembly 007 as well, and then to integrate the first set of motion parameters and the second set of motion parameters to thereby obtain a third set of motion parameters of the cooperative camera assembly 007 having an improved accuracy.
  • the camera 006 in the cooperative camera assembly 007 is configured to obtain filming data, which is then transmitted to the computing device 003. Based on the filming data received from the camera 006 in the cooperative camera assembly 007 and the third set of motion parameters of the cooperative camera assembly 007 calculated based on the optical signals and the motion signals transmitted from the motion capture apparatus 001, the computing device 003 can then calculate virtual scene data, which is transmitted to the display panel 004 in the cooperative camera assembly 007, thereby allowing the display panel 004 to display virtual scenes based on the received virtual scene data (see the compositing sketch below).
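A sketch of the compositing step referenced above: given the cooperative camera's tracked pose (a view matrix such as the one built earlier) and assumed pinhole intrinsics, a virtual 3-D point can be projected to a pixel location in the filmed frame and drawn over the filming data. The intrinsic values and the function name are invented for illustration.

```python
import numpy as np

def project_virtual_point(p_world, view, fx=1000.0, fy=1000.0,
                          cx=960.0, cy=540.0):
    """Project a virtual 3-D world point into the filmed image using
    the cooperative camera assembly's 4x4 world-to-camera view matrix
    and an assumed pinhole camera model; returns (u, v) pixel
    coordinates, or None if the point lies behind the camera."""
    p_cam = (view @ np.append(p_world, 1.0))[:3]
    if p_cam[2] <= 0.0:
        return None
    u = fx * p_cam[0] / p_cam[2] + cx   # horizontal pixel coordinate
    v = fy * p_cam[1] / p_cam[2] + cy   # vertical pixel coordinate
    return u, v

# With an identity pose, a point 5 m straight ahead of the camera lands
# at the centre of the assumed 1920x1080 image.
print(project_virtual_point(np.array([0.0, 0.0, 5.0]), np.eye(4)))
```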

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A motion capture system includes a motion capture apparatus, a plurality of optical sensors, and a computing device. The motion capture apparatus includes an optical marker portion and an inertial sensor portion, configured to respectively transmit lights and motion signals. Each optical sensor detects the lights from the motion capture apparatus, obtains optical signals, and sends the optical signals to the computing device. The computing device receives the motion signals and the optical signals, calculates a first set of motion parameters based on the optical signals and a second set of motion parameters based on the motion signals, and integrates the first set of motion parameters and the second set of motion parameters to thereby obtain a third set of motion parameters having an improved accuracy. The optical marker portion includes at least three optical markers arranged in a non-linear manner. Each optical marker can be an active optical marker.

Description

MOTION CAPTURE APPARATUS AND SYSTEM
CROSS-REFERENCE TO RELATED APPLICATION
The present application claims priority to Chinese Patent Application Nos. 201621070412.3, filed on September 21, 2016, and 201621133345.5, filed on October 18, 2016, the disclosures of which are hereby incorporated by reference in their entirety.
TECHNICAL FIELD
The present disclosure relates generally to the field of tracking technology, and specifically to a motion capture apparatus and a motion capture system containing the motion capture apparatus.
BACKGROUND
With the rapid development of computer software and hardware, and with the increasingly high requirements for film and television production, motion capture technology has been widely used.
Currently, motion capture technology typically involves data that can be directly calculated and processed by computer processors, such as data related to dimensional measurement, and measurement data for the position and/or orientation of an object in a physical space. A typical motion capture process based on a conventional motion capture technology is as follows.
First, one or more tracking markers (or trackers) are arranged at key position(s)/location(s) on a moving object; then a motion capture system captures the position information of each of the one or more tracking markers; finally, data of three-dimensional coordinates can be further calculated after the computer processors process the position information of the one or more tracking markers.
The motion data as described above can be widely applied during film and television production, and can also be utilized in other fields such as gait analysis, biomechanics, ergonomics, and so on.
SUMMARY
The present disclosure provides a motion capture system, which comprises a motion capture apparatus, a plurality of optical sensors, and a computing device.
The motion capture apparatus comprises an optical marker portion and an inertial sensor portion, configured to respectively transmit lights and motion signals. Each of the plurality of optical sensors has a predetermined position and orientation, and is configured to detect the lights from the motion capture apparatus, to obtain optical signals based on the lights, and to send the optical signals to the computing device.
The computing device is configured to receive the motion signals from the motion capture apparatus and the optical signals from each of the plurality of optical sensors, to calculate a first set of motion parameters based on the optical signals and a second set of motion parameters based on the motion signals, and to integrate the first set of motion parameters and the second set of motion parameters to thereby obtain a third set of motion parameters having an improved accuracy.
According to some embodiments of the motion capture system, the optical marker portion comprises at least three optical markers, which are arranged such that any three of the at least three optical markers are not aligned on a same straight line.
In some embodiments of the motion capture system, the at least three optical markers comprise at least one passive optical marker. Each of the at least one passive optical marker is configured to have a unique feature. Accordingly, the plurality of optical sensors include at least two first optical sensors. Each of the at least two first optical sensors is configured to be able to detect an optical signal from each of the at least one passive optical marker upon receiving a reflected light therefrom.
In some embodiments of the motion capture system, the at least three optical markers comprise at least one active optical marker. Each of the at least one active optical marker is configured to emit a unique light. Accordingly, the plurality of optical sensors include at least two second optical sensors. Each of the at least two second optical sensors is configured to be able to detect an optical signal from each of the at least one active optical marker upon receiving a light emitted therefrom.
Herein each of the at least two second optical sensors can comprise a CMOS (complementary metal-oxide semiconductor) camera or a CCD (charge-coupled device) camera.
In the embodiments of the motion capture system as described above, each of the at least one active optical marker is configured to emit a light having a unique feature, and the unique feature is selected from one of a unique wavelength, a unique flickering frequency, or a unique flickering pattern.
According to some embodiments of the motion capture system, each of the at least one active optical marker is configured to emit a light having a unique flickering frequency, or having a unique flickering pattern. The motion capture apparatus further comprises a control circuit, configured to respectively control each of the at least one active optical marker to emit a unique light.
In the motion capture system, each of the at least three optical markers is an active optical marker, and the active optical marker can be an LED lamp. Accordingly, the number of the at least three optical markers can be 4 or 5.
According to some embodiments of the motion capture system, the motion capture apparatus can further include a main body and a plurality of connecting rods. The inertial sensor portion is disposed inside the main body. A first end of each of the plurality of connecting rods is attached onto the main body, and a second end of each of the plurality of connecting rods is configured to stick out away from the main body, and to have one of the at least three optical markers disposed thereat.
In the motion capture system as described above, each of the at least three optical markers is an active optical marker, and each of the at least three optical markers is electrically connected to the main body through a wiring inside one of the plurality of connecting rods.
According to some embodiments of the motion capture system, the motion capture apparatus further comprises a first power supply, which is configured to provide power to the motion capture apparatus. The first power supply includes at least one of a battery or a power adaptor. The power adaptor is configured to connect to an external power source to thereby input a power therefrom to the motion capture apparatus.
According to some embodiments of the motion capture system, the inertial sensor portion includes a gyroscope, which is configured to measure angular velocity sub-signals of the motion capture apparatus. Accordingly, the motion signals comprise the angular velocity sub-signals.
According to some other embodiments of the motion capture system, the inertial sensor portion includes a gyroscope and a magnetometer. The magnetometer is configured to measure geomagnetic direction sub-signals of the motion capture apparatus. Accordingly, the motion signals comprise the angular velocity sub-signals and the geomagnetic direction sub-signals.
According to some other embodiments of the motion capture system, the inertial sensor portion includes a gyroscope and an accelerometer. The accelerometer is configured to measure acceleration sub-signals of the motion capture apparatus. Accordingly, the motion signals comprise the angular velocity sub-signals and the acceleration sub-signals.
According to some other embodiments of the motion capture system, the inertial sensor portion includes a gyroscope, a magnetometer, and an accelerometer. Accordingly, the motion signals comprise the angular velocity sub-signals, the geomagnetic direction sub-signals, and the acceleration sub-signals.
In the motion capture system, the inertial sensor portion can further include a communication sub-portion, which is coupled to the inertial sensor portion, and is configured to receive the motion signals from the inertial sensor portion, and then to transmit the motion signals.
Herein the communication sub-portion can be configured to output the motion signals in a wired manner or in a wireless manner.
In the motion capture system, the motion capture apparatus can further include an external device, which is fixedly attached with the motion capture apparatus.
In the motion capture system as described above, the motion capture apparatus can further include a mounting portion, and the external device is fixedly attached with the motion capture apparatus through the mounting portion.
In the motion capture system as described above, the external device can include a display panel. As such, the computing device can be further configured to calculate virtual scene data based on the third set of motion parameters. The motion capture apparatus can thus be further configured to receive the virtual scene data from the computing device; and the display panel is configured to display an image based on the virtual scene data. As such, the motion capture system substantially forms a virtual camera system.
In the above-mentioned motion capture system, the external device can further include a camera, which is configured to acquire filming data. Accordingly, the motion capture apparatus can be further configured to transmit the filming data to the computing device, and the computing device can be further configured to calculate the virtual scene data based on the  third set of motion parameters and on the filming data. As such, the motion capture system substantially forms a cooperative camera system.
BRIEF DESCRIPTION OF THE DRAWINGS
To more clearly illustrate some of the embodiments, the following is a brief description of the drawings. The drawings in the following descriptions are only illustrative of some embodiments. Those of ordinary skill in the art can derive drawings of other embodiments from these drawings.
FIG. 1 illustrates a diagram of a motion capture apparatus according to some embodiments of the present disclosure;
FIG. 2 illustrates a diagram of the optical marker portion in the motion capture apparatus as shown in FIG. 1 according to some embodiments of the present disclosure;
FIG. 3A illustrates a diagram of the optical marker portion as shown in FIG. 2 according to some embodiments where each optical marker is a passive optical marker;
FIG. 3B illustrates a diagram of the optical marker portion as shown in FIG. 2 according to some embodiments where each optical marker is an active optical marker;
FIG. 4A illustrates a diagram of the optical marker portion as shown in FIG. 3B according to some embodiments of the disclosure;
FIG. 4B illustrates a diagram of the optical marker portion as shown in FIG. 3B according to some embodiments of the disclosure;
FIG. 5 illustrates a structural diagram of the inertial sensor portion in the motion capture apparatus as shown in FIG. 1 according to some embodiments of the present disclosure;
FIG. 6A illustrates a diagram of the inertial sensor portion as shown in FIG. 5 according to some embodiments of the disclosure;
FIG. 6B illustrates a diagram of the inertial sensor portion as shown in FIG. 5 according to some embodiments of the disclosure;
FIG. 7 illustrates a diagram of a motion capture apparatus according to some embodiments of the present disclosure;
FIG. 8A illustrates a structural diagram of a motion capture apparatus according to some specific embodiments of the disclosure;
FIG. 8B illustrates a planar view of an inside of the motion capture apparatus as shown in FIG. 8A;
FIG. 9A illustrates a motion capture apparatus, attached with a display panel, according to some other embodiments of the disclosure;
FIG. 9B illustrates a mounting panel of the motion capture apparatus as shown in FIG. 9A;
FIG. 10A illustrates a motion capture system according to some embodiments of the disclosure;
FIG. 10B illustrates one specific embodiment of the motion capture system as shown in FIG. 10A;
FIG. 11 illustrates a virtual camera system based on the motion capture system as shown in FIG. 10A according to some embodiments of the disclosure; and
FIG. 12 illustrates a cooperative camera system based on the motion capture system as shown in FIG. 10A according to some embodiments of the disclosure.
DETAILED DESCRIPTION
In the following, with reference to the drawings of various embodiments disclosed herein, the technical solutions provided by the disclosure will be described in a clear and fully understandable way.
It is obvious that the described embodiments are merely a portion but not all of the embodiments of the disclosure. Based on the embodiments of the disclosure as described and illustrated herein, those ordinarily skilled in the art can obtain other embodiment(s) without creative effort, and such embodiments shall be construed to come within the scope sought for protection by the disclosure.
In a first aspect, the present disclosure provides a motion capture apparatus.
FIG. 1 is a structural diagram of a motion capture apparatus according to some embodiments of the present disclosure. As shown in FIG. 1, the motion capture apparatus 001 includes an optical marker portion 100 and an inertial sensor portion 200.
The optical marker portion 100 is configured to transmit optical signals. Upon receipt of the optical signals from the optical marker portion 100 of the motion capture apparatus 001, a computing device (e.g. a computer) can determine, after computation, a first set of motion parameters of the motion capture apparatus 001 based on the optical signals. Herein the first set of motion parameters can include data regarding at least one of a position (or location), a posture, a displacement, a velocity, a rotation, an angular velocity, an acceleration, etc. of the motion capture apparatus 001, and can include situations where the motion capture apparatus 001 is still and where the motion capture apparatus 001 is in motion (including linear movement and rotation).
The inertial sensor portion 200 is configured to transmit motion signals. Upon receipt of the motion signals from the inertial sensor portion 200 of the motion capture apparatus 001, a computing device (e.g. a computer) can determine, after computation, a second set of motion parameters of the motion capture apparatus 001 based on the motion signals.
The first set of motion parameters, obtained based on the optical signals transmitted from the optical marker portion 100 of the motion capture apparatus 001, and the second set of motion parameters, obtained based on the motion signals transmitted from the inertial sensor portion 200 of the motion capture apparatus 001, can then be integrated to obtain a third set of motion parameters of the motion capture apparatus 001 which are of improved accuracy.
By means of the motion capture apparatus 001 having both the optical marker portion 100 and the inertial sensor portion 200, the advantages of these two portions, which are essentially based on an optical tracking system and on an inertial tracking system, are substantially integrated to thereby be able to provide motion parameters of the motion capture apparatus 001 that have an improved accuracy.
On the one hand, an optical tracking system generally has a more accurate measurement of a position than an inertial tracking system, due to the fact that the inertial tracking system typically suffers from integration drift, where small errors in the measurement of acceleration are integrated into progressively larger errors in velocity, which are compounded into still greater errors in position.
On the other hand, the inertial tracking system generally has a more accurate measurement of a posture of an object than the optical tracking system, due to the fact that the optical tracking system is less accurate in angular calculation, and this disadvantage can get worse as the distance between the object to be detected and the camera increases.
As such, by integrating the first set of motion parameters and the second set of motion parameters, which are obtained based respectively on the optical signals from the optical marker portion 100 and the motion signals from the inertial sensor portion 200, the advantages and disadvantages of two tracking systems can be complemented to thereby  allow the generation of a third set of motion parameters of an improved accuracy.
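The effect of integration drift can be shown numerically. The following is a minimal sketch, not part of the disclosure, in which the sampling rate and the noise level are assumed values; it double-integrates noisy accelerometer readings for an object that is actually at rest:

```python
import numpy as np

# A minimal drift demonstration (illustrative assumptions only):
# the object is at rest, yet double integration of small zero-mean
# accelerometer noise produces an ever-growing position error.
rng = np.random.default_rng(0)
dt = 0.01                                   # assumed 100 Hz sampling interval (s)
n = 6000                                    # 60 seconds of samples
measured_accel = rng.normal(0.0, 0.05, n)   # truth is 0; noise in m/s^2

velocity = np.cumsum(measured_accel) * dt   # first integration
position = np.cumsum(velocity) * dt         # second integration

print(f"position error after 60 s: {abs(position[-1]):.3f} m")
# The error standard deviation grows roughly as t^(3/2), whereas an
# optical position measurement does not accumulate error over time.
```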
There are multiple different ways to integrate the first set of motion parameters and the second set of motion parameters into the third set of motion parameters having an improved accuracy.
In one way, for example, in the third set of motion parameters, only the position data obtained from the optical signals in the first set of motion parameters is retained, whereas the position data obtained from the motion signals in the second set of motion parameters is discarded. Similarly, only the posture data obtained from the motion signals in the second set of motion parameters is retained, whereas the posture data obtained from the optical signals in the first set of motion parameters is discarded.
In another way, different weights can be given to the position data and the posture data from the first set of motion parameters and from the second set of motion parameters to thereby obtain the position data and the posture data in the third set of motion parameters. Specifically, a higher weight is given to the position data from the first set of motion parameters, and a lower weight to the position data from the second set of motion parameters, in order to obtain an integrated position data in the third set of motion parameters. Similarly, a lower weight is given to the posture data from the first set of motion parameters, and a higher weight to the posture data from the second set of motion parameters, in order to obtain an integrated posture data in the third set of motion parameters.
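The weighted integration described above can be sketched as follows; this is an illustrative sketch only, in which the 0.9/0.1 weights and the unit-quaternion posture representation are assumptions rather than values prescribed by the disclosure:

```python
import numpy as np

def fuse_parameters(pos_optical, pos_inertial, quat_optical, quat_inertial,
                    w_pos_optical=0.9, w_quat_optical=0.1):
    """Blend the first (optical) and second (inertial) sets of motion
    parameters into a third set: position leans on the optical data,
    posture (a unit quaternion, w-x-y-z) leans on the inertial data."""
    # Weighted position: higher weight on the optical estimate.
    pos = w_pos_optical * pos_optical + (1.0 - w_pos_optical) * pos_inertial

    # Keep the two quaternions on the same hemisphere before blending.
    if np.dot(quat_optical, quat_inertial) < 0.0:
        quat_optical = -quat_optical

    # Weighted posture: higher weight on the inertial estimate; a
    # linear blend is adequate when the two estimates are close.
    quat = w_quat_optical * quat_optical + (1.0 - w_quat_optical) * quat_inertial
    return pos, quat / np.linalg.norm(quat)
```

A Kalman filter is a common refinement of this idea, in which the weights are updated at every step from the estimated uncertainties of the two measurement sources.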
FIG. 2 illustrates a structural diagram of the optical marker portion 100 as shown in FIG. 1. As shown in the figure, the optical marker portion 100 comprises at least three optical markers 110, each configured to transmit an optical signal. The at least three optical markers 110 are also configured such that any three of the at least three optical markers 110 are not aligned on a same straight line.
It is further configured such that the optical signals transmitted from the individual optical markers 110 in the optical marker portion 100 are different from one another, which substantially allows each individual optical marker 110 and the optical signal transmitted thereby to form a one-to-one corresponding relationship.
By such a configuration as described above, each optical marker 110 in the optical marker portion 100 is configured to transmit a distinct optical signal corresponding thereto, thereby allowing each optical marker 110 in the optical marker portion 100 of the motion capture apparatus 001 to be uniquely identified by the corresponding optical signal.
Herein according to some embodiments as illustrated in FIG. 3A, each optical marker 110 in the optical marker portion 100 can be a passive optical marker 110a, and can be, for example, a bright dot that can reflect an environmental light, which is then detected by a first optical sensor 120a (such as a camera) disposed at a distance from the motion capture apparatus 001 to thereby allow the corresponding optical signal to be obtained.
Each of the at least three optical markers 110 can have a unique feature, for example, can take a different shape (e.g. a dot or a cross) , or take a different color (e.g. red, blue, or green) to thereby allow the camera to be able to differentiate among different optical markers 110 in the optical marker portion 100.
According to some other embodiments as illustrated in FIG. 3B, each optical marker 110 in the optical marker portion 100 can be an active optical marker 110b, such as a light-emitting diode (LED) lamp, which is configured to actively emit a light, which can then be detected by a second optical sensor 120b disposed at a distance from the motion capture apparatus 001 to thereby allow the corresponding optical signal to be obtained. Herein the second optical sensor 120b can be a CMOS (complementary metal-oxide semiconductor) camera or a CCD (charge-coupled device) camera.
Herein each of the at least three active optical markers 110b is configured to emit a different light to thereby allow the second optical sensor 120b to be able to differentiate among different active optical markers 110b in the optical marker portion 100. Specifically, the light emitted from each of the at least three active optical markers 110b in the optical marker portion 100 can be configured to have a different wavelength, a different flickering frequency, or a different flickering pattern, to thereby allow the second optical sensor 120b to be able to differentiate among different active optical markers 110b.
In a first illustrating example, the optical marker portion 100 in the motion capture apparatus 001 consists of three active optical markers 110b, each configured to emit a light of one unique wavelength, for example, a red light, a green light, and a blue light, respectively, to thereby allow the three active optical markers 110b to be differentiated from one another.
In a second illustrating example, the optical marker portion 100 in the motion capture apparatus 001 consists of three active optical markers 110b, each configured to emit a light flickering at a different frequency, as shown in Table 1.
Table 1. Three active optical markers flickering at different frequencies.
Image frame                 1    2    3    4    5    6    ...
Active optical marker #1    on   on   on   on   on   on   ...
Active optical marker #2    off  on   off  on   off  on   ...
Active optical marker #3    off  off  on   off  off  on   ...
As shown in Table 1, the active optical marker #1 is configured to be on once per every image frame (i.e. on at image frames #1, 2, 3, …); the active optical marker #2 is configured to be on once per every two image frames (i.e. on at image frames #2, 4, 6, …); and the active optical marker #3 is configured to be on once per every three image frames (i.e. on at image frames #3, 6, 9, …). As such, the three active optical markers 110b can be differentiated from one another.
In a third illustrating example, the optical marker portion 100 in the motion capture apparatus 001 consists of four active optical markers 110b, each of which is configured to flicker in a different pattern, as shown in Table 2.
Table 2. Four active optical markers flickering in different patterns.
Image frame                 1    2    3    4    5    6    7    8    9
Active optical marker #1    on   on   on   on   on   on   on   on   on
Active optical marker #2    on   off  on   off  on   off  on   off  on
Active optical marker #3    on   on   off  on   on   off  on   on   off
Active optical marker #4    on   on   on   off  on   on   on   off  on
As shown in Table 2, the active optical marker #1 is configured to be on at image frames #1, 2, 3, 4, 5, 6, 7, 8, 9, …; the active optical marker #2 is configured to be on at image frames #1, 3, 5, 7, 9, …; the active optical marker #3 is configured to be on at image frames #1, 2, 4, 5, 7, 8, …; and the active optical marker #4 is configured to be on at image frames #1, 2, 3, 5, 6, 7, 9, 10, 11, … (image frames #10, 11 not shown). As such, the different active optical markers can be differentiated from one another.
It is noted that the above specific examples represent only illustrating examples, and do not impose a limitation to the scope of the disclosure. As such, other embodiments are also possible.
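By way of illustration, identifying markers from such flickering patterns can be sketched as follows. The signatures mirror Table 2, and the frame alignment between the emitters and the optical sensor is an assumption of the sketch:

```python
# Periodic on/off signatures per marker, following Table 2.
BASE_PATTERNS = {
    1: [1],           # marker #1: on in every frame
    2: [1, 0],        # marker #2: on in every other frame
    3: [1, 1, 0],     # marker #3: off in every third frame
    4: [1, 1, 1, 0],  # marker #4: off in every fourth frame
}
WINDOW = 12  # least common multiple of the pattern lengths

def signature(marker_id):
    """Expand a marker's periodic pattern over one common window."""
    p = BASE_PATTERNS[marker_id]
    return tuple(p[i % len(p)] for i in range(WINDOW))

def identify(observed):
    """Match a detected blob's on/off sequence over WINDOW frames
    against the known signatures; None means unknown or misaligned."""
    for marker_id in BASE_PATTERNS:
        if signature(marker_id) == tuple(observed):
            return marker_id
    return None

# A blob visible in frames 1, 2, 4, 5, 7, 8, 10, 11 is marker #3.
assert identify([1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0]) == 3
```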
In the embodiments as described above, each active optical marker 110b in the optical marker portion 100 can comprise an active light-emitting device, configured to actively emit a light. Specifically, the light-emitting device in each of the at least three active optical markers 110b can be a light-emitting diode (LED) device (termed an LED lamp), which is configured to emit an infrared light, a visible light, or a light of another wavelength range.
According to some preferred embodiments, the LED lamp in each active optical marker 110b in the optical marker portion is configured to emit an infrared light, so as to allow the optical signal emitted therefrom to propagate with a relatively low level of interference from any object disposed close to the motion capture apparatus 001 or from structural limitations of the motion capture apparatus 001.
It is noted that in addition to the light-emitting diode (LED) as described above, each of the at least three active optical markers 110b can also be an active light source of another type, such as a fluorescent lamp. There are no limitations herein.
In the embodiments where each of the at least three optical markers 110 is an active optical marker 110b, the optical marker portion 100 in the motion capture apparatus 001 further includes a first power supply 130, configured to provide power to each of the at least three active optical markers 110b in the optical marker portion 100 to thereby allow each active optical marker 110b to emit a light, as illustrated in FIG. 4A.
According to some embodiments, the optical marker portion 100 in the motion capture apparatus 001 can further include a first controller 140, which is electrically connected to the first power supply 130 and is further coupled to, and thereby configured to respectively control, each of the at least three active optical markers 110b in the optical marker portion 100 to respectively emit a different light, as illustrated in FIG. 4B. Herein the first controller 140 may control each active optical marker 110b to emit a light of a different flickering frequency, or to emit a light flickering in a different pattern.
It is noted that in the embodiments as illustrated in FIG. 4A and FIG. 4B, the first power supply 130 can comprise a battery that is disposed in the motion capture apparatus 001 and can directly (as shown in FIG. 4A) or indirectly (via the first controller 140 as shown in FIG. 4B) supply power to each of the at least three active optical markers 110b in the optical marker portion 100.
Alternatively, the first power supply 130 can comprise a power adaptor (such as a plug) whose input terminal is electrically connected to an input power supply (e.g. an external power source) to input power therefrom, and whose output terminal is electrically connected to, and thereby outputs power to, each of the at least three active optical markers 110b in the optical marker portion 100 directly (as shown in FIG. 4A) or indirectly (via the first controller 140 as shown in FIG. 4B).
It should be noted that in addition to the embodiments as shown in FIG. 3A, where each of the at least three optical markers 110 in the optical marker portion 100 is a passive optical marker 110a, and the embodiments as shown in FIG. 3B, where each of the at least three optical markers 110 in the optical marker portion 100 is an active optical marker 110b, the at least three optical markers 110 in the optical marker portion 100 can include both a passive optical marker 110a and an active optical marker 110b according to some embodiments of the disclosure.
In accordance with such an arrangement, both a first optical sensor 120a configured to detect a reflected light from each passive optical marker 110a to thereby obtain a first optical signal corresponding thereto (as illustrated in FIG. 3A), and a second optical sensor 120b configured to detect a light emitted from each active optical marker 110b to thereby obtain a second optical signal corresponding thereto (as illustrated in FIG. 3B), can be disposed at respective distances from the motion capture apparatus 001. According to some preferred embodiments, a single optical sensor having both the functionality of the first optical sensor 120a and the functionality of the second optical sensor 120b is disposed at a distance from the motion capture apparatus 001.
It is further noted that in the motion capture apparatus disclosed herein, in order to determine the first set of motion parameters of the motion capture apparatus 001 based on the optical signals transmitted from the at least three optical markers 110 in the optical marker portion 100, typically at least two optical sensors 120 having pre-determined locations in the space are needed to determine a location of each individual optical marker 110 through the principle of triangulation.
Additionally, at least three optical markers 110 in the optical marker portion 100, arranged such that any three of the at least three optical markers 110 are not aligned on a same straight line in the space, are typically required, such that the determined locations of the at least three optical markers 110 can be combined to calculate the first set of motion parameters, such as a position and a posture, of the motion capture apparatus 001.
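For illustration, the triangulation step can be sketched with the standard linear (direct linear transform) formulation, assuming two calibrated cameras with known 3x4 projection matrices; this is a common formulation, not code taken from the disclosure:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover one marker's 3D position from two camera views.

    P1, P2 : 3x4 projection matrices of two optical sensors whose
             positions and orientations are pre-determined (assumed).
    x1, x2 : the marker's pixel coordinates (u, v) in each image.
    """
    # Each view contributes two linear constraints on the homogeneous
    # world point X, derived from x cross (P @ X) = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize to (x, y, z)
```

Once at least three non-collinear marker locations are recovered in this way, a rigid-body position and posture of the apparatus can be fitted to them, for example with the Kabsch algorithm.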
FIG. 5 illustrates a structural diagram of the inertial sensor portion 200 as shown in FIG. 1. As shown in the figure, the inertial sensor portion 200 includes a gyroscope 210, and optionally can also include a magnetometer 220 and an accelerometer 230 (as indicated by the box with a dotted line in FIG. 5) .
Specifically, the gyroscope 210 is configured to measure an angular velocity sub-signal of the inertial sensor portion 200, which substantially also indicates an angular velocity of the motion capture apparatus 001. Herein the gyroscope 210, according to some embodiments, can be a gyroscope based on a microelectromechanical system (MEMS) , and can be, more specifically, a three-axis MEMS gyroscope.
The magnetometer 220 is configured to measure a geomagnetic direction sub-signal of the inertial sensor portion 200, which substantially also indicates a geomagnetic direction of the motion capture apparatus 001. Herein the magnetometer 220, according to some embodiments, can be a magnetometer based on a microelectromechanical system (MEMS) , and can be, more specifically, a three-axis MEMS magnetometer.
The accelerometer 230 is configured to measure an acceleration sub-signal of the inertial sensor portion 200, which substantially also indicates an acceleration of the motion capture apparatus 001. Herein the accelerometer 230, according to some embodiments, can be an accelerometer based on a microelectromechanical system (MEMS) , and can be, more specifically, a three-axis MEMS accelerometer.
By means of the gyroscope 210, the inertial sensor portion 200 can measure an angular velocity sub-signal of the motion capture apparatus 001, and the angular velocity sub-signal can be used to complement the first set of motion parameters of the motion capture apparatus 001 obtained based on the optical signals transmitted from the optical marker portion 100. As such, the above-mentioned disadvantage in angular calculation that intrinsically exists in the optical tracking system can be compensated, thereby leading to motion parameters of the motion capture apparatus 001 having an improved accuracy.
In addition, further by means of the magnetometer 220 and the accelerometer 230, the inertial sensor portion 200 can also respectively measure a geomagnetic direction sub-signal and an acceleration sub-signal of the motion capture apparatus 001. The geomagnetic direction sub-signal and the acceleration sub-signal can accompany the angular velocity sub-signal detected by the gyroscope 210 to better complement the first set of motion parameters of the motion capture apparatus 001 obtained based on the optical signals transmitted from the optical marker portion 100.
Herein the angular velocity sub-signal, and optionally the geomagnetic direction sub-signal and the acceleration sub-signal, together form the motion signals that can be further transmitted to a computing device, which then calculates a second set of motion parameters of the motion capture apparatus 001 based on the motion signals measured by the inertial sensor portion 200.
According to some embodiments of the disclosure, the inertial sensor portion 200 can include only a three-axis MEMS microgyroscope, which forms a three-axis inertial sensor in the motion capture apparatus 001 as disclosed herein. Accordingly, the motion signal transmitted to the computing device includes only an angular velocity sub-signal.
According to some other embodiments of the disclosure, the inertial sensor portion 200 can, in addition to a three-axis MEMS microgyroscope,  also include a three-axis magnetometer or a three-axis MEMS microaccelerometer, which together form a six-axis inertial sensor in the motion capture apparatus 001 as disclosed herein. Accordingly, the motion signal transmitted to the computing device includes an angular velocity sub-signal, and one of a geomagnetic direction sub-signal and an acceleration sub-signal.
According to yet some other embodiments of the disclosure, the inertial sensor portion 200 can include a three-axis MEMS microgyroscope, a three-axis MEMS magnetometer, and a three-axis MEMS microaccelerometer, which together form a nine-axis inertial sensor in the motion capture apparatus 001 as disclosed herein. Accordingly, the motion signals transmitted to the computing device include a geomagnetic direction sub-signal, an angular velocity sub-signal, and an acceleration sub-signal.
Upon receiving the motion signals transmitted from the inertial sensor portion 200 of the motion capture apparatus 001, the computing device, such as a computer, can determine, after computation, the second set of motion parameters (such as a position, a posture, etc.) of the motion capture apparatus 001 based on the motion signals.
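By way of illustration only, a complementary filter is one common way such a computing device could blend the angular velocity and acceleration sub-signals into a posture estimate; the disclosure does not prescribe this particular filter, and the blend weight below is an assumed value:

```python
import numpy as np

def update_attitude(roll, pitch, gyro, accel, dt, alpha=0.98):
    """One complementary-filter step for roll/pitch (radians).

    gyro  : (wx, wy, wz) angular velocity sub-signal, rad/s
    accel : (ax, ay, az) acceleration sub-signal, m/s^2
    alpha : assumed blend weight; the gyroscope dominates at high
            frequency, the accelerometer corrects slow drift.
    """
    # Propagate the previous posture with the measured angular velocity.
    roll_g = roll + gyro[0] * dt
    pitch_g = pitch + gyro[1] * dt

    # The gravity direction gives an absolute, but noisy, inclination.
    roll_a = np.arctan2(accel[1], accel[2])
    pitch_a = np.arctan2(-accel[0], np.hypot(accel[1], accel[2]))

    return (alpha * roll_g + (1.0 - alpha) * roll_a,
            alpha * pitch_g + (1.0 - alpha) * pitch_a)
```

A geomagnetic direction sub-signal from the magnetometer can be blended in the same manner to stabilize the yaw angle, which gravity alone cannot observe.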
According to some embodiments as illustrated in FIG. 6A, the inertial sensor portion 200 further comprises a communication sub-portion 240, which is coupled to the gyroscope 210, and optionally the magnetometer 220 and the accelerometer 230, in the inertial sensor portion 200. The communication sub-portion 240 is configured to receive an angular velocity sub-signal from the gyroscope 210, and optionally a geomagnetic direction sub-signal from the magnetometer 220 and an acceleration sub-signal from the accelerometer 230, and to then output a motion signal after integration of the angular velocity sub-signal, and optionally the geomagnetic direction sub-signal and the acceleration sub-signal.
Herein the communication sub-portion 240 can comprise a wired communication circuit, configured to transmit the motion signal in a wired manner through, for example, a signal adaptor. Alternatively, the  communication sub-portion 240 can comprise a wireless communication circuit, configured to transmit the motion signal in a wireless manner.
According to some embodiments as illustrated in FIG. 6B, the inertial sensor portion 200 further comprises a second power supply 250, configured to provide power to each of the gyroscope 210, and optionally the magnetometer 220 and the accelerometer 230.
According to some embodiments as illustrated in FIG. 6C, the inertial sensor portion 200 can further include a second controller 260, which is electrically connected to the second power supply 250 and is further coupled to, and thereby configured to respectively control, each of the gyroscope 210, and optionally the magnetometer 220 and the accelerometer 230 in the inertial sensor portion 200.
It is noted that in the embodiments as illustrated in FIG. 6B and FIG. 6C, the second power supply 250 can comprise a battery that is disposed in the motion capture apparatus 001 and can directly (as shown in FIG. 6B) or indirectly (via the second controller 260 as shown in FIG. 6C) supply power to each of the gyroscope 210, and optionally the magnetometer 220 and the accelerometer 230, in the inertial sensor portion 200.
Alternatively, the second power supply 250 can comprise a power adaptor (such as a plug) whose input terminal is electrically connected to an input power supply (e.g. an external power source) to input power therefrom, and whose output terminal is electrically connected to, and thereby outputs power to, each of the gyroscope 210, and optionally the magnetometer 220 and the accelerometer 230, in the inertial sensor portion 200 directly (as shown in FIG. 6B) or indirectly (via the second controller 260 as shown in FIG. 6C).
It is noted that the first power supply 130 in the optical marker portion 100 and the second power supply 250 in the inertial sensor portion 200 together substantially form a power supply in the motion capture apparatus 001. The first power supply 130 and the second power supply 250 can be separate sub-portions in, and configured to respectively power, the optical marker portion 100 and the inertial sensor portion 200 in the motion capture apparatus 001.
Alternatively, the first power supply 130 and the second power supply 250 can form an integrated power supply in the motion capture apparatus 001 which is individually connected to, and configured to respectively power, the optical marker portion 100 and the inertial sensor portion 200 in the motion capture apparatus 001.
It is further noted that the first controller 140 in the optical marker portion 100 and the second controller 260 in the inertial sensor portion 200 substantially form a controlling sub-portion in the motion capture apparatus 001, which can be separated or integrated in the motion capture apparatus 001.
According to some embodiments, the motion capture apparatus 001 further comprises a mounting portion 300, configured to provide a means for mounting, or attaching, the motion capture apparatus 001 to an external device 900. As such, by attaching the motion capture apparatus 001 to an external device 900, the motion parameters (such as a position, a posture, a velocity, a displacement, an angular velocity, etc.) of the external device 900 can be substantially obtained by detecting the optical signals and the motion signals transmitted from the motion capture apparatus 001.
FIG. 8A illustrates a structural diagram of a motion capture apparatus according to some specific embodiments of the present disclosure. As shown in the figure, the motion capture apparatus 001a includes a main body 400. A total of four optical markers 110 in the optical marker portion 100 are disposed outside, and arranged to be fixedly connected to, the main body 400 through four connecting rods 500.
In other words, each connecting rod 500 is configured to have a first end attached onto the main body 400, and to have a second end sticking out away from the main body 400, on which one of the four optical markers 110 is disposed.
Herein each of the four optical markers 110 can be a passive optical marker 110a or an active optical marker 110b.
In the embodiments where each of the four optical markers 110 is an active optical marker 110b, each optical marker 110 is an LED lamp, and is electrically connected to the main body 400 via a wiring 111 (shown in FIG. 8B) disposed inside each connecting rod 500.
FIG. 8B illustrates a planar view of an inside of the motion capture apparatus as shown in FIG. 8A. As shown in the figure, the motion capture apparatus comprises four optical markers 110b, which are each connected to a control circuit 112 (shown in the box in dotted lines) through a wiring 111. The control circuit 112 is arranged onto a motherboard 600. An inertial sensor portion 200 (also shown in the box in dotted lines) and a plug 700 are also arranged onto the motherboard 600.
Herein the control circuit 112 is substantially the first controller 140 as shown in FIG. 4B, and the plug 700 is substantially the first power supply 130, the second power supply 250, or an integrated power supply as described above.
FIG. 9A illustrates a motion capture apparatus 001b according to some other embodiments of the disclosure. As shown in the figure, a total of four optical markers 110 are fixedly connected to a main body 400 through four connecting rods 500. The motion capture apparatus 001b further includes a mounting portion 300, which includes a mounting panel 310, a round column 320, a pair of handles 330, and a pair of mounting clamps 340.
The mounting panel 310 is fixedly attached onto the motion capture apparatus 001b, and is configured to provide a mounting means for an external device. Herein, as shown in FIG. 9A, a display panel 800 is fixedly attached with the motion capture apparatus 001b as an external device through the mounting panel 310.
In addition, the round column 320 is also fixedly attached onto the mounting panel 310, and the pair of handles 330 are each attached onto the round column 320 through one mounting clamp 340.
The pair of handles 330 are configured to allow the motion capture apparatus 001b to rotate along an axis of the round column 320. As such, the round column 320, the pair of handles 330, and the pair of mounting clamps 340 substantially form a position-adjusting sub-portion within the mounting portion 300.
It is noted that in order to facilitate the attachment of various different external devices with the motion capture apparatus 001b, the mounting panel 310 is configured to include a plurality of mounting slots 310a and a plurality of mounting holes 310b, arranged to have different locations, sizes, and orientations, as illustrated in FIG. 9B.
It is noted that the above embodiments represent only part, but not all, of the embodiments of the optical marker portion 100 in the motion capture apparatus 001 disclosed herein, and thus shall not be considered as a limitation to the scope of the disclosure.
In a second aspect, the present disclosure further provides a motion capture system.
FIG. 10A illustrates a motion capture system according to some embodiments of the disclosure. As shown in the figure, the motion capture system comprises a motion capture apparatus 001, a plurality of optical sensors 002, and a computing device 003.
The motion capture apparatus 001 can be a motion capture apparatus according to any of the embodiments as described above. The optical marker portion 100 of the motion capture apparatus 001 is configured to transmit lights, either passively (as illustrated in FIG. 3A) or actively (as illustrated in FIG. 3B). The inertial sensor portion 200 of the motion capture apparatus 001 is configured to transmit motion signals.
Each of the plurality of optical sensors 002 is configured to detect the lights transmitted from the optical marker portion 100 of the motion capture  apparatus 001, and then to send optical signals based on the detected lights to the computing device 003.
The computing device 003 is configured to receive the optical signals from each of the plurality of optical sensors 002 and the motion signals from the inertial sensor portion 200 of the motion capture apparatus 001, to calculate a first set of motion parameters of the motion capture apparatus 001 based on the optical signals and a second set of motion parameters of the motion capture apparatus 001 based on the motion signals, and to integrate the first set of motion parameters and the second set of motion parameters to thereby obtain a third set of motion parameters of the motion capture apparatus 001 having an improved accuracy.
Herein any of the first, the second, and the third sets of motion parameters of the motion capture apparatus 001 can include data regarding a position (or location), a posture, a displacement, a velocity, a rotation, an angular velocity, an acceleration, etc., and can include situations where the motion capture apparatus 001 is still and where the motion capture apparatus 001 is in motion (including linear movement and rotation).
The various different ways to integrate the first set of motion parameters and the second set of motion parameters into the third set of motion parameters having an improved accuracy have been described above, and will not be repeated herein.
The following are noted for the motion capture system disclosed herein.
The optical marker portion 100 of the motion capture apparatus 001 is configured to comprise at least three optical markers 110, where any three of the at least three optical markers 110 are disposed in a non-linear manner in the physical space (i.e. not on a straight line).
The plurality of optical sensors 002 are configured to include at least two optical sensors 002, each with a pre-determined location and orientation. As such, any two of the at least two optical sensors can be employed to calculate a location of each optical marker 110 through the principle of triangulation.
Each of the plurality of optical sensors 002 can be a camera, and can be for example, a CMOS (complementary metal-oxide semiconductor) camera or a CCD (charge-coupled device) camera.
The computing device 003 can include a processor and a memory, wherein the memory is configured to store the optical signals and the motion signals as described above, and the processor is configured to calculate the motion parameters of the motion capture apparatus 001 based on the optical signals and the motion signals stored in the memory. Specifically, the computing device 003 can be a stand-alone computer, or can be on an intranet or an internet, or can be in a cloud.
By means of the motion capture apparatus 001 having both the optical marker portion 100 and the inertial sensor portion 200, the advantages of these two portions, which are essentially based on an optical tracking system and on an inertial tracking system, are substantially integrated to thereby be able to provide motion parameters of the motion capture apparatus 001 that have an improved accuracy.
FIG. 10B illustrates one specific embodiment of the motion capture system as shown in FIG. 10A. As shown in the figure, the motion capture system includes the motion capture apparatus as described above and illustrated in FIG. 8A. The motion capture system also includes a total of n cameras 002a, which are substantially one specific embodiment of the optical sensors 002 as shown in FIG. 10A, where n is greater than or equal to 2.
The motion capture system further includes a computer 003a, which can calculate the first set of motion parameters of the motion capture apparatus 001a and the second set of motion parameters of the motion capture apparatus 001a, which are respectively based on the optical signals and the motion signals received. The computer 003a can further integrate the first set of motion parameters and the second set of motion parameters to thereby obtain a third set of motion parameters of an improved accuracy.
The motion capture system as described above can be employed to track the motion of an external device that is attached onto the motion capture apparatus 001. By detecting the optical signals (through the optical sensors 002) and the motion signals transmitted from the motion capture apparatus 001, the motion parameters of the motion capture apparatus 001 can be obtained based on the calculation of the computing device 003.
Due to the secure attachment between the motion capture apparatus 001 and the external device, the motion parameters of the motion capture apparatus 001 are substantially the motion parameters of the external device, thereby realizing the tracking of motion capture apparatus 001 and the external device that is attached therewith.
In a third aspect, the present disclosure further provides a virtual camera system, as illustrated in FIG. 11. The virtual camera system includes a motion capture apparatus 001, a plurality of optical sensors 002, and a computing device 003, which are substantially the same, and perform substantially the same functions, as those in the motion capture system as described above and illustrated in FIG. 10A.
In addition, the virtual camera system further includes a display panel 004, which is attached onto the motion capture apparatus 001 to thereby form a virtual camera assembly 005.
As in the motion capture system described above, the optical marker portion 100 of the motion capture apparatus 001 is configured to transmit lights, which are then detected by the plurality of optical sensors 002 as optical signals. The plurality of optical sensors 002 are each configured to send the optical signals to the computing device 003. The inertial sensor portion 200 of the motion capture apparatus 001 is configured to transmit motion signals to the computing device 003.
Based on the optical signals and the motion signals that have been received, the computing device 003 is configured to respectively calculate a first set of motion parameters and a second set of motion parameters of the motion capture apparatus 001, and thereby of the virtual camera assembly 005 as well, and then to integrate the first set of motion parameters and the second set of motion parameters to thereby obtain a third set of motion parameters of the virtual camera assembly 005 having an improved accuracy.
The various different ways to integrate the first set of motion parameters and the second set of motion parameters into the third set of motion parameters having an improved accuracy have been described above, and will not be repeated herein.
Then the computing device 003 is further configured to calculate virtual scene data based on the third set of motion parameters of the virtual camera assembly 005 that have been calculated. The virtual scene data is then transmitted to the display panel 004 in the virtual camera assembly 005, thereby allowing the display panel 004 to display virtual scenes based on the virtual scene data that has been received.
The above virtual camera assembly 005 can be a virtual reality (VR) device, such as a VR goggle, which can realize a virtual camera function to display virtual scenes by means of the plurality of optical sensors 002 and the computing device 003.
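As an illustrative sketch of how the computing device could turn the third set of motion parameters into virtual scene data, the following builds a virtual camera view matrix from a position and a unit-quaternion posture; this is a generic graphics construction assumed for illustration, not code from the disclosure:

```python
import numpy as np

def view_matrix(position, quaternion):
    """4x4 world-to-camera matrix for the virtual camera, built from
    the tracked position and posture (unit quaternion, w-x-y-z)."""
    w, x, y, z = quaternion
    # Camera-to-world rotation from the quaternion.
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    V = np.eye(4)
    V[:3, :3] = R.T                          # invert the rotation
    V[:3, 3] = -R.T @ np.asarray(position)   # and the translation
    return V
```

Rendering the virtual scene through this matrix makes the displayed image follow the physical motion of the virtual camera assembly 005.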
In a fourth aspect, the present disclosure further provides a cooperative camera system, as illustrated in FIG. 12. The cooperative camera system is substantially based on the virtual camera system as described above, and includes a motion capture apparatus 001, a plurality of optical sensors 002, a computing device 003, and a display panel 004, whose configurations and functions are substantially the same as those in the virtual camera system as described above.
As shown in FIG. 12, the cooperative camera system further comprises a camera 006, which is fixedly attached with the motion capture apparatus 001 and the display panel 004 to form a cooperative camera assembly 007.
As in the virtual camera system described above, the optical marker portion 100 of the motion capture apparatus 001 is configured to transmit lights, which are then detected by the plurality of optical sensors 002 as optical signals. The plurality of optical sensors 002 are each configured to send the optical signals to the computing device 003. The inertial sensor portion 200 of the motion capture apparatus 001 is configured to transmit motion signals to the computing device 003.
Based on the optical signals and the motion signals that have been received, the computing device 003 is configured to respectively calculate a first set of motion parameters and a second set of motion parameters of the motion capture apparatus 001, and thereby of the cooperative camera assembly 007 as well, and then to integrate the first set of motion parameters and the second set of motion parameters to thereby obtain a third set of motion parameters of the cooperative camera assembly 007 having an improved accuracy.
The various different ways to integrate the first set of motion parameters and the second set of motion parameters into the third set of motion parameters having an improved accuracy have been described above, and will not be repeated herein.
The camera 006 in the cooperative camera assembly 007 is configured to obtain filming data, which is then transmitted to the computing device 003. Based on the filming data received from the camera 006, and on the third set of motion parameters of the cooperative camera assembly 007 calculated from the optical signals and the motion signals transmitted from the motion capture apparatus 001, the computing device 003 can calculate virtual scene data. The virtual scene data is then transmitted to the display panel 004 in the cooperative camera assembly 007, thereby allowing the display panel 004 to display virtual scenes based on the virtual scene data that has been received.
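One illustrative final step, assumed here rather than specified by the disclosure, is to composite the rendered virtual content over the filming data before it is displayed:

```python
import numpy as np

def composite(film_frame, virtual_frame, alpha_mask):
    """Overlay rendered virtual content onto the live filming data.

    film_frame, virtual_frame : HxWx3 uint8 images, pixel-aligned
    because both cameras share the same tracked pose.
    alpha_mask : HxW floats in [0, 1]; 1 where virtual content is opaque.
    """
    a = alpha_mask[..., None]
    blended = (1.0 - a) * film_frame + a * virtual_frame
    return blended.astype(np.uint8)
```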
All references cited in the present disclosure are incorporated by reference in their entirety. Although specific embodiments have been described above in detail, the description is merely for purposes of  illustration. It should be appreciated, therefore, that many aspects described above are not intended as required or essential elements unless explicitly stated otherwise.
Various modifications of, and equivalent acts corresponding to, the disclosed aspects of the exemplary embodiments, in addition to those described above, can be made by a person of ordinary skill in the art, having the benefit of the present disclosure, without departing from the spirit and scope of the disclosure defined in the following claims, the scope of which is to be accorded the broadest interpretation so as to encompass such modifications and equivalent structures.

Claims (23)

  1. A motion capture system, comprising a motion capture apparatus, a plurality of optical sensors, and a computing device, wherein:
    the motion capture apparatus comprises an optical marker portion and an inertial sensor portion, configured to respectively transmit lights and motion signals;
    each of the plurality of optical sensors has a predetermined position and orientation, and is configured to detect the lights from the motion capture apparatus, to obtain optical signals based on the lights, and to send the optical signals to the computing device; and
    the computing device is configured:
    to receive the motion signals from the motion capture apparatus and the optical signals from each of the plurality of optical sensors;
    to calculate a first set of motion parameters based on the optical signals and a second set of motion parameters based on the motion signals; and
    to integrate the first set of motion parameters and the second set of motion parameters to thereby obtain a third set of motion parameters having an improved accuracy.
  2. The motion capture system of Claim 1, wherein the optical marker portion comprises at least three optical markers, arranged such that any three of the at least three optical markers are not aligned on a same straight line.
  3. The motion capture system of Claim 2, wherein:
    the at least three optical markers comprise at least one passive optical marker, each configured to have a unique feature; and
    the plurality of optical sensors comprises at least two first optical sensors, each configured to be able to detect an optical signal from each of the at least one passive optical marker upon receiving a reflected light therefrom.
  4. The motion capture system of Claim 2, wherein:
    the at least three optical markers comprise at least one active optical  marker, each configured to emit a unique light; and
    the plurality of optical sensors comprises at least two second optical sensors, each configured to be able to detect an optical signal from each of the at least one active optical marker upon receiving a light emitted therefrom.
  5. The motion capture system of Claim 4, wherein each of the at least two second optical sensors comprises a CMOS (complementary metal-oxide semiconductor) camera or a CCD (charge-coupled device) camera.
  6. The motion capture system of Claim 4, wherein each of the at least one active optical marker is configured to emit a light having a unique feature, wherein the unique feature is selected from one of a unique wavelength, a unique flickering frequency, or a unique flickering pattern.
  7. The motion capture system of Claim 6, wherein each of the at least one active optical marker is configured to emit a light having a unique flickering frequency, or having a unique flickering pattern, wherein:
    the motion capture apparatus further comprises a control circuit, configured to respectively control each of the at least one active optical marker to emit a unique light.
  8. The motion capture system of Claim 4, wherein each of the at least three optical markers is an active optical marker.
  9. The motion capture system of Claim 8, wherein a number of the at least three optical markers is 4 or 5.
  10. The motion capture system of Claim 8, wherein the active optical marker is an LED lamp.
  11. The motion capture system of Claim 2, wherein the motion capture apparatus further comprises a main body and a plurality of connecting rods, wherein:
    the inertial sensor portion is disposed inside the main body;
    a first end of each of the plurality of connecting rods is attached onto the main body; and
    a second end of the each of the plurality of connecting rods is configured to stick out away from the main body, and to have one of the at least three optical markers disposed thereat.
  12. The motion capture system of Claim 11, wherein:
    each of the at least three optical markers is an active optical marker; and
    each of the at least three optical markers is electrically connected to the main body through a wiring inside one of the plurality of connecting rods.
  13. The motion capture system of Claim 1, wherein the motion capture apparatus further comprises a first power supply, configured to provide power to the motion capture apparatus, wherein the first power supply comprises at least one of a battery or a power adaptor configured to connect to an external power source to thereby input power therefrom.
  14. The motion capture system of Claim 1, wherein:
    the inertial sensor portion comprises a gyroscope, configured to measure angular velocity sub-signals of the motion capture apparatus; and
    the motion signals comprise the angular velocity sub-signals.
  15. The motion capture system of Claim 14, wherein:
    the inertial sensor portion further comprises a magnetometer, configured to measure geomagnetic direction sub-signals of the motion capture apparatus; and
    the motion signals further comprise the geomagnetic direction sub-signals.
  16. The motion capture system of Claim 14, wherein:
    the inertial sensor portion further comprises an accelerometer, configured to measure acceleration sub-signals of the motion capture apparatus; and
    the motion signals further comprise the acceleration sub-signals.
  17. The motion capture system of Claim 14, wherein:
    the inertial sensor portion further comprises a magnetometer and an accelerometer, configured to respectively measure geomagnetic direction sub-signals and acceleration sub-signals of the motion capture apparatus; and
    the motion signals further comprise the geomagnetic direction sub-signals and the acceleration sub-signals.
  18. The motion capture system of any one of Claims 14-17, wherein the inertial sensor portion further comprises a communication sub-portion, coupled to the inertial sensor portion and configured to receive the motion signals from the inertial sensor portion, and then to transmit the motion signals.
  19. The motion capture system of Claim 18, wherein the communication sub-portion is configured to output the motion signals in a wireless manner.
  20. The motion capture system of Claim 1, wherein the motion capture apparatus further comprises an external device, fixedly attached with the motion capture apparatus.
  21. The motion capture system of Claim 20, wherein the motion capture apparatus further comprises a mounting portion, and the external device is fixedly attached with the motion capture apparatus through the mounting portion.
  22. The motion capture system of Claim 20, wherein the external device comprises a display panel, wherein:
    the computing device is further configured to calculate virtual scene data based on the third set of motion parameters;
    the motion capture apparatus is further configured to receive the virtual scene data from the computing device; and
    the display panel is configured to display an image based on the virtual  scene data.
  23. The motion capture system of Claim 22, wherein:
    the external device further comprises a camera, configured to acquire filming data;
    the motion capture apparatus is further configured to transmit the filming data to the computing device; and
    the computing device is further configured to calculate the virtual scene data based on the third set of motion parameters and on the filming data.
PCT/CN2017/102793 2016-09-21 2017-09-21 Motion capture apparatus and system WO2018054338A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201621070412.3 2016-09-21
CN201621070412 2016-09-21
CN201621133345.5 2016-10-18
CN201621133345.5U CN206178668U (en) 2016-09-21 2016-10-18 A motion capturing device for virtual reality

Publications (1)

Publication Number Publication Date
WO2018054338A1 true WO2018054338A1 (en) 2018-03-29

Family

ID=58679011

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/102793 WO2018054338A1 (en) 2016-09-21 2017-09-21 Motion capture apparatus and system

Country Status (2)

Country Link
CN (2) CN107844191A (en)
WO (1) WO2018054338A1 (en)


Families Citing this family (7)

Publication number Priority date Publication date Assignee Title
CN107844191A (en) * 2016-09-21 2018-03-27 北京诺亦腾科技有限公司 Motion capture device for virtual reality
CN108535734B (en) * 2018-04-12 2024-02-09 上海逸动医学科技有限公司 Optical positioning structure, optical positioning system and method
CN108627157A (en) * 2018-05-11 2018-10-09 重庆爱奇艺智能科技有限公司 A kind of head based on three-dimensional marking plate shows localization method, device and three-dimensional marking plate
CN109674535B (en) * 2018-12-28 2020-09-04 北京诺亦腾科技有限公司 Position posture navigation device
CN111639631A (en) * 2020-07-17 2020-09-08 北京轻威科技有限责任公司 Motion capture device and method
CN112184758B (en) * 2020-11-05 2024-04-19 北京虚拟动点科技有限公司 Active motion capture rigid body and motion capture system
CN115793874B (en) * 2023-02-03 2023-05-05 广州奇境科技有限公司 Virtual reality remote interaction equipment


Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
ES2233201B1 (en) * 2003-11-21 2006-07-16 Seat, S.A. MIXED REALITY SIMULATION SYSTEM.
CN104197987A (en) * 2014-09-01 2014-12-10 北京诺亦腾科技有限公司 Combined-type motion capturing system
CN104536579B (en) * 2015-01-20 2018-07-27 深圳威阿科技有限公司 Interactive three-dimensional outdoor scene and digital picture high speed fusion processing system and processing method
CN104834917A (en) * 2015-05-20 2015-08-12 北京诺亦腾科技有限公司 Mixed motion capturing system and mixed motion capturing method
CN205540575U * 2016-03-14 2016-08-31 北京诺亦腾科技有限公司 Motion capture glove for a virtual reality system, and virtual reality system
CN105653044A (en) * 2016-03-14 2016-06-08 北京诺亦腾科技有限公司 Motion capture glove for virtual reality system and virtual reality system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080080789A1 (en) * 2006-09-28 2008-04-03 Sony Computer Entertainment Inc. Object detection using video input combined with tilt angle information
CN101699237A (en) * 2009-11-20 2010-04-28 中国航空工业空气动力研究院 Three-dimensional model attitude angle video measuring system for wind tunnel model test
CN206178668U (en) * 2016-09-21 2017-05-17 北京诺亦腾科技有限公司 A motion capturing device for virtual reality

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021173008A1 (en) * 2020-02-28 2021-09-02 Weta Digital Limited Active marker detection for performance capture
WO2021173009A1 (en) * 2020-02-28 2021-09-02 Weta Digital Limited Active marker strobing for performance capture communication
US11232293B2 (en) 2020-02-28 2022-01-25 Weta Digital Limited Active marker device for performance capture
US11288496B2 (en) 2020-02-28 2022-03-29 Weta Digital Limited Active marker strobing for performance capture communication
US11403883B2 (en) 2020-02-28 2022-08-02 Unity Technologies Sf Strobing of active marker groups in performance capture
US11403775B2 (en) 2020-02-28 2022-08-02 Unity Technologies Sf Active marker enhancements for performance capture
US11308644B2 (en) 2020-08-28 2022-04-19 Weta Digital Limited Multi-presence detection for performance capture

Also Published As

Publication number Publication date
CN206178668U (en) 2017-05-17
CN107844191A (en) 2018-03-27

Similar Documents

Publication Title
WO2018054338A1 (en) Motion capture apparatus and system
US10679360B2 (en) Mixed motion capture system and method
US10162057B2 (en) Portable distance measuring device and method for capturing relative positions
US8704857B2 (en) Three-dimensional display device, mobile terminal and three-dimensional display tracking method
CN106681510B (en) Pose recognition device, virtual reality display device and virtual reality system
CN103591955B (en) Integrated navigation system
US10295651B2 (en) Linear optical sensor arrays (LOSA) tracking system for active marker based 3D motion tracking
CN107014378A (en) Eye-tracking aiming control system and method
JP2008152751A (en) Inertia sensing method and system
US9910507B2 (en) Image display apparatus and pointing method for same
AU2017271003B2 (en) Accelerometers
CN108151738B (en) Codified active light marked ball with attitude algorithm
CN107923740A (en) Sensor device, sensing system and information processing equipment
US20220107415A1 (en) Light direction detector systems and methods
JP6383439B2 (en) Method and system for calibrating a sensor using a recognized object
JP2008311690A (en) Eyeball movement controller employing principle of vestibulo-ocular reflex
US20060185431A1 (en) Camera motion detection system
CN102654917A (en) Method and system for sensing motion gestures of moving body
CN112815834A (en) Optical positioning system
Dai et al. A multi-spectral dataset for evaluating motion estimation systems
TWI468997B (en) Pointing system and image system having improved operable range
WO2011016302A1 (en) Marker for motion capture
CN103443580A (en) System and method for calibrating a vehicle measurement reference system
CN108225316B (en) Carrier attitude information acquisition method, device and system
CN116518959A (en) Positioning method and device of space camera based on combination of UWB and 3D vision

Legal Events

Code Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application
    Ref document number: 17852406
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 EP: PCT application non-entry in European phase
    Ref document number: 17852406
    Country of ref document: EP
    Kind code of ref document: A1