CN111199576A - Outdoor large-range human body posture reconstruction method based on mobile platform - Google Patents

Outdoor large-range human body posture reconstruction method based on mobile platform

Info

Publication number
CN111199576A
Authority
CN
China
Prior art keywords
human body
camera
mobile platform
shooting
body posture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911360084.9A
Other languages
Chinese (zh)
Other versions
CN111199576B (en)
Inventor
闫野
乔运浩
姜志杰
杜欣未
谢良
邓宝松
印二威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin (binhai) Intelligence Military-Civil Integration Innovation Center
National Defense Technology Innovation Institute PLA Academy of Military Science
Original Assignee
Tianjin (binhai) Intelligence Military-Civil Integration Innovation Center
National Defense Technology Innovation Institute PLA Academy of Military Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin (binhai) Intelligence Military-Civil Integration Innovation Center, National Defense Technology Innovation Institute PLA Academy of Military Science filed Critical Tianjin (binhai) Intelligence Military-Civil Integration Innovation Center
Priority to CN201911360084.9A priority Critical patent/CN111199576B/en
Publication of CN111199576A publication Critical patent/CN111199576A/en
Application granted granted Critical
Publication of CN111199576B publication Critical patent/CN111199576B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00 - Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 9/00 - Measuring inclination, e.g. by clinometers, by levels
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01S - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 19/00 - Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S 19/38 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S 19/39 - Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S 19/53 - Determining attitude
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an outdoor large-range human body posture reconstruction method based on a mobile platform, which comprises the following steps: carrying out camera pose calculation and camera calibration using multi-sensor data from multiple mobile platforms, wherein each mobile platform carries a camera bound with an inertial measurement unit, an artificial visual identifier and a satellite positioning system; the mobile platforms follow and shoot the human body posture, and a human body model is established in real time; and a suitable shooting angle is calculated according to the human body pose in the field of view, and the mobile platforms are controlled to move to the optimal shooting angle for shooting. The invention has the following beneficial effects: no manual camera calibration is required, and operation is simple; real-time self-calibration allows the three-dimensional coordinates of the joint points to be solved in real time; calibration is robust because data from multiple sensors are combined during self-calibration; because the cameras are mounted on mobile platforms, shooting is flexible; and the system controls the mobile platforms to move and shoot from better angles, so that better human body pose information is obtained.

Description

Outdoor large-range human body posture reconstruction method based on mobile platform
Technical Field
The invention relates to the fields of multi-sensor fusion, automatic camera calibration, human body posture estimation and human body posture reconstruction, and in particular to an outdoor large-range human body posture reconstruction technique based on real-time automatic camera calibration on a mobile platform.
Background
Outdoor human body posture reconstruction plays an important role in dangerous behavior analysis, athlete training assistance, battlefield environment analysis, future holographic communication and the like. Human body posture reconstruction is the core step of dangerous behavior analysis, athlete training assistance and battlefield environment analysis, and analyzing the joint point positions of the people in a scene is essential for obtaining their behavior state. However, current human body posture analysis requires a complex calibration procedure, is limited to small, fixed indoor scenes, and must be recalibrated whenever a person adjusts the camera position, so its efficiency is low and its scene adaptability is weak.
Convenient, high-precision camera calibration and a reasonable shooting method are the core problems of human body posture reconstruction, but current human body posture reconstruction basically uses either a single camera or multiple fixed cameras. These two approaches face the following difficulties. A single camera can only obtain the 2d pose of the human body, has a large estimation error for the 3d pose, and cannot handle joint points occluded in its single viewing angle. A setup with multiple fixed cameras has high reconstruction accuracy and can also handle partial occlusion, but calibrating the fixed cameras is time-consuming and labor-intensive, and once calibration is complete the cameras may not be moved. Because the shooting range of fixed cameras is limited, this approach is poorly suited to outdoor moving scenes. The common defect of both approaches is that the camera positions are fixed, the cameras are often not at an ideal shooting angle, and the shooting angle cannot be adjusted automatically according to the human body in the scene.
Disclosure of Invention
The purpose of the invention is realized by the following technical scheme.
In view of the problem of human body posture reconstruction in outdoor scenes, the invention aims to provide an outdoor large-range human body posture reconstruction technique based on real-time automatic camera calibration on mobile platforms. Robust camera poses can be computed automatically by combining data from multiple sensors, completing the calibration between cameras; the shooting platforms can be controlled to shoot while moving, and the human body model can be reconstructed in real time; and after the human body posture is reconstructed, a suitable shooting angle can be calculated according to the human body posture in the field of view, and the mobile platforms are controlled to move to the optimal shooting angle for shooting.
According to a first aspect of the present invention, there is provided an outdoor large-scale human body posture reconstruction method based on a mobile platform, comprising:
carrying out camera pose calculation and camera calibration by using multiple sensor data of multiple mobile platforms, wherein each mobile platform carries a camera bound with an inertial measurement unit, an artificial visual identifier and a satellite positioning system;
the mobile platforms follow and shoot the human body posture, and a human body model is established in real time;
and calculating a proper shooting angle according to the human body pose in the visual field, and controlling the mobile platform to move to the optimal shooting angle for shooting.
Further, the camera pose calculation and inter-camera calibration using the multiple sensor data of the multiple mobile platforms includes:
the method comprises the steps that camera coordinates are obtained through a satellite positioning system, the rotation angle and the displacement of each camera relative to an initialization position are obtained through an inertial measurement unit fixed together with the cameras, each camera adjusts the angle to shoot artificial visual identification of other cameras, the displacement and the angle of each camera relative to the other cameras are calculated, external parameters of the cameras are calculated by combining various data, and calibration is completed.
Further, the camera pose calculation and inter-camera calibration using the multiple sensor data of the multiple mobile platforms includes:
measuring internal parameters of each camera and recording the internal parameters into a central computer;
distributing artificial visual identification for each camera;
the cameras of all the mobile platforms start from the same position and at the same angle, and the inertial measurement units are initialized at start-up;
and wirelessly transmitting data of the satellite positioning system, the inertial measurement unit and the artificial visual identification to a central computer, and calibrating the camera in real time according to the data.
Further, the pose of each camera relative to the initialization position and its relative pose with respect to the other cameras are calculated through the inertial measurement unit and the artificial visual identifier respectively, and the relative displacement of the camera is calculated through the satellite positioning system.
Further, the mobile platform following and shooting the human body posture and establishing a human body model in real time comprises:
and (3) obtaining the three-dimensional coordinates of the human joint points by utilizing triangulation, and reconstructing the three-dimensional human body pose through animation software.
Further, the mobile platform following and shooting the human body posture and establishing a human body model in real time comprises:
each camera takes a scene picture;
predicting the 2d coordinates of the joint points of each person in each frame of image using a deep learning method (one possible detector is sketched after this list);
each camera wirelessly transmits the obtained 2d joint point coordinates to a central computer, and a conversion matrix between the cameras is obtained through calibration;
the central computer solves the 3d positions of the joint points seen from each viewing angle using the camera poses calculated in real time.
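As one concrete possibility for the 2d joint prediction step above, a pretrained Keypoint R-CNN from torchvision can produce per-person COCO-style joint coordinates. The patent does not name a specific network, so the model choice and the API below (which follows recent torchvision releases) are assumptions for illustration only.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Pretrained Keypoint R-CNN (COCO, 17 keypoints); an assumed stand-in for the
# unspecified "deep learning method" of the patent.
model = torchvision.models.detection.keypointrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_2d_joints(image_rgb, score_thresh=0.8):
    """Return an array of shape (num_people, 17, 2) with per-person 2d joint
    pixel coordinates for one RGB frame."""
    with torch.no_grad():
        out = model([to_tensor(image_rgb)])[0]
    keep = out["scores"] > score_thresh           # keep confident person detections
    return out["keypoints"][keep][:, :, :2].cpu().numpy()
```

Each mobile platform would then transmit the returned coordinates wirelessly to the central computer, as described above.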
Further, calculating a proper shooting angle according to the human body pose in the visual field, and controlling the mobile platform to move to the optimal shooting angle for shooting comprises:
judging whether the shooting angle is reasonable according to the human body posture reconstruction result; if the shooting angle is reasonable, the mobile platform does not move, and if it is unreasonable, a reasonable shooting position is calculated;
and controlling the mobile platform to reach the target position.
Further, calculating a proper shooting angle according to the human body pose in the visual field, and controlling the mobile platform to move to the optimal shooting angle for shooting comprises:
judging whether the reconstructed human body posture is occluded or seen from the side;
if occlusion or a side view exists, calculating a position that reduces the occlusion, or a position facing the front of the human body, as the target position;
and controlling the mobile platform to reach the target position.
According to the second aspect of the present invention, there is also provided an outdoor large-scale human body posture reconstruction system based on a mobile platform, comprising:
the real-time self-calibration module is used for solving the relative poses between the cameras carried on the mobile platforms from the combined parameters of the satellite positioning unit, the inertial measurement unit and the artificial visual identifier, calculating a transfer matrix and completing the self-calibration of the cameras;
the real-time posture detection and reconstruction module is used for detecting the human body posture in the RGB images shot by each camera to obtain the two-dimensional joint point coordinates seen by each camera, and for calculating the three-dimensional coordinates of the detected human body joint points by triangulation using the data calibrated by the real-time self-calibration module, completing posture reconstruction;
and the mobile platform position automatic optimization module is used for calculating a proper shooting angle according to the human body pose in the visual field and controlling the mobile platform to move to the proper shooting angle for shooting.
As described above, the invention discloses an outdoor large-range human body posture reconstruction technique based on real-time automatic camera calibration on a mobile platform, which has the following beneficial effects: no manual camera calibration is required, and operation is simple; real-time self-calibration allows the three-dimensional coordinates of the joint points to be solved in real time; calibration is robust because data from multiple sensors are combined during self-calibration; because the cameras are mounted on mobile platforms, shooting is flexible; and the system controls the mobile platforms to move and shoot from better angles, so that better human body pose information is obtained.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
Fig. 1 shows the overall processing flow chart of the outdoor large-range human body posture reconstruction technique based on real-time automatic camera calibration on a mobile platform disclosed in an embodiment of the invention.
Fig. 2 shows a flow chart of a real-time robust camera self-calibration process incorporating multiple sensors as disclosed in an embodiment of the present invention.
Fig. 3 is a flowchart illustrating a process of reconstructing a moving real-time human body pose according to an embodiment of the present invention.
Fig. 4 is a flowchart illustrating automatic optimization of the position of the mobile platform according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The invention discloses an outdoor large-range human body posture reconstruction technique based on mobile platforms, which enables the cameras on the mobile platforms to self-calibrate in real time so that outdoor, large-range human body postures can be reconstructed in real time. No complicated manual calibration process is needed, which makes posture reconstruction of people moving over a large outdoor range possible. The mobile human body posture reconstruction system consists of a real-time self-calibration module, a real-time posture detection and reconstruction module, and an automatic mobile platform position optimization module. First, after several mobile platforms, for example three unmanned aerial vehicles, start, the self-calibration module calculates the relative poses between the cameras carried on the mobile platforms from the combined parameters of satellite positioning, the IMU (inertial measurement unit, a device that measures the three-axis attitude angles, or angular rates, and the acceleration of an object) and the artificial visual identifier, and computes a transfer matrix, thereby completing the self-calibration of the cameras. Second, the real-time posture detection and reconstruction module detects the human body posture in the RGB image shot by each camera to obtain the two-dimensional joint point coordinates seen by that camera, and calculates the three-dimensional coordinates of the detected human joint points by triangulation using the data calibrated in real time by the self-calibration module, completing posture reconstruction. Finally, the automatic mobile platform position optimization module calculates a suitable shooting angle according to the human body pose in the field of view and controls the mobile platforms to move to that shooting angle for shooting.
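The cooperation of the three modules can be pictured as a simple per-frame loop. The sketch below is only illustrative: every class and method name in it (platforms, calibrator, reconstructor, optimizer, capture, read_sensors and so on) is a hypothetical stand-in, not an interface defined by the patent.

```python
def process_frame(platforms, calibrator, reconstructor, optimizer):
    """Schematic per-frame cooperation of the three modules."""
    frames = [p.capture() for p in platforms]                     # RGB image per camera
    sensors = [p.read_sensors() for p in platforms]               # GPS + IMU + marker observations
    extrinsics = calibrator.self_calibrate(sensors, frames)       # real-time self-calibration
    joints_3d = reconstructor.reconstruct(frames, extrinsics)     # 2d detection + triangulation
    targets = optimizer.propose_positions(joints_3d, extrinsics)  # better shooting positions
    for platform, target in zip(platforms, targets):
        platform.move_to(target)                                  # fly to the improved viewpoint
    return joints_3d
```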
Generally, an IMU includes three single-axis accelerometers and three single-axis gyroscopes. The accelerometers measure the acceleration of the object along the three independent axes of the carrier coordinate system, and the gyroscopes measure the angular velocity of the carrier relative to the navigation coordinate system; from the angular velocity and acceleration measured in three-dimensional space, the attitude of the object is solved.
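The patent does not prescribe how the accelerometer and gyroscope readings are fused into an attitude. One common lightweight choice is a complementary filter, sketched below for roll and pitch only; the blending factor and axis conventions are assumptions.

```python
import numpy as np

def complementary_filter(roll, pitch, gyro, accel, dt, alpha=0.98):
    """Fuse gyroscope rates (rad/s) and accelerometer readings (m/s^2, gravity
    included) into updated roll/pitch angles in radians."""
    # Integrate the angular rates: accurate over short intervals, drifts over time.
    roll_gyro = roll + gyro[0] * dt
    pitch_gyro = pitch + gyro[1] * dt
    # Gravity direction from the accelerometer: noisy but drift-free.
    roll_acc = np.arctan2(accel[1], accel[2])
    pitch_acc = np.arctan2(-accel[0], np.hypot(accel[1], accel[2]))
    # Trust the gyro on short time scales and the accelerometer in the long run.
    roll = alpha * roll_gyro + (1.0 - alpha) * roll_acc
    pitch = alpha * pitch_gyro + (1.0 - alpha) * pitch_acc
    return roll, pitch
```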
Specifically, fig. 1 shows the general flowchart of the present invention, which includes 3 key steps: real-time robust camera self-calibration, mobile real-time human body posture reconstruction, and automatic optimization of the mobile platform position. The embodiment describes these 3 key steps in detail.
The invention provides an outdoor large-range human body posture reconstruction technique based on real-time automatic camera calibration on a mobile platform; the processing flow, shown in fig. 1, comprises the following steps:
s1, carrying one camera bound with an IMU, an artificial visual identifier and a satellite positioning system on each mobile platform (such as three unmanned aerial vehicles), and combining the data of the three sensors to automatically perform robust camera pose calculation and camera calibration;
firstly, three unmanned aerial vehicles take off at the same position once, and an IMU is initialized. The camera coordinates are obtained through satellite positioning, the IMU fixed together with the cameras obtains the rotation angle and displacement of each camera relative to the initialization position, each camera adjusts the angle to shoot the artificial markers of other cameras, the displacement and the angle relative to other cameras are calculated, the external parameters of the cameras are calculated by combining various data, and calibration is completed.
S2, mobile real-time human body posture reconstruction. The three-dimensional coordinates of the human joint points are obtained by triangulation, and the three-dimensional human body pose is reconstructed through animation software.
S3, after the human body posture is reconstructed, a suitable shooting angle is calculated according to the human body posture in the field of view, and the mobile platform is controlled to move to the optimal shooting angle for shooting.
Fig. 2 presents a flow chart of the real-time robust camera self-calibration process that combines multiple sensors. To achieve robust camera calibration, the invention computes the pose jointly from three sensors. The advantage of using three sensors is that satellite positioning and the visual marker can correct the drift error the IMU accumulates over long-term use, while the IMU can still provide pose information and keep the calibration continuous when the satellite positioning signal is poor or the visual marker is not captured. Because the advantages of the three sensors are complementary, a better calibration is obtained. The cameras are fixed by the same clamp before the IMUs are initialized, so that the IMU initialization states can be made consistent. The intrinsic parameters of each camera are measured with a measuring tool before use, or are supplied with the camera, and remain essentially unchanged during use. In step S1 of the invention, the three sensors are combined to comprehensively analyze and calibrate the camera pose; the key points are the initialization of the IMU, the binding of the artificial visual identifier, and the design of the strategy for combining the data of the multiple sensors. The processing flow, shown in fig. 2, comprises the following steps:
and S11, measuring the internal reference of each camera and recording the internal reference into the central computer.
S12, allocating to each camera a specific two-dimensional code, or an artificial visual identifier of another shape, and fixing the identifier to the camera. By shooting the identifier, the relative pose between the shooting camera and the shot camera can be calculated. A slot that exactly matches the shape of the camera is fixed on the ground, and the camera is placed in the slot.
S13, initializing the IMUs when the cameras of all the mobile platforms start from the same position and at the same angle, so that the IMUs of all the mobile platforms have the same initialization parameters.
S14, wirelessly transmitting the satellite positioning data, the IMU data and the artificial visual identifier data to the central computer. The pose of each camera relative to the initialization position and its relative pose with respect to the other cameras are calculated from the IMU data and the artificial visual identifier data respectively, and the relative displacement of the camera is calculated from the satellite positioning system. If the accuracy of any of these results is considered too low, that result is discarded. The data are then transmitted wirelessly to the central computer, which integrates the parameters from the three sources, discards unreliable parameters, and calibrates the cameras in real time. If all three accuracies are within the acceptable threshold, the three parameters are averaged to obtain the final value, giving the extrinsic parameters between the cameras.
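A minimal sketch of the discard-and-average strategy of S14, shown here for the relative translation between two cameras only; the error metric and the numerical threshold are illustrative assumptions rather than values given by the patent.

```python
import numpy as np

def fuse_relative_translation(estimates, errors, max_error=0.5):
    """estimates: candidate relative translations (metres) from satellite
    positioning, the IMU and the visual marker; errors: their accuracy values.
    Candidates whose error exceeds the threshold are discarded and the rest
    are averaged into the final extrinsic translation."""
    kept = [np.asarray(t, float) for t, e in zip(estimates, errors) if e <= max_error]
    if not kept:
        return None                    # no reliable source this frame
    return np.mean(kept, axis=0)       # average of the reliable estimates
```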
In step S1, each camera captures the artificial visual marker mounted on a mobile platform other than its own to calculate the relative pose between the mobile platforms, and this is combined with the position information provided by the satellite positioning system and the position and angle information, relative to the initial point, provided by the IMU. When the data from one sensor cannot be used, the remaining one or two sensors can still calculate the relative pose between the cameras for calibration.
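The marker term of this relative pose can be recovered with a standard perspective-n-point solve on the detected corners of the marker. The sketch below assumes a square planar marker of known side length and leaves corner detection to any fiducial library, since the patent only requires a distinguishable two-dimensional code; marker size and corner ordering are assumptions.

```python
import cv2
import numpy as np

def marker_pose_in_camera(corners_2d, K, dist_coeffs, marker_len=0.20):
    """Given the 4 detected image corners of another platform's marker, the
    camera intrinsics K and distortion coefficients, return the marker's
    rotation and translation in the observing camera's frame."""
    half = marker_len / 2.0
    # Marker corners in the marker's own frame (z = 0 plane), ordered
    # top-left, top-right, bottom-right, bottom-left.
    obj = np.float32([[-half,  half, 0.0], [ half,  half, 0.0],
                      [ half, -half, 0.0], [-half, -half, 0.0]])
    ok, rvec, tvec = cv2.solvePnP(obj, np.float32(corners_2d), K, dist_coeffs)
    if not ok:
        return None
    R_marker_to_cam, _ = cv2.Rodrigues(rvec)   # rotation of the marker in this camera
    return R_marker_to_cam, tvec.reshape(3)
```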
In step S2, the invention calculates the 3d coordinates of the human body joints within the shooting range in real time. Fig. 3 gives a flow chart of the mobile real-time human body posture reconstruction process. 2d joint point prediction is carried out on each single color image using a deep neural network, which yields good 2d information of the human joints. Each mobile platform wirelessly transmits the estimated 2d joint information to the central computer. The central computer then uses the camera pose relationships obtained in the previous step together with the 2d joint coordinates of the current step, that is, epipolar geometry constraints, to solve the 3d positions of the human joints. The key steps are as follows:
S21, each camera shoots a scene picture;
S22, for each frame of image, the 2d coordinates of the joint points of each person are predicted using a deep learning method (neural network);
S23, each camera wirelessly transmits the obtained 2d joint point coordinates to the central computer, and the transformation matrices between the cameras are obtained from the calibration;
S24, the central computer calculates the 3d position of each joint point from the views at every angle using the camera poses computed in real time. The 3d coordinates of the joints are solved through the epipolar geometry constraints, and the human body posture is reconstructed and displayed through animation software.
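Step S24 amounts to standard multi-view triangulation. A minimal linear (DLT) version, consistent with but not quoted from the patent, is sketched below; P_list holds the 3x4 projection matrices K[R|t] obtained from the real-time calibration.

```python
import numpy as np

def triangulate_joint(P_list, uv_list):
    """Triangulate one joint from its 2d detections (u, v) in several
    calibrated views and return its Euclidean 3d coordinates."""
    A = []
    for P, (u, v) in zip(P_list, uv_list):
        A.append(u * P[2] - P[0])        # x-constraint of the projection equation
        A.append(v * P[2] - P[1])        # y-constraint of the projection equation
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]                           # homogeneous solution with the smallest singular value
    return X[:3] / X[3]
```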
In step S2, the human body poses are calculated and modeled in real time while the multiple mobile platforms follow and shoot from multiple viewpoints. Shooting from multiple viewing angles recovers the true poses of the human body as completely as possible and reduces the uncertainty caused by factors such as occlusion. The real-time analysis and modeling is timely and can promptly feed information back to the people in the scene.
Through steps S1 and S2, a real-time human body posture reconstruction is obtained. Because the person is moving, body occlusion, side views and similar phenomena often occur, and the shooting position must be adjusted to the specific scene to obtain accurate human body posture information. Fig. 4 gives a flow chart of the automatic optimization of the mobile platform position. When occlusion or a side view occurs, 2d joint analysis becomes difficult and the accuracy of the computed three-dimensional joint information suffers. The invention therefore designs an automatic optimization strategy for the mobile platform position, which continuously judges whether the reconstructed subject in the scene is occluded or seen from the side. If such a situation is detected, positions that reduce the occlusion of the reconstructed subject, or positions in front of or behind the human body, are calculated; a suitable one is selected as the target position of the current frame; and, combined with the predicted target position of the following frame, the mobile platform is controlled to reach that position. The key steps are as follows:
S31, judging whether the shooting angle is reasonable according to the human body posture reconstruction result, for example judging whether the reconstructed human body posture is occluded or seen from the side;
S32, if the shooting angle is reasonable, the unmanned aerial vehicle does not move; if it is unreasonable, for example because of occlusion or a side view, a position that reduces the occlusion, or a position facing the front of the human body, is calculated as the target position;
S33, controlling the mobile platform to reach the target position.
Step S3 is needed because a fixed or random shooting angle cannot cope with every specific scene: the human body poses differ from scene to scene, and so do the shooting angles from which those poses can be recovered with higher accuracy. According to the invention, the mobile platform is controlled to move automatically to a suitable shooting position according to the conditions in the scene, so that better human body pose information is obtained and a more accurate human body pose is reconstructed.
Starting from robust automatic camera calibration and a reasonable shooting method, the invention develops an outdoor large-range human body posture reconstruction system based on real-time camera self-calibration on mobile platforms. When the system is used, only the shooting center needs to be set; no individual mobile platform has to be operated manually, and the cameras on the mobile platforms do not need to be calibrated manually. The system automatically controls each individual mobile platform and automatically computes the camera poses by combining the satellite positioning system, the IMU and the artificial visual markers shot by the cameras, thereby completing the camera calibration. The system also automatically adjusts the mobile platforms to suitable shooting positions according to the human body poses within the shooting range.
The method requires no manual camera calibration and is simple to operate. Real-time self-calibration allows the three-dimensional coordinates of the joint points to be solved in real time, and the calibration is robust because data from multiple sensors are combined during self-calibration. Because the cameras are mounted on mobile platforms, shooting is flexible. Finally, the system controls the mobile platforms to move and shoot from better angles, so that better human body pose information is obtained. The system can help analyze the movements of outdoor athletes and detect abnormal behavior in the security field, among other applications.
From the above description, those skilled in the art can easily understand other advantages and effects of the present invention.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (9)

1. An outdoor large-range human body posture reconstruction method based on a mobile platform is characterized by comprising the following steps:
carrying out camera pose calculation and camera calibration by using multiple sensor data of multiple mobile platforms, wherein each mobile platform carries a camera bound with an inertial measurement unit, an artificial visual identifier and a satellite positioning system;
the mobile platforms follow and shoot the human body posture, and a human body model is established in real time;
and calculating a proper shooting angle according to the human body pose in the visual field, and controlling the mobile platform to move to the optimal shooting angle for shooting.
2. The outdoor wide-range human body posture reconstruction method based on the mobile platform as claimed in claim 1,
the camera pose calculation and inter-camera calibration using multiple sensor data of multiple mobile platforms includes:
the method comprises the steps that camera coordinates are obtained through the satellite positioning system; the rotation angle and displacement of each camera relative to the initialization position are obtained through the inertial measurement unit fixed to the camera; each camera adjusts its angle to shoot the artificial visual identifiers of the other cameras, and its displacement and angle relative to the other cameras are calculated; and the camera extrinsic parameters are computed by combining these data, completing the calibration.
3. The outdoor wide-range human body posture reconstruction method based on the mobile platform as claimed in claim 1,
the camera pose calculation and inter-camera calibration using multiple sensor data of multiple mobile platforms includes:
measuring internal parameters of each camera and recording the internal parameters into a central computer;
distributing artificial visual identification for each camera;
the cameras of all the mobile platforms start from the same position and at the same angle, and the inertial measurement units are initialized at start-up;
and wirelessly transmitting data of the satellite positioning system, the inertial measurement unit and the artificial visual identification to a central computer, and calibrating the camera in real time according to the data.
4. The outdoor wide-range human body posture reconstruction method based on the mobile platform as claimed in claim 3,
the pose of the camera to the initial position and the relative pose relative to other cameras are respectively calculated through the inertial measurement unit and the artificial visual identification, and the relative displacement of the camera is calculated through a satellite positioning system.
5. The outdoor wide-range human body posture reconstruction method based on the mobile platform as claimed in claim 1,
the mobile platform following and shooting the human body posture and establishing the human body model in real time comprises:
and (3) obtaining the three-dimensional coordinates of the human joint points by utilizing triangulation, and reconstructing the three-dimensional human body pose through animation software.
6. The outdoor wide-range human body posture reconstruction method based on the mobile platform as claimed in claim 1,
the mobile platform following and shooting the human body posture and establishing the human body model in real time comprises:
each camera takes a scene picture;
predicting the 2d coordinates of the joint points of each person in each frame of image using a deep learning method;
each camera wirelessly transmits the obtained 2d joint point coordinates to a central computer, and a conversion matrix between the cameras is obtained through calibration;
the central computer resolves the 3d position of the joint point at each view angle from the real-time calculated camera pose.
7. The outdoor wide-range human body posture reconstruction method based on the mobile platform as claimed in claim 1,
calculating a suitable shooting angle according to the human body pose in the field of view and controlling the mobile platform to move to the optimal shooting angle for shooting comprises:
judging whether the shooting angle is reasonable according to the human body posture reconstruction result; if the shooting angle is reasonable, the mobile platform does not move, and if it is unreasonable, a reasonable shooting position is calculated;
and controlling the mobile platform to reach the target position.
8. The outdoor wide-range human body posture reconstruction method based on the mobile platform as claimed in claim 1,
calculating a suitable shooting angle according to the human body pose in the field of view and controlling the mobile platform to move to the optimal shooting angle for shooting comprises:
judging whether the reconstructed human body posture is occluded or seen from the side;
if occlusion or a side view exists, calculating a position that reduces the occlusion, or a position facing the front of the human body, as the target position;
and controlling the mobile platform to reach the target position.
9. An outdoor large-range human body posture reconstruction system based on a mobile platform, characterized by comprising:
the real-time self-calibration module is used for solving the relative poses between the cameras carried on the mobile platforms from the combined parameters of the satellite positioning unit, the inertial measurement unit and the artificial visual identifier, calculating a transfer matrix and completing the self-calibration of the cameras;
the real-time posture detection and reconstruction module is used for detecting the human body posture in the RGB images shot by each camera to obtain the two-dimensional joint point coordinates seen by each camera, and for calculating the three-dimensional coordinates of the detected human body joint points by triangulation using the data calibrated by the real-time self-calibration module, completing posture reconstruction;
and the mobile platform position automatic optimization module is used for calculating a proper shooting angle according to the human body pose in the visual field and controlling the mobile platform to move to the proper shooting angle for shooting.
CN201911360084.9A 2019-12-25 2019-12-25 Outdoor large-range human body posture reconstruction method based on mobile platform Active CN111199576B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911360084.9A CN111199576B (en) 2019-12-25 2019-12-25 Outdoor large-range human body posture reconstruction method based on mobile platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911360084.9A CN111199576B (en) 2019-12-25 2019-12-25 Outdoor large-range human body posture reconstruction method based on mobile platform

Publications (2)

Publication Number Publication Date
CN111199576A true CN111199576A (en) 2020-05-26
CN111199576B CN111199576B (en) 2023-08-18

Family

ID=70746614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911360084.9A Active CN111199576B (en) 2019-12-25 2019-12-25 Outdoor large-range human body posture reconstruction method based on mobile platform

Country Status (1)

Country Link
CN (1) CN111199576B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113432603A (en) * 2021-06-25 2021-09-24 中国船舶重工集团公司第七0七研究所 Method for measuring relative pose of mother platform and son platform in complex environment
CN114283447A (en) * 2021-12-13 2022-04-05 凌云光技术股份有限公司 Motion capture system and method

Patent Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101086681A (en) * 2006-06-09 2007-12-12 中国科学院自动化研究所 Game control system and method based on stereo vision
US20100158354A1 (en) * 2008-12-22 2010-06-24 Electronics And Telecommunications Research Institute Method of creating animatable digital clone from multi-view images
CN101482928A (en) * 2009-02-24 2009-07-15 清华大学 Behavior description method and apparatus
CN102222361A (en) * 2010-04-06 2011-10-19 清华大学 Method and system for capturing and reconstructing 3D model
US20110267344A1 (en) * 2010-04-30 2011-11-03 Liberovision Ag Method for estimating a pose of an articulated object model
KR20130088507A (en) * 2012-01-31 2013-08-08 한국전자통신연구원 Apparatus and method for estimating skeleton structure of human body
CN104508704A (en) * 2012-05-25 2015-04-08 波可斯有限公司 Body measurement
CN104461013A (en) * 2014-12-25 2015-03-25 中国科学院合肥物质科学研究院 Human body movement reconstruction and analysis system and method based on inertial sensing units
CN104700433A (en) * 2015-03-24 2015-06-10 中国人民解放军国防科学技术大学 Vision-based real-time general movement capturing method and system for human body
CN105847684A (en) * 2016-03-31 2016-08-10 深圳奥比中光科技有限公司 Unmanned aerial vehicle
WO2018103407A1 (en) * 2016-12-09 2018-06-14 中山大学 Unmanned aerial vehicle calibration method and system based on colour 3d calibration object
CN107239728A (en) * 2017-01-04 2017-10-10 北京深鉴智能科技有限公司 Unmanned plane interactive device and method based on deep learning Attitude estimation
CN106959747A (en) * 2017-02-14 2017-07-18 深圳奥比中光科技有限公司 Three-dimensional human body measurement method and its equipment
CN107154073A (en) * 2017-04-17 2017-09-12 嘉兴超维信息技术有限公司 A kind of three-dimensional modeling device and method
US20190278983A1 (en) * 2018-03-12 2019-09-12 Nvidia Corporation Three-dimensional (3d) pose estimation from a monocular camera
CN109032329A (en) * 2018-05-31 2018-12-18 中国人民解放军军事科学院国防科技创新研究院 Space Consistency keeping method towards the interaction of more people's augmented realities
US20190325209A1 (en) * 2018-07-04 2019-10-24 Baidu Online Network Technology (Beijing) Co., Ltd. Method, apparatus and system for human body tracking processing
CN109003301A (en) * 2018-07-06 2018-12-14 东南大学 A kind of estimation method of human posture and rehabilitation training system based on OpenPose and Kinect
CN109242887A (en) * 2018-07-27 2019-01-18 浙江工业大学 A kind of real-time body's upper limks movements method for catching based on multiple-camera and IMU
CN109242960A (en) * 2018-09-15 2019-01-18 武汉智觉空间信息技术有限公司 Using the human body Real-time modeling set system and its modeling method of double Kinect and rotating platform
CN109854893A (en) * 2018-12-06 2019-06-07 东南大学 A kind of adjustable multiple views three-dimensional reconstruction acquisition system
CN109801374A (en) * 2019-01-14 2019-05-24 盾钰(上海)互联网科技有限公司 A kind of method, medium and system reconstructing threedimensional model by multi-angle image collection
CN110032278A (en) * 2019-03-29 2019-07-19 华中科技大学 A kind of method for recognizing position and attitude, the apparatus and system of human eye attention object
CN110322544A (en) * 2019-05-14 2019-10-11 广东康云科技有限公司 A kind of visualization of 3 d scanning modeling method, system, equipment and storage medium
CN110308459A (en) * 2019-06-30 2019-10-08 南京理工大学 The non-cooperation satellite relative pose measurement method of independent of model

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHEN JIA et al.: "3D Human Body Shape and Motion Tracking by LBS and Snake", vol. 24, no. 24, pages 357-63 *
WANG Yongxiong (王永雄) et al.: "A dynamic multi-view complex 3D human behavior database and behavior recognition" (动态多视角复杂3D人体行为数据库及行为识别), vol. 34, no. 34, pages 68-79 *
DENG Baosong (邓宝松) et al.: "Three-dimensional reconstruction of line features in multi-view images" (多视点图像中线特征的三维重建), pages 713-718 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113432603A (en) * 2021-06-25 2021-09-24 中国船舶重工集团公司第七0七研究所 Method for measuring relative pose of mother platform and son platform in complex environment
CN113432603B (en) * 2021-06-25 2022-09-16 中国船舶重工集团公司第七0七研究所 Method for measuring relative pose of mother platform and son platform in complex environment
CN114283447A (en) * 2021-12-13 2022-04-05 凌云光技术股份有限公司 Motion capture system and method
CN114283447B (en) * 2021-12-13 2024-03-26 北京元客方舟科技有限公司 Motion capturing system and method

Also Published As

Publication number Publication date
CN111199576B (en) 2023-08-18

Similar Documents

Publication Publication Date Title
CN110296691B (en) IMU calibration-fused binocular stereo vision measurement method and system
CN111156998B (en) Mobile robot positioning method based on RGB-D camera and IMU information fusion
CN106780601B (en) Spatial position tracking method and device and intelligent equipment
KR101645392B1 (en) Tracking system and tracking method using the tracking system
JP5992184B2 (en) Image data processing apparatus, image data processing method, and image data processing program
CN103020952B (en) Messaging device and information processing method
EP3361948B1 (en) Integration of inertial tracking and position aiding for motion capture
CN104217439B (en) Indoor visual positioning system and method
KR101768958B1 (en) Hybird motion capture system for manufacturing high quality contents
KR101220527B1 (en) Sensor system, and system and method for preparing environment map using the same
CN111750853A (en) Map establishing method, device and storage medium
JP4132068B2 (en) Image processing apparatus, three-dimensional measuring apparatus, and program for image processing apparatus
WO2022000713A1 (en) Augmented reality self-positioning method based on aviation assembly
CN106370160A (en) Robot indoor positioning system and method
CN111199576B (en) Outdoor large-range human body posture reconstruction method based on mobile platform
US20070076096A1 (en) System and method for calibrating a set of imaging devices and calculating 3D coordinates of detected features in a laboratory coordinate system
KR20190063153A (en) System and method for simultaneous reconsttuction of initial 3d trajectory and velocity using single camera images
KR20110119589A (en) System for determining 3-dimensional coordinates of objects using overlapping omni-directional images and method thereof
CN111307146A (en) Virtual reality wears display device positioning system based on binocular camera and IMU
KR20170074388A (en) System and method for high precise positioning
CN106959051A (en) The automatized calibration method of weapon-aiming system based on spatial perception location technology
Muffert et al. The estimation of spatial positions by using an omnidirectional camera system
CN112862818A (en) Underground parking lot vehicle positioning method combining inertial sensor and multi-fisheye camera
Qian et al. Optical flow based step length estimation for indoor pedestrian navigation on a smartphone
CN111862146B (en) Target object positioning method and device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant