CN111199576B - Outdoor large-range human body posture reconstruction method based on mobile platform - Google Patents


Info

Publication number
CN111199576B
CN111199576B (application CN201911360084.9A)
Authority
CN
China
Prior art keywords
human body
camera
mobile platform
shooting
cameras
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911360084.9A
Other languages
Chinese (zh)
Other versions
CN111199576A (en)
Inventor
闫野
乔运浩
姜志杰
杜欣未
谢良
邓宝松
印二威
Current Assignee
Tianjin (binhai) Intelligence Military-Civil Integration Innovation Center
National Defense Technology Innovation Institute PLA Academy of Military Science
Original Assignee
Tianjin (binhai) Intelligence Military-Civil Integration Innovation Center
National Defense Technology Innovation Institute PLA Academy of Military Science
Priority date
Filing date
Publication date
Application filed by Tianjin (binhai) Intelligence Military-Civil Integration Innovation Center, National Defense Technology Innovation Institute PLA Academy of Military Science filed Critical Tianjin (binhai) Intelligence Military-Civil Integration Innovation Center
Priority to CN201911360084.9A priority Critical patent/CN111199576B/en
Publication of CN111199576A publication Critical patent/CN111199576A/en
Application granted granted Critical
Publication of CN111199576B publication Critical patent/CN111199576B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00: Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C9/00: Measuring inclination, e.g. by clinometers, by levels
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00: Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38: Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39: Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system, the system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/53: Determining attitude
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses an outdoor, large-range human body posture reconstruction method based on mobile platforms, comprising the following steps: computing camera poses and performing inter-camera calibration from the combined sensor data of a plurality of mobile platforms, each of which carries a camera bound to an inertial measurement unit, an artificial visual marker and a satellite positioning system; having the mobile platforms follow the photographed human body while building a human body model in real time; and computing a suitable shooting angle from the pose of the human body in the field of view, then driving each mobile platform to the optimal shooting angle. The invention has the following beneficial effects: the cameras need no manual calibration, so operation is simple; real-time self-calibration allows the three-dimensional joint coordinates to be solved in real time; because several kinds of sensor data are combined during self-calibration, the calibration is robust; and because the cameras ride on mobile platforms, shooting is flexible. Finally, by moving the platforms to better shooting angles, the system obtains better human body pose information.

Description

Outdoor large-range human body posture reconstruction method based on mobile platform
Technical Field
The invention relates to the fields of multi-sensor fusion, automatic camera calibration, human body posture estimation and human body posture reconstruction, and in particular to an outdoor large-range human body posture reconstruction technique based on real-time camera calibration on mobile platforms.
Background
Outdoor human body posture reconstruction plays an important role in dangerous-behavior analysis, athlete training assistance, battlefield environment analysis, future holographic communication and the like. Posture reconstruction is the core step of the first three of these applications: to obtain the behavior states of people in a scene, the positions of their joints must be analyzed. However, current human body posture analysis requires a complex calibration procedure, works only for small, fixed indoor scenes, and requires recalibration whenever a camera is moved, so usage efficiency is low and scene adaptability is weak.
Convenient, high-precision camera calibration and a sensible shooting strategy are the core problems of human body posture reconstruction, yet current reconstruction basically uses either a single camera or several fixed cameras. These two approaches face the following difficulties. A single camera can only obtain the 2D pose of the human body; its 3D pose estimate has a large error, and joints occluded at its single view angle cannot be handled. Several fixed cameras achieve high reconstruction accuracy and resolve some occlusion, but calibrating fixed cameras is time-consuming and labor-intensive, and once calibration is finished the cameras may not move; because their shooting range is limited, moving outdoor scenes are essentially out of reach. The common defect of both modes is that the camera positions are fixed: the cameras are often not at an ideal shooting angle and cannot adjust it automatically according to the human body in the scene.
Disclosure of Invention
The aim of the invention is achieved by the following technical scheme.
In view of the problem of reconstructing human body postures in outdoor scenes, the invention aims to provide an outdoor large-range human body posture reconstruction technique based on automatic real-time camera calibration on mobile platforms. The camera poses can be computed automatically and robustly by combining several kinds of sensor data, completing the inter-camera calibration; the mobile shooting of several platforms can be controlled while the human body model is reconstructed in real time; and after the posture is reconstructed, a suitable shooting angle can be computed from the posture in the field of view, and the mobile platforms driven to the optimal shooting angles.
According to a first aspect of the present invention, there is provided a mobile platform-based outdoor large-scale human body posture reconstruction method, comprising:
performing camera pose calculation and inter-camera calibration using several kinds of sensor data from a plurality of mobile platforms, wherein each mobile platform carries a camera bound to an inertial measurement unit, an artificial visual marker and a satellite positioning system;
the mobile platforms following the photographed human body and building a human body model in real time;
and calculating a suitable shooting angle according to the pose of the human body in the field of view, and controlling each mobile platform to move to the optimal shooting angle for shooting.
Further, the camera pose calculation and inter-camera calibration using the sensor data of the plurality of mobile platforms comprise:
acquiring camera coordinates through the satellite positioning system; acquiring the rotation angle and displacement of each camera relative to its initial position through the inertial measurement unit fixed to the camera; adjusting each camera's angle to shoot the artificial visual markers of the other cameras and calculating the displacement and angle relative to them; and combining these data to calculate the cameras' extrinsic parameters, completing the calibration.
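As an illustration of the extrinsic-parameter computation described above, the sketch below builds a world-to-camera transform from a camera orientation (as an IMU might report it) and a camera center (as satellite positioning might report it). The pure-Python matrix helpers and the yaw-only rotation are simplifying assumptions for clarity, not the patent's implementation:

```python
import math

def rot_z(yaw):
    # rotation about the vertical axis by `yaw` radians
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def transpose(M):
    return [[M[j][i] for j in range(3)] for i in range(3)]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def extrinsics(orientation, center):
    """Build world-to-camera extrinsics (R, t) from the camera's
    orientation in the world (camera-to-world rotation, e.g. from
    the IMU) and its world position C (e.g. from satellite
    positioning): x_cam = R * x_world + t, with R = orientation^T
    and t = -R * C."""
    R = transpose(orientation)
    t = [-x for x in mat_vec(R, center)]
    return R, t
```

For a camera with identity orientation at world position (2, 0, 0), this yields t = (-2, 0, 0): the world origin sits two units from the camera center along its x axis, as expected.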
Further, the camera pose calculation and inter-camera calibration using the sensor data of the plurality of mobile platforms comprise:
measuring the intrinsic parameters of each camera and recording them in a central computer;
assigning an artificial visual marker to each camera;
starting all mobile platforms from the same position and all cameras at the same angle, and initializing each inertial measurement unit at start-up;
and wirelessly transmitting the data of the satellite positioning system, the inertial measurement unit and the artificial visual markers to the central computer, which calibrates the cameras in real time from these data.
Further, the pose of each camera relative to its initialization position and its relative pose with respect to the other cameras are calculated through the inertial measurement unit and the artificial visual markers, and the relative displacement of each camera is calculated through the satellite positioning system.
Further, the mobile platforms following the photographed human body and building a human body model in real time includes:
obtaining the three-dimensional coordinates of the human joint points by triangulation, and reconstructing the three-dimensional human body pose with animation software.
Further, the mobile platforms following the photographed human body and building a human body model in real time includes:
each camera shooting a picture of the scene;
predicting per-person 2D joint coordinates for each frame with a deep learning method;
each camera wirelessly transmitting the obtained 2D joint coordinates to a central computer, the conversion matrices between cameras being obtained from the calibration;
and the central computer calculating the 3D positions of the joint points under each view angle from the camera poses calculated in real time.
Further, calculating a suitable shooting angle according to the pose of the human body in the field of view and controlling the mobile platform to move to the optimal shooting angle includes:
judging from the posture reconstruction result whether the current shooting angle is reasonable; if it is reasonable, the mobile platform does not move, and if it is not, a reasonable shooting position is calculated;
and controlling the mobile platform to reach the target position.
Further, calculating a suitable shooting angle according to the pose of the human body in the field of view and controlling the mobile platform to move to the optimal shooting angle includes:
judging whether the reconstructed human body is occluded or side-on;
if occlusion or a side-on posture is present, calculating a position facing the front of the human body that reduces the occlusion or side-on view as the target position;
and controlling the mobile platform to reach the target position.
According to a second aspect of the present invention, there is also provided a mobile-platform-based outdoor wide-range human body posture reconstruction system, comprising:
a real-time self-calibration module for calculating the relative poses between the cameras carried on the mobile platforms from the combined parameters of the satellite positioning unit, the inertial measurement unit and the artificial visual markers, calculating the transfer matrices and completing camera self-calibration;
a real-time posture detection and reconstruction module for detecting the human posture in the RGB image shot by each camera to obtain the two-dimensional joint coordinates seen by each camera, and calculating the three-dimensional coordinates of the detected joints by triangulation using the data from the self-calibration module, completing posture reconstruction;
and an automatic mobile-platform position optimization module for calculating a suitable shooting angle according to the pose of the human body in the field of view and controlling the mobile platforms to move there for shooting.
As described above, the invention discloses an outdoor large-range human body posture reconstruction technique based on automatic real-time camera calibration on mobile platforms, with the following beneficial effects: the cameras need no manual calibration, so operation is simple; real-time self-calibration allows the three-dimensional joint coordinates to be solved in real time; because several kinds of sensor data are combined during self-calibration, the calibration is robust; and because the cameras ride on mobile platforms, shooting is flexible. Finally, by moving the platforms to better shooting angles, the system obtains better human body pose information.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
Fig. 1 shows a flow chart of the outdoor large-range human body posture reconstruction technique based on automatic real-time camera calibration on mobile platforms, as disclosed in an embodiment of the invention.
FIG. 2 shows a flow chart of a real-time robust camera self-calibration process incorporating multiple sensors disclosed in an embodiment of the present invention.
Fig. 3 shows a flow chart of a mobile real-time human body posture reconstruction process disclosed in an embodiment of the present invention.
FIG. 4 is a flow chart illustrating automatic optimization of mobile platform position according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The invention discloses an outdoor large-range human body posture reconstruction technique based on mobile platforms, which lets the cameras on the platforms self-calibrate in real time so that human body postures over a large outdoor range can be reconstructed in real time. No manual calibration procedure is needed, which makes posture reconstruction of a person moving over a large outdoor range possible. The mobile posture reconstruction system consists of a real-time self-calibration module, a real-time posture detection and reconstruction module, and a module for automatic optimization of the mobile-platform positions. First, after several mobile platforms (for example, three unmanned aerial vehicles) start, the self-calibration module combines satellite positioning, the inertial measurement unit (IMU, a device that measures an object's three-axis attitude angles or angular velocities and accelerations) and the artificial visual markers to compute a transfer matrix, obtaining the relative poses between the cameras carried on the platforms and completing camera self-calibration. Second, the real-time posture detection and reconstruction module detects the human posture in the RGB image shot by each camera, obtains the two-dimensional joint coordinates seen by each camera, and uses the current self-calibration data together with triangulation to compute the three-dimensional coordinates of the detected joints, completing posture reconstruction. Finally, the position-optimization module computes a suitable shooting angle from the pose of the human body in the field of view and drives each mobile platform to it.
Generally, an IMU comprises three single-axis accelerometers and three single-axis gyroscopes. The accelerometers measure the acceleration of the object along three independent axes of the carrier coordinate system, and the gyroscopes measure the angular velocity of the carrier relative to the navigation coordinate system; from the measured angular velocities and accelerations in three-dimensional space, the attitude of the object is computed.
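As a toy illustration of this attitude computation, the snippet below integrates gyroscope yaw-rate samples into a yaw angle. A real IMU fuses all three gyro axes with the accelerometers, so this single-axis rectangular integration is only a hedged sketch; it also shows why drift correction matters, since a small constant rate bias accumulates into a growing angle error:

```python
def integrate_yaw(rates_dps, dt):
    """Integrate gyroscope yaw-rate samples (degrees/second), taken
    every `dt` seconds, into a total yaw angle in degrees."""
    yaw = 0.0
    for r in rates_dps:
        yaw += r * dt  # simple rectangular integration
    return yaw
```

Integrating a true rate of 10 degrees/second for one second recovers a 10-degree turn, but integrating a mere 0.1 degrees/second bias for one minute already accumulates a 6-degree error; this is the drift that the satellite positioning and the visual markers correct in the scheme above.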
Specifically, fig. 1 shows the overall flow of the invention, which comprises 3 key steps: real-time robust camera self-calibration, mobile real-time human body posture reconstruction, and automatic optimization of the mobile-platform positions. This embodiment describes the 3 key steps in detail.
The invention provides an outdoor large-range human body posture reconstruction technique based on automatic real-time camera calibration on mobile platforms; the processing flow, shown in fig. 1, comprises the following steps:
S1, a plurality of mobile platforms (for example, three unmanned aerial vehicles) each carry a camera bound to an IMU, an artificial visual marker and a satellite positioning system, and robust camera pose calculation and camera calibration are carried out automatically by combining the three kinds of sensor data.
First, the three unmanned aerial vehicles take off once from the same position and initialize their IMUs. The camera coordinates are then obtained through satellite positioning; the IMU fixed to each camera gives the camera's rotation angle and displacement relative to the initialized position; each camera adjusts its angle to shoot the artificial markers of the other cameras and calculates its displacement and angle relative to them; and the cameras' extrinsic parameters are calculated from the combined data, completing the calibration.
S2, mobile real-time human body posture reconstruction: the three-dimensional coordinates of the human joint points are obtained by triangulation, and the three-dimensional human body pose is reconstructed with animation software.
S3, after the posture is reconstructed, a suitable shooting angle is calculated according to the posture in the field of view, and the mobile platforms are controlled to move to the optimal shooting angles.
Fig. 2 presents a flow chart of the real-time robust camera self-calibration process combining several sensors. To calibrate the cameras robustly, poses are computed from three sensors. The advantage of three sensors is complementarity: satellite positioning and the visual markers can correct the drift errors an IMU accumulates over long use, while the IMU still provides pose information so that calibration is uninterrupted when satellite signals are poor or no visual marker is in view. Through these complementary advantages, a better calibration is obtained. The cameras are fixed with an identical fixture before the IMUs are initialized, so that the IMU initialization states are consistent. The intrinsic parameters of each camera are taken from the camera itself or measured with a tool before use, and remain essentially unchanged during use. In step 1 of the invention, the three sensors are combined to analyze and calibrate the camera poses comprehensively; the key steps are IMU initialization, binding of the artificial visual markers, and design of the multi-sensor data combination strategy. The process flow, shown in fig. 2, comprises the following steps:
s11, measuring internal parameters of each camera and recording the internal parameters into a central computer.
S12, distributing a specific two-dimensional code or artificial visual identification in other shapes to each camera, and fixing the identification and the cameras together. The relative pose between the shooting camera and the shot camera can be calculated by shooting the mark. And fixing the clamping groove which is exactly matched with the shape of the camera to the ground, and placing the camera into the clamping groove.
S13, initializing the IMU when the cameras start from the same position and the same angle by all the mobile platforms, so that the IMU initialization parameters of all the mobile platforms are the same.
S14, satellite positioning, IMU and artificial visual identification data are transmitted to a central computer in a wireless mode, the pose of the camera to the initialized position and the relative pose of the camera to other cameras can be calculated through the IMU and the artificial visual identification, and the relative displacement of the camera can be calculated through a satellite positioning system. When a certain accuracy is considered to be low, the calculation result of this is discarded. The data is then transmitted wirelessly to a central computer. And (3) synthesizing parameters of three sources, discarding unreliable parameters, and calibrating the camera in real time. And if the three precision values are within the acceptable threshold values, averaging the three parameters to obtain a final value, and obtaining the external parameters between cameras.
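The discard-and-average strategy of S14 can be sketched as follows; the threshold value, the per-sensor accuracy scores and the function name are illustrative assumptions rather than the patent's specification:

```python
ACC_THRESHOLD = 0.5  # illustrative accuracy threshold (metres)

def fuse_positions(estimates):
    """Combine per-sensor position estimates of one camera.
    `estimates` maps sensor name -> (xyz position, accuracy score);
    sources whose accuracy is worse than the threshold are
    discarded, and the survivors are averaged component-wise."""
    kept = [p for p, acc in estimates.values() if acc <= ACC_THRESHOLD]
    if not kept:
        raise ValueError("no sensor estimate is reliable enough")
    n = len(kept)
    return [sum(p[i] for p in kept) / n for i in range(3)]
```

For example, if the GPS and IMU estimates agree near (1, 0, 0) with good accuracy while the marker-based estimate is wildly off with a poor score, the marker estimate is discarded and the other two are averaged, mirroring the "discard unreliable parameters, average the rest" rule above.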
Through step S1, the position information provided by the satellite positioning system, the position and angle information relative to the initial point provided by the IMU, and the artificial visual markers mounted on the other mobile platforms and shot by each camera are used together to calculate the relative poses between the platforms. When one sensor's parameters are unusable, the relative poses between the cameras can still be calculated from the remaining one or two sensors, and calibration succeeds.
In step 2, the invention calculates in real time the 3D coordinates of the human joints within the shooting range. Fig. 3 presents a flow chart of the mobile real-time human body posture reconstruction process. A deep neural network predicts 2D joint points on each single color image, yielding good 2D joint information. Each mobile platform transmits the extracted 2D joint information wirelessly to the central computer, where the 3D joint positions are solved from the camera pose relations obtained in the previous step and the 2D joint coordinates, via epipolar constraints. The key steps are as follows:
s21, each camera shoots a scene picture;
s22, predicting joint point 2d coordinates distinguished by people by using a deep learning method (neural network) for each frame of image;
s23, each camera wirelessly transmits the obtained 2d joint point coordinates to a central computer; and calibrating to obtain a conversion matrix between cameras.
S24, the central computer calculates the 3d position of the articulation point under each view angle according to the camera pose calculated in real time. And resolving the 3d coordinates of the joints by the epipolar set, and reconstructing and displaying the human body posture by using animation software.
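The triangulation in S24 can be illustrated with the classic two-ray midpoint method: each 2D joint detection back-projects to a ray from its camera center, and the 3D joint is taken halfway between the closest points of the two rays. This is a hedged sketch of one standard triangulation technique, not necessarily the exact solver used in the patent:

```python
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return [x - y for x, y in zip(a, b)]
def add(a, b): return [x + y for x, y in zip(a, b)]
def scale(a, k): return [x * k for x in a]

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation of two viewing rays c + s*d
    back-projected from the 2D joint detections of two calibrated
    cameras: returns the 3D point halfway between the closest
    points of the two rays."""
    r = sub(c2, c1)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    denom = a * c - b * b            # ~0 when the rays are parallel
    s = (dot(d1, r) * c - dot(d2, r) * b) / denom
    t = (dot(d1, r) * b - dot(d2, r) * a) / denom
    p1 = add(c1, scale(d1, s))       # closest point on ray 1
    p2 = add(c2, scale(d2, t))       # closest point on ray 2
    return scale(add(p1, p2), 0.5)
```

With camera centers at (0, 0, 0) and (3, 0, 0) and rays that both pass through the point (0, 0, 5), the function recovers (0, 0, 5); with noisy detections the two rays no longer intersect, and the midpoint gives a least-squares-style compromise.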
Through step S2, the plurality of mobile platforms calculate the human pose in real time and model it while following and shooting from several viewpoints. Multiple view angles recover the true pose of the body as far as possible and reduce the pose uncertainty caused by occlusion and similar factors. Real-time analysis and modeling have good timeliness and can feed information back to the people in the scene promptly.
Steps S1 and S2 yield real-time human body posture reconstruction. Fig. 4 presents a flow chart of the automatic optimization of the mobile-platform positions. Because the person is moving, occlusion of the body, side-on postures and similar phenomena occur frequently; when they occur, 2D joint analysis becomes difficult and the accuracy of the three-dimensional joint estimates suffers, so the shooting positions must be adjusted to the specific scene to obtain accurate posture information. The invention therefore designs an automatic position-optimization strategy for the mobile platforms: it continuously judges whether the reconstructed subject in the scene is occluded or side-on; if such a state is detected, positions avoiding the occlusion, or facing the front or back of the body, are calculated; a suitable one is selected as the target position of the current frame and combined with the target position predicted for the next frame; and the mobile platform is controlled to reach that position. The key steps are as follows:
s31, judging whether the shooting angle is reasonable or not according to the human body posture reconstruction result, for example judging whether the reconstructed human body posture is shielded or sideways;
s32, if the shooting angle is reasonable, the unmanned aerial vehicle does not move, and if the shooting angle is not reasonable, for example, if the shielding or the sidestep exists, the front position of the shielding or the human body can be calculated to be used as a target position;
s33, controlling the mobile platform to reach the target position.
Step S3 matters because a fixed or arbitrary shooting angle cannot cope with every specific scene: the human pose differs from scene to scene, and so does the shooting angle that yields the most accurate pose. The invention controls the mobile platforms to move automatically to suitable shooting positions according to the conditions in the scene, so that better human pose information is obtained and a more accurate pose is reconstructed.
Starting from robust automatic camera calibration and a sensible shooting method, the invention develops an outdoor large-range human body posture reconstruction system based on automatic real-time camera calibration on mobile platforms. In use, only the shooting center needs to be set; no individual platform needs to be operated and no camera on a platform needs manual calibration. The system controls each platform automatically, combines the satellite positioning system, the IMU and the artificial visual markers shot by the cameras to calculate the camera poses, and completes the calibration. It also adjusts the platforms automatically to suitable shooting positions according to the pose of the human body in the shooting range.
The method requires no manual camera calibration and is simple to operate. Real-time self-calibration allows the three-dimensional joint coordinates to be solved in real time, and because several kinds of sensor data are combined during self-calibration, the calibration is robust. Since the cameras ride on mobile platforms, shooting is flexible; by moving the platforms to better shooting angles, the system obtains better human pose information. The system can assist outdoor athlete motion analysis, abnormal-behavior detection in the security field, and the like.
Other advantages and effects of the present invention will be readily apparent to those skilled in the art from the disclosure of the present invention.
The present invention is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present invention are intended to be included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. An outdoor large-range human body posture reconstruction method based on a mobile platform is characterized by comprising the following steps of:
using multiple sensor data from multiple mobile platforms for camera pose calculation and inter-camera calibration, comprising: acquiring camera coordinates through a satellite positioning system, acquiring the rotation angle and displacement of each camera relative to its initialization position through an inertial measurement unit fixed to the camera, adjusting the angle of each camera to shoot the artificial vision identifiers of other cameras and calculating its displacement and angle relative to the other cameras, and calculating the external parameters of the cameras by combining these data to complete calibration; each mobile platform carries a camera bound with an inertial measurement unit, an artificial vision identifier and a satellite positioning system;
the moving platform follows the photographed human body gesture, and builds a human body model in real time;
and calculating a proper shooting angle according to the pose of the human body in the visual field, and controlling the mobile platform to move to the optimal shooting angle for shooting.
2. The method for reconstructing the outdoor large-range human body posture based on the mobile platform of claim 1, wherein
calculating the camera poses and performing inter-camera calibration using the multiple sensor data of the plurality of mobile platforms comprises:
measuring the internal parameters of each camera and recording them into a central computer;
assigning an artificial vision identifier to each camera;
all mobile platforms starting from the same position with their cameras at the same angle, each inertial measurement unit being initialized at startup;
and wirelessly transmitting the data of the satellite positioning system, the inertial measurement unit and the artificial vision identifier to the central computer, and calibrating the cameras in real time according to the data.
3. The method for reconstructing the outdoor large-range human body posture based on the mobile platform according to claim 2, wherein
the pose of each camera relative to its initialized position and its relative pose with respect to the other cameras are calculated through the inertial measurement unit and the artificial vision identifier, and the relative displacement of the cameras is calculated through the satellite positioning system.
4. The method for reconstructing the outdoor large-range human body posture based on the mobile platform of claim 1, wherein
the mobile platforms following and shooting the human body posture and building the human body model in real time comprises:
obtaining the three-dimensional coordinates of the human joint points by triangulation, and reconstructing the three-dimensional human body pose through animation software.
5. The method for reconstructing the outdoor large-range human body posture based on the mobile platform of claim 1, wherein
the mobile platforms following and shooting the human body posture and building the human body model in real time comprises:
each camera shooting a scene picture;
predicting, for each frame of image, the 2d coordinates of the joint points, distinguished by person, using a deep learning method;
each camera wirelessly transmitting the obtained 2d joint point coordinates to a central computer, the conversion matrices among the cameras being obtained through calibration;
the central computer calculating the 3d positions of the joint points from the view angles using the camera poses calculated in real time.
6. The method for reconstructing the outdoor large-range human body posture based on the mobile platform of claim 1, wherein
calculating a suitable shooting angle according to the human body pose in the field of view, and controlling the mobile platform to move to the optimal shooting angle for shooting, comprises:
judging whether the shooting angle is reasonable according to the human body posture reconstruction result; if the shooting angle is reasonable, the mobile platform does not move; if not, calculating a reasonable shooting position;
and controlling the mobile platform to reach the target position.
7. The method for reconstructing the outdoor large-range human body posture based on the mobile platform of claim 1, wherein
calculating a suitable shooting angle according to the human body pose in the field of view, and controlling the mobile platform to move to the optimal shooting angle for shooting, comprises:
judging whether the reconstructed human body posture is occluded or sideways;
if occlusion or a sideways orientation is present, calculating a position facing the front of the human body that reduces the occlusion or sideways orientation as the target position;
and controlling the mobile platform to reach the target position.
8. An outdoor large-range human body posture reconstruction system based on a mobile platform, characterized by comprising:
a real-time self-calibration module for solving the relative poses between the cameras carried on the mobile platforms through the parameters of the satellite positioning system, the inertial measurement unit and the artificial vision identifier, calculating the transfer matrices and completing self-calibration of the cameras, comprising: acquiring camera coordinates through a satellite positioning system, acquiring the rotation angle and displacement of each camera relative to its initialization position through an inertial measurement unit fixed to the camera, adjusting the angle of each camera to shoot the artificial vision identifiers of other cameras and calculating its displacement and angle relative to the other cameras, and calculating the external parameters of the cameras by combining these data to complete calibration;
a real-time posture detection and reconstruction module for detecting the human body posture in the RGB image shot by each camera to obtain the two-dimensional joint point coordinates of the human body posture shot by each camera, and calculating the three-dimensional coordinates of the detected human body joint points by triangulation using the data calibrated by the real-time self-calibration module, so as to complete posture reconstruction;
and an automatic mobile platform position optimization module for calculating a suitable shooting angle according to the human body pose in the field of view and controlling the mobile platform to move to that shooting angle for shooting.
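The claims above do not spell out how the occlusion-or-sideways judgment of claim 7 is computed. One illustrative heuristic (an assumption for exposition, not the patented method) checks the angle between the reconstructed torso's frontal normal and the ray toward the camera, flagging the view as sideways when that angle is too oblique:

```python
import numpy as np

def is_sideways(l_shoulder, r_shoulder, cam_center, thresh_deg=60.0):
    """Heuristic sideways test from reconstructed 3d shoulder joints.

    Assumes a z-up world; the chest normal is taken perpendicular to the
    shoulder line in the horizontal plane (sign handled by abs() below).
    """
    l = np.asarray(l_shoulder, dtype=float)
    r = np.asarray(r_shoulder, dtype=float)
    normal = np.cross(r - l, [0.0, 0.0, 1.0])    # horizontal chest normal
    to_cam = np.asarray(cam_center, dtype=float) - (l + r) / 2.0
    cosang = abs(np.dot(normal, to_cam)) / (
        np.linalg.norm(normal) * np.linalg.norm(to_cam))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) > thresh_deg

# Camera straight ahead of the chest: frontal view, not sideways.
frontal = is_sideways([-0.2, 0, 1.5], [0.2, 0, 1.5], [0, -3, 1.5])
# Camera off to the side of the body: sideways view.
side = is_sideways([-0.2, 0, 1.5], [0.2, 0, 1.5], [3, 0, 1.5])
```

When the test fires, the platform's target position can be chosen along the chest-normal ray, matching the claim's idea of moving toward the front of the body.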
CN201911360084.9A 2019-12-25 2019-12-25 Outdoor large-range human body posture reconstruction method based on mobile platform Active CN111199576B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911360084.9A CN111199576B (en) 2019-12-25 2019-12-25 Outdoor large-range human body posture reconstruction method based on mobile platform


Publications (2)

Publication Number Publication Date
CN111199576A CN111199576A (en) 2020-05-26
CN111199576B (en) 2023-08-18

Family

ID=70746614

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911360084.9A Active CN111199576B (en) 2019-12-25 2019-12-25 Outdoor large-range human body posture reconstruction method based on mobile platform

Country Status (1)

Country Link
CN (1) CN111199576B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113432603B (en) * 2021-06-25 2022-09-16 中国船舶重工集团公司第七0七研究所 Method for measuring relative pose of mother platform and son platform in complex environment
CN114283447B (en) * 2021-12-13 2024-03-26 北京元客方舟科技有限公司 Motion capturing system and method

Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101086681A (en) * 2006-06-09 2007-12-12 中国科学院自动化研究所 Game control system and method based on stereo vision
CN101482928A (en) * 2009-02-24 2009-07-15 清华大学 Behavior description method and apparatus
CN102222361A (en) * 2010-04-06 2011-10-19 清华大学 Method and system for capturing and reconstructing 3D model
KR20130088507A (en) * 2012-01-31 2013-08-08 한국전자통신연구원 Apparatus and method for estimating skeleton structure of human body
CN104461013A (en) * 2014-12-25 2015-03-25 中国科学院合肥物质科学研究院 Human body movement reconstruction and analysis system and method based on inertial sensing units
CN104508704A (en) * 2012-05-25 2015-04-08 波可斯有限公司 Body measurement
CN104700433A (en) * 2015-03-24 2015-06-10 中国人民解放军国防科学技术大学 Vision-based real-time general movement capturing method and system for human body
CN105847684A (en) * 2016-03-31 2016-08-10 深圳奥比中光科技有限公司 Unmanned aerial vehicle
CN106959747A (en) * 2017-02-14 2017-07-18 深圳奥比中光科技有限公司 Three-dimensional human body measurement method and its equipment
CN107154073A (en) * 2017-04-17 2017-09-12 嘉兴超维信息技术有限公司 A kind of three-dimensional modeling device and method
CN107239728A (en) * 2017-01-04 2017-10-10 北京深鉴智能科技有限公司 Unmanned plane interactive device and method based on deep learning Attitude estimation
WO2018103407A1 (en) * 2016-12-09 2018-06-14 中山大学 Unmanned aerial vehicle calibration method and system based on colour 3d calibration object
CN109003301A (en) * 2018-07-06 2018-12-14 东南大学 A kind of estimation method of human posture and rehabilitation training system based on OpenPose and Kinect
CN109032329A (en) * 2018-05-31 2018-12-18 中国人民解放军军事科学院国防科技创新研究院 Space Consistency keeping method towards the interaction of more people's augmented realities
CN109242960A (en) * 2018-09-15 2019-01-18 武汉智觉空间信息技术有限公司 Using the human body Real-time modeling set system and its modeling method of double Kinect and rotating platform
CN109242887A (en) * 2018-07-27 2019-01-18 浙江工业大学 A kind of real-time body's upper limks movements method for catching based on multiple-camera and IMU
CN109801374A (en) * 2019-01-14 2019-05-24 盾钰(上海)互联网科技有限公司 A kind of method, medium and system reconstructing threedimensional model by multi-angle image collection
CN109854893A (en) * 2018-12-06 2019-06-07 东南大学 A kind of adjustable multiple views three-dimensional reconstruction acquisition system
CN110032278A (en) * 2019-03-29 2019-07-19 华中科技大学 A kind of method for recognizing position and attitude, the apparatus and system of human eye attention object
CN110308459A (en) * 2019-06-30 2019-10-08 南京理工大学 The non-cooperation satellite relative pose measurement method of independent of model
CN110322544A (en) * 2019-05-14 2019-10-11 广东康云科技有限公司 A kind of visualization of 3 d scanning modeling method, system, equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101221451B1 (en) * 2008-12-22 2013-01-11 한국전자통신연구원 Methodlogy of animatable digital clone creation from multi-view images capturing dynamic performance
EP2383696A1 (en) * 2010-04-30 2011-11-02 LiberoVision AG Method for estimating a pose of an articulated object model
US10929654B2 (en) * 2018-03-12 2021-02-23 Nvidia Corporation Three-dimensional (3D) pose estimation from a monocular camera
CN108921874B (en) * 2018-07-04 2020-12-29 百度在线网络技术(北京)有限公司 Human body tracking processing method, device and system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Deng Baosong et al. Three-dimensional reconstruction of line features in multi-view images. Journal of Computer-Aided Design & Computer Graphics. 2007, (6), 713-718. *

Also Published As

Publication number Publication date
CN111199576A (en) 2020-05-26

Similar Documents

Publication Publication Date Title
CN106780601B (en) Spatial position tracking method and device and intelligent equipment
CN106529495B (en) Obstacle detection method and device for aircraft
CN111156998B (en) Mobile robot positioning method based on RGB-D camera and IMU information fusion
CN106643699B (en) Space positioning device and positioning method in virtual reality system
JP5992184B2 (en) Image data processing apparatus, image data processing method, and image data processing program
CN108022302B (en) Stereo display device of Inside-Out space orientation's AR
CN103020952A (en) Information processing apparatus and information processing method
CN108227929B (en) Augmented reality lofting system based on BIM technology and implementation method
KR101220527B1 (en) Sensor system, and system and method for preparing environment map using the same
Zhang et al. A universal and flexible theodolite-camera system for making accurate measurements over large volumes
JP7203105B2 (en) CALIBRATION DEVICE, MONITORING DEVICE, WORKING MACHINE, AND CALIBRATION METHOD FOR IMAGE SENSOR
CN111220126A (en) Space object pose measurement method based on point features and monocular camera
CN111199576B (en) Outdoor large-range human body posture reconstruction method based on mobile platform
CN111750853A (en) Map establishing method, device and storage medium
Zienkiewicz et al. Extrinsics autocalibration for dense planar visual odometry
US20070076096A1 (en) System and method for calibrating a set of imaging devices and calculating 3D coordinates of detected features in a laboratory coordinate system
JP2022089269A (en) Calibration device and calibration method
WO2022000713A1 (en) Augmented reality self-positioning method based on aviation assembly
CN112129263B (en) Distance measurement method of separated mobile stereo distance measurement camera
JP2006234703A (en) Image processing device, three-dimensional measuring device, and program for image processing device
CN111307146A (en) Virtual reality wears display device positioning system based on binocular camera and IMU
CN106959051B (en) The automatized calibration method of weapon-aiming system based on spatial perception location technology
CN113587934A (en) Robot, indoor positioning method and device and readable storage medium
KR20190063153A (en) System and method for simultaneous reconsttuction of initial 3d trajectory and velocity using single camera images
KR20170074388A (en) System and method for high precise positioning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant