CN116518959A - Positioning method and device of space camera based on combination of UWB and 3D vision - Google Patents

Positioning method and device of space camera based on combination of UWB and 3D vision

Info

Publication number
CN116518959A
Authority
CN
China
Prior art keywords
camera
data
uwb
positioning
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310551049.5A
Other languages
Chinese (zh)
Inventor
王凯飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiangsu Technology Co ltd
Original Assignee
Beijing Xiangsu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiangsu Technology Co ltd filed Critical Beijing Xiangsu Technology Co ltd
Priority to CN202310551049.5A priority Critical patent/CN116518959A/en
Publication of CN116518959A publication Critical patent/CN116518959A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005: Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • G01C21/10: Navigation by using measurements of speed or acceleration
    • G01C21/12: Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16: Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165: Inertial navigation combined with non-inertial navigation instruments
    • G01C21/1656: Inertial navigation combined with non-inertial navigation instruments with passive imaging devices, e.g. cameras
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S1/00: Beacons or beacon systems transmitting signals having a characteristic or characteristics capable of being detected by non-directional receivers and defining directions, positions, or position lines fixed relatively to the beacon transmitters; Receivers co-operating therewith
    • G01S1/02: Beacons or beacon systems using radio waves
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W64/00: Locating users or terminals or network equipment for network management purposes, e.g. mobility management
    • H04W64/006: Locating users or terminals with additional information processing, e.g. for direction or speed determination
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in wireless communication networks

Abstract

The invention relates to the technical field of spatial positioning, and in particular to a positioning method and device for a space camera based on the combination of UWB and 3D vision. Pose information acquired by an IMU pose sensor built into the 3D camera's sensor is first matched with a target image and its depth information acquired by the 3D sensor, and is then matched with the data of a second IMU pose sensor contained in a UWB receiving beacon mounted on the 3D camera; the two IMU pose sensors are arranged side by side and in the same direction, and through matching and fusion with an extended Kalman filtering algorithm, high-precision positioning of the 3D camera in the target direction, within 1 mm-5 mm in space, is obtained. A plurality of UWB positioning beacons are uniformly distributed at preset positions around the entire perimeter of the ground field and communicate with the UWB receiving beacon of the aerial 3D camera; by presetting the numbers of the UWB positioning beacons, the number of the beacon that connects to the computer first, and the order in which the remaining beacons connect, the computer acquires an omnidirectional, accurate position of the aerial 3D camera over the field in real time. This not only reduces the debugging cost but also satisfies the demand for accurate positioning of cameras shooting large venues.

Description

Positioning method and device of space camera based on combination of UWB and 3D vision
Technical Field
The invention relates to the technical field of space positioning, in particular to a positioning method and device of a space camera based on combination of UWB and 3D vision.
Background
During the shooting of broadcast television virtual programs, the position of the real camera must be known in the virtual space, so the camera must provide both attitude data and spatial position data. A positioning system is installed on the top or bottom of the camera and outputs data from which the camera position is obtained. Typically, infrared beams are emitted into the space and strike the surfaces of reflectors; after the reflections are received, the calculated depth information is sent to a software system that computes the specific position of the camera. The field environment, however, may contain very complex objects that occlude the positioning system, and infrared positioning or visual positioning alone requires dozens or even hundreds of pieces of reflection positioning information to calculate the accurate position of the camera, which leads to tedious debugging as well as positioning loss and inaccuracy caused by external interference. To avoid loss of signal data, many fluorescent reflective tags must be attached to the ceiling or floor of the studio so that the infrared system and the camera can clearly obtain the position. This is very inconvenient: debugging usually takes 2 to 3 hours, which is time-consuming and, especially for large venues such as stadiums, generates huge debugging costs. UWB positioning is also used at present, but its positioning accuracy in 3D space is only between 5 mm and 5 cm. A method for accurately and quickly positioning a camera in a large space is therefore urgently needed.
Disclosure of Invention
The invention discloses a positioning method and device of a space camera based on combination of UWB and 3D vision, and aims to solve the technical problems in the prior art.
The invention adopts the following technical scheme:
a positioning method of a space camera based on combination of UWB and 3D vision comprises the following steps:
s1, acquiring a plurality of UWB positioning beacon information uniformly distributed at preset positions around the whole periphery of a ground field;
s2, acquiring position data and pose data of a 3D camera positioned above the field, wherein the steps comprise: acquiring first data information acquired by a first IMU pose sensor arranged in a double lens of the 3D camera;
acquiring second data information acquired by a UWB receiving beacon which is arranged on the 3D camera and comprises a second IMU pose sensor; the first IMU pose sensor and the second IMU pose sensor are placed side by side and in the same direction, so that the first data information and the second data information are both data information of the 3D camera when the lens acquires a certain target image and depth information thereof;
s3, carrying out relevant data matching and EKF fusion on the first data information and the second data information to obtain the fused data information of the 3D camera;
s4, each UWB positioning beacon respectively acquires the data information of the space of the 3D camera corresponding to the respective position match in real time through the connected UWB receiving beacon, wherein the data information comprises position data and direction;
s5, sequentially acquiring information transmitted by a plurality of UWB positioning beacons according to a preset sequence, and acquiring the position data and the pose data of the 3D camera in all directions in space in real time;
s6, correcting the omnibearing position data and the position and pose data through an extended Kalman filtering algorithm and then converting the corrected omnibearing position data and the position and pose data into a preset data format.
In some embodiments, in S1, the horizontal spacing between adjacent UWB positioning beacons should be set to be no greater than 200m, and the height of the UWB positioning beacons from the ground surface should be no greater than 50m.
In some embodiments, in S2, the step of acquiring the first data information includes the following steps:
respectively acquiring a certain target image and depth information thereof acquired by a 3D sensor in the double lens, and matching and fusing the target image and the depth information thereof to form a unique target image and the depth information thereof;
and acquiring data information acquired by an IMU arranged in the 3D sensor, and correspondingly matching with the unique target image and the depth information thereof to acquire the first data information of the space of the 3D camera in the target direction.
In some embodiments, in S2, the data acquisition process for the first data information and the second data information is given by formulas (1)-(9):
Coordinate information of the 3D camera position is acquired with respect to the center point (origin) of the world coordinate system, as in formula (1):
g_w = R_wb · g_b    (1)
where g denotes gravity, g_b is the gravity vector expressed in the camera (body) coordinate system, and g_w is the gravity vector expressed in the world coordinate system.
The rotation matrix between the world coordinate system and the camera coordinate system is parameterized by the Euler angles, as in formula (2):
R_wb = | cψ·cθ   cψ·sθ·sφ − sψ·cφ   cψ·sθ·cφ + sψ·sφ |
       | sψ·cθ   sψ·sθ·sφ + cψ·cφ   sψ·sθ·cφ − cψ·sφ |    (2)
       | −sθ     cθ·sφ              cθ·cφ            |
where Rwb is the Euler-angle rotation matrix, φ is the roll angle, θ is the pitch angle, ψ is the yaw angle, c denotes the trigonometric cosine and s denotes the trigonometric sine. The world coordinate system provides the reference frame for the camera coordinate system, with the x-axis and y-axis tangential to the ground and the z-axis pointing downward, so the gravity vector of formula (3) is substituted:
g_w = [0 0 g]^T    (3)
The triaxial accelerometer gives the components of gravitational acceleration expressed in the camera reference frame, see formula (4):
g_b = [g_bx g_by g_bz]^T    (4)
where T denotes the transpose. Substituting the gravity vector, the two expressions are therefore related through the rotation matrix, as in formula (5):
g_b = R_wb^T · g_w    (5)
The pitch angle θ is obtained as in formula (6):
θ = arctan( −g_bx / sqrt(g_by² + g_bz²) )    (6)
and the roll angle φ as in formula (7):
φ = arctan2( g_by, g_bz )    (7)
The equations for acquiring the velocity and the position are shown in formula (8) and formula (9):
v_b(k+1) = v_b(k) + a_b(k) · Δt    (8)
s_b(k+1) = s_b(k) + v_b(k) · Δt + (1/2) · a_b(k) · Δt²    (9)
where a_b is the acceleration, v_b the velocity, s_b the position, k and k+1 denote successive sampling instants, and Δt is the sampling period; the gyroscope angular rate is integrated in the same way to obtain the orientation.
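As a minimal, self-contained illustration of formulas (6)-(9), the following Python sketch computes pitch and roll from an accelerometer reading and dead-reckons velocity and position over one sampling interval; the numeric values are assumed for the example only and are not parameters of the disclosed device.

```python
import math

def pitch_roll_from_accel(g_b):
    """Pitch/roll from the gravity vector in the body frame (z-down convention), formulas (6)-(7)."""
    gx, gy, gz = g_b
    pitch = math.atan2(-gx, math.hypot(gy, gz))   # theta, formula (6)
    roll = math.atan2(gy, gz)                     # phi, formula (7)
    return pitch, roll

def dead_reckon(s, v, a, dt):
    """Velocity/position propagation over one sample, formulas (8)-(9)."""
    v_next = [vi + ai * dt for vi, ai in zip(v, a)]
    s_next = [si + vi * dt + 0.5 * ai * dt * dt for si, vi, ai in zip(s, v, a)]
    return s_next, v_next

if __name__ == "__main__":
    g_b = (0.0, 0.85, 9.77)            # example accelerometer reading (m/s^2), assumed values
    pitch, roll = pitch_roll_from_accel(g_b)
    print(f"pitch={math.degrees(pitch):.2f} deg, roll={math.degrees(roll):.2f} deg")
    s, v = dead_reckon(s=[0, 0, 0], v=[0.1, 0, 0], a=[0.2, 0, 0], dt=0.01)
    print("position:", s, "velocity:", v)
```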
In some embodiments, in S3, relevant data matching and EKF fusion are performed on the first data information and the second data information to obtain the fused accurate data information of the 3D camera, and the calculation process includes:
The EKF estimates the position and orientation by applying prediction and correction with a nonlinear system model, see formulas (10) and (11):
x_k = f(x_{k−1}, u_{k−1}) + w_{k−1}    (10)
z_k = h(x_k) + v_k    (11)
where x_k is the state vector, u_k is the known control input, w_k is the process noise, v_k is the measurement noise, z_k is the measurement vector and H_k is the observation matrix at time k; the process noise w_k has covariance matrix Q and the measurement noise v_k has covariance matrix R, and both are assumed to be mutually independent zero-mean Gaussian white-noise processes. The EKF is the extension of the Kalman filter to nonlinear systems and is used to estimate the position and orientation by alternating prediction and correction with the nonlinear system model.
The time (prediction) update equations are shown in formulas (12)-(13):
x̂_k⁻ = A·x̂_{k−1} + B·u_{k−1}    (12)
P_k⁻ = A·P_{k−1}·A^T + Q    (13)
where A is the state-transition matrix, B is the control matrix, P_k⁻ is the a-priori estimate covariance matrix that measures the accuracy of the state estimate, x̂_{k−1} is the state estimate at time k−1 and P_{k−1} is the estimate covariance at time k−1.
The measurement update equations are shown in formulas (14)-(15):
x̂_k = x̂_k⁻ + K_k·(z_k − H·x̂_k⁻)    (14)
P_k = (I − K_k·H)·P_k⁻    (15)
where K_k is the Kalman filtering gain, z_k is the sampled observation, H is the observation model that maps the true state space to the observation space, H·x̂_k⁻ is the predicted observation and P_k is the a-posteriori estimate covariance matrix.
The Kalman gain is given by formula (16):
K_k = P_k⁻·H^T·(H·P_k⁻·H^T + R)⁻¹    (16)
where H is the Jacobian matrix, R is the observation noise covariance and P_k⁻ is the prediction error covariance in the continuous-time case.
The Jacobian is the partial derivative of the measurement function h with respect to the state x, evaluated at the a-priori state estimate x̂_k⁻, as shown in formula (17):
H_k = ∂h/∂x |_{x = x̂_k⁻}    (17)
where h is the measurement function and H_k is its Jacobian with respect to the state x.
The motion model and the observation model in the EKF are established from kinematics, using the accelerometer data as control input and the gyroscope data and vision data as measurements; the model process noise and measurement noise covariances are appropriately adjusted, and the state vector is shown in formula (18):
x = [p v q ω]^T    (18)
where p and v are the state variables corresponding to the 3D position and velocity of the IMU in the world coordinate system, q is the orientation quaternion corresponding to the rotation matrix R and ω is the gyroscope angular velocity.
The fused state is obtained through the fusion transformation matrix of formula (19), and the observation matrix and the matrix data from the gyroscope are computed as in formulas (20)-(22), where H is the observation matrix checked against the computed measurement data.
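The prediction and update steps of formulas (12)-(16) can be illustrated with the small linearized example below (NumPy, one-axis constant-acceleration model); the matrices A, B, H, Q, R and the measurement values are assumptions chosen only to show the structure of the filter, not the parameters used by the device.

```python
import numpy as np

def ekf_step(x, P, u, z, A, B, H, Q, R):
    """One predict/update cycle, formulas (12)-(16), in linearized form."""
    # Time (prediction) update, formulas (12)-(13)
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # Kalman gain, formula (16)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    # Measurement update, formulas (14)-(15)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

if __name__ == "__main__":
    dt = 0.01
    A = np.array([[1.0, dt], [0.0, 1.0]])   # position/velocity transition
    B = np.array([[0.5 * dt**2], [dt]])     # accelerometer used as control input
    H = np.array([[1.0, 0.0]])              # position measurement (e.g. from UWB/vision)
    Q = np.diag([1e-4, 1e-3])               # process noise covariance (assumed)
    R = np.array([[4e-4]])                  # measurement noise covariance (assumed)
    x = np.array([0.0, 0.0]); P = np.eye(2)
    for z in ([0.001], [0.0025], [0.004]):  # assumed position measurements
        x, P = ekf_step(x, P, u=np.array([0.3]), z=np.array(z), A=A, B=B, H=H, Q=Q, R=R)
    print("state estimate:", x)
```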
In some embodiments, in S5, the step of sequentially obtaining the information transmitted by the plurality of UWB positioning beacons according to a preset sequence includes:
sequentially numbering all UWB positioning beacons uniformly distributed around the whole periphery of the field, and storing the UWB positioning beacons in a computer;
according to a preset program, acquiring the number of the UWB positioning beacon that connects with the computer first, and the sequence in which all the remaining numbered beacons connect in turn;
and sequentially acquiring information of UWB receiving beacons connected with the UWB positioning beacons according to a preset sequence so as to acquire the position data and the pose data of the 3D camera in all directions in space in real time.
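The numbering and polling order described in S5 can be sketched as follows; the beacon objects, the read_camera_data callable and the returned values are hypothetical stand-ins for the real UWB hardware interface, used here only to show the ordering logic.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class UwbBeacon:
    number: int                            # preset beacon number (e.g. 1..12)
    read_camera_data: Callable[[], dict]   # returns data relayed by the UWB receiving beacon

def poll_in_preset_order(beacons: List[UwbBeacon], first_number: int = 1) -> List[dict]:
    """Poll beacons starting from the one that connects to the computer first, then in number order."""
    ordered = sorted(beacons, key=lambda b: (b.number - first_number) % len(beacons))
    return [b.read_camera_data() for b in ordered]

if __name__ == "__main__":
    # Twelve beacons returning placeholder data for the example.
    beacons = [UwbBeacon(n, lambda n=n: {"beacon": n, "range_m": 42.0}) for n in range(1, 13)]
    for sample in poll_in_preset_order(beacons, first_number=1)[:3]:
        print(sample)
```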
In some embodiments, in S6, the step of correcting the omnidirectional position data and the pose data by using an extended kalman filter algorithm includes:
the roll- and pitch-fused IMU data are corrected through the extended Kalman filtering algorithm, see formulas (23) and (24);
the rotation matrix is an orthogonal matrix: given two coordinate frames u and v, the rotation matrix rotates a vector x from the v frame to the u frame, and t is the translation between the two frames;
a unit quaternion is a four-dimensional representation of orientation, and a rotation may be defined using the unit quaternion, as in formula (25):
q = [cos(α/2), e_x·sin(α/2), e_y·sin(α/2), e_z·sin(α/2)]^T    (25)
where q is the unit quaternion, e = [e_x e_y e_z]^T is the rotation axis and α is the rotation angle; the orientation is computed as the quaternion that rotates the gravity vector from the earth coordinate system into the sensor coordinate system, the gravity vector in the sensor frame being the accelerometer reading.
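A small sketch of formula (25) follows: a unit quaternion is built from a rotation axis and angle and applied to rotate the gravity vector. It is purely illustrative, with assumed numbers; the 10-degree roll and the gravity value are example inputs, not measured data.

```python
import math

def axis_angle_quaternion(axis, angle):
    """Unit quaternion q = [cos(a/2), e*sin(a/2)] for a rotation of `angle` about unit `axis` (formula (25))."""
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0), axis[0] * s, axis[1] * s, axis[2] * s)

def rotate_vector(q, v):
    """Rotate vector v by unit quaternion q via q * (0, v) * conjugate(q)."""
    w, x, y, z = q
    vx, vy, vz = v
    # quaternion product q * (0, v)
    rw, rx, ry, rz = (-x*vx - y*vy - z*vz,
                       w*vx + y*vz - z*vy,
                       w*vy - x*vz + z*vx,
                       w*vz + x*vy - y*vx)
    # ... multiplied by conjugate(q) = (w, -x, -y, -z); only the vector part is returned
    return (rw*-x + rx*w + ry*-z - rz*-y,
            rw*-y - rx*-z + ry*w + rz*-x,
            rw*-z + rx*-y - ry*-x + rz*w)

if __name__ == "__main__":
    q = axis_angle_quaternion((1.0, 0.0, 0.0), math.radians(10))  # 10 degree rotation about x, assumed
    print(rotate_vector(q, (0.0, 0.0, 9.81)))                     # gravity vector rotated by q
```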
In some embodiments, the extended Kalman filter may be represented by a dynamic-system block diagram of the nonlinear state-space model.
The invention also discloses a positioning device of the space camera based on the combination of UWB and 3D vision, which is used for realizing the positioning method of the space camera based on the combination of UWB and 3D vision, and comprises the following steps:
the image sensor module is used for acquiring a target image and depth information of the target image in real time;
the IMU attitude sensor, used for acquiring the attitude of the 3D camera in space, including tilt, rotation and pitch data;
the UWB receiving beacon, used for transmitting the position data and pose data of the 3D camera in space to the UWB positioning beacons through the carrier-free communication technology;
the matching module, used for matching a target image and its depth information acquired by the 3D sensors in the dual lens; for correspondingly matching the data information collected by the IMU attitude sensor arranged in the 3D sensor with the corresponding target image and its depth information, to obtain the first data information of the position and direction of the 3D camera in space; and for correspondingly matching, with the first data information, the second data information collected by the IMU attitude sensor in the UWB receiving beacon of the 3D camera at the moment that target image and its depth information are acquired;
the UWB positioning beacons, arranged at preset positions around the ground field, used for ultra-wideband radio positioning and for receiving the UWB receiving beacon;
the calculation module, used for calculating the data according to the program instructions;
the 3D camera, provided with a dual lens containing 3D sensors, with an IMU attitude sensor arranged in the lens and a UWB receiving beacon arranged on the camera body, an IMU attitude sensor being arranged in the UWB receiving beacon, the two IMU attitude sensors being arranged side by side and in the same direction, and used for acquiring a target image and its depth information and the accurate position data and pose data of the 3D camera in space in the target direction;
the acquisition module, used for acquiring data information, including acquiring the numbers of the preset UWB positioning beacons, the number of the UWB positioning beacon that connects with the computer first, and the sequence in which the numbers connect with the computer in turn;
the memory is used for storing data information and computer instructions;
the processor, which, when executing the computer program instructions, implements: acquiring a target image and its depth information in real time; acquiring the attitude of the 3D camera in space, including tilt, rotation and pitch data; matching a target image and its depth information acquired by the 3D sensors in the dual lens; correspondingly matching the data information collected by the IMU attitude sensor arranged in the 3D sensor with the corresponding target image and its depth information, to obtain the first data information of the position and direction of the 3D camera in space; correspondingly matching, with the first data information, the second data information collected by the IMU attitude sensor in the UWB receiving beacon of the 3D camera at the moment that target image and its depth information are acquired; transmitting the position data and pose data of the 3D camera in space to the UWB positioning beacons through the carrier-free communication technology; acquiring the numbers of the preset UWB positioning beacons, the number of the UWB positioning beacon that connects with the computer first, and the sequence in which the numbers connect with the computer in turn; and calculating the data according to the program instructions;
and the computer, used for controlling the operation of the processor according to the instructions and executing the positioning method of the space camera based on the combination of UWB and 3D vision.
Advantageous effects
The invention discloses a positioning method and device for a space camera based on the combination of UWB and 3D vision. Compared with the prior art, the invention has the following advantages:
The positioning method and device provide a 3D camera whose dual lens contains an IMU pose sensor; the matching module matches a target image and its depth information acquired by the 3D sensor in the dual lens with the pose information obtained by this IMU pose sensor, yielding the first spatial data information of the 3D camera in the target direction. An IMU pose sensor inside a UWB receiving beacon mounted on the 3D camera acquires the second spatial data information of the camera; the two IMUs are arranged side by side, in the same direction and at the minimum distance, and are treated as a single center point in the calculation. The matching module and the calculation module correspondingly match and then fuse the first data information and the second data information, obtaining high-precision positioning of the 3D camera in space within 1 mm-5 mm. A plurality of UWB positioning beacons are uniformly distributed at preset positions around the entire perimeter of the ground field, a UWB receiving beacon containing an IMU pose sensor is fixed on the aerial 3D camera, and the UWB positioning beacons connect with the computer in sequence; a creation module creates the numbers of the UWB positioning beacons, the number of the beacon that connects with the computer first, and the order in which the remaining beacons connect, so that the computer can acquire the omnidirectional position data and pose data of the aerial 3D camera over the field in real time. The data are converted into FreeD-D1 data, the broadcast television industry standard protocol, so that real camera data are obtained in the virtual environment for producing AR and XR virtual-reality content. The method is not limited by on-site occlusions, the debugging time of the whole system is reduced to 45 minutes or less, the debugging cost is greatly reduced, and the demand for accurate positioning of cameras shooting large sports venues is met.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments are briefly described below and form a part of the present invention; the exemplary embodiments of the present invention and their description illustrate the present invention and do not constitute undue limitations of the present invention. In the drawings:
FIG. 1 is a flowchart of a positioning method of a spatial camera based on combination of UWB and 3D vision according to an embodiment of the present invention;
FIG. 2 is a block diagram of a dynamic system using a nonlinear spatial model for extended Kalman filtering;
FIG. 3 is a schematic structural diagram of a technical scheme of UWB positioning beacons uniformly distributed around the ground;
fig. 4 is a schematic structural diagram of a technical scheme of a positioning device of a space camera based on combination of UWB and 3D vision.
In the figure:
a 3D camera 100; an image sensor module 101; a matching module 102; a processor 103; a calculation module 104; an acquisition module 105; a memory 106; a 3D lens 107; a first IMU attitude sensor 1071; a UWB receiving beacon 108; a second IMU attitude sensor 1081; UWB positioning beacons 200; No. 1 UWB positioning beacon 201; No. 2 UWB positioning beacon 202; No. 3 UWB positioning beacon 203; No. 4 UWB positioning beacon 204; No. 5 UWB positioning beacon 205; No. 6 UWB positioning beacon 206; No. 7 UWB positioning beacon 207; No. 8 UWB positioning beacon 208; No. 9 UWB positioning beacon 209; No. 10 UWB positioning beacon 210; No. 11 UWB positioning beacon 211; No. 12 UWB positioning beacon 212; a computer 300; a site 400.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to specific embodiments of the present invention and corresponding drawings. In the description of the present invention, it should be noted that, as used in the specification and the claims, the term "comprising" is an open-ended term, and should be interpreted to mean "including, but not limited to.
In the description of the present invention, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention.
As shown in fig. 1-4, the technical scheme of the invention comprises:
a positioning method of a space camera based on combination of UWB and 3D vision comprises the following steps:
s1, an acquisition module 105 acquires information of a plurality of UWB positioning beacons 200 uniformly distributed at preset positions around the whole periphery of a ground site 400;
s2, the acquiring module 105 acquires position data and pose data of the 3D camera 100 located above the site 400, and the steps include: the acquiring module 105 acquires first data information acquired by a first IMU pose sensor 1071 arranged in two 3D cameras 107 of the 3D camera 100;
the acquiring module 105 acquires second data information acquired by the UWB receiving beacon 108 which is provided with the 3D camera 100 and contains the second IMU pose sensor 1081; wherein the first IMU pose sensor 1071 and the second IMU pose sensor 1081 are placed side by side and in the same direction, so that the first data information and the second data information are both data information of the 3D camera 100 when the 3D camera 107 obtains a certain target image and depth information thereof;
s3, a matching module 102 performs relevant data matching on the first data information and the second data information, a calculation module 104 performs EKF fusion, and an acquisition module 105 acquires the fused data information of the 3D camera 100;
s4, each UWB positioning beacon 200 respectively acquires data information of the space of the 3D camera 100 corresponding to the respective position matching in real time through the connected UWB receiving beacons, wherein the data information comprises position data and directions;
s5, sequentially acquiring information transmitted by a plurality of UWB positioning beacons 200 according to a preset sequence by the computer 300, and acquiring the position data and the pose data of the 3D camera 100 in all directions in space in real time;
s6, correcting the omnibearing position data and the position and pose data through an extended Kalman filtering algorithm by a calculation module 104, and then converting the corrected position data and pose data into a preset data format.
Preferred embodiments of the present invention are shown in fig. 3-4:
the invention discloses a positioning device of a space camera based on combination of UWB and 3D vision, which is used for realizing a positioning method of the space camera based on combination of UWB and 3D vision, and comprises the following steps:
an image sensor module 101 for acquiring a target image and depth information thereof in real time;
a first IMU attitude sensor 1071 and a second IMU attitude sensor 1081, used for acquiring the attitude of the 3D camera 100 in space, including tilt, rotation and pitch data;
the UWB receiving beacon 108, used for transmitting the position data and pose data of the 3D camera 100 in space to the UWB positioning beacons 200 through the carrier-free communication technology;
the matching module 102, used for matching a target image and its depth information acquired by the 3D sensors in the dual 3D lenses 107; for correspondingly matching the data information collected by the first IMU attitude sensor 1071 arranged in the 3D sensor with the corresponding target image and its depth information, to obtain the first data information of the position and direction of the 3D camera 100 in space; and for correspondingly matching, with the first data information, the second data information collected by the second IMU attitude sensor 1081 in the UWB receiving beacon 108 mounted on the 3D camera 100 at the moment that target image and its depth information are acquired;
the UWB positioning beacons 200, arranged at preset positions around the ground site 400, used for ultra-wideband radio positioning and for receiving the UWB receiving beacon 108; the numbered UWB positioning beacons 200 comprise the No. 1 UWB positioning beacon 201, No. 2 UWB positioning beacon 202, No. 3 UWB positioning beacon 203, No. 4 UWB positioning beacon 204, No. 5 UWB positioning beacon 205, No. 6 UWB positioning beacon 206, No. 7 UWB positioning beacon 207, No. 8 UWB positioning beacon 208, No. 9 UWB positioning beacon 209, No. 10 UWB positioning beacon 210, No. 11 UWB positioning beacon 211 and No. 12 UWB positioning beacon 212, and the No. 1 UWB positioning beacon 201 is preset to connect with the computer 300 first;
a calculation module 104 for calculating the data according to the program instruction;
the 3D camera 100, provided with dual 3D lenses 107 containing 3D sensors; a first IMU attitude sensor 1071 is arranged in the lens 107, a UWB receiving beacon 108 is arranged on the body of the camera 100, a second IMU attitude sensor 1081 is arranged in the UWB receiving beacon 108, and the first IMU attitude sensor 1071 and the second IMU attitude sensor 1081 are arranged side by side, in the same direction and at the closest installation distance, and are used for acquiring a target image and its depth information and the accurate position data and pose data of the 3D camera 100 in space in the target direction;
an acquisition module 105 for acquiring data information; the method comprises the steps of acquiring the number of a preset UWB positioning beacon 200, the number of the UWB positioning beacon 200 connected with a computer 300 at first and the number sequence connected with the computer 300 in sequence;
memory 106 for storing data information and computer 300 instructions;
the processor 103, which, when executing the program instructions of the computer 300, implements: acquiring a target image and its depth information in real time; acquiring the attitude of the 3D camera 100 in space, including tilt, rotation and pitch data; matching a target image and its depth information acquired by the 3D sensors in the dual lenses; correspondingly matching the data information collected by the first IMU attitude sensor 1071 arranged in the 3D sensor with the corresponding target image and its depth information, to obtain the first data information of the position and direction of the 3D camera 100 in space; correspondingly matching, with the first data information, the second data information collected by the second IMU attitude sensor 1081 in the UWB receiving beacon 108 mounted on the 3D camera 100 at the moment that target image and its depth information are acquired; transmitting the position data and pose data of the 3D camera 100 in space to the UWB positioning beacons 200 through the carrier-free communication technology; acquiring the numbers of the preset UWB positioning beacons 200, the number of the UWB positioning beacon 200 that connects with the computer 300 first, and the sequence in which the numbers connect with the computer 300 in turn; and calculating the data according to the program instructions;
and a computer 300 for controlling the processor 103 to operate according to the instruction control, and executing the accurate positioning method of the spatial 3D camera 100 based on the combination of UWB and 3D vision.
Another preferred embodiment of the present invention is shown in fig. 1-4:
a positioning method of a space camera based on combination of UWB and 3D vision comprises the following steps:
step S1, firstly, uniformly distributing a plurality of UWB positioning beacons 200 information at preset positions around the whole periphery of a ground site 400, wherein the horizontal distance between every two adjacent UWB positioning beacons 200 is not more than 200m, and the height between each UWB positioning beacon 200 and the ground of the site 400 is not more than 50m.
In this example, 12 UWB positioning beacons 200 are uniformly distributed around the site 400 and numbered respectively, namely, a No. 1 UWB positioning beacon 201, a No. 2 UWB positioning beacon 202, a No. 3 UWB positioning beacon 203, a No. 4 UWB positioning beacon 204, a No. 5 UWB positioning beacon 205, a No. 6 UWB positioning beacon 206, a No. 7 UWB positioning beacon 207, a No. 8 UWB positioning beacon 208, a No. 9 UWB positioning beacon 209, a No. 10 UWB positioning beacon 210, a No. 11 UWB positioning beacon 211, and a No. 12 UWB positioning beacon 212; the preset No. 1 UWB positioning beacon 201 is firstly in transmission connection with the computer 300, and secondly, sequentially in transmission connection with the computer 300 according to the preset No. 1 UWB positioning beacons 201 to No. 12 UWB positioning beacons 212.
The processor 103 controls the acquisition module 105 to acquire the numbers of 12 UWB positioning beacons uniformly distributed at preset positions around the ground site 400 according to the instruction, and the numbers of the UWB positioning beacons which are firstly connected with the computer 300 in transmission during data transmission, and other serial numbers which are sequentially transmitted are stored in the memory 106.
Step S2, the processor 103, according to the instructions, controls the acquisition module 105 to acquire a target image and its depth information collected by the image sensor module 101 in each of the dual 3D lenses 107, and through the matching calculation of the calculation module 104 forms the unique target image and depth information acquired by the lenses 107;
the acquisition module 105 acquires the data information collected by the first IMU pose sensor 1071 arranged in the sensor of the lens 107, and the matching module 102 correspondingly matches it with the unique target image and its depth information, obtaining the first spatial data information of the 3D camera 100 in the target direction.
The second data information collected by the second IMU pose sensor 1081 contained in the UWB receiving beacon 108 mounted on the 3D camera 100 is then acquired; because the first IMU pose sensor 1071 and the second IMU pose sensor 1081 are arranged side by side, in the same direction and at the closest installation distance, the first data information and the second data information correspond to the moment when the lenses acquire the same target image and its depth information, and the two lens images are fused together into a single 3D image.
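Because the two IMU pose sensors are mounted side by side at the minimum distance, their readings can be referred to a single common point before fusion. The sketch below compensates an assumed fixed offset (lever arm) between the second IMU and the first IMU; the offset value and function names are illustrative assumptions, not measured parameters of the device.

```python
import numpy as np

def refer_to_common_point(p_imu2_world, R_world_from_imu2, lever_arm_imu2):
    """Translate the second IMU's position to the first IMU's location using the fixed mounting offset."""
    return p_imu2_world + R_world_from_imu2 @ lever_arm_imu2

if __name__ == "__main__":
    # Assumed: the second IMU sits 20 mm beside the first IMU, with identical orientation.
    lever_arm = np.array([0.020, 0.0, 0.0])      # metres, expressed in the IMU body frame
    R = np.eye(3)                                # identical orientation (mounted side by side, same direction)
    p_imu2 = np.array([3.120, 1.045, 2.500])     # position of the second IMU in the world frame (assumed)
    p_common = refer_to_common_point(p_imu2, R, lever_arm)
    print("position referred to the common point:", p_common)
```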
The data acquisition process of the acquisition module 105 for the first data information and the second data information is given by formulas (1)-(9):
Coordinate information of the 3D camera position is acquired with respect to the center point (origin) of the world coordinate system, as in formula (1):
g_w = R_wb · g_b    (1)
where g denotes gravity, g_b is the gravity vector expressed in the camera (body) coordinate system, and g_w is the gravity vector expressed in the world coordinate system.
The rotation matrix between the world coordinate system and the camera coordinate system is parameterized by the Euler angles, as in formula (2):
R_wb = | cψ·cθ   cψ·sθ·sφ − sψ·cφ   cψ·sθ·cφ + sψ·sφ |
       | sψ·cθ   sψ·sθ·sφ + cψ·cφ   sψ·sθ·cφ − cψ·sφ |    (2)
       | −sθ     cθ·sφ              cθ·cφ            |
where Rwb is the Euler-angle rotation matrix, φ is the roll angle, θ is the pitch angle, ψ is the yaw angle, c denotes the trigonometric cosine and s denotes the trigonometric sine. The world coordinate system provides the reference frame for the camera coordinate system, with the x-axis and y-axis tangential to the ground and the z-axis pointing downward, so the gravity vector of formula (3) is substituted:
g_w = [0 0 g]^T    (3)
The triaxial accelerometer gives the components of gravitational acceleration expressed in the camera reference frame, see formula (4):
g_b = [g_bx g_by g_bz]^T    (4)
where T denotes the transpose. Substituting the gravity vector, the two expressions are therefore related through the rotation matrix, as in formula (5):
g_b = R_wb^T · g_w    (5)
The pitch angle θ is obtained as in formula (6):
θ = arctan( −g_bx / sqrt(g_by² + g_bz²) )    (6)
and the roll angle φ as in formula (7):
φ = arctan2( g_by, g_bz )    (7)
The equations for acquiring the velocity and the position are shown in formula (8) and formula (9):
v_b(k+1) = v_b(k) + a_b(k) · Δt    (8)
s_b(k+1) = s_b(k) + v_b(k) · Δt + (1/2) · a_b(k) · Δt²    (9)
where a_b is the acceleration, v_b the velocity, s_b the position, k and k+1 denote successive sampling instants, and Δt is the sampling period; the gyroscope angular rate is integrated in the same way to obtain the orientation.
In step S3, the processor 103 controls the matching module 102 and the calculation module 104, according to the instructions, to perform relevant data matching and EKF fusion on the first data information and the second data information, so as to obtain the fused accurate data information of the 3D camera 100, and the calculation process includes:
The EKF estimates the position and orientation by applying prediction and correction with a nonlinear system model, see formulas (10) and (11):
x_k = f(x_{k−1}, u_{k−1}) + w_{k−1}    (10)
z_k = h(x_k) + v_k    (11)
where x_k is the state vector, u_k is the known control input, w_k is the process noise, v_k is the measurement noise, z_k is the measurement vector and H_k is the observation matrix at time k; the process noise w_k has covariance matrix Q and the measurement noise v_k has covariance matrix R, and both are assumed to be mutually independent zero-mean Gaussian white-noise processes. The EKF is the extension of the Kalman filter to nonlinear systems and is used to estimate the position and orientation by alternating prediction and correction with the nonlinear system model.
The time (prediction) update equations are shown in formulas (12)-(13):
x̂_k⁻ = A·x̂_{k−1} + B·u_{k−1}    (12)
P_k⁻ = A·P_{k−1}·A^T + Q    (13)
where A is the state-transition matrix, B is the control matrix, P_k⁻ is the a-priori estimate covariance matrix that measures the accuracy of the state estimate, x̂_{k−1} is the state estimate at time k−1 and P_{k−1} is the estimate covariance at time k−1.
The measurement update equations are shown in formulas (14)-(15):
x̂_k = x̂_k⁻ + K_k·(z_k − H·x̂_k⁻)    (14)
P_k = (I − K_k·H)·P_k⁻    (15)
where K_k is the Kalman filtering gain, z_k is the sampled observation, H is the observation model that maps the true state space to the observation space, H·x̂_k⁻ is the predicted observation and P_k is the a-posteriori estimate covariance matrix.
The Kalman gain is given by formula (16):
K_k = P_k⁻·H^T·(H·P_k⁻·H^T + R)⁻¹    (16)
where H is the Jacobian matrix, R is the observation noise covariance and P_k⁻ is the prediction error covariance in the continuous-time case.
The Jacobian is the partial derivative of the measurement function h with respect to the state x, evaluated at the a-priori state estimate x̂_k⁻, as shown in formula (17):
H_k = ∂h/∂x |_{x = x̂_k⁻}    (17)
where h is the measurement function and H_k is its Jacobian with respect to the state x.
The motion model and the observation model in the EKF are established from kinematics, using the accelerometer data as control input and the gyroscope data and vision data as measurements; the model process noise and measurement noise covariances are appropriately adjusted, and the state vector is shown in formula (18):
x = [p v q ω]^T    (18)
where p and v are the state variables corresponding to the 3D position and velocity of the IMU in the world coordinate system, q is the orientation quaternion corresponding to the rotation matrix R and ω is the gyroscope angular velocity.
The fused state is obtained through the fusion transformation matrix of formula (19), and the observation matrix and the matrix data from the gyroscope are computed as in formulas (20)-(22), where H is the observation matrix checked against the computed measurement data.
S4, each UWB positioning beacon 200 is connected with the UWB receiving beacon 108, and the acquisition module 105 acquires in real time the spatial data information of the 3D camera 100 matched to each beacon's position, including position data and direction.
S5, the computer 300 sequentially acquires the information transmitted by all the UWB positioning beacons 200 according to a preset sequence, and acquires in real time the position data and pose data of the 3D camera 100 in all directions in space, which comprises the following steps:
the computer 300 acquires from the memory 106 all the sequentially numbered UWB positioning beacons 200, from the No. 1 UWB positioning beacon 201 to the No. 12 UWB positioning beacon 212; according to the acquired numbers of all the beacons, with the No. 1 UWB positioning beacon 201 connecting with the computer 300 first and the remaining numbers connecting in sequence, the computer 300 sequentially acquires, according to the preset program, the information of the UWB receiving beacon 108 relayed through the No. 1 UWB positioning beacon 201 to the No. 12 UWB positioning beacon 212, thereby acquiring in real time the position data and pose data of the 3D camera 100 in all directions in space.
In step S6, the omnidirectional position data and the omnidirectional pose data are corrected by an extended kalman filter algorithm, and the steps include:
the processor 103 controls the calculation module 104, according to the program instructions, to correct the roll- and pitch-fused IMU data through the extended Kalman filter algorithm, see formulas (23) and (24);
the rotation matrix is an orthogonal matrix: given two coordinate frames u and v, the rotation matrix rotates a vector x from the v frame to the u frame, and t is the translation between the two frames;
a unit quaternion is a four-dimensional representation of orientation, and a rotation may be defined using the unit quaternion, as in formula (25):
q = [cos(α/2), e_x·sin(α/2), e_y·sin(α/2), e_z·sin(α/2)]^T    (25)
where q is the unit quaternion, e = [e_x e_y e_z]^T is the rotation axis and α is the rotation angle; the orientation is computed as the quaternion that rotates the gravity vector from the earth coordinate system into the sensor coordinate system, the gravity vector in the sensor frame being the accelerometer reading, and high-precision positioning of the 3D camera within 1 mm-5 mm in space is thereby obtained.
In addition, the extended Kalman filter may be represented by the dynamic-system block diagram of the nonlinear state-space model, see FIG. 2.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present invention and the scope of the claims, which are to be protected by the present invention.

Claims (9)

1. A positioning method of a space camera based on the combination of UWB and 3D vision, characterized by comprising the following steps:
S1, acquiring information from a plurality of UWB positioning beacons uniformly distributed at preset positions around the entire perimeter of a ground field;
S2, acquiring position data and pose data of a 3D camera located above the field, which comprises: acquiring first data information collected by a first IMU pose sensor arranged in the dual lens of the 3D camera;
acquiring second data information collected by a second IMU pose sensor contained in a UWB receiving beacon mounted on the 3D camera; the first IMU pose sensor and the second IMU pose sensor are placed side by side and in the same direction, so that the first data information and the second data information both describe the 3D camera at the moment the lens acquires a given target image and its depth information;
S3, performing relevant data matching and EKF fusion on the first data information and the second data information to obtain fused data information of the 3D camera;
S4, each UWB positioning beacon acquires in real time, through the connected UWB receiving beacon, the spatial data information of the 3D camera matched to its own position, the data information comprising position data and direction;
S5, sequentially acquiring the information transmitted by the plurality of UWB positioning beacons according to a preset sequence, and acquiring in real time the position data and pose data of the 3D camera in all directions in space;
S6, correcting the omnidirectional position data and pose data through an extended Kalman filtering algorithm and then converting the corrected data into a preset data format.
2. The method for positioning a spatial camera based on combination of UWB and 3D vision according to claim 1, wherein: in S1, the horizontal distance between the adjacent UWB positioning beacons is set to be no more than 200m, and the height of the UWB positioning beacons from the ground surface is set to be no more than 50m.
3. The method for positioning a spatial camera based on combination of UWB and 3D vision according to claim 1, wherein: s2, the step of acquiring the first data information comprises the following steps:
respectively acquiring a certain target image and depth information thereof acquired by a 3D sensor in the double lens, and matching and fusing the target image and the depth information thereof to form a unique target image and the depth information thereof;
and acquiring data information acquired by an IMU arranged in the 3D sensor, and correspondingly matching with the unique target image and the depth information thereof to acquire the first data information of the space of the 3D camera in the target direction.
4. The method for positioning a spatial camera based on combination of UWB and 3D vision according to claim 3, wherein: in S2, the data acquisition process for the first data information and the second data information is given by formulas (1)-(9):
Coordinate information of the 3D camera position is acquired with respect to the center point (origin) of the world coordinate system, as in formula (1):
g_w = R_wb · g_b    (1)
where g denotes gravity, g_b is the gravity vector expressed in the camera (body) coordinate system, and g_w is the gravity vector expressed in the world coordinate system.
The rotation matrix between the world coordinate system and the camera coordinate system is parameterized by the Euler angles, as in formula (2):
R_wb = | cψ·cθ   cψ·sθ·sφ − sψ·cφ   cψ·sθ·cφ + sψ·sφ |
       | sψ·cθ   sψ·sθ·sφ + cψ·cφ   sψ·sθ·cφ − cψ·sφ |    (2)
       | −sθ     cθ·sφ              cθ·cφ            |
where Rwb is the Euler-angle rotation matrix, φ is the roll angle, θ is the pitch angle, ψ is the yaw angle, c denotes the trigonometric cosine and s denotes the trigonometric sine. The world coordinate system provides the reference frame for the camera coordinate system, with the x-axis and y-axis tangential to the ground and the z-axis pointing downward, so the gravity vector of formula (3) is substituted:
g_w = [0 0 g]^T    (3)
The triaxial accelerometer gives the components of gravitational acceleration expressed in the camera reference frame, see formula (4):
g_b = [g_bx g_by g_bz]^T    (4)
where T denotes the transpose. Substituting the gravity vector, the two expressions are therefore related through the rotation matrix, as in formula (5):
g_b = R_wb^T · g_w    (5)
The pitch angle θ is obtained as in formula (6):
θ = arctan( −g_bx / sqrt(g_by² + g_bz²) )    (6)
and the roll angle φ as in formula (7):
φ = arctan2( g_by, g_bz )    (7)
The equations for acquiring the velocity and the position are shown in formula (8) and formula (9):
v_b(k+1) = v_b(k) + a_b(k) · Δt    (8)
s_b(k+1) = s_b(k) + v_b(k) · Δt + (1/2) · a_b(k) · Δt²    (9)
where a_b is the acceleration, v_b the velocity, s_b the position, k and k+1 denote successive sampling instants, and Δt is the sampling period; the gyroscope angular rate is integrated in the same way to obtain the orientation.
5. The method for positioning a spatial camera based on combination of UWB and 3D vision according to claim 1, wherein: in S3, relevant data matching and EKF fusion are performed on the first data information and the second data information to obtain the fused accurate data information of the 3D camera, and the calculation process comprises:
The EKF estimates the position and orientation by applying prediction and correction with a nonlinear system model, see formulas (10) and (11):
x_k = f(x_{k−1}, u_{k−1}) + w_{k−1}    (10)
z_k = h(x_k) + v_k    (11)
where x_k is the state vector, u_k is the known control input, w_k is the process noise, v_k is the measurement noise, z_k is the measurement vector and H_k is the observation matrix at time k; the process noise w_k has covariance matrix Q and the measurement noise v_k has covariance matrix R, and both are assumed to be mutually independent zero-mean Gaussian white-noise processes. The EKF is the extension of the Kalman filter to nonlinear systems and is used to estimate the position and orientation by alternating prediction and correction with the nonlinear system model.
The time (prediction) update equations are shown in formulas (12)-(13):
x̂_k⁻ = A·x̂_{k−1} + B·u_{k−1}    (12)
P_k⁻ = A·P_{k−1}·A^T + Q    (13)
where A is the state-transition matrix, B is the control matrix, P_k⁻ is the a-priori estimate covariance matrix that measures the accuracy of the state estimate, x̂_{k−1} is the state estimate at time k−1 and P_{k−1} is the estimate covariance at time k−1.
The measurement update equations are shown in formulas (14)-(15):
x̂_k = x̂_k⁻ + K_k·(z_k − H·x̂_k⁻)    (14)
P_k = (I − K_k·H)·P_k⁻    (15)
where K_k is the Kalman filtering gain, z_k is the sampled observation, H is the observation model that maps the true state space to the observation space, H·x̂_k⁻ is the predicted observation and P_k is the a-posteriori estimate covariance matrix.
The Kalman gain is given by formula (16):
K_k = P_k⁻·H^T·(H·P_k⁻·H^T + R)⁻¹    (16)
where H is the Jacobian matrix, R is the observation noise covariance and P_k⁻ is the prediction error covariance in the continuous-time case.
The Jacobian is the partial derivative of the measurement function h with respect to the state x, evaluated at the a-priori state estimate x̂_k⁻, as shown in formula (17):
H_k = ∂h/∂x |_{x = x̂_k⁻}    (17)
where h is the measurement function and H_k is its Jacobian with respect to the state x.
The motion model and the observation model in the EKF are established from kinematics, using the accelerometer data as control input and the gyroscope data and vision data as measurements; the model process noise and measurement noise covariances are appropriately adjusted, and the state vector is shown in formula (18):
x = [p v q ω]^T    (18)
where p and v are the state variables corresponding to the 3D position and velocity of the IMU in the world coordinate system, q is the orientation quaternion corresponding to the rotation matrix R and ω is the gyroscope angular velocity.
The fused state is obtained through the fusion transformation matrix of formula (19), and the observation matrix and the matrix data from the gyroscope are computed as in formulas (20)-(22), where H is the observation matrix checked against the computed measurement data.
6. The method for positioning a spatial camera based on combination of UWB and 3D vision according to claim 1, wherein: s5, sequentially acquiring information transmitted by a plurality of UWB positioning beacons according to a preset sequence, wherein the steps comprise:
sequentially numbering all UWB positioning beacons uniformly distributed around the whole periphery of the field, and storing the UWB positioning beacons in a computer;
according to a preset program, acquiring the number of the UWB positioning beacon that connects with the computer first, and the sequence in which all the remaining numbered beacons connect in turn;
and sequentially acquiring information of UWB receiving beacons connected with the UWB positioning beacons according to a preset sequence so as to acquire the position data and the pose data of the 3D camera in all directions in space in real time.
7. The method for positioning a spatial camera based on the combination of UWB and 3D vision according to claim 6, wherein: S6, correcting the omnibearing position data and pose data by the extended Kalman filtering algorithm, comprises the following steps:
the roll and pitch fused from the IMU data are corrected through the extended Kalman filtering algorithm, as shown in formulas (23) - (24):
(23)
(24)
wherein the rotation matrix is an orthogonal matrix; two coordinate systems u and v are defined, the rotation matrix rotates a vector X from the v frame to the u frame, and t is the translation between the two coordinate systems u and v;
the unit quaternion is a 4-D representation of orientation, and a rotation may be defined using the unit quaternion, as in equation (25):
q = \left[ \cos\frac{\theta}{2},\; n_x \sin\frac{\theta}{2},\; n_y \sin\frac{\theta}{2},\; n_z \sin\frac{\theta}{2} \right]^T   (25)
where q is the unit quaternion representing the rotation axis n and the rotation angle θ; the direction is calculated as the quaternion that rotates the gravity vector from the earth coordinate system to the sensor coordinate system, where the gravity vector in the sensor frame is the accelerometer reading.
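As an illustration of this orientation initialisation, the sketch below computes a quaternion of the form of equation (25) that rotates the accelerometer-measured gravity direction onto an assumed earth-frame gravity axis; the helper name quaternion_from_gravity and the choice of the z-axis as the earth gravity direction are assumptions of the example, and yaw remains undetermined (only roll and pitch are recoverable from gravity):

    import numpy as np

    def quaternion_from_gravity(accel):
        # accel: accelerometer reading, taken as the gravity vector in the sensor frame
        g_sensor = accel / np.linalg.norm(accel)
        g_earth = np.array([0.0, 0.0, 1.0])            # assumed earth-frame gravity direction
        axis = np.cross(g_sensor, g_earth)              # rotation axis n
        s = np.linalg.norm(axis)
        c = float(np.dot(g_sensor, g_earth))
        if s < 1e-9:                                     # already aligned or exactly opposite
            return np.array([1.0, 0.0, 0.0, 0.0]) if c > 0 else np.array([0.0, 1.0, 0.0, 0.0])
        theta = np.arctan2(s, c)                         # rotation angle θ
        return np.concatenate([[np.cos(theta / 2.0)], np.sin(theta / 2.0) * (axis / s)])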
8. The method for positioning a spatial camera based on the combination of UWB and 3D vision according to claim 7, wherein: the extended Kalman filtering may be expressed using a nonlinear state-space model and a dynamic system block diagram.
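For reference, a generic nonlinear state-space model of the kind referred to in claim 8 can be written in the standard textbook form below (not a formula reproduced from the filing):

x_k = f(x_{k-1}, u_{k-1}) + w_{k-1}, \qquad w_{k-1} \sim \mathcal{N}(0, Q)
z_k = h(x_k) + v_k, \qquad v_k \sim \mathcal{N}(0, R)

where f is the nonlinear process model and h is the nonlinear measurement model linearised by the Jacobian of formula (17).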
9. A positioning device of a space camera based on the combination of UWB and 3D vision, characterized in that it is used for implementing the positioning method of a space camera based on the combination of UWB and 3D vision as defined in any one of claims 1 to 8, and comprises:
the image sensor module is used for acquiring a target image and depth information of the target image in real time;
the IMU attitude sensor is used for acquiring the attitude of the 3D camera in space, including inclination, rotation and pitch data;
the UWB receiving beacon is used for transmitting the position data and pose data of the 3D camera in space to the UWB positioning beacons through carrier-free communication technology;
the matching module is used for matching a certain target image acquired by the 3D sensor in the double lens with the depth information of that target image; for correspondingly matching the data information acquired by the IMU attitude sensor arranged in the 3D sensor with the corresponding target image and its depth information, so as to acquire first data information on the position and direction of the 3D camera in space; and for correspondingly matching, at the moment when a certain target image and its depth information are acquired, the second data information acquired by the IMU attitude sensor in the UWB receiving beacon of the 3D camera with the first data information;
the UWB positioning beacons are arranged at preset positions around the field on the ground and are used for ultra-wideband radio positioning and for receiving the data transmitted by the UWB receiving beacon;
the calculation module is used for calculating the data according to the program instructions;
the 3D camera is provided with a double lens containing the 3D sensor, an IMU attitude sensor is arranged in the lens, the UWB receiving beacon is arranged on the camera body, another IMU attitude sensor is arranged in the UWB receiving beacon, and the two IMU attitude sensors are arranged side by side and facing the same direction, and are used for acquiring a target image and its depth information as well as accurate position data and pose data of the 3D camera in space in the target direction;
the acquisition module is used for acquiring data information, including the preset numbers of the UWB positioning beacons, the number of the UWB positioning beacon connected to the computer first, and the sequence of numbers in which the beacons are connected to the computer in turn;
the memory is used for storing data information and computer instructions;
a processor, executing the computer program instructions: acquiring a target image and its depth information in real time; acquiring the attitude of the 3D camera in space, including inclination, rotation and pitch data; matching a certain target image acquired by the 3D sensor in the double lens with its depth information; correspondingly matching the data information acquired by the IMU attitude sensor arranged in the 3D sensor with the corresponding target image and its depth information, so as to acquire first data information on the position and direction of the 3D camera in space; correspondingly matching, at the moment when a certain target image and its depth information are acquired, the second data information acquired by the IMU attitude sensor in the UWB receiving beacon of the 3D camera with the first data information; transmitting the position data and pose data of the 3D camera in space to the UWB positioning beacons through carrier-free communication technology; acquiring the preset numbers of the UWB positioning beacons, the number of the UWB positioning beacon connected to the computer first, and the sequence of numbers in which the beacons are connected in turn; and calculating data according to the program instructions;
and the computer is used for controlling the processor to run according to the instructions and for executing the positioning method of the space camera based on the combination of UWB and 3D vision.
CN202310551049.5A 2023-05-16 2023-05-16 Positioning method and device of space camera based on combination of UWB and 3D vision Pending CN116518959A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310551049.5A CN116518959A (en) 2023-05-16 2023-05-16 Positioning method and device of space camera based on combination of UWB and 3D vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310551049.5A CN116518959A (en) 2023-05-16 2023-05-16 Positioning method and device of space camera based on combination of UWB and 3D vision

Publications (1)

Publication Number Publication Date
CN116518959A true CN116518959A (en) 2023-08-01

Family

ID=87404526

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310551049.5A Pending CN116518959A (en) 2023-05-16 2023-05-16 Positioning method and device of space camera based on combination of UWB and 3D vision

Country Status (1)

Country Link
CN (1) CN116518959A (en)

Similar Documents

Publication Publication Date Title
CN111156998B (en) Mobile robot positioning method based on RGB-D camera and IMU information fusion
US8138938B2 (en) Hand-held positioning interface for spatial query
US6409687B1 (en) Motion tracking system
CN106017463A (en) Aircraft positioning method based on positioning and sensing device
WO2016183812A1 (en) Mixed motion capturing system and method
CN111091587B (en) Low-cost motion capture method based on visual markers
CN111380514A (en) Robot position and posture estimation method and device, terminal and computer storage medium
JP7203105B2 (en) CALIBRATION DEVICE, MONITORING DEVICE, WORKING MACHINE, AND CALIBRATION METHOD FOR IMAGE SENSOR
CN102575933A (en) System that generates map image integration database and program that generates map image integration database
CN110132309B (en) Calibration method of rocker arm inertia/vision combined attitude determination device of coal mining machine
CN108759815B (en) Information fusion integrated navigation method used in global visual positioning method
CN110095659B (en) Dynamic testing method for pointing accuracy of communication antenna of deep space exploration patrol device
KR20170094030A (en) System and Method for providing mapping of indoor navigation and panorama pictures
CN110220533A (en) A kind of onboard electro-optical pod misalignment scaling method based on Transfer Alignment
CN113267794A (en) Antenna phase center correction method and device with base line length constraint
CN113720330A (en) Sub-arc-second-level high-precision attitude determination design and implementation method for remote sensing satellite
CN106525007A (en) Distributed interactive surveying and mapping universal robot
WO2020062356A1 (en) Control method, control apparatus, control terminal for unmanned aerial vehicle
Si et al. A novel positioning method of anti-punching drilling robot based on the fusion of multi-IMUs and visual image
CN114111776A (en) Positioning method and related device
EP4258015A1 (en) Support system for mobile coordinate scanner
CN116518959A (en) Positioning method and device of space camera based on combination of UWB and 3D vision
CN114383612B (en) Vision-assisted inertial differential pose measurement system
CN114199239B (en) Dual-vision auxiliary inertial differential cabin inner head gesture detection system combined with Beidou navigation
CN112489118B (en) Method for quickly calibrating external parameters of airborne sensor group of unmanned aerial vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination