CN110650427B - Indoor positioning method and system based on fusion of camera image and UWB - Google Patents

Indoor positioning method and system based on fusion of camera image and UWB

Info

Publication number
CN110650427B
CN110650427B
Authority
CN
China
Prior art keywords
camera
coordinates
image
space
uwb
Prior art date
Legal status
Active
Application number
CN201910355667.6A
Other languages
Chinese (zh)
Other versions
CN110650427A (en)
Inventor
李雪维
徐建斌
王骊
顾晔
王伟
袁骁
王立果
Current Assignee
Materials Branch of State Grid Zhejiang Electric Power Co Ltd
Original Assignee
Materials Branch of State Grid Zhejiang Electric Power Co Ltd
Priority date
Filing date
Publication date
Application filed by Materials Branch of State Grid Zhejiang Electric Power Co Ltd
Priority to CN201910355667.6A
Publication of CN110650427A
Application granted
Publication of CN110650427B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02: Services making use of location information
    • H04W 4/023: Services making use of location information using mutual or relative location information between multiple location based services [LBS] targets or of distance thresholds
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C 21/20: Instruments for performing navigational calculations
    • G01C 21/206: Instruments for performing navigational calculations specially adapted for indoor navigation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10: Complex mathematical operations
    • G06F 17/11: Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • G06F 17/12: Simultaneous equations, e.g. systems of linear equations
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/30: Services specially adapted for particular environments, situations or purposes
    • H04W 4/33: Services specially adapted for particular environments, situations or purposes for indoor environments, e.g. buildings
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 64/00: Locating users or terminals or network equipment for network management purposes, e.g. mobility management
    • H04W 64/006: Locating users or terminals or network equipment for network management purposes with additional information processing, e.g. for direction or speed determination

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Mathematical Physics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Computational Mathematics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Data Mining & Analysis (AREA)
  • Operations Research (AREA)
  • Automation & Control Theory (AREA)
  • Algebra (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an indoor positioning method and system based on the fusion of camera images and UWB. The system comprises one or more groups of UWB base stations and cameras arranged in pairs, a tag located within the indoor positioning range, and computer equipment connected to the UWB base stations and cameras. The position of the tag in three-dimensional space can be calculated from the distance measured between the tag and a single UWB base station together with the tag's position in the camera image. Even if some UWB base stations are occluded from the tag, indoor positioning with high precision can still be achieved as long as at least one UWB base station remains unoccluded, which greatly improves the robustness of UWB positioning.

Description

Indoor positioning method and system based on fusion of camera image and UWB
Technical Field
The application belongs to the technical field of indoor navigation, and particularly relates to an indoor positioning method and system based on fusion of camera images and UWB.
Background
UWB (Ultra-Wideband) is a carrier-free communication technology that transmits data using nanosecond-level non-sinusoidal narrow pulses, and therefore occupies a wide spectrum. UWB positioning adopts broadband pulse communication, which gives it strong anti-interference capability and reduces positioning error. UWB indoor positioning fills a gap in the field of high-precision positioning: it is insensitive to channel fading, has a low transmitted power spectral density and a low probability of interception, has low system complexity, and can provide centimeter-level positioning precision.
A traditional minimal UWB positioning system adopts a three-base-station, one-tag architecture and obtains the three-dimensional position of the tag in the scene by simultaneously measuring the distances from the tag to the three base stations. In practical applications, however, the UWB tag needs an unobstructed line of sight to each base station to obtain high-precision distance information; once obstacles appear between the tag and the base stations and fewer than three base stations remain unobstructed, positioning fails.
Disclosure of Invention
The application aims to provide an indoor positioning method and system based on the fusion of camera images and UWB that reduce the influence of environmental factors on indoor positioning and improve its accuracy and reliability.
In order to achieve the purpose, the technical scheme adopted by the application is as follows:
an indoor positioning method based on camera image and UWB fusion comprises the following steps:
according to the preset spatial coordinates of a plurality of tags, acquiring the distance from each spatial coordinate to a UWB base station and the image position of each spatial coordinate in the camera image;
calculating the spatial coordinates of one or more groups of UWB base stations and cameras from the plurality of spatial coordinates, distances, and image positions;
controlling a camera to obtain the image position of a tag to be positioned, and obtaining the three-dimensional coordinates of one or more tags to be positioned in the corresponding camera coordinate systems from the image position of the tag, the spatial coordinates of the UWB base station in the same group, and the spatial coordinates of the camera;
obtaining the spatial coordinates of the tag to be positioned from the three-dimensional coordinates of the one or more tags in camera coordinates, completing the indoor positioning.
Preferably, the image positions of the spatial coordinates acquired in the camera image, and the image position of the tag to be positioned obtained by controlling the camera, are corrected image positions, and the method for correcting an image position comprises:
if the image position before correction is (u'i, v'i), the corrected image position (ui, vi) is obtained according to the formula:

ui = u0 + (u'i - u0)/D,  vi = v0 + (v'i - v0)/D

wherein u0, v0 are the coordinates of the camera's optical center in the image and k1, k2 are the radial distortion parameters of the camera; the parameter D is calculated by the formula:

D = 1 + k1·r² + k2·r⁴,  with r² = ((u'i - u0)/fx)² + ((v'i - v0)/fy)²

wherein fx, fy are the focal lengths of the camera.
Preferably, the calculating of the spatial coordinates of one or more groups of UWB base stations and cameras from the plurality of spatial coordinates, distances, and image positions comprises:
let the preset i spatial coordinates be:
{(Xw1, Yw1, Zw1), (Xw2, Yw2, Zw2), (Xw3, Yw3, Zw3), ..., (Xwi, Ywi, Zwi)},
the acquired distances from the spatial coordinates to one UWB base station be:
{d1, d2, d3, ..., di},
and the acquired image positions of the spatial coordinates in the corresponding camera be:
{(u1, v1), (u2, v2), (u3, v3), ..., (ui, vi)};
calculating the spatial coordinate B of the UWB base station comprises:
obtaining a first equation group from the preset i spatial coordinates and the corresponding distances:

(Xw1 - Xb)² + (Yw1 - Yb)² + (Zw1 - Zb)² = d1²
(Xw2 - Xb)² + (Yw2 - Yb)² + (Zw2 - Zb)² = d2²
...
(Xwi - Xb)² + (Ywi - Yb)² + (Zwi - Zb)² = di²

wherein (Xb, Yb, Zb) is the spatial coordinate B of the UWB base station to be calculated; subtracting each equation from the preceding one in the first equation group yields a second, linear equation group:

M1·Xb + N1·Yb + P1·Zb = Q1
...
M(i-1)·Xb + N(i-1)·Yb + P(i-1)·Zb = Q(i-1)

wherein the M(i-1), N(i-1), P(i-1), Q(i-1) are all constant coefficients;
solving the second equation group by least squares gives the spatial coordinate B = (Xb, Yb, Zb) of the UWB base station in the group.
Preferably, the calculating of the spatial coordinates of the camera comprises:
establishing the projection formula of the camera:

s·[u, v, 1]ᵀ = A·[R | T]·[Xw, Yw, Zw, 1]ᵀ

wherein s is a scale constant; u, v are image coordinates, the i acquired image positions of the spatial coordinates in the corresponding camera being substituted in for calculation; u0, v0 are the coordinates of the camera's optical center in the image; fx, fy are the focal lengths of the camera; Xw, Yw, Zw are spatial coordinates, the i preset spatial coordinates being substituted in; R is the rotation matrix to be solved; and T is the translation vector to be solved;
according to the projection formula, calculating the spatial coordinates of the camera corresponds to solving a PnP problem, which can be performed with the CV_P3P, CV_ITERATIVE, or CV_EPNP algorithm to obtain the values of R and T, where T is the spatial coordinate of the camera in the corresponding group.
Preferably, the controlling of the camera to obtain the image position of the tag to be positioned, the obtaining of the three-dimensional coordinates of one or more tags to be positioned in the corresponding camera coordinate systems from the image position of the tag, the spatial coordinates of the UWB base station in the same group and the spatial coordinates of the camera, and the obtaining of the spatial coordinates of the tag from those three-dimensional coordinates to complete the indoor positioning, comprise:
calculating the distance L between the UWB base station and the camera in the same group:
L = ||T - B||,
wherein T is the spatial coordinate of the camera and B is the spatial coordinate of the UWB base station;
obtaining the image position (uT, vT) of the tag to be positioned with each group's camera, and establishing the following formula according to the pinhole imaging principle and the perspective model of the camera:

Zc·[uT, vT, 1]ᵀ = A·[Xc, Yc, Zc]ᵀ

wherein Xc, Yc, Zc are the three-dimensional coordinates of the tag to be positioned in the camera coordinate system of the corresponding camera, and

        | fx  0   u0 |
    A = | 0   fy  v0 |
        | 0   0   1  |

is the internal parameter matrix of the camera, u0, v0 being the coordinates of the camera's optical center in the image and fx, fy the focal lengths of the camera;
a third equation group is obtained from the formula established according to the pinhole imaging principle and the perspective model of the camera:

fx·Xc + u0·Zc = uT·Zc
fy·Yc + v0·Zc = vT·Zc
Xc² + Yc² + Zc² = (S + L)²

wherein S is the distance from the tag to the UWB base station in the corresponding group;
solving the third equation group gives the three-dimensional coordinate Pc = (Xc, Yc, Zc) of the tag to be positioned in camera coordinates; if there are several groups of UWB base stations and cameras, correspondingly many three-dimensional coordinates are obtained;
for each three-dimensional coordinate Pc, the spatial coordinate Pw = (Xw, Yw, Zw) is calculated by the formula:

Pw = R^(-1)·(Pc - T)

wherein R is the rotation matrix of the camera and T is the translation vector of the camera; if there is one three-dimensional coordinate, the calculated spatial coordinate is the spatial coordinate of the tag in world coordinates; if there are several three-dimensional coordinates, the spatial coordinate of the tag in world coordinates is obtained from the several three-dimensional coordinates.
The application further provides an indoor positioning system based on the fusion of camera images and UWB, comprising: one or more groups of UWB base stations and cameras arranged in pairs, tags located within the indoor positioning range, and computer equipment connected to the UWB base stations and the cameras; the computer equipment comprises a memory and a processor, the memory stores a computer program, and the processor, when executing the computer program, implements the following steps:
according to the preset spatial coordinates of a plurality of tags, acquiring the distance from each spatial coordinate to a UWB base station and the image position of each spatial coordinate in the camera image;
calculating the spatial coordinates of one or more groups of UWB base stations and cameras from the plurality of spatial coordinates, distances, and image positions;
controlling a camera to obtain the image position of a tag to be positioned, and obtaining the three-dimensional coordinates of one or more tags to be positioned in the corresponding camera coordinate systems from the image position of the tag, the spatial coordinates of the UWB base station in the same group, and the spatial coordinates of the camera;
obtaining the spatial coordinates of the tag to be positioned from the three-dimensional coordinates of the one or more tags in camera coordinates, completing the indoor positioning.
Preferably, the image positions of the spatial coordinates acquired in the camera image, and the image position of the tag to be positioned obtained by controlling the camera, are corrected image positions, and the correction of an image position is implemented as follows:
if the image position before correction is (u'i, v'i), the corrected image position (ui, vi) is obtained according to the formula:

ui = u0 + (u'i - u0)/D,  vi = v0 + (v'i - v0)/D

wherein u0, v0 are the coordinates of the camera's optical center in the image and k1, k2 are the radial distortion parameters of the camera; the parameter D is calculated by the formula:

D = 1 + k1·r² + k2·r⁴,  with r² = ((u'i - u0)/fx)² + ((v'i - v0)/fy)²

wherein fx, fy are the focal lengths of the camera.
Preferably, when calculating the spatial coordinates of one or more groups of UWB base stations and cameras from the plurality of spatial coordinates, distances and image positions, the following operations are performed for each group:
let the preset i spatial coordinates be:
{(Xw1, Yw1, Zw1), (Xw2, Yw2, Zw2), (Xw3, Yw3, Zw3), ..., (Xwi, Ywi, Zwi)},
the acquired distances from the spatial coordinates to one UWB base station be:
{d1, d2, d3, ..., di},
and the acquired image positions of the spatial coordinates in the corresponding camera be:
{(u1, v1), (u2, v2), (u3, v3), ..., (ui, vi)};
calculating the spatial coordinate B of the UWB base station comprises:
obtaining a first equation group from the preset i spatial coordinates and the corresponding distances:

(Xw1 - Xb)² + (Yw1 - Yb)² + (Zw1 - Zb)² = d1²
(Xw2 - Xb)² + (Yw2 - Yb)² + (Zw2 - Zb)² = d2²
...
(Xwi - Xb)² + (Ywi - Yb)² + (Zwi - Zb)² = di²

wherein (Xb, Yb, Zb) is the spatial coordinate B of the UWB base station to be calculated; subtracting each equation from the preceding one in the first equation group yields a second, linear equation group:

M1·Xb + N1·Yb + P1·Zb = Q1
...
M(i-1)·Xb + N(i-1)·Yb + P(i-1)·Zb = Q(i-1)

wherein the M(i-1), N(i-1), P(i-1), Q(i-1) are all constant coefficients;
solving the second equation group by least squares gives the spatial coordinate B = (Xb, Yb, Zb) of the UWB base station in the group.
Preferably, when the spatial coordinates of the camera are calculated, the following operations are performed:
establishing the projection formula of the camera:

s·[u, v, 1]ᵀ = A·[R | T]·[Xw, Yw, Zw, 1]ᵀ

wherein s is a scale constant; u, v are image coordinates, the i acquired image positions of the spatial coordinates in the corresponding camera being substituted in for calculation; u0, v0 are the coordinates of the camera's optical center in the image; fx, fy are the focal lengths of the camera; Xw, Yw, Zw are spatial coordinates, the i preset spatial coordinates being substituted in; R is the rotation matrix to be solved; and T is the translation vector to be solved;
according to the projection formula, calculating the spatial coordinates of the camera corresponds to solving a PnP problem, which can be performed with the CV_P3P, CV_ITERATIVE, or CV_EPNP algorithm to obtain the values of R and T, where T is the spatial coordinate of the camera in the corresponding group.
Preferably, when the camera is controlled to obtain the image position of the tag to be positioned, the three-dimensional coordinates of one or more tags to be positioned in the corresponding camera coordinate systems are obtained from the image position of the tag, the spatial coordinates of the UWB base station in the same group and the spatial coordinates of the camera, and the spatial coordinates of the tag are obtained from those three-dimensional coordinates to complete the indoor positioning, the following operations are performed:
calculating the distance L between the UWB base station and the camera in the same group:
L = ||T - B||,
wherein T is the spatial coordinate of the camera and B is the spatial coordinate of the UWB base station;
obtaining the image position (uT, vT) of the tag to be positioned with each group's camera, and establishing the following formula according to the pinhole imaging principle and the perspective model of the camera:

Zc·[uT, vT, 1]ᵀ = A·[Xc, Yc, Zc]ᵀ

wherein Xc, Yc, Zc are the three-dimensional coordinates of the tag to be positioned in the camera coordinate system of the corresponding camera, and

        | fx  0   u0 |
    A = | 0   fy  v0 |
        | 0   0   1  |

is the internal parameter matrix of the camera, u0, v0 being the coordinates of the camera's optical center in the image and fx, fy the focal lengths of the camera;
a third equation group is obtained from the formula established according to the pinhole imaging principle and the perspective model of the camera:

fx·Xc + u0·Zc = uT·Zc
fy·Yc + v0·Zc = vT·Zc
Xc² + Yc² + Zc² = (S + L)²

wherein S is the distance from the tag to the UWB base station in the corresponding group;
solving the third equation group gives the three-dimensional coordinate Pc = (Xc, Yc, Zc) of the tag to be positioned in camera coordinates; if there are several groups of UWB base stations and cameras, correspondingly many three-dimensional coordinates are obtained;
for each three-dimensional coordinate Pc, the spatial coordinate Pw = (Xw, Yw, Zw) is calculated by the formula:

Pw = R^(-1)·(Pc - T)

wherein R is the rotation matrix of the camera and T is the translation vector of the camera; if there is one three-dimensional coordinate, the calculated spatial coordinate is the spatial coordinate of the tag in world coordinates; if there are several three-dimensional coordinates, the spatial coordinate of the tag in world coordinates is obtained from the several three-dimensional coordinates.
The indoor positioning method and system based on the fusion of camera images and UWB adopt fused positioning of camera images and a UWB system: the position of the tag in three-dimensional space can be calculated from the distance measured between the tag and a single UWB base station together with the tag's position in the camera image. Even if some UWB base stations are occluded from the tag, as long as at least one UWB base station remains unoccluded, indoor positioning with high precision can still be achieved, greatly improving the robustness of UWB positioning.
Drawings
FIG. 1 is a block flow diagram of an indoor positioning method based on the fusion of camera images and UWB according to the application;
fig. 2 is a schematic structural diagram of an embodiment of an indoor positioning system based on fusion of camera images and UWB according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
The order in which the steps of the methods of the present application are performed is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps may comprise multiple sub-steps or stages that need not be performed at the same moment; they may be performed at different times, and these sub-steps or stages need not be performed sequentially, but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
As shown in fig. 1, an embodiment provides an indoor positioning method based on camera image and UWB fusion, which mainly comprises:
(1) according to the preset spatial coordinates of a plurality of tags, acquiring the distance from each spatial coordinate to a UWB base station and the image position of each spatial coordinate in the camera image;
(2) calculating the spatial coordinates of one or more groups of UWB base stations and cameras from the plurality of spatial coordinates, distances, and image positions;
(3) obtaining the image position of a tag to be positioned with a camera, and obtaining the three-dimensional coordinates of one or more tags to be positioned in the corresponding camera coordinate systems from the image position of the tag, the spatial coordinates of the UWB base station in the same group, and the spatial coordinates of the camera;
(4) obtaining the spatial coordinates of the tag to be positioned from the three-dimensional coordinates of the one or more tags in camera coordinates, completing the indoor positioning.
It should be noted that steps (3) and (4) obtain the real-time position of the tag, while steps (1) and (2) serve as preparation. The preparation may be performed before every real-time positioning, only before the first real-time positioning, or once at regular intervals.
Because the spatial coordinates of the UWB base station and of the camera are calculated from tag positions, accurate spatial coordinates of both can be obtained, eliminating the influence of installation errors or equipment displacement on the positioning result.
In the indoor positioning method provided by this embodiment, a UWB base station and a camera installed at the same position are treated as one group of equipment. The method may involve a single group of UWB base station and camera, or several groups; that is, positioning of the tag can be achieved with one UWB base station and one camera as well as with several of each. This breaks the limitation that positioning requires at least three UWB base stations, reduces the influence of environmental factors on positioning accuracy, achieves indoor positioning with higher precision, and improves the robustness of the UWB system.
In another embodiment, the specific implementation steps of the indoor positioning method based on the fusion of camera images and UWB are described, taking one group consisting of a UWB base station and a camera as an example:
s1, calibrating internal parameters of camera
Obtaining an internal parameter matrix A and a radial distortion parameter k of the camera by adopting a classical chessboard pattern calibration method 1、k2(ii) a Wherein A is:
Figure GDA0002874091610000091
wherein u is0、v0As coordinates of the camera's optical center in the image, fx、fyIs the camera focal length of the video camera.
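As a minimal illustration (not part of the patent text), the intrinsic matrix A can be assembled and used to project a camera-frame point to pixel coordinates; all numeric values below are assumed for the example:

```python
import numpy as np

def intrinsic_matrix(fx, fy, u0, v0):
    """Build the camera internal parameter matrix A described in step S1."""
    return np.array([[fx, 0.0, u0],
                     [0.0, fy, v0],
                     [0.0, 0.0, 1.0]])

def project(A, pc):
    """Project a point (Xc, Yc, Zc) in camera coordinates to a pixel (u, v)."""
    uvw = A @ np.asarray(pc, dtype=float)
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

A = intrinsic_matrix(fx=800.0, fy=800.0, u0=320.0, v0=240.0)
u, v = project(A, (0.0, 0.0, 1.0))   # a point on the optical axis -> (320.0, 240.0)
```

A point on the optical axis projects exactly to the optical center (u0, v0), which is a quick sanity check on the matrix layout.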
S2, jointly calibrating the camera and the UWB base station to obtain the external parameters of the UWB base station and the camera.
S2.1, establishing the calibration data
Let the preset spatial coordinates of i tags be:
{(Xw1, Yw1, Zw1), (Xw2, Yw2, Zw2), (Xw3, Yw3, Zw3), ..., (Xwi, Ywi, Zwi)};
in this example, i = 12.
The acquired distances from the spatial coordinates to the group's UWB base station are:
{d1, d2, d3, ..., di}.
The acquired image positions of the spatial coordinates in the group's camera are:
{(u1, v1), (u2, v2), (u3, v3), ..., (ui, vi)}.
To improve the accuracy of the image positions acquired by the camera, in one embodiment the acquired image positions are positions after distortion correction; the correction method is as follows:
if the image position before correction is (u'i, v'i), the corrected image position (ui, vi) is obtained according to the formula:

ui = u0 + (u'i - u0)/D,  vi = v0 + (v'i - v0)/D

wherein u0, v0 are the coordinates of the camera's optical center in the image and k1, k2 are the radial distortion parameters of the camera; the parameter D is calculated by the formula:

D = 1 + k1·r² + k2·r⁴,  with r² = ((u'i - u0)/fx)² + ((v'i - v0)/fy)²

wherein fx, fy are the focal lengths of the camera. Unless otherwise specified, all image positions referred to in the present application are image positions after distortion correction.
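The correction step can be sketched in Python. The division-based first-order undistortion used here is one standard reading of the radial model with parameters u0, v0, fx, fy, k1, k2, reconstructed from the stated parameters rather than copied from the patent's equation images:

```python
import numpy as np

def undistort_point(up, vp, u0, v0, fx, fy, k1, k2):
    """Approximate radial undistortion of a pixel (up, vp).

    Assumes D = 1 + k1*r^2 + k2*r^4 evaluated at the distorted position in
    normalized image coordinates, and corrected = center + offset / D.
    """
    x = (up - u0) / fx            # normalized image coordinates
    y = (vp - v0) / fy
    r2 = x * x + y * y
    D = 1.0 + k1 * r2 + k2 * r2 * r2
    return u0 + (up - u0) / D, v0 + (vp - v0) / D
```

With zero distortion coefficients the point is returned unchanged; with positive k1 a peripheral point is pulled slightly toward the optical center, as expected for correcting barrel distortion.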
S2.2, calculating the spatial coordinate B of the UWB base station in the group
A first equation group is obtained from the preset i spatial coordinates and the corresponding distances:

(Xw1 - Xb)² + (Yw1 - Yb)² + (Zw1 - Zb)² = d1²
(Xw2 - Xb)² + (Yw2 - Yb)² + (Zw2 - Zb)² = d2²
...
(Xwi - Xb)² + (Ywi - Yb)² + (Zwi - Zb)² = di²

wherein (Xb, Yb, Zb) is the spatial coordinate B of the UWB base station to be calculated. Subtracting each equation from the preceding one yields a second, linear equation group:

M1·Xb + N1·Yb + P1·Zb = Q1
...
M(i-1)·Xb + N(i-1)·Yb + P(i-1)·Zb = Q(i-1)

wherein the M, N, P, Q are all constant coefficients. Solving the second equation group by least squares gives the spatial coordinate B = (Xb, Yb, Zb) of the UWB base station in the group.
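A sketch of this linearized least-squares solve (synthetic anchor points and distances, not values from the patent):

```python
import numpy as np

def solve_base_station(points, dists):
    """Solve for the base-station position B from i known points and their
    measured distances: subtracting consecutive sphere equations gives the
    linear system M*Xb + N*Yb + P*Zb = Q, solved in the least-squares sense."""
    P = np.asarray(points, dtype=float)
    d = np.asarray(dists, dtype=float)
    # eq(j+1) - eq(j):  2*(P[j+1]-P[j]) . B = (|P[j+1]|^2 - |P[j]|^2) - (d[j+1]^2 - d[j]^2)
    Amat = 2.0 * (P[1:] - P[:-1])
    rhs = (np.sum(P[1:] ** 2, axis=1) - np.sum(P[:-1] ** 2, axis=1)) \
          - (d[1:] ** 2 - d[:-1] ** 2)
    B, *_ = np.linalg.lstsq(Amat, rhs, rcond=None)
    return B
```

With noise-free synthetic distances the true base-station position is recovered exactly (up to floating-point error), provided the tag points are not all collinear.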
S2.3, calculating the spatial coordinates of the camera in the group
The projection formula of the camera is established:

s·[u, v, 1]ᵀ = A·[R | T]·[Xw, Yw, Zw, 1]ᵀ

wherein s is a scale constant; u, v are image coordinates, the i acquired image positions of the spatial coordinates in the camera being substituted in for calculation; u0, v0 are the coordinates of the camera's optical center in the image; fx, fy are the focal lengths of the camera; Xw, Yw, Zw are spatial coordinates, the i preset spatial coordinates being substituted in; R is the rotation matrix to be solved, a 3 × 3 matrix; and T is the translation vector to be solved, a 3 × 1 vector.
According to the projection formula, calculating the spatial coordinates of the camera corresponds to solving a PnP problem, which can be performed with the CV_P3P, CV_ITERATIVE, or CV_EPNP algorithm to obtain the values of R and T, where T is the spatial coordinate of the camera in the corresponding group.
In this embodiment, the CV_ITERATIVE algorithm is used, implemented with the OpenCV functions cvFindExtrinsicCameraParams2 and cvFindHomography.
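As an illustrative alternative to the OpenCV call (a plain-numpy Direct Linear Transform sketch, not the patent's CV_ITERATIVE implementation), the projection matrix A·[R|T] can be estimated from six or more 3D-2D correspondences and the extrinsics recovered with the known intrinsic matrix A:

```python
import numpy as np

def dlt_pose(world_pts, image_pts, A):
    """Estimate [R|T] via the DLT: solve s*[u,v,1]^T = P*[Xw,Yw,Zw,1]^T for the
    3x4 matrix P in the least-squares sense (SVD null vector), then recover
    [R|T] = A^-1 * P, fixing scale so the rows of R have unit norm."""
    W = np.asarray(world_pts, float)
    uv = np.asarray(image_pts, float)
    rows = []
    for (X, Y, Z), (u, v) in zip(W, uv):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    M = np.asarray(rows, float)
    _, _, Vt = np.linalg.svd(M)
    P = Vt[-1].reshape(3, 4)            # null vector = flattened projection matrix
    Rt = np.linalg.inv(A) @ P
    Rt /= np.linalg.norm(Rt[0, :3])     # rows of a rotation have unit norm
    if np.linalg.det(Rt[:, :3]) < 0:    # resolve the sign ambiguity of the SVD
        Rt = -Rt
    return Rt[:, :3], Rt[:, 3]          # R (3x3), T (3,)
```

This needs at least 6 non-coplanar points; in practice the OpenCV solvers named in the text (or a nonlinear refinement) are preferable for noisy data.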
S2.4, calculating the relative distance between the camera and the UWB base station in the same group
According to the space coordinate T (Xt, Yt, Zt) of the camera solved in step S2.3 and the space coordinate B (Xb, Yb, Zb) of the UWB base station solved in step S2.2, the distance L between the two is obtained as:

L = ‖T − B‖;

wherein T is the space coordinate of the camera and B is the space coordinate of the UWB base station.
S3, acquiring the space coordinates of the tag by fusing the UWB base station with the camera

S3.1, obtaining the image position of the tag
The tag housing can adopt a color that contrasts strongly with the environment, so that the pixel region of the tag in the image can be obtained by color segmentation and clustering. For positioning purposes the tag can be regarded as a particle, so the image position (uT, vT) of the tag in the image is obtained by simply averaging the pixel coordinates of the segmented tag region.
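The segmentation-and-average step might be sketched as follows — a plain numpy stand-in for the color segmentation and clustering, where the per-channel color-bound interface is an assumption for illustration:

```python
import numpy as np

def tag_image_position(img, lo, hi):
    """Treat the tag as a particle: threshold the image on the tag's
    distinctive color range, then average the pixel coordinates of the
    segmented region to get the tag's image position (uT, vT).
    `img` is an (H, W, 3) array; `lo`/`hi` are per-channel color bounds."""
    mask = np.all((img >= lo) & (img <= hi), axis=-1)
    vs, us = np.nonzero(mask)             # row index -> v, column index -> u
    if us.size == 0:
        return None                       # tag not visible in this camera
    return us.mean(), vs.mean()
```

A real implementation would segment in a perceptual color space and reject spurious clusters; the centroid step, however, is exactly the simple averaging the description calls for.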
S3.2, calculating three-dimensional coordinates of the label
According to the pinhole imaging principle and the perspective model of the camera, the following formula is established:

Zc·[u, v, 1]ᵀ = A·[Xc, Yc, Zc]ᵀ

wherein Xc, Yc, Zc are the three-dimensional coordinates of the tag to be positioned in the camera coordinate system of the corresponding camera, and

A = | fx  0   u0 |
    | 0   fy  v0 |
    | 0   0   1  |

is the internal parameter matrix of the camera, in which u0, v0 are the coordinates of the camera's optical center in the image and fx, fy are the focal lengths of the camera.

Because each camera is very close to the UWB base station in its group, the distance from the tag to the optical center of the camera can be approximated as the sum of the distance S between the UWB base station and the tag and the relative distance L between the camera and the UWB base station. Combining this with the formula established from the pinhole imaging principle and the perspective model, a third equation group is obtained as follows:

u = fx·Xc/Zc + u0
v = fy·Yc/Zc + v0
Xc² + Yc² + Zc² = (S + L)²

wherein Xc, Yc, Zc are the three-dimensional coordinates of the tag to be positioned in the camera coordinate system of the corresponding camera; u0, v0 are the coordinates of the camera's optical center in the image; fx, fy are the focal lengths of the camera; L is the distance between the UWB base station and the camera in the same group; and S is the distance between the tag and the UWB base station in the corresponding group.

Solving the third equation group yields the three-dimensional coordinate Pc of the tag to be positioned in camera coordinates as (Xc, Yc, Zc).
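Step S3.2 has a closed-form solution: the pinhole model fixes the tag's viewing ray, and the range constraint (S + L) fixes the depth along that ray. A minimal numpy sketch (the function name is an assumption):

```python
import numpy as np

def tag_camera_coords(uT, vT, u0, v0, fx, fy, S, L):
    """Intersect the tag's viewing ray with the sphere of radius S + L
    around the camera's optical center:
    Xc^2 + Yc^2 + Zc^2 = (S + L)^2 fixes the depth Zc along the ray."""
    x = (uT - u0) / fx                    # normalized ray direction (x, y, 1)
    y = (vT - v0) / fy
    Zc = (S + L) / np.sqrt(x * x + y * y + 1.0)
    return np.array([x * Zc, y * Zc, Zc])  # (Xc, Yc, Zc)
```

Substituting the result back into the pinhole equations reproduces the measured image position, and its Euclidean norm equals S + L by construction.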
S3.3, obtaining the space coordinates of the tag in world coordinates
For each three-dimensional coordinate Pc, the space coordinate Pw (Xw, Yw, Zw) is calculated according to the following formula:

Pw = R⁻¹(Pc − T);

wherein R is the rotation matrix of the camera and T is the translation matrix of the camera.
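The camera-to-world transform above, as a one-line numpy sketch (function name assumed; for a proper rotation matrix, R⁻¹ equals the transpose of R):

```python
import numpy as np

def camera_to_world(Pc, R, T):
    """Map the tag's camera-frame coordinate Pc back to the world frame
    with the extrinsics from step S2.3:  Pw = R^-1 (Pc - T)."""
    return np.asarray(R).T @ (np.asarray(Pc) - np.asarray(T))
```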
Steps S1 to S3 above describe the process of positioning a tag with one group consisting of a camera and a UWB base station. If a plurality of UWB base stations are set up in the UWB system and each UWB base station is provided with a camera at a short distance, each group of camera and UWB base station executes the above procedure, and a plurality of three-dimensional coordinates are obtained in step S3.2. If there is one three-dimensional coordinate, the space coordinate calculated from it is the space coordinate of the tag to be positioned in world coordinates; if there are a plurality of three-dimensional coordinates, the space coordinate of the tag to be positioned in world coordinates is obtained from the plurality of three-dimensional coordinates.

When the space coordinate of the tag to be positioned in world coordinates is obtained from a plurality of three-dimensional coordinates, a different weight can be set for each three-dimensional coordinate in the calculation.
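A possible weighted-fusion sketch — the uniform default weighting is an assumption, as the description leaves the choice of weights open:

```python
import numpy as np

def fuse_positions(coords, weights=None):
    """Fuse the per-group tag coordinates into one world position.
    With a single group this returns its coordinate unchanged; with
    several groups it takes a (optionally weighted) average."""
    coords = np.atleast_2d(np.asarray(coords, dtype=float))
    if weights is None:
        weights = np.ones(len(coords))   # default: equal trust in each group
    w = np.asarray(weights, dtype=float)
    return (coords * w[:, None]).sum(axis=0) / w.sum()
```

In practice the weights could reflect per-group measurement quality, e.g. down-weighting a camera with a poor view of the tag.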
In another embodiment, there is also provided an indoor positioning system based on camera image fusion with UWB, the system comprising: one or more groups of UWB base stations and cameras arranged in pairs, tags located within an indoor positioning range, and computer equipment connected with the UWB base stations and the cameras; the computer device comprises a memory and a processor, the memory stores a computer program, and the processor realizes the following steps when executing the computer program:
according to the preset space coordinates of a plurality of labels, the distance between each space coordinate and a UWB base station is obtained, and the image position of each space coordinate in the camera image is obtained;
calculating the space coordinates of one or more groups of UWB base stations and the space coordinates of the camera according to the space coordinates, the distances and the image positions;
controlling a camera to obtain the image position of a label to be positioned, and obtaining the three-dimensional coordinates of one or more labels to be positioned under the corresponding camera coordinates according to the image position of the label to be positioned, the spatial coordinates of UWB base stations positioned in the same group and the spatial coordinates of the camera;
and obtaining the space coordinates of the label to be positioned according to the three-dimensional coordinates of the one or more labels to be positioned under the camera coordinates, thereby completing indoor positioning.
Taking fig. 2 as an example, the structure of the indoor positioning system based on the fusion of camera images and UWB is further explained: in fig. 2, 3 groups of UWB base stations and cameras are arranged in different directions, and in each group the UWB base station and the camera are very close to each other.

The UWB base station 1 and the camera 4, the UWB base station 2 and the camera 5, and the UWB base station 3 and the camera 6 are each installed as close together as possible, so as to improve the accuracy of the subsequent positioning calculation. The tag 7 is within the positioning range of the 3 groups of UWB base stations and cameras. Each UWB base station and each camera are connected to the computer equipment, receive instructions from it, and upload the acquired data to the computer.

For the 3 groups of UWB base stations and cameras exemplified in this embodiment, when there is no occlusion by foreign objects, the 3 groups position the tag simultaneously, and the final positioning information is obtained from the 3 groups of positioning results. When the UWB base station or the camera in one or two groups cannot work, the tag is positioned with the remaining two groups or one group, so that the final positioning information of the tag is still obtained and the influence of environmental factors on positioning is reduced.
For specific limitations of the indoor positioning system based on the fusion of the camera image and the UWB, reference may be made to the above limitations of the indoor positioning method based on the fusion of the camera image and the UWB, and details thereof are not repeated herein.
In another embodiment, there is also provided an indoor positioning system based on camera image fusion with UWB, the system comprising: one or more groups of UWB base stations and cameras arranged in pairs, tags located within an indoor positioning range, and computer equipment connected with the UWB base stations and the cameras; the computer device includes:
the condition presetting module is used for acquiring the distance between each space coordinate and the UWB base station and acquiring the image position of each space coordinate in the camera image according to the preset space coordinates of the plurality of labels;
the external parameter acquisition module is used for calculating the space coordinates of one or more groups of UWB base stations and the space coordinates of the camera according to the plurality of space coordinates, distances and image positions;
the three-dimensional coordinate calculation module is used for controlling the camera to obtain the image position of the label to be positioned, and obtaining the three-dimensional coordinates of one or more labels to be positioned under the corresponding camera coordinates according to the image position of the label to be positioned, the space coordinates of the UWB base stations in the same group and the space coordinates of the camera;
And the space coordinate calculation module is used for obtaining the space coordinates of the label to be positioned according to the three-dimensional coordinates of the one or more labels to be positioned under the camera coordinates, and completing indoor positioning.
For specific limitations of the indoor positioning system based on the fusion of the camera image and the UWB, reference may be made to the above limitations of the indoor positioning method based on the fusion of the camera image and the UWB, and details thereof are not repeated herein.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. An indoor positioning method based on the fusion of camera images and UWB is characterized in that the indoor positioning method based on the fusion of camera images and UWB comprises the following steps:
according to the preset space coordinates of a plurality of labels, the distance between each space coordinate and a UWB base station is obtained, and the image position of each space coordinate in the camera image is obtained;
calculating the space coordinates of one or more groups of UWB base stations and the space coordinates of the camera according to the space coordinates, the distances and the image positions;
controlling a camera to obtain the image position of a label to be positioned, and obtaining the three-dimensional coordinates of one or more labels to be positioned under the corresponding camera coordinates according to the image position of the label to be positioned, the spatial coordinates of UWB base stations positioned in the same group and the spatial coordinates of the camera;
and obtaining the space coordinates of the label to be positioned by utilizing the three-dimensional coordinates of the one or more labels to be positioned under the camera coordinates, thereby completing indoor positioning.
2. The indoor positioning method based on the fusion of camera image and UWB according to claim 1, wherein the image position of each acquired space coordinate in the camera image and the image position of the tag to be positioned obtained by the control camera are corrected image positions, and the correction method of the image positions comprises:
If the image position before correction is (ui′, vi′), the corrected image position (ui, vi) is obtained according to the formula:

ui = u0 + (ui′ − u0)·(1 + k1·D + k2·D²)
vi = v0 + (vi′ − v0)·(1 + k1·D + k2·D²)

wherein u0, v0 are the coordinates of the camera's optical center in the image and k1, k2 are the radial distortion parameters of the camera; the parameter D is calculated by the following formula:

D = ((ui′ − u0)/fx)² + ((vi′ − v0)/fy)²

wherein u0, v0 are the coordinates of the camera's optical center in the image and fx, fy are the focal lengths of the camera.
3. The indoor positioning method based on the fusion of camera image and UWB according to claim 1, wherein the calculating of the spatial coordinates of one or more groups of UWB base stations and the spatial coordinates of the camera according to a plurality of spatial coordinates, distances and image positions comprises:
let the preset i space coordinates be: {(Xw1, Yw1, Zw1), (Xw2, Yw2, Zw2), (Xw3, Yw3, Zw3), …, (Xwi, Ywi, Zwi)}, the acquired distances between the space coordinates and one UWB base station be: {d1, d2, d3, …, di}, and the acquired image positions of the space coordinates in the corresponding camera be: {(u1, v1), (u2, v2), (u3, v3), …, (ui, vi)};
Calculating spatial coordinates B of the UWB base station, including:
obtaining a first equation group according to the preset i space coordinates and corresponding distances as follows:

(Xw1 − Xb)² + (Yw1 − Yb)² + (Zw1 − Zb)² = d1²
(Xw2 − Xb)² + (Yw2 − Yb)² + (Zw2 − Zb)² = d2²
…
(Xwi − Xb)² + (Ywi − Yb)² + (Zwi − Zb)² = di²

wherein Xb, Yb, Zb are the space coordinate B of the UWB base station to be calculated; subtracting each equation in the first equation group from the one following it yields a second equation group as follows:

M1·Xb + N1·Yb + P1·Zb = Q1
M2·Xb + N2·Yb + P2·Zb = Q2
…
M(i−1)·Xb + N(i−1)·Yb + P(i−1)·Zb = Q(i−1)

wherein M(i−1), N(i−1), P(i−1), Q(i−1) are all constant coefficients;

solving the second equation group by the least square method to obtain the space coordinate B of the UWB base station to be calculated in the group as (Xb, Yb, Zb).
4. The indoor positioning method based on the fusion of the camera image and the UWB as set forth in claim 3, wherein the calculating the spatial coordinates of the camera comprises:
establishing a projection formula of the camera:
s·[u, v, 1]ᵀ = A·[R | T]·[X, Y, Z, 1]ᵀ

wherein s is a constant; u and v are image coordinates, into which the i acquired image positions of the space coordinates in the corresponding camera are substituted; A is the internal parameter matrix of the camera, containing the coordinates u0, v0 of the camera's optical center in the image and the focal lengths fx, fy of the camera; X, Y, Z are space coordinates, into which the i preset space coordinates are substituted; R is the rotation matrix to be solved; and T is the translation matrix to be solved;
according to the projection formula, the calculation of the spatial coordinates of the cameras corresponds to the solution based on the PnP problem, and the solution can be performed by using CV _ P3P algorithm, CV _ ITERATIVE algorithm or CV _ EPNP algorithm to obtain the values of R and T, where T is the spatial coordinates of the cameras in the corresponding group.
5. The indoor positioning method based on the fusion of the camera image and the UWB as claimed in claim 4, wherein the camera is controlled to obtain the image position of the tag to be positioned, and the three-dimensional coordinates of one or more tags to be positioned under the corresponding camera coordinates are obtained according to the image position of the tag to be positioned, the spatial coordinates of the UWB base stations positioned in the same group and the spatial coordinates of the camera; obtaining the space coordinate of the label to be positioned by the three-dimensional coordinate of one or more labels to be positioned under the camera coordinate, and finishing indoor positioning, wherein the method comprises the following steps:
Calculating the distance L between the UWB base station and the camera in the same group:
L=‖T-B‖;
wherein, T is the space coordinate of the camera, B is the space coordinate of the UWB base station;
obtaining the image position (uT, vT) of the tag to be positioned by using each group of cameras, and establishing the following formula according to the pinhole imaging principle and the perspective model of the camera:

Zc·[u, v, 1]ᵀ = A·[Xc, Yc, Zc]ᵀ

wherein Xc, Yc, Zc are the three-dimensional coordinates of the tag to be positioned in the camera coordinate system of the corresponding camera, and

A = | fx  0   u0 |
    | 0   fy  v0 |
    | 0   0   1  |

is the internal parameter matrix of the camera, in which u0, v0 are the coordinates of the camera's optical center in the image and fx, fy are the focal lengths of the camera;

obtaining a third equation group using the formula established according to the pinhole imaging principle and the perspective model of the camera as follows:

u = fx·Xc/Zc + u0
v = fy·Yc/Zc + v0
Xc² + Yc² + Zc² = (S + L)²

wherein Xc, Yc, Zc are the three-dimensional coordinates of the tag to be positioned in the camera coordinate system of the corresponding camera; u0, v0 are the coordinates of the camera's optical center in the image; fx, fy are the focal lengths of the camera; L is the distance between the UWB base station and the camera in the same group; and S is the distance between the tag and the UWB base station in the corresponding group;

solving the third equation group to obtain the three-dimensional coordinate Pc of the tag to be positioned in camera coordinates as (Xc, Yc, Zc); if there are one or more groups of UWB base stations and cameras, one or more three-dimensional coordinates are correspondingly obtained;

for each three-dimensional coordinate Pc, calculating the space coordinate Pw as (Xw, Yw, Zw) according to the following formula:

Pw = R⁻¹(Pc − T);

wherein R is the rotation matrix of the camera and T is the translation matrix of the camera; if there is one three-dimensional coordinate, the calculated space coordinate is the space coordinate of the tag to be positioned in world coordinates; and if there are a plurality of three-dimensional coordinates, the space coordinate of the tag to be positioned in world coordinates is obtained from the plurality of three-dimensional coordinates.
6. An indoor positioning system based on camera image and UWB fusion, characterized in that the indoor positioning system based on camera image and UWB fusion comprises: one or more groups of UWB base stations and cameras arranged in pairs, tags located within an indoor positioning range, and computer equipment connected with the UWB base stations and the cameras; the computer device comprises a memory and a processor, the memory stores a computer program, and the processor realizes the following steps when executing the computer program:
according to the preset space coordinates of a plurality of labels, the distance between each space coordinate and a UWB base station is obtained, and the image position of each space coordinate in the camera image is obtained;
calculating the space coordinates of one or more groups of UWB base stations and the space coordinates of the camera according to the space coordinates, the distances and the image positions;
Controlling a camera to obtain the image position of a label to be positioned, and obtaining the three-dimensional coordinates of one or more labels to be positioned under the corresponding camera coordinates according to the image position of the label to be positioned, the spatial coordinates of UWB base stations positioned in the same group and the spatial coordinates of the camera;
and obtaining the space coordinates of the label to be positioned by utilizing the three-dimensional coordinates of the one or more labels to be positioned under the camera coordinates, thereby completing indoor positioning.
7. An indoor positioning system based on camera image and UWB fusion according to claim 6, wherein the image position of each acquired space coordinate in the camera image and the image position of the tag to be positioned obtained by the control camera are corrected image positions, and the correction of the image positions is performed by:
if the image position before correction is (ui′, vi′), the corrected image position (ui, vi) is obtained according to the formula:

ui = u0 + (ui′ − u0)·(1 + k1·D + k2·D²)
vi = v0 + (vi′ − v0)·(1 + k1·D + k2·D²)

wherein u0, v0 are the coordinates of the camera's optical center in the image and k1, k2 are the radial distortion parameters of the camera; the parameter D is calculated by the following formula:

D = ((ui′ − u0)/fx)² + ((vi′ − v0)/fy)²

wherein u0, v0 are the coordinates of the camera's optical center in the image and fx, fy are the focal lengths of the camera.
8. An indoor positioning system based on the fusion of camera image and UWB according to claim 6, characterized in that, when calculating the space coordinates of one or more groups of UWB base stations and the space coordinates of camera according to a plurality of space coordinates, distances and image positions, and calculating the space coordinates of UWB base stations and the space coordinates of camera of each group, the following operations are executed:
Let the preset i space coordinates be: {(Xw1, Yw1, Zw1), (Xw2, Yw2, Zw2), (Xw3, Yw3, Zw3), …, (Xwi, Ywi, Zwi)}, the acquired distances between the space coordinates and one UWB base station be: {d1, d2, d3, …, di}, and the acquired image positions of the space coordinates in the corresponding camera be: {(u1, v1), (u2, v2), (u3, v3), …, (ui, vi)};
Calculating spatial coordinates B of the UWB base station, including:
obtaining a first equation group according to the preset i space coordinates and corresponding distances as follows:

(Xw1 − Xb)² + (Yw1 − Yb)² + (Zw1 − Zb)² = d1²
(Xw2 − Xb)² + (Yw2 − Yb)² + (Zw2 − Zb)² = d2²
…
(Xwi − Xb)² + (Ywi − Yb)² + (Zwi − Zb)² = di²

wherein Xb, Yb, Zb are the space coordinate B of the UWB base station to be calculated; subtracting each equation in the first equation group from the one following it yields a second equation group as follows:

M1·Xb + N1·Yb + P1·Zb = Q1
M2·Xb + N2·Yb + P2·Zb = Q2
…
M(i−1)·Xb + N(i−1)·Yb + P(i−1)·Zb = Q(i−1)

wherein M(i−1), N(i−1), P(i−1), Q(i−1) are all constant coefficients;

solving the second equation group by the least square method to obtain the space coordinate B of the UWB base station to be calculated in the group as (Xb, Yb, Zb).
9. An indoor positioning system based on the fusion of camera image and UWB according to claim 8, characterized in that when calculating the spatial coordinates of the camera, the following operations are executed:
establishing a projection formula of the camera:
s·[u, v, 1]ᵀ = A·[R | T]·[X, Y, Z, 1]ᵀ

wherein s is a constant; u and v are image coordinates, into which the i acquired image positions of the space coordinates in the corresponding camera are substituted; A is the internal parameter matrix of the camera, containing the coordinates u0, v0 of the camera's optical center in the image and the focal lengths fx, fy of the camera; X, Y, Z are space coordinates, into which the i preset space coordinates are substituted; R is the rotation matrix to be solved; and T is the translation matrix to be solved;
According to the projection formula, the calculation of the spatial coordinates of the cameras corresponds to the solution based on the PnP problem, and the solution can be performed by using CV _ P3P algorithm, CV _ ITERATIVE algorithm or CV _ EPNP algorithm to obtain the values of R and T, where T is the spatial coordinates of the cameras in the corresponding group.
10. The indoor positioning system based on the fusion of the camera image and the UWB as defined in claim 9, wherein the control camera obtains an image position of the tag to be positioned, and obtains three-dimensional coordinates of one or more tags to be positioned under corresponding camera coordinates according to the image position of the tag to be positioned, the spatial coordinates of the UWB base stations located in the same group, and the spatial coordinates of the camera; obtaining the space coordinates of the label to be positioned according to the three-dimensional coordinates of one or more labels to be positioned under the camera coordinates, completing indoor positioning, and executing the following operations:
calculating the distance L between the UWB base station and the camera in the same group:
L=||T-B||;
wherein, T is the space coordinate of the camera, B is the space coordinate of the UWB base station;
obtaining the image position (uT, vT) of the tag to be positioned by using each group of cameras, and establishing the following formula according to the pinhole imaging principle and the perspective model of the camera:

Zc·[u, v, 1]ᵀ = A·[Xc, Yc, Zc]ᵀ

wherein Xc, Yc, Zc are the three-dimensional coordinates of the tag to be positioned in the camera coordinate system of the corresponding camera, and

A = | fx  0   u0 |
    | 0   fy  v0 |
    | 0   0   1  |

is the internal parameter matrix of the camera, in which u0, v0 are the coordinates of the camera's optical center in the image and fx, fy are the focal lengths of the camera;

obtaining a third equation group using the formula established according to the pinhole imaging principle and the perspective model of the camera as follows:

u = fx·Xc/Zc + u0
v = fy·Yc/Zc + v0
Xc² + Yc² + Zc² = (S + L)²

wherein Xc, Yc, Zc are the three-dimensional coordinates of the tag to be positioned in the camera coordinate system of the corresponding camera; u0, v0 are the coordinates of the camera's optical center in the image; fx, fy are the focal lengths of the camera; L is the distance between the UWB base station and the camera in the same group; and S is the distance between the tag and the UWB base station in the corresponding group;

solving the third equation group to obtain the three-dimensional coordinate Pc of the tag to be positioned in camera coordinates as (Xc, Yc, Zc); if there are one or more groups of UWB base stations and cameras, one or more three-dimensional coordinates are correspondingly obtained;

for each three-dimensional coordinate Pc, calculating the space coordinate Pw as (Xw, Yw, Zw) according to the following formula:

Pw = R⁻¹(Pc − T);

wherein R is the rotation matrix of the camera and T is the translation matrix of the camera; if there is one three-dimensional coordinate, the calculated space coordinate is the space coordinate of the tag to be positioned in world coordinates; and if there are a plurality of three-dimensional coordinates, the space coordinate of the tag to be positioned in world coordinates is obtained from the plurality of three-dimensional coordinates.
CN201910355667.6A 2019-04-29 2019-04-29 Indoor positioning method and system based on fusion of camera image and UWB Active CN110650427B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910355667.6A CN110650427B (en) 2019-04-29 2019-04-29 Indoor positioning method and system based on fusion of camera image and UWB


Publications (2)

Publication Number Publication Date
CN110650427A CN110650427A (en) 2020-01-03
CN110650427B true CN110650427B (en) 2021-08-06

Family

ID=68989399

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910355667.6A Active CN110650427B (en) 2019-04-29 2019-04-29 Indoor positioning method and system based on fusion of camera image and UWB

Country Status (1)

Country Link
CN (1) CN110650427B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111226800B (en) * 2020-01-19 2021-10-15 中国农业科学院农业信息研究所 Milk cow cooling method, device and system based on position detection
CN111896915A (en) * 2020-04-22 2020-11-06 河海大学 Soft body row overlapping positioning monitoring detection system and use method thereof
CN111601246B (en) * 2020-05-08 2021-04-20 中国矿业大学(北京) Intelligent position sensing system based on space three-dimensional model image matching
CN113808199B (en) * 2020-06-17 2023-09-08 华为云计算技术有限公司 Positioning method, electronic equipment and positioning system
CN111852456B (en) * 2020-07-29 2023-04-07 中国矿业大学 Robust UWB (ultra wide band) underground anchor rod drilling positioning method based on factor graph
CN113115208A (en) * 2021-04-12 2021-07-13 云汉逐影(北京)科技有限公司 UWB-based target tracking and target image reconstruction technology
CN113856173A (en) * 2021-09-03 2021-12-31 安吉豪鼎机电有限公司 Golf ball follow-up vehicle
CN114071362A (en) * 2021-11-05 2022-02-18 国网江苏省电力有限公司电力科学研究院 Multi-target dynamic monitoring method, device, equipment and medium
CN116193581B (en) * 2023-05-04 2023-08-04 广东工业大学 Indoor unmanned aerial vehicle hybrid positioning method and system based on member-collecting filtering

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101626540B1 (en) * 2014-06-19 2016-06-01 주식회사 에스원 Intruder Detection system based on Smart Sensor and Method thereof
CN106647766A (en) * 2017-01-13 2017-05-10 广东工业大学 Robot cruise method and system based on complex environment UWB-vision interaction
CN108012325B (en) * 2017-10-30 2021-01-19 上海神添实业有限公司 Navigation positioning method based on UWB and binocular vision
CN109117809A (en) * 2018-08-24 2019-01-01 浙江大丰实业股份有限公司 UWB positioning states switching system
CN109323696B (en) * 2018-11-07 2022-07-08 航天信息股份有限公司 Indoor positioning navigation system and method for unmanned forklift
CN109541535A (en) * 2019-01-11 2019-03-29 浙江智澜科技有限公司 A method of AGV indoor positioning and navigation based on UWB and vision SLAM

Also Published As

Publication number Publication date
CN110650427A (en) 2020-01-03

Similar Documents

Publication Publication Date Title
CN110650427B (en) Indoor positioning method and system based on fusion of camera image and UWB
CN108257183B (en) Camera lens optical axis calibration method and device
CN110660088B (en) Image processing method and device
CN109272478B (en) Screen projection method and device and related equipment
US20180218485A1 (en) Method and apparatus for fusing plurality of depth images
US10373360B2 (en) Systems and methods for content-adaptive image stitching
CN111754579B (en) Method and device for determining external parameters of multi-view camera
WO2021139176A1 (en) Pedestrian trajectory tracking method and apparatus based on binocular camera calibration, computer device, and storage medium
CN110363838B (en) Large-visual-field image three-dimensional reconstruction optimization method based on multi-spherical-surface camera model
CN103679729A (en) Full-automatic camera parameter calibration method based on colored calibration board
CN108053373A (en) One kind is based on deep learning model fisheye image correcting method
CN111461963B (en) Fisheye image stitching method and device
CN107492080B (en) Calibration-free convenient monocular head image radial distortion correction method
CN113329179B (en) Shooting alignment method, device, equipment and storage medium
CN106570907A (en) Camera calibrating method and device
CN113301320A (en) Image information processing method and device and electronic equipment
CN108460724B (en) Adaptive image fusion method and system based on Mahalanobis distance discrimination
CN112258581B (en) On-site calibration method for panoramic camera with multiple fish glasses heads
CN112598751A (en) Calibration method and device, terminal and storage medium
CN108053376A (en) A kind of semantic segmentation information guiding deep learning fisheye image correcting method
CN111815714A (en) Fisheye camera calibration method and device, terminal device and storage medium
CN115834860A (en) Background blurring method, apparatus, device, storage medium, and program product
CN108683897B (en) Intelligent correction method for distortion of multi-projection display system
CN112396662A (en) Method and device for correcting conversion matrix
WO2021022989A1 (en) Calibration parameter obtaining method and apparatus, processor, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant