CN106228538B - Binocular vision indoor orientation method based on logo - Google Patents

Binocular vision indoor orientation method based on logo

Info

Publication number
CN106228538B
CN106228538B (application CN201610546034.XA; published earlier as CN106228538A)
Authority
CN
China
Prior art keywords
coordinate
video camera
point
camera
coordinate system
Prior art date
Legal status
Active
Application number
CN201610546034.XA
Other languages
Chinese (zh)
Other versions
CN106228538A (en)
Inventor
马琳
林英男
徐诗皓
徐玉滨
Current Assignee
Shenzhen National Research Institute of High Performance Medical Devices Co Ltd
Original Assignee
Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN201610546034.XA priority Critical patent/CN106228538B/en
Publication of CN106228538A publication Critical patent/CN106228538A/en
Application granted granted Critical
Publication of CN106228538B publication Critical patent/CN106228538B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A binocular vision indoor positioning method based on logos; the present invention relates to binocular vision indoor positioning methods. The invention addresses the problem in the prior art that monocular vision cannot easily extract three-dimensional information about a target, and proposes a binocular vision indoor positioning method based on logos. The method is realized through the following steps: one, establish the visual map database from acquired logo images; two, obtain the intrinsic parameters and the extrinsic parameters of the left camera and the right camera; three, shoot a logo image with the binocular camera, match its visual features against the visual features of the Visual Map database images, reject mismatched points and retain the matched points; four, compute the three-dimensional coordinates of vertices P1 and P2; five, compute the coordinates of the left camera in the world coordinate system. The invention applies to the field of binocular vision indoor positioning methods.

Description

Binocular vision indoor orientation method based on logo
Technical field
The present invention relates to binocular vision indoor positioning methods, and in particular to a binocular vision indoor positioning method based on logos.
Background art
The rapid development of the Internet and wearable devices, together with the continuous demand for location-aware applications, has made location awareness an increasingly indispensable kind of information on which people rely. Reliable location information gives users a better experience in many different environments; in daily life, people increasingly depend on GPS- and map-based location services and navigation systems. Outdoor positioning technology is clearly highly mature, and many mobile devices already use it: GPS, GLONASS, Galileo and the BeiDou Navigation Satellite System are at present the most widely used global positioning systems. However, because of indoor environmental factors, satellite positioning signals cannot directly meet the needs of indoor location services, so GPS is appropriate only for outdoor scenes. At present, the technologies and industry around location services are developing in the indoor direction, with the goal of providing people with ubiquitous location services.
There are many indoor positioning methods at present, but they suffer from complex deployment conditions and limited positioning accuracy. For example, indoor positioning through GPS or WiFi signals cannot continuously provide good user location information, which means that traditional methods all have shortcomings in accuracy, sensitivity and robustness in such a closed, interference-prone indoor environment. A vision-based approach, by contrast, can simply, directly and effectively extract useful location information from a relatively static indoor environment without depending on other external information, so solving indoor positioning with machine vision methods is a feasible scheme.
In the indoor positioning field, accuracy and cost are the main factors restricting adoption. A practical approach is therefore to propose markers that are simple and convenient, easy to recognize, carry absolute position coordinates and have a certain error-correcting capability; to capture the marker information with a camera; and to determine the position through image processing. This is an application of computer-vision target recognition to indoor positioning. The markers can be signs that already exist in a building, such as emergency-exit signs and room numbers, or artificially placed icon patterns; after matching, the problem can be solved in software form, with reliable accuracy and low cost.
In machine vision, binocular vision is currently a hot topic, because it can extract three-dimensional information about a target more easily than monocular vision while remaining simpler to realize than multi-camera systems, so building a binocular stereo vision system for indoor positioning works well. A binocular stereo vision system usually acquires two digital images of the measured object simultaneously from different viewpoints with two cameras, or acquires two digital images of the measured object from different viewpoints at different moments with a single camera, recovers the three-dimensional geometric information of the object from the parallax principle, and reconstructs the object's three-dimensional contour and position, from which relative coordinates can be obtained.
Unlike ordinary image-matching scenarios, with a wearable device the direction and heading in which a person walks indoors are not fixed. In an open shopping square many scene images can be captured, but more often the user walks forward along a long, narrow corridor, facing the front. For a better user experience, when the user faces straight ahead, the objects on the walls at both sides appear in the image at a small angle, which makes target recognition difficult.
Summary of the invention
The purpose of the present invention is to solve the problem in the prior art that monocular vision cannot easily extract three-dimensional information about a target, by proposing a binocular vision indoor positioning method based on logos.
The above object of the invention is achieved through the following technical solution:
Step 1: establish the visual map database from acquired logo images;
Step 2: calibrate the intrinsic and extrinsic parameters of the binocular camera: using Zhang Zhengyou's chessboard calibration method, obtain the intrinsic parameters of the left camera and of the right camera; using Matlab, determine the extrinsic parameters R and T describing the relative pose of the left camera and the right camera;
Step 3: shoot a logo image with the binocular camera, match its visual features against the visual features of the Visual Map database images, reject the mismatched points and retain the matched points;
Step 4: from the pixel coordinates (u_l^1, v_l^1) and (u_r^1, v_r^1) obtained in Step 3, compute the three-dimensional coordinate of vertex P1 in the left camera coordinate system O_cl X_cl Y_cl Z_cl; from the pixel coordinates (u_l^2, v_l^2) and (u_r^2, v_r^2) obtained in Step 3, compute the three-dimensional coordinate of vertex P2 in the left camera coordinate system O_cl X_cl Y_cl Z_cl;
Step 5: from the three-dimensional coordinates obtained in Step 4 and the world coordinates of the logo-image vertices P1 and P2 stored in the visual map database, establish the coordinate transformation relation and compute the coordinates of the left camera in the world coordinate system, which are the position coordinates of the user.
Effects of the invention
In machine vision, binocular vision is currently a hot topic, because it can extract three-dimensional information about a target more easily than monocular vision while remaining simpler and easier than multi-camera systems, so using two cameras as a person's eyes for indoor positioning works very well. A binocular stereo vision system is in fact a triangulation system: two cameras at different left and right positions capture the same target object, or a single moving camera photographs the same object at different positions; the distance of the target is recovered from the parallax principle, the three-dimensional contour of the object can be reconstructed, and relative coordinates obtained. Visual information suffers relatively little interference and its accuracy is high. This work therefore mainly chooses a binocular vision system to realize high-accuracy indoor positioning.
The present invention realizes online user positioning through feature-point matching between the captured image pair and the database images, and through the transformation between the coordinate system of the captured image and the real-world coordinate system of the database; it performs indoor positioning using the logo image information on the two walls of a long corridor. The positioning system has two stages: an offline database-building stage and an online position-estimation stage. In the offline stage, a visual map database containing visual feature information is built for the indoor scene: a certain number of logo images are placed in the indoor long corridor so that at every moment some logo falls in both images of the binocular camera; the logos on the facing corridor wall are photographed under good lighting conditions, and the two-dimensional world coordinates of the top-left and top-right vertices of each logo image are recorded. When the user requests positioning online, a pair of images is shot with the binocular camera, SURF feature matching and ranging are performed against the offline database images, and the coordinates of the shooting position are determined by coordinate transformation. When the camera is 3.5 m from the logo, the positioning error reaches 0.22 m; the cumulative distribution function in Fig. 6 shows that the result at 68% of the probability distribution is 0.22 m.
Detailed description of the invention
Fig. 1 is the flow chart of the RANSAC algorithm of Specific embodiment 5 for rejecting mismatched points and estimating the homography matrix H;
Fig. 2 is the schematic diagram of the relations between the coordinate systems of Specific embodiment 4;
Fig. 3 is the relation between the camera coordinate system and the image physical coordinate system of Specific embodiment 7 (d = 1, 2);
Fig. 4 is the binocular vision system diagram of Specific embodiment 3 (d = 1, 2);
Fig. 5 is the schematic diagram (top view) of the indoor coordinate system of Specific embodiment 2;
Fig. 6 is the cumulative distribution function plot of Specific embodiment 1.
Specific embodiments
Specific embodiment 1: the binocular vision indoor positioning method based on logos of this embodiment is specifically carried out according to the following steps:
Step 1: establish the visual map (Visual Map) database from acquired logo images;
Step 2: calibrate the intrinsic and extrinsic parameters of the binocular camera: using Zhang Zhengyou's chessboard calibration method, obtain the intrinsic parameters of the left camera and of the right camera, and determine with Matlab the extrinsic parameters R and T describing the relative pose of the left camera and the right camera;
Step 3: shoot a logo image with the binocular camera, match its visual features against the visual features of the Visual Map database images, reject the mismatched points and retain the matched points;
Step 4: from the pixel coordinates obtained in Step 3, compute the three-dimensional coordinate of vertex P1 in the left camera coordinate system O_cl X_cl Y_cl Z_cl, and likewise the three-dimensional coordinate of vertex P2 in the left camera coordinate system O_cl X_cl Y_cl Z_cl;
Step 5: from the three-dimensional coordinates obtained in Step 4 and the world coordinates of the logo-image vertices P1 and P2 stored in the visual map (Visual Map) database, establish the coordinate transformation relation and compute the coordinates of the left camera in the world coordinate system, which are the position coordinates of the user;
From Step 1, the position information of each visual feature is available. Step 3 determines the database image matching the captured image, and hence the position coordinates of the visual features contained in the captured image; it also yields the positions of the visual features in the camera coordinate system. From two different visual features, the two position coordinates in the camera coordinate system are determined; the position of the camera in world coordinates is then obtained by coordinate computation, i.e. the online user position is computed.
Effects of this embodiment:
This embodiment realizes online user positioning through feature-point matching between the captured image pair and the database images, and through the transformation between the coordinate system of the captured image and the real-world coordinate system of the database; it performs indoor positioning using the logo image information on the two walls of a long corridor. The positioning system has two stages: an offline database-building stage and an online position-estimation stage. In the offline stage, a visual map database containing visual feature information is built for the indoor scene: a certain number of logo images are placed in the indoor long corridor so that at every moment some logo falls in both images of the binocular camera; the logos on the facing corridor wall are photographed under good lighting conditions, and the two-dimensional world coordinates of the top-left and top-right vertices of each logo image are recorded. When the user requests positioning online, a pair of images is shot with the binocular camera, SURF feature matching and ranging are performed against the offline database images, and the coordinates of the shooting position are determined by coordinate transformation. When the camera is 3.5 m from the logo, the positioning error reaches 0.22 m; the cumulative distribution function in Fig. 6 shows that the result at 68% of the probability distribution is 0.22 m.
Specific embodiment 2: this embodiment differs from Specific embodiment 1 in the detailed process, in Step 1, of establishing the visual map (Visual Map) database from acquired logo images:
Step 1.1: for the indoor environment to be positioned, select the world coordinate system origin O(x_w0, z_w0) and establish the planar two-dimensional Cartesian coordinate system O_w X_w Z_w;
Step 1.2: acquire with a monocular camera the logo images fixed on the indoor walls, recording the position coordinate (x_w^P1, z_w^P1) of vertex P1 of each captured logo image in the world coordinate system and the position coordinate (x_w^P2, z_w^P2) of vertex P2 in the world coordinate system; save the pixel coordinate p1 = (u1, v1) of vertex P1 of the logo image shot by the monocular camera, and define the pixel coordinate p2 = (u2, v2) of vertex P2 of the logo image shot by the monocular camera; store the pixel coordinates (u1, v1) and (u2, v2) and the position coordinates (x_w^P1, z_w^P1) and (x_w^P2, z_w^P2) in the visual map (Visual Map) database; the position coordinates are indicated as shown in Fig. 5;
Step 1.3: extract feature-point information from each logo image q with the SURF algorithm to obtain its visual features, and store the visual features in the visual map (Visual Map) database; q = 1, …, n, where n is the number of logo images. The other steps and parameters are the same as in Specific embodiment 1.
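Steps 1.1–1.3 can be pictured as building one small record per logo. The following is a minimal sketch with hypothetical field and function names (the patent prescribes no storage layout, and the descriptor lists here merely stand in for SURF feature vectors):

```python
from dataclasses import dataclass, field

@dataclass
class VisualMapEntry:
    """One logo image q in the Visual Map database (hypothetical layout)."""
    logo_id: int
    p1_world: tuple      # (x_w, z_w) of top-left vertex P1 in world frame O_w X_w Z_w
    p2_world: tuple      # (x_w, z_w) of top-right vertex P2
    p1_pixel: tuple      # p1 = (u1, v1) in the reference image
    p2_pixel: tuple      # p2 = (u2, v2)
    descriptors: list = field(default_factory=list)  # SURF-like feature vectors

def build_visual_map(records):
    """Steps 1.2-1.3: collect the n logo records into the offline database."""
    return [VisualMapEntry(*r) for r in records]

db = build_visual_map([
    (1, (0.0, 2.0), (0.4, 2.0), (102, 88), (410, 90), [[0.1] * 4]),
    (2, (0.0, 5.0), (0.4, 5.0), (98, 91), (405, 93), [[0.2] * 4]),
])
print(len(db), db[0].p1_world)  # 2 (0.0, 2.0)
```

Online lookup (Step 3.3) would then compare query descriptors against each entry's `descriptors` by nearest-neighbor Euclidean distance.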
Specific embodiment 3: this embodiment differs from Specific embodiments 1 and 2 in the calibration of the intrinsic and extrinsic parameters of the binocular camera in Step 2, i.e. obtaining the intrinsic parameters of the left and right cameras with Zhang Zhengyou's chessboard calibration method and determining with Matlab the extrinsic parameters R and T of the relative pose of the left camera and the right camera, specifically:
Step 2.1: using Zhang Zhengyou's chessboard calibration method, calibrate the left and right camera model parameter matrices in Matlab software with a planar chessboard target; read out from the model parameter matrices the intrinsic parameter f_l of the left camera and the intrinsic parameter f_r of the right camera, and save f_l and f_r;
Step 2.2: determine with Matlab the extrinsic parameters R and T of the relative pose of the left camera and the right camera, and save R and T in Matlab. Here R denotes the rotation matrix between the left camera coordinate origin O_cl and the right camera coordinate origin O_cr in Fig. 4;
If a rotation relation is decomposed into a clockwise rotation by angle α about the x-axis of the coordinate system, a clockwise rotation by angle β about the y-axis and a clockwise rotation by angle γ about the z-axis, then R = Rx(α) · Ry(β) · Rz(γ), where (rows separated by semicolons)

Rx(α) = [1 0 0; 0 cos α sin α; 0 −sin α cos α],
Ry(β) = [cos β 0 −sin β; 0 1 0; sin β 0 cos β],
Rz(γ) = [cos γ sin γ 0; −sin γ cos γ 0; 0 0 1].
T = (t_x, t_y, t_z)^T denotes the translation vector between the left camera coordinate origin O_cl and the right camera coordinate origin O_cr in Fig. 4; t_x is the displacement between O_cr and O_cl along the x-axis, t_y the displacement along the y-axis, and t_z the displacement along the z-axis;
The captured image size is 1292 pixels × 964 pixels. A chessboard calibration template is printed first, each square being 30 mm × 30 mm in a 9 × 6 pattern; the calibration board is attached to a rigid flat cardboard and kept flat. By continually adjusting the distance and angle between the calibration template and the binocular camera, 15 chessboard target images are acquired with each of the left and right cameras. The other steps and parameters are the same as in Specific embodiments 1 and 2.
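The factoring of R into single-axis rotations in Step 2.2 can be checked numerically. A sketch with NumPy follows; the sign convention for "clockwise" depends on how the axes in Fig. 4 are oriented, so the signs below are an assumption:

```python
import numpy as np

def rot_x(a):
    """Rotation by angle a about the x-axis (assumed sign convention)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def rot_y(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

def rot_z(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

# R = Rx(alpha) @ Ry(beta) @ Rz(gamma): the rotation between the two camera
# frames factored into three single-axis rotations (illustrative angles).
R = rot_x(0.1) @ rot_y(0.2) @ rot_z(0.3)

# Whatever the sign convention, a valid rotation matrix is orthogonal with
# determinant +1.
print(np.allclose(R @ R.T, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
# True True
```

Whatever the sign convention chosen, the product of the three factors must itself be orthogonal with determinant +1, which the check above confirms.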
Specific embodiment 4: this embodiment differs from Specific embodiments 1 to 3 in Step 3, i.e. shooting a logo image with the binocular camera, matching its visual features against the visual features of the Visual Map database images, and rejecting mismatched points while retaining matched points, specifically:
Step 3.1: define the camera coordinate system, the image physical coordinate system and the image pixel coordinate system. The positional relations between the world coordinate system, the camera coordinate system, the image physical coordinate system and the image pixel coordinate system are shown in Fig. 2, where the CCD plane has been mirrored about the lens origin;
(1) Take the position O_cl of the left optical center of the binocular camera as the origin of the binocular camera coordinate system, and let the Z_C axis of the binocular camera coincide with the optical axis of the left camera. The binocular camera comprises a left camera and a right camera; the coordinate system of the left camera is defined as O_cl X_cl Y_cl Z_cl and the coordinate system of the right camera as O_cr X_cr Y_cr Z_cr;
(2) Define the logo image pixel coordinate system as the two-dimensional Cartesian coordinate system O_1 UV shown in Fig. 2, taking vertex O_1 as the coordinate origin, with pixel center point coordinate (u_0, v_0). The positional relations between the world coordinate system, the camera coordinate system, the image physical coordinate system and the image pixel coordinate system are shown in Fig. 2;
Step 3.2: extract feature-point information from the logo image shot by the left camera with the SURF algorithm to obtain its visual features;
Step 3.3: compute the nearest-neighbor Euclidean distances between the visual feature points of the logo image shot by the left camera and the visual feature points of the visual map (Visual Map) database; by these distances, retrieve the Visual Map database image that best matches the logo image shot by the left camera, i.e. the database image. Match the visual features of the database image against the visual features extracted from the left camera image to obtain m matched point pairs;
Step 3.4: extract feature-point information from the logo image shot by the right camera with the SURF algorithm to obtain its visual features; match the visual features of the database image against the visual features extracted from the right camera image to obtain K matched point pairs;
Step 3.5: reject the mismatched points between the matched images of Step 3.3 with the RANSAC algorithm;
Step 3.6: reject the mismatched points among the K matched point pairs of Step 3.4 with the RANSAC algorithm;
Step 3.7: using the estimated homography matrices H1 and H2 and the pixel coordinates of the logo-image vertices P1 and P2 stored in the Visual Map database, compute the pixels P_l^1 and P_l^2 of the vertices in the logo image shot by the left camera (via H1), and the pixels P_r^1 and P_r^2 of the vertices in the logo image shot by the right camera (via H2). Here the subscripts l and r denote the left and right cameras respectively, and the superscripts 1 and 2 denote the top-left and top-right vertices of the picture respectively;
The relation between a homography matrix and a matched image pair is expressed as:

P_e^j = H_e · p_j    (1)

where P_e^j is P_l^1, P_l^2, P_r^1 or P_r^2, all in homogeneous pixel coordinates; e is the left camera l or the right camera r; H_e is H1 (e = l) or H2 (e = r); p_j is p1 or p2.
Substituting the pixel coordinates p1 and p2 of the logo-image vertices P1 and P2 stored in the visual map (Visual Map) database of Step 1 into formula (1) yields: the pixel coordinate of the pixel P_l^1 corresponding to vertex P1 of the logo image shot by the left camera, the pixel coordinate of the pixel P_l^2 corresponding to vertex P2 of the logo image shot by the left camera, and the pixel coordinates of the pixels P_r^1 and P_r^2 corresponding to vertices P1 and P2 of the logo image shot by the right camera. The pixel coordinates of P_l^1, P_l^2, P_r^1 and P_r^2 are written (u_l^1, v_l^1), (u_l^2, v_l^2), (u_r^1, v_r^1) and (u_r^2, v_r^2) respectively. A matched image pair is the logo image shot by the left camera matched with its corresponding image in the Visual Map database, or the logo image shot by the right camera matched with its corresponding image in the Visual Map database;
When the user requests positioning online, two industrial cameras with lenses fixed on a tripod simultaneously shoot left and right indoor photos containing the fixed visual features. The left and right images shot by the online user are each coarsely matched against the offline database image by SURF feature points. Because SURF matching produces some mismatched points, which would degrade the accuracy of indoor positioning, the RANSAC (RANdom SAmple Consensus) algorithm is used to reject mismatched points and improve matching accuracy. The homography matrices are also estimated with RANSAC, and from them the pixel coordinates of the top-left and top-right corners of the logo image in the two camera images are computed. A homography matrix describes the two-dimensional projective transformation relation between two planes and can be determined from 4 pairs of corresponding points in the two planes. With a threshold t and a count n0, the condition is: when at least n0 matched points have a Euclidean distance of less than t between their actual position coordinates and the position coordinates obtained through the inverse of the homography matrix, the homography matrix is considered to satisfy the transformation relation of this matched image pair. In this way the RANSAC algorithm weeds out the mismatched points that do not satisfy the above homography. The other steps and parameters are the same as in Specific embodiments 1 to 3.
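The statement that a homography can be determined from 4 pairs of corresponding points can be made concrete with the standard direct linear transform. A minimal sketch with NumPy, under assumed names (`homography_from_4pts` is illustrative, not the patent's procedure for estimating H1 or H2):

```python
import numpy as np

def homography_from_4pts(src, dst):
    """Estimate H (3x3, with h33 = 1) such that dst ~ H @ src, from exactly
    4 point pairs, by solving the 8x8 direct-linear-transform system."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_h(H, pt):
    """Map a 2-D point through H in homogeneous coordinates."""
    q = H @ np.array([pt[0], pt[1], 1.0])
    return q[:2] / q[2]

# Map 4 corner pixels through a known projective warp, then recover it.
H_true = np.array([[1.1, 0.02, 5.0], [0.01, 0.95, -3.0], [1e-4, 2e-4, 1.0]])
src = [(0, 0), (100, 0), (100, 50), (0, 50)]
dst = [apply_h(H_true, p) for p in src]
H_est = homography_from_4pts(src, dst)
print(np.allclose(H_est, H_true, atol=1e-6))  # True
```

In practice a library routine such as OpenCV's `findHomography` plays this role, combining the fit with the RANSAC rejection described above.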
Specific embodiment 5: this embodiment differs from Specific embodiments 1 to 4 in rejecting, in Step 3.5, the mismatched points between the matched images of Step 3.3 with the RANSAC algorithm, specifically:
Step 3.5.1: set the initial values of the iteration count n and the match-point index i to 0;
Step 3.5.2: select 4 pairs from the m matched point pairs obtained in Step 3.3 and estimate the homography matrix H1 between the logo image shot by the left camera and the database logo image; with the computed H1, compute the theoretical coordinates, in the corresponding logo image coordinate system, of the remaining matched points among the m pairs of Step 3.3;
Step 3.5.3: compute the Euclidean distance between the theoretical coordinate of the i-th matched point from Step 3.5.2 and the coordinate of the i-th visual feature point obtained in Step 3.2, where 1 ≤ i ≤ m;
Step 3.5.4: judge whether the Euclidean distance obtained in Step 3.5.3 is less than the preset Euclidean distance threshold t = 0.01; if yes, go to Step 3.5.5; if no, return to Step 3.5.3;
Step 3.5.5: increment n by 1 and judge whether n exceeds the preset iteration count n0 = 20; if yes, save the matched point pair and save the homography matrix H1; if no, reject the matched point pair and return to Step 3.5.2. The detailed flow is shown in Fig. 1. The other steps and parameters are the same as in Specific embodiments 1 to 4.
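The loop of Steps 3.5.1–3.5.5 follows the generic RANSAC pattern: hypothesize a model from a minimal sample, count the matches whose residual falls under the threshold t, and keep the best hypothesis. A simplified, deterministic sketch follows; it substitutes a 2-D translation for the full homography (so a minimal sample is one pair instead of four) and tries every pair as the sample rather than drawing random ones, and all names are illustrative, not from the patent:

```python
import numpy as np

def ransac_translation(src, dst, t=0.01):
    """RANSAC-style rejection with a 2-D translation model standing in for
    the homography: each pair proposes a shift hypothesis; keep the shift
    with the most matches whose Euclidean residual is below threshold t."""
    best_shift, best_inliers = None, np.array([], dtype=int)
    for j in range(len(src)):                 # deterministic minimal samples
        shift = dst[j] - src[j]
        resid = np.linalg.norm(dst - (src + shift), axis=1)
        inliers = np.nonzero(resid < t)[0]
        if len(inliers) > len(best_inliers):
            best_shift, best_inliers = shift, inliers
    return best_shift, best_inliers

rng = np.random.default_rng(1)
src = rng.uniform(0.0, 1.0, (30, 2))
dst = src + np.array([0.2, -0.1])             # consistent matches
dst[:5] += rng.uniform(0.1, 0.5, (5, 2))      # 5 gross mismatches to reject
shift, inliers = ransac_translation(src, dst, t=0.01)
print(len(inliers), np.allclose(shift, [0.2, -0.1]))  # 25 True
```

With the full homography model, the hypothesis step would instead fit H1 from 4 sampled pairs (as in Step 3.5.2) and the residual would be the Euclidean distance of Step 3.5.3.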
Specific embodiment 6: this embodiment differs from Specific embodiments 1 to 5 in rejecting, in Step 3.6, the mismatched points among the K matched point pairs of Step 3.4 with the RANSAC algorithm, specifically:
Step 3.6.1: set the initial values of the iteration count n′ and the index k over the K matched point pairs to 0;
Step 3.6.2: select 4 pairs from the K matched point pairs obtained in Step 3.4 and estimate the homography matrix H2 between the logo image shot by the right camera and the database logo image; with the computed H2, compute the theoretical coordinates, in the corresponding logo image coordinate system, of the remaining matched points among the K pairs of Step 3.4;
Step 3.6.3: compute the Euclidean distance between the theoretical coordinate of the k-th matched point and the coordinate of the k-th visual feature point obtained in Step 3.4, where 1 ≤ k ≤ K;
Step 3.6.4: judge whether the Euclidean distance obtained in Step 3.6.3 is less than the preset Euclidean distance threshold t; if yes, go to Step 3.6.5; if no, return to Step 3.6.3;
Step 3.6.5: increment n′ by 1 and judge whether n′ exceeds the preset iteration count n′0 = 20; if yes, terminate and save the homography matrix H2; if no, return to Step 3.6.2. The detailed flow is shown in Fig. 1. The other steps and parameters are the same as in Specific embodiments 1 to 5.
Specific embodiment 7: this embodiment differs from Specific embodiments 1 to 6 in Step 4, i.e. computing, from the pixel coordinates obtained in Step 3, the three-dimensional coordinate of vertex P1 in the left camera coordinate system O_cl X_cl Y_cl Z_cl and the three-dimensional coordinate of vertex P2 in the left camera coordinate system O_cl X_cl Y_cl Z_cl, specifically:
Step 4.1: from the pixel coordinate (u_l^1, v_l^1), (u_r^1, v_r^1), (u_l^2, v_l^2) or (u_r^2, v_r^2), the coordinate (x, y) of the converted logo-image vertex P1 or vertex P2 in the physical coordinate system shown in Fig. 2 is given by formula (2):

x = (u − u_0) · dx,  y = (v − v_0) · dy    (2)

where the logo image shot by the binocular camera is the logo image shot by the left camera or the logo image shot by the right camera; (x, y) is (x_l^1, y_l^1), (x_r^1, y_r^1), (x_l^2, y_l^2) or (x_r^2, y_r^2); (x_l^1, y_l^1) is the physical coordinate of the converted vertex P1 of the logo image shot by the left camera and (x_r^1, y_r^1) that of the logo image shot by the right camera; (x_l^2, y_l^2) is the physical coordinate of the converted vertex P2 of the logo image shot by the left camera and (x_r^2, y_r^2) that of the logo image shot by the right camera; (u, v) is the corresponding pixel coordinate (u_l^1, v_l^1), (u_r^1, v_r^1), (u_l^2, v_l^2) or (u_r^2, v_r^2); dx is the scale coefficient along x and dy the scale coefficient along y. In homogeneous coordinates and matrix form this is formula (3) (rows separated by semicolons):

[x; y; 1] = [dx 0 −u_0·dx; 0 dy −v_0·dy; 0 0 1] · [u; v; 1]    (3)
Define the pixel center point coordinate (u_0, v_0) as the coordinate origin O_2, and define the logo image physical coordinate system as the two-dimensional Cartesian coordinate system O_2 XY, whose X-axis and Y-axis are parallel to the X_cl axis and Y_cl axis of the camera coordinate system respectively, usually with millimetres (mm) as the coordinate unit;
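Formulas (2) and (3) are a single affine change of units from pixels to millimetres. A small numerical check follows, where the principal point (u_0, v_0) and the pixel pitch dx, dy are assumed illustrative values, not the patent's calibration results:

```python
import numpy as np

# Hypothetical intrinsics: principal point of a 1292 x 964 sensor and an
# assumed per-pixel physical size in mm/pixel.
u0, v0 = 646.0, 482.0
dx, dy = 0.0075, 0.0075

# Formula (3): homogeneous matrix mapping pixel (u, v) to physical (x, y).
M = np.array([[dx, 0.0, -u0 * dx],
              [0.0, dy, -v0 * dy],
              [0.0, 0.0, 1.0]])

def pixel_to_physical(u, v):
    """Formula (2): x = (u - u0) dx, y = (v - v0) dy, via the matrix M."""
    x, y, _ = M @ np.array([u, v, 1.0])
    return x, y

x, y = pixel_to_physical(746.0, 382.0)   # 100 px right of and above center
print(round(x, 4), round(y, 4))  # 0.75 -0.75
```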
Step 4.2: find the relation between the converted logo image physical coordinate system O_2 XY and the left camera coordinate system O_cl X_cl Y_cl Z_cl; the relation between the camera coordinate system and the image physical coordinate system is shown in Fig. 3;
The intersection p_cl(x, y) of the line O_cl P with the image plane, or the intersection p_cr(x, y) of the line O_cr P with the image plane, satisfies the basic relation of formula (4):

x = f · x_c / z_c,  y = f · y_c / z_c    (4)

where the point P is the point P1 or the point P2; (x_c, y_c, z_c) is the binocular camera coordinate of P, i.e. (x_cl^1, y_cl^1, z_cl^1) or (x_cr^1, y_cr^1, z_cr^1) — the coordinates of vertex P1 in the left and right camera coordinate systems — or (x_cl^2, y_cl^2, z_cl^2) or (x_cr^2, y_cr^2, z_cr^2) — the coordinates of vertex P2 in the left and right camera coordinate systems; P on the line O_cl P is the vertex P1 or the vertex P2;
The homogeneous coordinate form in matrix notation is shown in formula (6);
Let the left camera coordinate system be O_cl X_cl Y_cl Z_cl and the right camera coordinate system be O_cr X_cr Y_cr Z_cr, with image physical coordinate systems O_l X_l Y_l Z_l and O_r X_r Y_r Z_r respectively; the focal length of the left camera is the intrinsic parameter f_l and the focal length of the right camera is the intrinsic parameter f_r; f is f_l or f_r;
The perspective transformation model of the cameras is then given by formula (8):
(x_l, y_l) is (x_1^l, y_1^l) or (x_2^l, y_2^l); (x_r, y_r) is (x_1^r, y_1^r) or (x_2^r, y_2^r);
The position transformation between the left and right camera coordinate systems is expressed by the space transformation matrix M_lr, as shown in formula (9):
where M_lr = [R | T]; R is the rotation matrix, T is the translation vector, and M_lr is the 3x4 matrix formed from R and T;
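As a sketch, the transformation of formula (9), which maps a point from the left camera frame into the right camera frame through M_lr = [R | T], can be written as follows. The numeric R and T below are illustrative stand-ins, not calibrated values:

```python
import numpy as np

def make_Mlr(R, T):
    """Stack the 3x3 rotation R and 3-vector translation T into the 3x4 matrix M_lr."""
    return np.hstack([R, T.reshape(3, 1)])

def left_to_right(M_lr, p_left):
    """Map a 3D point from the left camera frame into the right camera frame,
    using homogeneous coordinates so that p_right = M_lr @ [p_left; 1]."""
    return M_lr @ np.append(p_left, 1.0)

# Illustrative pose: right camera displaced 120 mm along x, no rotation.
R = np.eye(3)
T = np.array([-120.0, 0.0, 0.0])
p_right = left_to_right(make_Mlr(R, T), np.array([50.0, 10.0, 2000.0]))
```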
Step 4-3: solve the coordinates in the left camera coordinate system; using the intrinsic and extrinsic parameters obtained in step 2 and the coordinates (x, y) obtained in step 4-1, compute the coordinates (x_cl, y_cl, z_cl) of vertex P_1 in the left camera coordinate system;
These relations are therefore combined into the matrix equation (11):
Solving this matrix equation yields the coordinates of point P_1 in the left camera coordinate system; ρ_r is a proportionality coefficient,
as shown in formula (12):
Here the homogeneous coordinates of pixel point P_1^l in the image physical coordinate system are written (x_1^l, y_1^l, 1)^T and those of pixel point P_1^r are written (x_1^r, y_1^r, 1)^T; solving gives the coordinates (x_cl^1, y_cl^1, z_cl^1) of point P_1 and the coordinates (x_cl^2, y_cl^2, z_cl^2) of point P_2 in the left camera coordinate system; Figure 4 is a schematic diagram of the binocular vision system used in the present invention; at this point the position coordinates of the logo image's top-left vertex in the left camera coordinate system have been solved, and next the position of the left camera in the world coordinate system is solved;
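Steps 4-1 to 4-3 amount to a linear triangulation: given the physical image coordinates of a vertex in both cameras, the focal lengths f_l, f_r and the extrinsics R, T, the 3D point in the left camera frame is recovered by least squares. A minimal sketch under the pinhole model, with synthetic f, R and T rather than the patent's calibration values:

```python
import numpy as np

def triangulate_left(xl, yl, xr, yr, fl, fr, R, T):
    """Solve for the 3D point P in the left camera frame from its projections.

    Left camera:  xl = fl*X/Z, yl = fl*Y/Z              (pinhole model)
    Right camera: P_r = R @ P + T, xr = fr*Xr/Zr, yr = fr*Yr/Zr
    Each projection contributes two linear equations in P; stack and solve.
    """
    A = np.zeros((4, 3)); b = np.zeros(4)
    A[0] = [fl, 0.0, -xl]          # fl*X - xl*Z = 0
    A[1] = [0.0, fl, -yl]          # fl*Y - yl*Z = 0
    A[2] = fr * R[0] - xr * R[2]   # (fr*r1 - xr*r3).P = xr*t3 - fr*t1
    b[2] = xr * T[2] - fr * T[0]
    A[3] = fr * R[1] - yr * R[2]
    b[3] = yr * T[2] - fr * T[1]
    P, *_ = np.linalg.lstsq(A, b, rcond=None)
    return P

# Synthetic check: project a known point through both cameras, then recover it.
fl = fr = 4.0                      # illustrative focal lengths (mm)
R = np.eye(3); T = np.array([-120.0, 0.0, 0.0])
P = np.array([80.0, -40.0, 2000.0])
Pr = R @ P + T
xl, yl = fl * P[0] / P[2], fl * P[1] / P[2]
xr, yr = fr * Pr[0] / Pr[2], fr * Pr[1] / Pr[2]
P_hat = triangulate_left(xl, yl, xr, yr, fl, fr, R, T)
```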
In step 3, the matching results between the shot images and the Visual Map database images yield the database image that matches the left and right shot images; the image pixel coordinates of logo vertices P_1 and P_2 in the camera imaging coordinate system are then obtained from the homography matrices and the projection matrix, after which the three-dimensional coordinates of the logo image's top-left and top-right vertices in the left camera coordinate system are computed. The other steps and parameters are identical to those of specific embodiments one to six.
Specific embodiment 8: this embodiment differs from specific embodiments one to seven in that, in step 5, a coordinate transformation relation is established between the three-dimensional coordinates (x_cl^1, y_cl^1, z_cl^1) and (x_cl^2, y_cl^2, z_cl^2) obtained in step 4 and the coordinates of logo vertices P_1 and P_2 in the world coordinate system stored in the vision map (Visual Map) database, and the coordinates of the left camera in the world coordinate system, i.e. the position coordinates of the user, are obtained as follows:
Step 5-1: in the established world coordinate system O_w X_w Z_w, shown in Figure 5, let the direction of the X_w axis be east and the direction of the Z_w axis be north; for the fairly regular floor plans of present buildings, the walls face one of the following four directions, as shown in Figure 5. For the case where the camera shoots from north to south, θ denotes the angle between the line joining the left camera point C to the database logo image vertex P_1 and the line through vertices P_1 and P_2, where θ ∈ (0, π);
Step 5-2: compute the distance D_1 from C to P_1 and the distance D_2 from C to P_2; with the coordinates (x_w^{P1}, z_w^{P1}) of P_1 and the length D_3 of segment P_1P_2 known, compute the position of the camera point C in the world coordinate system; C is the optical centre of the camera;
The three-dimensional coordinates of P_1 and P_2 in the left camera coordinate system, (x_cl^1, y_cl^1, z_cl^1) and (x_cl^2, y_cl^2, z_cl^2), are obtained from formula (12); the following lengths are then computed: D_1 = √((x_cl^1)² + (y_cl^1)² + (z_cl^1)²) and D_2 = √((x_cl^2)² + (y_cl^2)² + (z_cl^2)²);
Step 5-3: find the angle θ; from the three side lengths D_1, D_2 and D_3 of the triangle CP_1P_2, the angle θ is obtained by the law of cosines as shown in formula (15): θ = arccos((D_1² + D_3² − D_2²) / (2·D_1·D_3));
Step 5-4: the positioning result then follows from step 5-3: the coordinates (x_c, z_c) of point C in the world coordinate system, as shown in formula (16);
For targets facing the other directions, the side lengths of the triangle are likewise found by this method first, the angle θ is then determined, and finally the position of the camera in the world coordinate system is computed. The other steps and parameters are identical to those of specific embodiments one to seven.
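The geometry of step 5 can be sketched in a few lines: with D1 = |CP1|, D2 = |CP2| and D3 = |P1P2|, the angle θ at P1 follows from the law of cosines, and C is recovered by walking from P1 along the wall direction and its normal. The wall orientation and the side of the wall the camera is on are assumptions of this toy example (the patent distinguishes four wall directions):

```python
import math

def locate_camera(p1, p2, D1, D2):
    """Recover the camera position C in the world plane from the known wall
    vertices p1, p2 and the measured distances D1 = |CP1|, D2 = |CP2|.
    Assumes the camera lies on the left side of the directed segment p1 -> p2."""
    D3 = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    # Law of cosines: angle at P1 between P1->C and P1->P2, theta in (0, pi).
    theta = math.acos((D1**2 + D3**2 - D2**2) / (2.0 * D1 * D3))
    # Unit vector along the wall and its left-hand normal.
    ux, uz = (p2[0] - p1[0]) / D3, (p2[1] - p1[1]) / D3
    nx, nz = -uz, ux
    cx = p1[0] + D1 * (math.cos(theta) * ux + math.sin(theta) * nx)
    cz = p1[1] + D1 * (math.cos(theta) * uz + math.sin(theta) * nz)
    return cx, cz

# Synthetic check: wall from (0,0) to (1,0) running east, camera at (0.4, 2.0) to the north.
C = (0.4, 2.0)
D1 = math.hypot(C[0], C[1])
D2 = math.hypot(C[0] - 1.0, C[1])
cx, cz = locate_camera((0.0, 0.0), (1.0, 0.0), D1, D2)
```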

Claims (7)

1. A logo-based binocular vision indoor positioning method, characterized in that the method is carried out according to the following steps:
Step 1: establish the vision map database of logo image acquisition; the detailed process is:
Step 1-1: according to the indoor environment to be positioned, select the world coordinate system origin O(x_w0, z_w0) and establish the planar rectangular coordinate system O_w X_w Z_w;
Step 1-2: acquire the logo images fixed on the indoor walls with a monocular camera, and record the position coordinates (x_w^{P1}, z_w^{P1}) of vertex P_1 and (x_w^{P2}, z_w^{P2}) of vertex P_2 of each captured logo image in the world coordinate system; save the pixel coordinate p_1 = (u_1, v_1) of vertex P_1 of the logo image shot by the monocular camera, and define the pixel coordinate p_2 = (u_2, v_2) of vertex P_2 of the logo image shot by the monocular camera; store the pixel coordinates (u_1, v_1) and (u_2, v_2) and the position coordinates (x_w^{P1}, z_w^{P1}) and (x_w^{P2}, z_w^{P2}) in the vision map database;
Step 1-3: extract feature point information from each logo image with the SURF algorithm to obtain its visual information features, and store the visual information features in the vision map database; q = 1, …, n, where n is the number of logo images;
Step 2: calibrate the intrinsic and extrinsic parameters of the binocular camera; using Zhang Zhengyou's chessboard calibration method, obtain the intrinsic parameters of the left camera and of the right camera; determine the extrinsic parameters R and T of the relative pose of the left camera and the right camera using Matlab;
Step 3: shoot logo images with the binocular camera, and match the visual information features of the logo images against the visual information features of the Visual Map database images; reject the mismatched points and retain the matched points;
Step 4: from the pixel coordinates of P_1^l and P_1^r obtained in step 3, compute the three-dimensional coordinates of vertex P_1 in the left camera coordinate system O_cl X_cl Y_cl Z_cl; from the pixel coordinates of P_2^l and P_2^r obtained in step 3, compute the three-dimensional coordinates of vertex P_2 in the left camera coordinate system O_cl X_cl Y_cl Z_cl;
Step 5: establish a coordinate transformation relation between the three-dimensional coordinates (x_cl^1, y_cl^1, z_cl^1) and (x_cl^2, y_cl^2, z_cl^2) obtained in step 4 and the coordinates of logo image vertices P_1 and P_2 in the world coordinate system stored in the vision map database, and obtain the coordinates of the left camera in the world coordinate system, i.e. the position coordinates of the user.
2. The logo-based binocular vision indoor positioning method according to claim 1, characterized in that the calibration of the binocular camera's intrinsic and extrinsic parameters in step 2 — obtaining the intrinsic parameters of the left camera and of the right camera with Zhang Zhengyou's chessboard calibration method, and determining the extrinsic parameters R and T of the relative pose of the left camera and the right camera with Matlab — is specifically:
Step 2-1: using Zhang Zhengyou's chessboard calibration method in Matlab, calibrate the model parameter matrices of the left camera and the right camera with a planar chessboard calibration board; read the intrinsic parameter f_l of the left camera and the intrinsic parameter f_r of the right camera from the model parameter matrices, and save f_l and f_r;
Step 2-2: determine the extrinsic parameters R and T of the relative pose of the left camera and the right camera with Matlab, and save R and T in Matlab; R denotes the rotation matrix between the left camera coordinate system origin O_cl and the right camera coordinate system origin O_cr;
If the rotation is decomposed into a clockwise rotation by angle α about the x-axis, a clockwise rotation by angle β about the y-axis and a clockwise rotation by angle γ about the z-axis, then R is the product of the three corresponding elementary rotation matrices;
T = (t_x, t_y, t_z)^T denotes the translation vector between the left camera coordinate system origin O_cl and the right camera coordinate system origin O_cr; t_x is the shift of O_cr relative to O_cl along the x-axis, t_y the shift along the y-axis, and t_z the shift along the z-axis.
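The decomposition in step 2-2 can be illustrated as below. The multiplication order of the three elementary rotations is not fixed by the text, so the Z·Y·X order used here is an assumption:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(g):
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def compose_R(alpha, beta, gamma):
    """Rotation matrix from rotations about x (alpha), y (beta) and z (gamma),
    composed in Z*Y*X order (an assumed convention)."""
    return rot_z(gamma) @ rot_y(beta) @ rot_x(alpha)

R = compose_R(0.1, -0.2, 0.3)
# Any composition of rotations is orthonormal with determinant +1.
```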
3. The logo-based binocular vision indoor positioning method according to claim 1, characterized in that step 3 — shooting logo images with the binocular camera, matching the visual information features of the logo images against the visual information features of the Visual Map database images, rejecting the mismatched points and retaining the matched points — is specifically:
Step 3-1: define the camera coordinate system, the image physical coordinate system and the image pixel coordinate system;
(1) define the position O_cl of the left camera's optical centre in the binocular camera as the binocular camera coordinate origin, and let the Z_C axis of the binocular camera coincide with the optical axis of the left camera; the binocular camera comprises a left camera and a right camera; define the coordinate system of the left camera as O_cl X_cl Y_cl Z_cl and the coordinate system of the right camera as O_cr X_cr Y_cr Z_cr;
(2) define the logo image pixel coordinate system as the two-dimensional rectangular coordinate system O_1UV, taking vertex O_1 as the coordinate origin; the pixel centre point coordinate is (u_0, v_0);
Step 3-2: extract feature point information from the logo image shot by the left camera with the SURF algorithm to obtain its visual information features;
Step 3-3: compute the nearest-neighbour Euclidean distances between the visual information feature points of the logo image shot by the left camera and the visual information feature points of the vision map database; according to the nearest-neighbour Euclidean distance, retrieve the vision map database image that best matches the logo image shot by the left camera, i.e. the database image; match the visual information features of the database image with the visual information features extracted by the left camera to obtain m matched point pairs;
Step 3-4: extract feature point information from the logo image shot by the right camera with the SURF algorithm to obtain its visual information features; match the visual information features of the database image with the visual information features extracted by the right camera to obtain K matched point pairs;
Step 3-5: reject the mismatched points between the matched images of step 3-3 with the RANSAC algorithm;
Step 3-6: reject the mismatched points among the K matched point pairs of step 3-4 with the RANSAC algorithm;
Step 3-7: from the estimated homography matrices H_1 and H_2 and the pixel coordinates of logo image vertices P_1 and P_2 stored in the Visual Map database, compute the pixel points P_1^l and P_2^l corresponding to vertices P_1 and P_2 in the logo image shot by the left camera via homography matrix H_1, and the pixel points P_1^r and P_2^r corresponding to vertices P_1 and P_2 in the logo image shot by the right camera via homography matrix H_2;
The relation between a homography matrix and a matched image pair is expressed as formula (1): P_e^j = H_j · p_j;
where P_e^j is P_1^l, P_2^l, P_1^r or P_2^r; e is the left camera l or the right camera r; H_j is H_1 or H_2; p_j is p_1 or p_2;
Substituting the pixel coordinates p_1 and p_2 of vertices P_1 and P_2 of the logo image stored in the vision map database in step 1 into formula (1) yields the pixel coordinates of the pixel point P_1^l corresponding to vertex P_1 and the pixel point P_2^l corresponding to vertex P_2 of the logo image shot by the left camera, and of the pixel point P_1^r corresponding to vertex P_1 and the pixel point P_2^r corresponding to vertex P_2 of the logo image shot by the right camera; the pixel coordinates of P_1^l, P_2^l, P_1^r and P_2^r are written (u_1^l, v_1^l), (u_2^l, v_2^l), (u_1^r, v_1^r) and (u_2^r, v_2^r) respectively; a matched image pair is a logo image shot by the left camera together with its corresponding image in the Visual Map database, or a logo image shot by the right camera together with its corresponding image in the Visual Map database.
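The nearest-neighbour matching of step 3-3 operates on descriptor vectors. For illustration (not part of the claim), it can be sketched with plain arrays; the synthetic 4-D descriptors below stand in for 64-D SURF descriptors, and the ratio test is a common addition rather than something the patent spells out:

```python
import numpy as np

def nn_match(desc_query, desc_db, ratio=0.8):
    """Match each query descriptor to its nearest database descriptor by
    Euclidean distance, keeping only matches that pass Lowe's ratio test
    (nearest distance clearly smaller than second-nearest distance)."""
    matches = []
    for i, d in enumerate(desc_query):
        dists = np.linalg.norm(desc_db - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Synthetic descriptors: query 0 and query 1 have clear partners in the database.
db = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, 0]])
q = np.array([[0.95, 0.05, 0, 0], [0, 0, 0.9, 0.1]])
m = nn_match(q, db)
```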
4. The logo-based binocular vision indoor positioning method according to claim 3, characterized in that rejecting the mismatched points between the matched images of step 3-3 with the RANSAC algorithm in step 3-5 is specifically:
Step 3-5-1: set the iteration count n and the match-point index i to the initial value 0;
Step 3-5-2: select 4 pairs from the m matched point pairs obtained in step 3-3 and estimate the homography matrix H_1 between the logo image shot by the left camera and the database logo image; with the computed H_1, calculate the theoretical coordinates, in the corresponding logo image coordinate system, of the remaining matched points among the m pairs obtained in step 3-3;
Step 3-5-3: compute the Euclidean distance between the theoretical coordinates of the i-th matched point in the database logo image and the coordinates of the i-th visual information feature point obtained in step 3-2, where 1 ≤ i ≤ m;
Step 3-5-4: judge whether the Euclidean distance obtained in step 3-5-3 is smaller than the preset Euclidean distance threshold t; if yes, execute step 3-5-5; if no, return to step 3-5-3;
Step 3-5-5: increase n by 1 and judge whether n exceeds the preset iteration count n_0; if yes, save the matched point pair and the homography matrix H_1; if no, reject the matched point pair and return to step 3-5-2.
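The loop of steps 3-5-1 to 3-5-5 is the classic RANSAC scheme: repeatedly fit a homography from 4 random correspondences, score the remaining matches by reprojection distance against the threshold t, and keep the largest consensus set. A compact sketch using a direct-linear-transform fit; the synthetic data, threshold and iteration count are illustrative only:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: homography H mapping src -> dst from >= 4 pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    return Vt[-1].reshape(3, 3)       # null vector of A, reshaped to 3x3

def project(H, pts):
    """Apply H to 2D points in homogeneous coordinates."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, t=1.0, iters=200, seed=0):
    """Keep the largest consensus set of matches under reprojection threshold t."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inliers = err < t
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Synthetic matches under a known homography, plus two gross mismatches.
H_true = np.array([[1.1, 0.02, 5.0], [-0.01, 0.98, -3.0], [1e-4, 0, 1.0]])
src = np.array([[10, 10], [200, 40], [50, 180], [150, 150],
                [90, 60], [30, 120], [170, 20], [120, 90]], float)
dst = project(H_true, src)
dst[6] += 40.0  # inject mismatches
dst[7] -= 25.0
inl = ransac_homography(src, dst)
```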
5. The logo-based binocular vision indoor positioning method according to claim 3, characterized in that rejecting the mismatched points among the K matched point pairs of step 3-4 with the RANSAC algorithm in step 3-6 is specifically:
Step 3-6-1: set the iteration count n′ and the index k of the matched points among the K matched point pairs to the initial value 0;
Step 3-6-2: select 4 pairs from the K matched point pairs obtained in step 3-4 and estimate the homography matrix H_2 between the logo image shot by the right camera and the database logo image; with the computed H_2, calculate the theoretical coordinates, in the corresponding logo image coordinate system, of the remaining matched points among the pairs obtained in step 3-4;
Step 3-6-3: compute the Euclidean distance between the theoretical coordinates of the k-th matched point and the coordinates of the k-th visual information feature point obtained in step 3-4, where 1 ≤ k ≤ K;
Step 3-6-4: judge whether the Euclidean distance obtained in step 3-6-3 is smaller than the preset Euclidean distance threshold t; if yes, execute step 3-6-5; if no, return to step 3-6-3;
Step 3-6-5: increase n′ by 1 and judge whether n′ exceeds the preset iteration count n′_0; if yes, terminate and save the homography matrix H_2; if no, return to step 3-6-2.
6. The logo-based binocular vision indoor positioning method according to claim 3, characterized in that computing, in step 4, the three-dimensional coordinates of vertex P_1 in the left camera coordinate system O_cl X_cl Y_cl Z_cl from the pixel coordinates of P_1^l and P_1^r obtained in step 3, and the three-dimensional coordinates of vertex P_2 in the left camera coordinate system O_cl X_cl Y_cl Z_cl from the pixel coordinates of P_2^l and P_2^r obtained in step 3, is specifically:
Step 4-1: from the pixel coordinate (u, v) of P_1^l, P_1^r, P_2^l or P_2^r, find the converted coordinate (x, y) of logo image vertex P_1 or vertex P_2 in the physical coordinate system as shown in formula (2): x = (u − u_0)·dx, y = (v − v_0)·dy;
Here the logo image shot by the binocular camera is the logo image shot by the left camera or the logo image shot by the right camera; (x, y) stands for (x_1^l, y_1^l), (x_1^r, y_1^r), (x_2^l, y_2^l) or (x_2^r, y_2^r); (x_1^l, y_1^l) are the converted physical coordinates of vertex P_1 in the logo image shot by the left camera, and (x_1^r, y_1^r) those of vertex P_1 in the logo image shot by the right camera; (x_2^l, y_2^l) are the converted physical coordinates of vertex P_2 in the logo image shot by the left camera, and (x_2^r, y_2^r) those of vertex P_2 in the logo image shot by the right camera; (u, v) is the pixel coordinate of the corresponding pixel point;
dx is the physical size of one pixel in the x direction and dy the physical size of one pixel in the y direction; in homogeneous coordinates the conversion is written in matrix form as formula (3);
Define the principal point pixel coordinate (u_0, v_0) as the coordinate origin O_2, and define the logo image physical coordinate system as the two-dimensional rectangular coordinate system O_2XY, whose X axis and Y axis are parallel to the X_cl axis and Y_cl axis of the camera coordinate system;
Step 4-2: find the relationship between the converted logo image physical coordinate system O_2XY and the left camera coordinate system O_cl X_cl Y_cl Z_cl;
The intersection point p_cl(x, y) of the ray O_cl P with the image plane, or the intersection point p_cr(x, y) of the ray O_cr P with the image plane, satisfies the basic perspective relation of formula (4): x = f·x_c/z_c, y = f·y_c/z_c;
where P is point P_1 or point P_2;
(x_c, y_c, z_c) is the binocular camera coordinate of P, i.e. one of (x_cl^1, y_cl^1, z_cl^1), (x_cr^1, y_cr^1, z_cr^1), (x_cl^2, y_cl^2, z_cl^2) or (x_cr^2, y_cr^2, z_cr^2): the coordinates of vertex P_1 or vertex P_2 in the left or right camera coordinate system; in O_cl P, P is vertex P_1 or vertex P_2;
The homogeneous coordinate form in matrix notation is shown in formula (6);
Let the left camera coordinate system be O_cl X_cl Y_cl Z_cl and the right camera coordinate system be O_cr X_cr Y_cr Z_cr, with image physical coordinate systems O_l X_l Y_l Z_l and O_r X_r Y_r Z_r respectively; the focal length of the left camera is the intrinsic parameter f_l and the focal length of the right camera is the intrinsic parameter f_r; f is f_l or f_r;
The perspective transformation model of the cameras is then given by formula (8):
(x_l, y_l) is (x_1^l, y_1^l) or (x_2^l, y_2^l); (x_r, y_r) is (x_1^r, y_1^r) or (x_2^r, y_2^r);
The position transformation between the left and right camera coordinate systems is expressed by the space transformation matrix M_lr, as shown in formula (9):
where M_lr = [R | T]; R is the rotation matrix, T is the translation vector, and M_lr is the 3x4 matrix formed from R and T;
Step 4-3: solve the coordinates in the left camera coordinate system; using the intrinsic and extrinsic parameters obtained in step 2 and the coordinates (x, y) obtained in step 4-1, compute the coordinates (x_cl, y_cl, z_cl) of vertex P_1 in the left camera coordinate system;
These relations are therefore combined into the matrix equation (11):
Solving this matrix equation yields the coordinates of point P_1 in the left camera coordinate system; ρ_r is a proportionality coefficient,
as shown in formula (12):
Here the homogeneous coordinates of pixel point P_1^l in the image physical coordinate system are written (x_1^l, y_1^l, 1)^T and those of pixel point P_1^r are written (x_1^r, y_1^r, 1)^T; solving gives the coordinates (x_cl^1, y_cl^1, z_cl^1) of point P_1 and the coordinates (x_cl^2, y_cl^2, z_cl^2) of point P_2 in the left camera coordinate system.
7. The logo-based binocular vision indoor positioning method according to claim 3, characterized in that establishing, in step 5, a coordinate transformation relation between the three-dimensional coordinates (x_cl^1, y_cl^1, z_cl^1) and (x_cl^2, y_cl^2, z_cl^2) obtained in step 4 and the coordinates of logo image vertices P_1 and P_2 in the world coordinate system stored in the vision map database, and obtaining the coordinates of the left camera in the world coordinate system, i.e. the position coordinates of the user, is specifically:
Step 5-1: in the established world coordinate system O_w X_w Z_w, let the direction of the X_w axis be east and the direction of the Z_w axis be north; for the case where the camera shoots from north to south, θ denotes the angle between the line joining the left camera point C to the database logo image vertex P_1 and the line through vertices P_1 and P_2, where θ ∈ (0, π);
Step 5-2: compute the distance D_1 from C to P_1 and the distance D_2 from C to P_2; with the coordinates (x_w^{P1}, z_w^{P1}) of P_1 and the length D_3 of segment P_1P_2 known, compute the position of the camera point C in the world coordinate system; C is the optical centre of the camera;
The three-dimensional coordinates of P_1 and P_2 in the left camera coordinate system, (x_cl^1, y_cl^1, z_cl^1) and (x_cl^2, y_cl^2, z_cl^2), are obtained from formula (12); the following lengths are then computed: D_1 = √((x_cl^1)² + (y_cl^1)² + (z_cl^1)²) and D_2 = √((x_cl^2)² + (y_cl^2)² + (z_cl^2)²);
Step 5-3: find the angle θ; from the three side lengths D_1, D_2 and D_3 of the triangle CP_1P_2, the angle θ is obtained by the law of cosines as shown in formula (15): θ = arccos((D_1² + D_3² − D_2²) / (2·D_1·D_3));
Step 5-4: the positioning result then follows from step 5-3: the coordinates (x_c, z_c) of point C in the world coordinate system, as shown in formula (16).
CN201610546034.XA 2016-07-12 2016-07-12 Binocular vision indoor orientation method based on logo Active CN106228538B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610546034.XA CN106228538B (en) 2016-07-12 2016-07-12 Binocular vision indoor orientation method based on logo


Publications (2)

Publication Number Publication Date
CN106228538A CN106228538A (en) 2016-12-14
CN106228538B true CN106228538B (en) 2018-12-11

Family

ID=57520505


Country Status (1)

Country Link
CN (1) CN106228538B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107103056B (en) * 2017-04-13 2021-01-29 哈尔滨工业大学 Local identification-based binocular vision indoor positioning database establishing method and positioning method
CN107292927B (en) * 2017-06-13 2020-09-04 厦门大学 Binocular vision-based symmetric motion platform pose measurement method
CN108416428B (en) * 2018-02-28 2021-09-14 中国计量大学 Robot vision positioning method based on convolutional neural network
CN108534782B (en) * 2018-04-16 2021-08-17 电子科技大学 Binocular vision system-based landmark map vehicle instant positioning method
CN108801274B (en) * 2018-04-16 2021-08-13 电子科技大学 Landmark map generation method integrating binocular vision and differential satellite positioning
CN109191529A (en) * 2018-07-19 2019-01-11 武汉卫思德科技有限公司 A kind of indoor visible light self aligning system and method based on binocular technology
CN109186491A (en) * 2018-09-30 2019-01-11 南京航空航天大学 Parallel multi-thread laser measurement system and measurement method based on homography matrix
CN109269478A (en) * 2018-10-24 2019-01-25 南京大学 A kind of container terminal based on binocular vision bridge obstacle detection method
CN109540144A (en) * 2018-11-29 2019-03-29 北京久其软件股份有限公司 A kind of indoor orientation method and device
CN109587628B (en) * 2018-12-14 2021-01-26 深圳力维智联技术有限公司 Indoor real-time positioning method and device
CN110017841A (en) * 2019-05-13 2019-07-16 大有智能科技(嘉兴)有限公司 Vision positioning method and its air navigation aid
CN110889349A (en) * 2019-11-18 2020-03-17 哈尔滨工业大学 VSLAM-based visual positioning method for sparse three-dimensional point cloud chart
CN112454365A (en) * 2020-12-03 2021-03-09 湖南长城科技信息有限公司 Human behavior recognition technology-based human-computer interaction safety monitoring system
CN112667832B (en) * 2020-12-31 2022-05-13 哈尔滨工业大学 Vision-based mutual positioning method in unknown indoor environment
CN112884841B (en) * 2021-04-14 2022-11-25 哈尔滨工业大学 Binocular vision positioning method based on semantic target
CN113253843B (en) * 2021-05-24 2023-05-09 哈尔滨工业大学 Indoor virtual roaming realization method and realization system based on panorama

Citations (4)

Publication number Priority date Publication date Assignee Title
CN104484881A (en) * 2014-12-23 2015-04-01 哈尔滨工业大学 Image capture-based Visual Map database construction method and indoor positioning method using database
CN104596519A (en) * 2015-02-17 2015-05-06 哈尔滨工业大学 RANSAC algorithm-based visual localization method
CN104933718A (en) * 2015-06-23 2015-09-23 广东省自动化研究所 Physical coordinate positioning method based on binocular vision
CN105225240A (en) * 2015-09-25 2016-01-06 哈尔滨工业大学 The indoor orientation method that a kind of view-based access control model characteristic matching and shooting angle are estimated


Non-Patent Citations (2)

Title
Application of binocular vision in mobile robot localization; Wang Dianjun; China Mechanical Engineering; 2013-05-23 (No. 9, 2013); pp. 1155-1158 *
Research on feature point extraction and localization methods based on binocular stereo vision; Wu Zhecen; China Master's Theses Full-text Database, Information Science and Technology; 2015-09-15 (No. 9, 2015); pp. I138-1399 *

Also Published As

Publication number Publication date
CN106228538A (en) 2016-12-14


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20201010

Address after: 150001 No. 434, postal street, Nangang District, Heilongjiang, Harbin

Patentee after: Harbin Institute of Technology National University Science Park Development Co.,Ltd.

Address before: 150001 Harbin, Nangang, West District, large straight street, No. 92

Patentee before: HARBIN INSTITUTE OF TECHNOLOGY

TR01 Transfer of patent right

Effective date of registration: 20201210

Address after: Room A101, building 1, Yinxing Zhijie phase II, No. 1301-76, sightseeing Road, Xinlan community, Guanlan street, Longhua District, Shenzhen City, Guangdong Province

Patentee after: Shenzhen National Research Institute of high performance Medical Devices Co.,Ltd.

Address before: 150001 No. 434, postal street, Nangang District, Heilongjiang, Harbin

Patentee before: Harbin Institute of Technology National University Science Park Development Co.,Ltd.
