CN110458879A - Indoor positioning and map construction device based on machine vision - Google Patents
Indoor positioning and map construction device based on machine vision
- Publication number
- CN110458879A CN110458879A CN201910679444.5A CN201910679444A CN110458879A CN 110458879 A CN110458879 A CN 110458879A CN 201910679444 A CN201910679444 A CN 201910679444A CN 110458879 A CN110458879 A CN 110458879A
- Authority
- CN
- China
- Prior art keywords
- camera
- map
- image
- pose
- machine vision
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
- G06T11/001—Texturing; Colouring; Generation of texture or colour
- G06T11/206—Drawing of charts or graphs
- G06T5/70—Denoising; Smoothing
- G06T7/50—Depth or shape recovery
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T2207/20032—Median filtering
Abstract
The present invention discloses an indoor positioning and map construction device based on machine vision. The device comprises a sensor calibration part, an image preprocessing part, a visual odometry part and a map construction part. The sensor calibration part calibrates a monocular camera; the image preprocessing part rectifies the images captured by the monocular camera and applies specific grayscale processing; the visual odometry part uses a semi-dense direct algorithm that minimizes photometric error to reconstruct a semi-dense structure and estimates the camera pose to localize the camera; the map construction part rebuilds a local map from the pixels with sufficient gradient extracted by the semi-dense direct algorithm, finally saves the map in the form of an octree map, and displays it with a visualization program.
Description
Technical field
The present invention relates to an indoor positioning and map construction device based on machine vision, applicable to the field of machine vision.
Background art
With the development of science and technology, artificial intelligence is receiving more and more attention, and machine vision is one of its most active research directions and fastest-growing branches. In short, machine vision replaces the human eye with a machine for measurement and judgment. A machine vision system converts the observed target into an image signal with an image-acquisition device and passes it to a dedicated image-processing system, which obtains shape information of the target from cues such as pixel distribution, brightness and colour, converts these into digital signals, performs various operations on the signals to extract the target's features, and finally controls on-site equipment according to the result.
The goal of machine vision is to give machines a sensing capability comparable to the human eye and a decision-making capability comparable to the brain. Today, however, most machine perception cannot guarantee sufficient accuracy, let alone guarantee that the machine makes the correct choice from the acquired information. Most devices can only locate and analyse objects empirically, without a systematic theoretical foundation. Current object positioning and mapping technology therefore suffers from considerable uncertainty: stability cannot be guaranteed, recognition accuracy is low, and mismatch and missed-match rates are high. Household sweeping robots, for example, often fail to localize in parts of a room.
Summary of the invention
The object of the present invention is to solve the above problems in the prior art by proposing an indoor positioning and map construction device based on machine vision.
The purpose of the invention is achieved through the following technical solution: an indoor positioning and map construction device based on machine vision, the device comprising a sensor calibration part, an image preprocessing part, a visual odometry part and a map construction part. The sensor calibration part calibrates a monocular camera; the image preprocessing part rectifies the images captured by the monocular camera and applies specific grayscale processing; the visual odometry part uses a semi-dense direct algorithm that minimizes photometric error to reconstruct a semi-dense structure, and estimates the camera pose to localize the camera; the map construction part rebuilds a local map from the pixels with sufficient gradient extracted by the semi-dense direct algorithm, finally saves the map in the form of an octree map, and displays it with a visualization program.
Preferably, the sensor calibration part uses the widely adopted Zhang Zhengyou calibration method and constructs three coordinate systems: the world coordinate system, the camera coordinate system and the image coordinate system. From the rotation and translation between world and camera coordinates, the similar-triangle proportionality between camera coordinates and image physical coordinates, and the translation and scaling between image physical coordinates and image pixel coordinates, the relationship between image pixel coordinates and world coordinates is obtained.
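The coordinate chain described above (world coordinates, through rotation and translation to camera coordinates, then through the intrinsics to pixel coordinates) can be sketched as follows. This is an illustrative numpy sketch, not code from the patent; the values of K, R and t are made up.

```python
import numpy as np

def world_to_pixel(P_w, K, R, t):
    """Project a 3-D world point to pixel coordinates with a pinhole model."""
    P_c = R @ P_w + t          # world frame -> camera frame (rotation + translation)
    p = K @ (P_c / P_c[2])     # perspective division, then intrinsic matrix
    return p[:2]               # (u, v) pixel coordinates

K = np.array([[500.0,   0.0, 320.0],   # fx,  0, cx
              [  0.0, 500.0, 240.0],   #  0, fy, cy
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # identity rotation for the example
t = np.zeros(3)

u, v = world_to_pixel(np.array([0.1, -0.2, 2.0]), K, R, t)  # -> (345.0, 190.0)
```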
Preferably, in the image preprocessing part, the radial and tangential distortion coefficients of the camera are obtained from the intrinsic and extrinsic matrices produced by the calibration part, and distorted points on the imager are corrected to new positions through a Taylor-series expansion of the distortion model, so as to reduce radial and tangential distortion as far as possible.
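The correction just described follows the usual polynomial distortion model. The sketch below is illustrative only: the coefficients k1, k2 (radial) and p1, p2 (tangential) are invented values, not results of an actual calibration, and a simple fixed-point iteration stands in for the Taylor-series-based correction.

```python
def distort(x, y, k1, k2, p1, p2):
    """Apply radial (k1, k2) and tangential (p1, p2) distortion to
    normalized image coordinates (x, y)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return x_d, y_d

def undistort(x_d, y_d, k1, k2, p1, p2, iters=20):
    """Invert the distortion by fixed-point iteration (sufficient for the
    mild distortion of typical indoor cameras)."""
    x, y = x_d, y_d
    for _ in range(iters):
        xe, ye = distort(x, y, k1, k2, p1, p2)
        x, y = x + (x_d - xe), y + (y_d - ye)
    return x, y

x_d, y_d = distort(0.3, -0.2, 0.1, 0.01, 0.001, 0.002)
x_u, y_u = undistort(x_d, y_d, 0.1, 0.01, 0.001, 0.002)  # recovers (0.3, -0.2)
```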
Preferably, the image preprocessing part converts the rectified picture to grayscale to improve image quality. The conversion uses the component method: the proportions of the R, G and B components are adjusted so that the resulting grayscale image displays the image information more intuitively.
Preferably, after a large-scale normalization of the image, the image preprocessing part applies median filtering, a nonlinear filter, and stretches the gray values from the range 0 to 255 to the range -100 to 400. After this treatment the image colour contrast is more pronounced, which benefits the display and extraction of contours.
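The stretch and the median filter can be sketched as follows. The endpoint values -100 and 400 come from the text; the tiny pure-numpy 3 x 3 median filter is for illustration (a real implementation would use an optimized library routine).

```python
import numpy as np

def stretch(gray):
    """Linearly map gray values: 0 -> -100, 255 -> 400."""
    return -100.0 + (gray / 255.0) * 500.0

def median3x3(img):
    """3 x 3 median filter with edge padding: a nonlinear filter that
    suppresses impulse noise while preserving edges."""
    padded = np.pad(img, 1, mode="edge")
    out = np.empty(img.shape, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out

noisy = np.zeros((3, 3))
noisy[1, 1] = 255.0           # a single impulse outlier
clean = median3x3(noisy)      # the outlier is removed
```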
Preferably, reconstruction in the visual odometry part uses the semi-dense direct method, which differs from both the sparse direct method and the dense direct method. It does not need to compute all pixels: only the pixels with sufficient gradient in the two successive frames are used, while pixels of uncertain value are discarded, since they contribute nothing to the pose computation and only increase its cost. The retained pixels are then computed and reconstructed into a semi-dense structure, which also lowers the CPU requirement to some degree compared with dense reconstruction.
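The pixel-selection rule of the semi-dense direct method can be sketched as follows: keep only pixels whose gradient magnitude exceeds a threshold and discard the rest. The threshold value of 30 is an invented illustrative number.

```python
import numpy as np

def select_gradient_pixels(img, threshold=30.0):
    """Return (row, col) indices of pixels with sufficient gradient."""
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0   # central differences
    gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    return np.argwhere(magnitude > threshold)

edge = np.zeros((5, 5))
edge[:, 3:] = 255.0                   # a vertical step edge
pts = select_gradient_pixels(edge)    # only pixels beside the edge survive
```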
Preferably, the specific reconstruction process is as follows: let the world coordinate of a point P be [X; Y; Z], and let its non-homogeneous pixel coordinates in the previous and the following frame be p1 and p2. The target is the pose transform from the previous frame to the following one, so the previous frame's camera is taken as the reference frame, the second frame's camera position is reached by a rotation and a translation, and the rotation matrix and translation vector are denoted R and t. The two cameras share the same intrinsic matrix K, and the projection equations are:
p1 = (1/Z1) K P,
p2 = (1/Z2) K (R P + t) = (1/Z2) K exp(ξ^) P,
where Z1 is the depth of P in the first camera's coordinate frame, Z2 is its depth in the second, and ξ is the Lie-algebra element corresponding to the rotation and translation. To find the p2 most similar to p1, the photometric error is minimized, i.e. the brightness difference of the images of P in the two frames:
e = I1(p1) - I2(p2).
The optimization target is the squared two-norm of this error. Under the gray-value-invariance assumption (the imaged gray value of a space point is the same from every viewpoint), the error of each space point Pi is ei = I1(p1,i) - I2(p2,i), and the camera pose ξ is found by minimizing
J(ξ) = Σi ‖ei‖².
Since solving the direct method is equivalent to solving an optimization problem, an existing optimization library is used.
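Evaluating the objective above, the sum of squared brightness differences over the selected points, can be sketched as follows. This only evaluates J for given correspondences; the minimization over the pose ξ, done in the device with an optimization library, is not reproduced here.

```python
import numpy as np

def photometric_error(I1, I2, pts1, pts2):
    """J = sum_i (I1(p1_i) - I2(p2_i))^2 for integer (row, col) pixel
    coordinates pts1, pts2 of the same space points in the two frames."""
    e = (I1[pts1[:, 0], pts1[:, 1]].astype(float)
         - I2[pts2[:, 0], pts2[:, 1]].astype(float))
    return float(np.sum(e * e))

I1 = np.arange(16.0).reshape(4, 4)
pts = np.array([[0, 0], [1, 2], [3, 3]])
same = photometric_error(I1, I1, pts, pts)           # 0.0 for identical frames
shifted = photometric_error(I1, I1 + 5.0, pts, pts)  # 3 * 5^2 = 75.0
```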
Preferably, since solving the direct method is equivalent to an error-minimization problem, G2O is used for the optimization. Using this library means abstracting the solution process into a graph optimization, whose key is the construction of the nodes and edges. Because the optimized variable is a camera pose and the calculation uses the Lie algebra, the pose vertex is built as an SE(3) pose vertex using the VertexSE3Expmap class of the G2O library as the camera pose. G2O has no built-in edge that computes a photometric error, so a new edge is defined by inheriting the existing edge g2o::BaseUnaryEdge. When inheriting, the dimension and type of the measurement must be filled into the template parameters and the vertex of the edge must be connected; meanwhile, the space point P, the camera intrinsics and the image are stored in the member variables of the edge.
Preferably, since the visual odometry part uses the semi-dense direct method, the map construction part builds a semi-dense octree map. An octree map subdivides the space containing the point cloud into eight cubic voxels, and each voxel is subdivided into eight again, recursively, until a prescribed threshold is reached. Visually an octree map looks like an assembly of many small cubes: the higher the resolution, the smaller the cubes; the lower the resolution, the larger the cubes. Each cube stores the probability that its cell is occupied. The octree map itself compresses well, overcoming the drawbacks of point-cloud maps, namely their large storage footprint and inability to handle moving objects, while still reflecting the surrounding environment in real time.
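A toy illustration of the octree occupancy idea described above. The recursive eight-way subdivision down to a fixed leaf size is equivalent, at the leaf level, to quantizing coordinates by the resolution. A real device would use the octomap library, which additionally stores occupancy probabilities; this sketch only records binary occupancy.

```python
class OctreeMap:
    """Toy occupancy map: the root cube is conceptually split into eight
    octants recursively until voxels of edge `resolution` remain."""

    def __init__(self, size=8.0, resolution=1.0):
        self.size = size               # edge length of the root cube
        self.resolution = resolution   # edge length of a leaf voxel
        self.occupied = set()          # indices of occupied leaf voxels

    def _key(self, x, y, z):
        r = self.resolution
        return (int(x // r), int(y // r), int(z // r))

    def insert_point(self, x, y, z):
        self.occupied.add(self._key(x, y, z))

    def is_occupied(self, x, y, z):
        return self._key(x, y, z) in self.occupied

m = OctreeMap()
m.insert_point(1.5, 2.5, 0.2)   # marks leaf voxel (1, 2, 0) occupied
```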
Description of the drawings
Fig. 1 is the coordinate-transformation diagram of the invention.
Fig. 2 is the flow diagram of the visual odometry of the invention.
Fig. 3 is the schematic diagram of the direct method of the invention.
Fig. 4 is an experimental image of the semi-dense direct method of the invention.
Fig. 5 is an octree map generated by the invention.
Fig. 6 is the overall flow diagram of the device.
Specific embodiment
The purpose, advantages and features of the invention are illustrated and explained by the following non-limiting description of preferred embodiments. These embodiments are only prominent examples of applying the technical solution of the invention; all technical solutions formed by equivalent replacement or equivalent transformation fall within the protection scope of the invention.
The present invention discloses an indoor positioning and map construction device based on machine vision. The device comprises a sensor calibration part, an image preprocessing part, a visual odometry part and a map construction part. The sensor calibration part calibrates a monocular camera; the image preprocessing part rectifies the images captured by the monocular camera and applies specific grayscale processing; the visual odometry part uses a semi-dense direct algorithm that minimizes photometric error to reconstruct a semi-dense structure and estimates the camera pose to localize the camera; the map construction part rebuilds a local map from the pixels with sufficient gradient extracted by the semi-dense direct algorithm, finally saves the map in the form of an octree map, and displays it with a visualization program.
The camera calibration part performs monocular calibration, which estimates the camera's intrinsic properties, including focal length, pixel size, radial distortion coefficients and tangential distortion coefficients. The obtained distortion coefficients and camera intrinsics lay the groundwork for the following stages: the distortion coefficients are used to rectify the pictures taken by the monocular camera, and the intrinsic matrix serves as an input parameter of the visual odometry. The invention uses the Zhang Zhengyou calibration method with a checkerboard as the calibration target and constructs three coordinate systems, namely the world coordinate system, the camera coordinate system and the image coordinate system, as shown in Fig. 1. From the rotation and translation between world and camera coordinates, the similar-triangle proportionality between camera coordinates and image physical coordinates, and the translation and scaling between image physical coordinates and image pixel coordinates, the relationship between image pixel coordinates and world coordinates is obtained; closing this chain completes the calibration and yields the distortion coefficients and camera intrinsics.
In the image preprocessing part, the pictures taken by the monocular camera are preprocessed, first by grayscale conversion. Because indoor lighting is uneven, the weights of the three components of the acquired indoor images are adjusted according to the human eye's sensitivity to the three RGB colours (highest for green, lowest for blue), using the component ratio red : green : blue = 3 : 6 : 1, which makes the grayscale image more faithful to human observation. The purpose of normalizing the grayscale image is to increase its contrast: enlarging the original 0-255 gray-value range strengthens the contours, which is equivalent to increasing the image contrast and helps the subsequent photometric error calculation.
The visual odometry part mainly reconstructs through the semi-dense direct method; the algorithm flow is shown in Fig. 2. Building the visual odometry with a direct method avoids computing feature points and descriptors and can construct a semi-dense or even dense map, which makes it a good choice for systems that must later perform path planning; the principle of the direct method is shown in Fig. 3. Direct methods divide into sparse, semi-dense and dense variants, and the present invention selects the semi-dense direct method, which differs from both the sparse and the dense variant. It does not need to compute all pixels: only the pixels with sufficient gradient in the two successive frames are used, while pixels of uncertain value are discarded, since they contribute nothing to the pose computation and only increase its cost. The retained pixels are then computed and reconstructed into a semi-dense structure, which also lowers the CPU requirement to some degree compared with dense reconstruction. For solving the camera pose, the semi-dense direct method optimizes the camera motion by minimizing the photometric error. The concrete analysis is as follows:
Let the world coordinate of a point P be [X; Y; Z], and let its non-homogeneous pixel coordinates in the previous and the following frame be p1 and p2. The target is the pose transform from the previous frame to the following one, so the previous frame's camera is taken as the reference frame, the second frame's camera position is reached by a rotation and a translation, and the rotation matrix and translation vector are denoted R and t. The two cameras share the same intrinsic matrix K, and the projection equations are:
p1 = (1/Z1) K P,
p2 = (1/Z2) K (R P + t) = (1/Z2) K exp(ξ^) P,
where Z1 is the depth of P in the first camera's coordinate frame, Z2 is its depth in the second, and ξ is the Lie-algebra element corresponding to the rotation and translation. To find the p2 most similar to p1, the photometric error is minimized, i.e. the brightness difference of the images of P in the two frames:
e = I1(p1) - I2(p2).
The optimization target is the squared two-norm of this error. Under the gray-value-invariance assumption (the imaged gray value of a space point is the same from every viewpoint), the error of each space point Pi is ei = I1(p1,i) - I2(p2,i), and the camera pose ξ is found by minimizing J(ξ) = Σi ‖ei‖². Since solving the direct method is equivalent to solving an optimization problem, an existing optimization library is used.
G2O is used for the optimization. Using this library means abstracting the solution process into a graph optimization, whose key is the construction of the nodes and edges. Because the optimized variable is a camera pose and the calculation uses the Lie algebra, the pose vertex is built as an SE(3) pose vertex using the VertexSE3Expmap class of the G2O library as the camera pose. G2O has no built-in edge that computes a photometric error, so a new edge is defined by inheriting the existing edge g2o::BaseUnaryEdge. When inheriting, the dimension and type of the measurement must be filled into the template parameters and the vertex of the edge must be connected; meanwhile, the space point P, the camera intrinsics and the image are stored in the member variables of the edge. To let g2o optimize the error of this edge, two virtual functions are overridden: computeError() computes the error value and linearizeOplus() computes the Jacobian. Combining the nodes and edges into a graph, G2O performs the graph optimization to estimate the camera motion. Fig. 4 compares the reference frame and the second frame in a semi-dense direct-method experiment; the green parts are the pixels participating in the optimization, i.e. those meeting the gradient requirement.
In the map construction part, since the visual odometry uses the semi-dense direct method, a semi-dense octree map is built. An octree map subdivides the space containing the point cloud into eight cubic voxels, each of which is subdivided into eight again, recursively, until a prescribed threshold is reached. Visually an octree map looks like an assembly of many small cubes: the higher the resolution, the smaller the cubes; the lower the resolution, the larger the cubes. Each cube stores the probability that its cell is occupied. The octree map itself compresses well, overcoming the drawbacks of point-cloud maps, namely their large storage footprint and inability to handle moving objects, while still reflecting the surrounding environment in real time. Here, the point coordinates are converted to world coordinates according to the depth map and the camera pose, the coordinates carrying position information are inserted into octomap as a point cloud, and the result is saved as an octree map view. The octomap library must be installed; it mainly contains the octomap map components and a visualization program, octovis, with which the three-dimensional information of the generated octree map can be inspected. Fig. 5 shows an octree map displayed with the octovis program.
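The back-projection step just described can be sketched as follows: a pixel (u, v) with depth d is lifted to a camera-frame point with the intrinsics, then moved into the world frame with the camera pose (R, t). Illustrative numpy sketch with made-up values; the real device feeds the resulting points into octomap.

```python
import numpy as np

def pixel_to_world(u, v, depth, K, R, t):
    """Back-project a depth-image pixel into world coordinates."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    P_c = np.array([(u - cx) * depth / fx,   # camera-frame point
                    (v - cy) * depth / fy,
                    depth])
    return R @ P_c + t                       # world-frame point for the cloud

K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
P_w = pixel_to_world(345.0, 190.0, 2.0, K, np.eye(3), np.zeros(3))
```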
The present invention targets automatic inspection and operation tasks for indoor objects by building a machine-vision-based positioning and map construction device. Taking an indoor bedroom as a practical application example, it designs a system that can localize and build a partial map in real time and can support subsequent operations such as robot path planning. For indoor environments it proposes a high-precision localization method that can report the sensor position in real time and build an octree map.
The present invention still has many embodiments; all technical solutions formed by equivalent replacement or equivalent transformation fall within the protection scope of the invention.
Claims (7)
1. An indoor positioning and map construction device based on machine vision, characterized in that it mainly consists of the following components:
a sensor calibration part for calibrating a monocular camera;
an image preprocessing part for rectifying the images captured by the monocular camera and applying specific grayscale processing;
a visual odometry part that selects a semi-dense direct algorithm to minimize photometric error, reconstructs a semi-dense structure, and estimates the camera pose to localize the camera;
a map construction part that rebuilds a local map from the pixels with sufficient gradient extracted by the semi-dense direct algorithm, finally saves the map in the form of an octree map, and displays it with a visualization program.
2. The indoor positioning and map construction device based on machine vision according to claim 1, characterized in that: the sensor calibration part performs monocular calibration, which estimates the camera's intrinsic properties, including focal length, pixel size, radial distortion coefficients and tangential distortion coefficients; the calibration yields the distortion coefficients and camera intrinsics, the distortion coefficients being used to rectify the pictures taken by the monocular camera and the intrinsic matrix serving as an input parameter of the visual odometry.
3. The indoor positioning and map construction device based on machine vision according to claim 1, characterized in that: the pictures taken by the monocular camera require specific preprocessing, mainly because indoor lighting is uneven; for the indoor images acquired by the sensor, the weights of the three image components are adjusted according to the human eye's sensitivity to the three RGB colours; secondly, the grayscale image is normalized to increase the image contrast, since enlarging the original gray-value range strengthens the contours and helps the subsequent photometric error calculation.
4. The indoor positioning and map construction device based on machine vision according to claim 1, characterized in that: the visual odometry mainly reconstructs through the semi-dense direct method, which computes only the pixels with sufficient gradient in the two successive frames, discards pixels of uncertain value, and finally computes the retained pixels and reconstructs them into a semi-dense structure.
5. The indoor positioning and map construction device based on machine vision according to claim 4, characterized in that the specific reconstruction process is as follows: let the world coordinate of a point P be [X; Y; Z], and let its non-homogeneous pixel coordinates in the previous and the following frame be p1 and p2; the target is the pose transform from the previous frame to the following one, so the previous frame's camera is taken as the reference frame, the second frame's camera position is reached by a rotation and a translation, and the rotation matrix and translation vector are denoted R and t; the two cameras share the same intrinsic matrix K, and the projection equations are:
p1 = (1/Z1) K P,
p2 = (1/Z2) K (R P + t) = (1/Z2) K exp(ξ^) P,
where Z1 is the depth of P in the first camera's coordinate frame, Z2 is its depth in the second, and ξ is the Lie-algebra element corresponding to the rotation and translation; to find the p2 most similar to p1, the photometric error is minimized, i.e. the brightness difference of the images of P in the two frames:
e = I1(p1) - I2(p2);
the optimization target is the squared two-norm of this error; under the gray-value-invariance assumption (the imaged gray value of a space point is the same from every viewpoint), the error of each space point Pi is
ei = I1(p1,i) - I2(p2,i),
and the camera pose ξ is found by minimizing J(ξ) = Σi ‖ei‖²; since solving the direct method is equivalent to solving an optimization problem, an existing optimization library is used.
6. The indoor positioning and map construction device based on machine vision according to claim 4, characterized in that: solving the camera pose with the semi-dense direct method is equivalent to minimizing the photometric error, and a customized optimization is carried out with G2O; using this library means abstracting the solution process into a graph optimization, whose key is the construction of the nodes and edges; because the optimized variable is a camera pose and the calculation uses the Lie algebra, the pose vertex is built as an SE(3) pose vertex using the VertexSE3Expmap class of the G2O library as the camera pose; G2O has no built-in edge that computes a photometric error, so a new edge is defined by inheriting and overriding the existing edge g2o::BaseUnaryEdge of the G2O library: the dimension and type of the measurement are filled into the template parameters, the vertex of the edge is connected, and the space point P, the camera intrinsics and the image are stored in the member variables of the edge; to let g2o optimize the error of this edge, two virtual functions are overridden, computeError() to compute the error value and linearizeOplus() to compute the Jacobian; combining the nodes and edges into a graph, G2O performs the graph optimization to estimate the camera motion.
7. The indoor positioning and map construction device based on machine vision according to claim 1, characterized in that: for map construction, pixels with a sufficient gradient are extracted by the semi-dense direct method to build a semi-dense map, namely an octree map. According to the acquired images and the pose information of the camera, the point coordinates are converted to world coordinates, and the points carrying position information are inserted into octomap as a point cloud, which is finally saved as an octree map. This requires installing the octomap library, which mainly comprises the octomap map component and a visualization program, octovis, with which the three-dimensional information of the generated octree map can be viewed.
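Octomap itself is a C++ library, so the following is only a self-contained illustration of the bookkeeping claim 7 describes: camera-frame points are transformed into world coordinates using the camera pose, then quantized into an occupancy structure. A plain Python set of voxel keys stands in for the octree; the pose, resolution, and point cloud are made-up example values, not octomap's actual API.

```python
import math

def camera_to_world(p_cam, R, t):
    """p_world = R * p_cam + t, with R a 3x3 rotation and t a translation."""
    x, y, z = p_cam
    return tuple(R[i][0] * x + R[i][1] * y + R[i][2] * z + t[i] for i in range(3))

def voxel_key(p_world, resolution):
    # Quantize a world point to the integer index of the voxel containing it.
    return tuple(math.floor(c / resolution) for c in p_world)

# Example camera pose (assumed values): 90-degree rotation about z, then a shift.
R = [[0.0, -1.0, 0.0],
     [1.0,  0.0, 0.0],
     [0.0,  0.0, 1.0]]
t = (1.0, 0.0, 0.0)

# A tiny camera-frame "point cloud"; the first two points land in the same voxel.
cloud_cam = [(1.0, 0.0, 0.0), (1.02, 0.0, 0.0), (0.0, 2.0, 1.0)]

RESOLUTION = 0.5
occupied = set()
for p in cloud_cam:
    occupied.add(voxel_key(camera_to_world(p, R, t), RESOLUTION))

print(sorted(occupied))   # two occupied voxels: (-2, 0, 2) and (2, 2, 0)
```

The set-based deduplication mirrors why an octree map stays compact: nearby measurements collapse into one occupied cell at the chosen resolution, which octomap additionally organizes hierarchically and updates probabilistically.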
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910679444.5A CN110458879A (en) | 2019-07-25 | 2019-07-25 | A kind of indoor positioning based on machine vision and map structuring device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110458879A (en) | 2019-11-15 |
Family
ID=68483524
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910679444.5A Withdrawn CN110458879A (en) | 2019-07-25 | 2019-07-25 | A kind of indoor positioning based on machine vision and map structuring device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110458879A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111192323A (en) * | 2020-04-10 | 2020-05-22 | 支付宝(杭州)信息技术有限公司 | Object positioning method and device based on image |
TWI781511B (en) * | 2020-11-06 | 2022-10-21 | 財團法人工業技術研究院 | Multi-camera positioning and dispatching system, and method thereof |
US11632499B2 (en) | 2020-11-06 | 2023-04-18 | Industrial Technology Research Institute | Multi-camera positioning and dispatching system, and method thereof |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Sinha et al. | Pan–tilt–zoom camera calibration and high-resolution mosaic generation | |
CN109272570A (en) | A kind of spatial point three-dimensional coordinate method for solving based on stereoscopic vision mathematical model | |
CN106960442A (en) | Based on the infrared night robot vision wide view-field three-D construction method of monocular | |
US20200334842A1 (en) | Methods, devices and computer program products for global bundle adjustment of 3d images | |
Umeda et al. | Registration of range and color images using gradient constraints and range intensity images | |
CN105654476B (en) | Binocular calibration method based on Chaos particle swarm optimization algorithm | |
CN110288657A (en) | A kind of augmented reality three-dimensional registration method based on Kinect | |
CA2707176A1 (en) | Method and apparatus for rapid three-dimensional restoration | |
CN106570938A (en) | OPENGL based panoramic monitoring method and system | |
CN108629829B (en) | Three-dimensional modeling method and system of the one bulb curtain camera in conjunction with depth camera | |
CN104361603B (en) | Gun camera image target designating method and system | |
CN109523595A (en) | A kind of architectural engineering straight line corner angle spacing vision measuring method | |
CN103278138A (en) | Method for measuring three-dimensional position and posture of thin component with complex structure | |
CN107220996B (en) | One kind is based on the consistent unmanned plane linear array of three-legged structure and face battle array image matching method | |
Gonçalves et al. | Real-time direct tracking of color images in the presence of illumination variation | |
Li et al. | HDRFusion: HDR SLAM using a low-cost auto-exposure RGB-D sensor | |
CN110458879A (en) | A kind of indoor positioning based on machine vision and map structuring device | |
CN108362205A (en) | Space ranging method based on fringe projection | |
CN106295657A (en) | A kind of method extracting human height's feature during video data structure | |
Ye et al. | Deep Reflectance Scanning: Recovering Spatially‐varying Material Appearance from a Flash‐lit Video Sequence | |
Lukierski et al. | Rapid free-space mapping from a single omnidirectional camera | |
Liu et al. | Creating simplified 3D models with high quality textures | |
Dai et al. | Multi-spectral visual odometry without explicit stereo matching | |
Bruno et al. | Integrated processing of photogrammetric and laser scanning data for frescoes restoration | |
CN112017259B (en) | Indoor positioning and image building method based on depth camera and thermal imager |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
WW01 | Invention patent application withdrawn after publication | Application publication date: 20191115 |