CN105043351A - Biological robot-based miniature wireless active omni-directional vision sensor - Google Patents


Info

Publication number
CN105043351A
CN105043351A
Authority
CN
China
Prior art keywords
vision sensor
point
gecko
formula
pipeline
Prior art date
Legal status
Pending
Application number
CN201510391913.5A
Other languages
Chinese (zh)
Inventor
汤一平
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN201510391913.5A priority Critical patent/CN105043351A/en
Publication of CN105043351A publication Critical patent/CN105043351A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 11/00 — Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a biological robot-based miniature wireless active omni-directional vision sensor. Its hardware comprises a gecko and a pipeline visual inspection device. The inspection device is strapped to the gecko, which crawls forward through a long, narrow, small-bore pipeline, towing the device as it captures omnidirectional images of the pipe's inner wall. The inspection device mainly comprises a wireless communication unit, an active omnidirectional vision sensor, and a power supply; the active omnidirectional vision sensor mainly comprises an omnidirectional vision sensor, an LED belt light source, and a panoramic laser source. A pipeline detection and analysis system inspects the pipeline from the omnidirectional video images transmitted over the wireless communication network and performs 3D modeling.

Description

A biorobot-based miniature wireless active panoramic vision sensor
Technical field
The present invention relates to the application of biorobots, panoramic laser light sources, omnidirectional vision sensors, wireless transmission, and computer vision technology to condition detection of industrial surfaces in confined spaces, and in particular to a biorobot-based miniature wireless active panoramic vision sensor.
Background technology
Underground pipelines present a structural health monitoring problem whose necessity and urgency are self-evident. An automated, omnidirectional, intelligent health-inspection technology is needed so that pipelines can be inspected and maintained on a regular basis.
In unstructured environments, industrial robots lag far behind animals in locomotion stability, dexterity, robustness, environmental adaptability, and energy efficiency. Animal robots hold clear advantages over conventional robots in energy supply, locomotor dexterity, concealment, maneuverability, and adaptability.
" nature " magazine ran in 2002, under the support energetically of U.S. DARPA, the computer MSR Information system of the doctor SanjivTalwar leader of the state university in USA New York successfully achieves the various motor behaviors of manual guidance mouse.Scientific research personnel, by installing microprocessor on mouse body, experiences cortex implant electrode in the section of its brain generation pleasant sensation and somatesthesia.Complete at outer remote-controlled its of 500m turnings, move ahead, climb tree and the action such as jump, even can control mouse and produce some motor behaviors against its habit, as being exposed in the space of injection high light.These researchs are applied to special dimension to animal robot and have guided direction.
China already operates 13,340 km of pipelines of various kinds, with more than 100,000 added each year, so there is a very strong market demand for condition detection of industrial surfaces in confined spaces. How to complete the inspection and maintenance of so vast an engineering asset is a difficult problem that Chinese engineers urgently need to solve. As an important supplement to the pipe robots currently in use, biorobots hold a clear advantage in narrow, space-limited pipelines and in pipes with many corners, small diameters, or non-circular cross sections. Biorobots therefore have a broad application market in confined-space detection, in pipelines, and in services such as the inspection and maintenance of building air-conditioning systems.
Structural health monitoring is an interdisciplinary, multi-field integrated technology involving civil engineering, dynamics, materials science, sensing technology, measurement technology, signal analysis, computer technology, network communication, pattern recognition, mobile robotics, and other research directions.
Visual inspection of pipe inner surfaces can be divided into two classes by objective. The first detects defects such as corrosion, cracks, leakage, and deformation, some of which must be located precisely so that they can be repaired. The second measures geometric quantities such as the inner-wall topography, inner diameter, and straightness of the pipeline; this class differs considerably from the first and requires accurate three-dimensional coordinates of a dense point cloud of the pipe's inner surface.
The most widely used inspection tool today is the pipeline closed-circuit television inspection system (the CCTV method), an instrument developed specifically for underground pipeline inspection.
Chinese patent application No. 201010170739.9 discloses a video robot for drainage pipeline inspection comprising a camera system, a lens control system, a fuselage drive system, software control and data transmission systems, and a lighting device. The drive system carries the other systems as it crawls through the pipe; the lens control system adjusts the position and angle of the camera system; the camera system captures illuminated pipe-interior information, which is finally output to a computer screen through the data transmission system. The main problems with this technique are: an extra lens control system must continuously adjust the camera's position and angle to obtain panoramic image information of the pipe interior; three-dimensional reconstruction and 3D measurement from these images are extremely difficult; and trained technicians are still needed to interpret and analyze the recorded video. Strictly speaking, this technique merely acquires image information inside the pipe.
Chinese patent application No. 201010022782.0 discloses a CCTV camera inspection method in which a CCTV in-pipe telephotography inspection system crawls automatically through the pipe, inspects and records the structural condition of the pipeline, displays and stores the results via wired transmission, and then performs an assessment according to inspection codes. Strictly speaking, this technique also merely acquires image information inside the pipe.
In summary, current CCTV pipe inspection has the following defects and shortcomings. First, the camera's field of view is limited and captures only part of the inner wall at a time, so the viewing angle must be changed constantly to cover the whole inner wall; moreover, inspectors currently rely on their eyes to judge from the acquired images whether surface defects exist, and the human eye can generally perceive only dimensional changes exceeding about ±10%, making accurate automation and intelligent inspection difficult to achieve. Second, current in-pipe imaging makes quantitative measurement and analysis of defect size and location difficult; judgments still depend on operator experience combined with computer processing results, so high-precision automatic analysis, assessment, and grading of functional and structural pipeline defects is hard to achieve. Finally, three-dimensional modeling of the pipe's inner wall is difficult, so pipeline details cannot be reproduced as 3D data to provide effective support for maintenance management and for drafting maintenance plans that select repair methods which are timely and economical.
Chinese patent application No. 201510006892.0 discloses a device and method for detecting functional defects inside pipes based on active panoramic vision. The device comprises a crawling camera system, a control cable, and a detection-and-analysis core system; the latter comprises a crawl coordination control module, a system control unit, an image receiving unit, qualitative and quantitative analysis modules for functional and structural defects, and a storage unit. Functional and structural pipeline defects are analyzed and identified by machine-vision processing of two kinds of images: panoramic images of the pipe interior and laser-scanned cross-sectional slice images. That application effectively overcomes the defects and deficiencies of the CCTV method, but it does not address how to design an active panoramic vision sensor for high-precision 3D measurement, nor how to apply such a sensor to three-dimensional reconstruction of the pipe interior.
Among the pipeline visual-inspection robot techniques above, structural health monitoring of small-bore pipelines faces a thorny problem: how, in such a confined space, to let an inspection robot carry a vision inspection device. The current industry solution is a wall-climbing robot, but its manufacturing cost is very high and difficult to justify in practice. Another thorny problem is that, besides acquiring panoramic images of the inner wall in real time, the pipeline vision inspection device itself must be miniaturized and lightweight.
Summary of the invention
To overcome the low automation and intelligence of the existing CCTV method, the difficulty that bulky inspection vehicles have in performing high-precision automatic analysis and assessment of functional and structural defects in small-bore pipelines, the difficulty of three-dimensional modeling of the pipe's inner wall, and the high cost of wall-climbing robots, the present invention provides a biorobot-based miniature wireless active panoramic vision sensor that improves the automation and intelligence of pipe inspection, performs high-precision automatic analysis and assessment of functional and structural pipeline defects, and realizes three-dimensional modeling of the pipe's inner wall.
Realizing the foregoing requires solving several key problems: (1) finding a biorobot suited to crawling in long, narrow pipelines whose crawling behavior can be guided; (2) realizing a low-cost active panoramic vision sensor whose total weight lies within the biorobot's load capacity, with wireless video transmission capability and fast, high-precision acquisition of object depth information; (3) applying computer vision technology to three-dimensional reconstruction of the pipe's inner wall.
The technical solution adopted by the present invention to solve these technical problems is as follows:
A mobile system that can travel over smooth or rough surfaces at various inclinations, i.e., a robot with unobstructed three-dimensional mobility, is one of the most important and most difficult branches of modern robotics. One design goal of the present invention is to find a biorobot suited to crawling in long, narrow pipelines whose crawling behavior can be guided. The gecko is a quadruped reptile that can move on the ground, on walls, on ceilings, and on inclines of various angles. Native to Guangxi and Yunnan in China and to Southeast Asia, it is heavy-bodied, fast-moving, agile, and strong under load. Its body color varies with habitat: pitch black, dark green, taupe, and so on. Its tail bears six to seven gray-white rings; its limbs are short, so it can climb but cannot jump. Its feet are special: each has five toes that can adhere to cliff faces. It favors warm, humid darkness and fears cold and light. Gecko farming techniques are now mature, with survival rates above 90%. The invention exploits the gecko's advantages as a biorobot in energy supply, locomotor dexterity, and so on, together with its crawl-only, light-averse nature: by controlling light, the gecko's crawling behavior can be guided so that it explores long, narrow spaces in place of humans.
An adult gecko can weigh up to about 150 g, with a body length of 310 mm and a dorsal-thoracic thickness of 4–5 mm; its crawling speed can reach 1.5 m/s, and its load capacity is strong — even on a ceiling it can carry five times its body weight. Farmed adult geckos currently sell for about 50 yuan a pair, so adopting the gecko as a pipe-crawling biorobot offers a strong price advantage.
The pipeline visual inspection device mainly comprises a wireless communication unit, an active panoramic vision sensor, and a power supply; the present invention has the gecko carry this device into the pipeline to perform visual inspection.
Matching the device weight to the adult gecko's load capacity: the wireless communication unit weighs under 10 g with an effective communication range of 5 km; the active panoramic vision sensor weighs under 200 g with an imaging resolution of 3 megapixels; the power supply weighs 50 g. Together the three weigh at most 260 g. Since the gecko crawls essentially horizontally in the pipe, an adult gecko's horizontal traction is sufficient to pull the total weight of the pipeline visual inspection device.
Per this weight-matching calculation, the active panoramic vision sensor of the present invention must weigh no more than 200 g, which demands a miniaturized, lightweight design.
Using the gecko's perception of its environment to intervene in its motion: perception of the environment is the basis of motion decisions, and thus the basis for intervening in animal movement. Table 1 shows the gecko's stability under light of different colors; experiments show that the gecko becomes unstable under light-green light and markedly unstable under dark-green light.
Table 1. Gecko stability under light of different colors
Because a long, narrow pipe essentially fixes the gecko's crawling direction, and the gecko fears dark-green light, the active panoramic vision sensor with its controlled green light source is mounted at the gecko's tail. When lit, the green source serves two purposes: it illuminates the pipe for visual inspection, and it guides the gecko's crawling — under dark-green illumination the gecko moves forward, and when the source is switched off it stops.
The hardware of the active panoramic vision sensor comprises an omnidirectional vision sensor, a panoramic laser source, and an LED belt light source. The omnidirectional vision sensor is coaxially and fixedly connected to the panoramic laser source, and the LED belt light source wraps around the lower mounting seat of the omnidirectional vision sensor. To meet the lightweight, low-cost design requirements, the parts of the omnidirectional vision sensor and the panoramic laser source are press-molded in plastic.
To capture as much of the small pipe's inner wall as possible in one panoramic image, the omnidirectional vision sensor's vertical field of view should be as large as possible and its imaging focal length as short as possible. Per the lightweight, low-cost requirements, the sensor is plastic-mold formed. Per the 3D measurement and 3D modeling requirements, the optical imaging of the sensor's catadioptric mirror must be convenient to compute, with a vertical field of view as large as possible and distortion as small as possible.
The vertical section of the catadioptric mirror is designed as a concave circular-arc curve, as shown in Figure 2; by the law of reflection, the following formulas hold:
α = θ + r′ − π/2    (1)
δ = 2θ + r′ − π    (2)
θ = arcsin(r / R)    (3)
δ = 2·arcsin(r / R) + r′ − π    (4)
In these formulas, r is the height of the incident beam, r′ is the angle of the incident beam, δ is the angle of the reflected beam, R is the arc radius of the mirror surface, α is the angle of incidence, and θ is the grazing angle on the mirror's circular arc. For a camera of fixed focal length, the reflected-beam angle δ can form an image on the imaging chip only within a fixed range; that is, the height and angle of the incident beam must satisfy certain conditions for the beam to be imaged on the chip.
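As an illustrative sketch (not part of the original disclosure), the reflection relations (1)–(4) can be evaluated numerically; the function name and the use of radians are assumptions made for the example:

```python
import math

def reflected_beam_angle(r, R, r_prime):
    """Angle delta of the beam reflected by the concave-arc mirror,
    per formulas (3) and (4): theta = arcsin(r/R), delta = 2*theta + r' - pi.

    r       -- height of the incident beam (requires |r| <= R)
    R       -- arc radius of the mirror surface
    r_prime -- angle of the incident beam, in radians
    """
    theta = math.asin(r / R)                  # grazing angle, formula (3)
    return 2.0 * theta + r_prime - math.pi    # formula (4)
```

For example, a beam grazing the arc tangentially (r = R, so θ = π/2) is reflected at δ = r′, consistent with formula (4).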
The catadioptric mirror of the omnidirectional vision sensor is formed by sweeping the concave-arc curve one revolution around the axis of symmetry, as shown in Figure 11. This design is convenient for plastic-mold forming; panoramic imaging distortion is small, the vertical imaging range approaches 90°, and the imaging focal length is short. These characteristics are well suited to visual inspection of small-pipe inner walls.
For the active 3D stereoscopic panoramic vision sensor, the physical-space coordinate system is established at the intersection of the panoramic laser source's axis and the panoramic laser plane perpendicular to that axis, with coordinates denoted X, Y, Z; the panoramic-image coordinate system is established at the center of the panoramic image, with coordinates denoted u, v; and the catadioptric-mirror coordinate system is established at the center of the concave arc, with coordinates denoted X′, Y′.
To perform 3D measurement on the pipe, the omnidirectional vision sensor must be calibrated. The purpose of calibration is to find the correspondence between image points and the incident beam's height r and angle r′, expressed by formula (20):
r = f(p(u′, v′)),  r′ = g(p(u′, v′))    (20)
In the formula, p(u′, v′) is a point on the panoramic imaging plane, r is the height of the incident beam, r′ is the angle of the incident beam, and f(·) and g(·) denote the respective functional relations.
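In practice a calibration of the kind in formula (20) is often stored as a sampled table and interpolated at run time. The sketch below is illustrative only; the table values, the use of the image-plane radius as the lookup key, and the function names are all invented for the example:

```python
import bisect

# Hypothetical calibration table: image-plane radius rho (pixels) ->
# (beam height r, beam angle r'), obtained off-line from known targets.
# The numbers below are made up for illustration.
CAL = [
    (50.0,  (10.0, 0.20)),
    (100.0, (20.0, 0.45)),
    (150.0, (30.0, 0.75)),
]

def calibrate_point(rho):
    """Linear interpolation of r = f(rho) and r' = g(rho), formula (20)."""
    radii = [c[0] for c in CAL]
    i = bisect.bisect_left(radii, rho)
    i = max(1, min(i, len(CAL) - 1))          # clamp to a valid segment
    (x0, (r0, a0)), (x1, (r1, a1)) = CAL[i - 1], CAL[i]
    t = (rho - x0) / (x1 - x0)
    return r0 + t * (r1 - r0), a0 + t * (a1 - a0)
```

A denser table or a fitted polynomial would serve the same role; the point is only that calibration turns an image point into the pair (r, r′).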
The omnidirectional vision sensor comprises a concave-arc mirror, a concave-arc mirror cap, transparent glass, a fixing screw, an outer cover, and a camera unit, as shown in Figure 1. The axis of the concave-arc mirror has a threaded hole, and the center of the transparent glass has a small hole. The outer cover is formed by mating two half-cylinders whose male and female snaps interlock. During assembly, the transparent glass is first embedded in one half-cylinder; the snaps of the two half-cylinders are then aligned and pressed together from outside to form an outer cover that secures the glass. The bottom of the outer cover has a lens hole. The fixing screw is then passed through the hole in the transparent glass and screwed into the threaded hole of the concave-arc mirror. The lens of the camera unit is fixed in the outer cover's lens hole. The center of the concave-arc mirror cap has a small hole.
The panoramic laser source comprises a conical mirror, a transparent housing, a ring laser transmitter, and a base. The ring laser transmitter is fixed on the base with its emission axis aligned with the base's axis; the conical mirror is fixed at one end of the transparent housing, and the base carrying the transmitter at the other. The circular laser emitted by the ring transmitter is reflected by the conical mirror into a panoramic laser sheet perpendicular to the axis. The back of the conical mirror has a threaded hole, as shown in Figure 4.
The omnidirectional vision sensor and the panoramic laser source are fixed together as follows: a screw is passed through the hole in the concave-arc mirror cap, aligned with the threaded hole in the back of the conical mirror, and tightened; the concave-arc mirror cap is then snap-connected to the concave-arc mirror. With these connections, the omnidirectional vision sensor and the panoramic laser source are assembled into the active panoramic vision sensor, as shown in Figure 5.
As shown in Figure 12, a point P(x, y, z) on the inner wall illuminated by the panoramic laser is reflected by the concave-arc mirror onto the imaging plane of the omnidirectional vision sensor, yielding a panoramic image that carries the panoramic laser information. From the calibration result, the beam height r and angle r′ are obtained from the image point p(u′, v′). To compute the spatial coordinates of P(x, y, z), a cylindrical coordinate system is established in real space at the intersection of the panoramic laser plane and the axis of symmetry. By design, the arc center of the concave-arc mirror is at O(B, −H), where B is the distance from the arc center to the cylindrical axis and H is the vertical distance from the arc center to the panoramic laser plane. The distance P_R from the wall point to the coordinate origin is computed from the geometric relation of formula (5):
P_R = [H − r + (B − √(R² − r²)) · tan r′] / tan r′    (5)
In the formula, H is the vertical distance from the arc center of the concave-arc mirror to the panoramic laser plane, B is the distance from the arc center to the cylindrical axis, r is the height on the concave-arc mirror of the beam reflected from the laser point on the inner wall, r′ is the angle of that reflected beam, R is the radius of curvature of the concave-arc mirror, and P_R is the distance from the wall point to the coordinate origin.
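A numerical transcription of formula (5) might look like the following sketch. Note this is one plausible reading of the (garbled) original formula, taking the final tan r′ as a divisor; parameter names and radian units are assumptions:

```python
import math

def wall_distance(r, r_prime, H, B, R):
    """Distance P_R from a laser point on the pipe wall to the coordinate
    origin, reading formula (5) as:
        P_R = (H - r + (B - sqrt(R^2 - r^2)) * tan(r')) / tan(r')
    Geometric parameters as defined in the text; r' in radians.
    """
    return (H - r + (B - math.sqrt(R * R - r * r)) * math.tan(r_prime)) \
        / math.tan(r_prime)
```

For instance, with r = 0 the expression reduces to P_R = H / tan r′ + B − R, which is easy to check by hand.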
During actual inspection, the gecko tows the active panoramic vision sensor along the pipe axis; the panoramic laser source provides panoramic cross-sectional scanning light, and the omnidirectional vision sensor captures slice images of the laser scan on the inner wall. The laser projection positions must then be parsed from these panoramic laser-scan slice images. The present invention calls this processing the panoramic laser sectioning method.
Laser projection positions are extracted by the inter-frame difference method, which obtains laser projection points by differencing the panoramic laser-scan slice images captured at two adjacent positions. As the gecko crawls along the pipe axis, two slice frames captured at successive positions show a clear frame-to-frame difference; subtracting the two frames gives the absolute luminance difference at each pixel, and judging whether it exceeds a threshold extracts the laser projection points in the panoramic slice image.
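The inter-frame difference step described above can be sketched as follows (pure Python, grayscale frames represented as nested lists; the function name and data layout are illustrative):

```python
def laser_points_by_frame_difference(frame_a, frame_b, threshold):
    """Inter-frame difference: return (row, col) of pixels whose absolute
    luminance change between two adjacent panoramic slice images exceeds
    the threshold -- candidate laser projection points."""
    hits = []
    for i, (row_a, row_b) in enumerate(zip(frame_a, frame_b)):
        for j, (a, b) in enumerate(zip(row_a, row_b)):
            if abs(a - b) > threshold:
                hits.append((i, j))
    return hits
```

A real implementation would operate on camera frames (e.g. NumPy arrays) and follow this with the edge-linking step, but the thresholded absolute difference is the core of the method.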
Because noise affects the inspection process, the extracted section-edge information can be discontinuous. Discontinuous section edges are therefore connected by a local linking method, whose idea is to decide whether two points belong to the same edge by comparing the response strength and gradient direction of a gradient operator, judged by formulas (6) and (7):
|∇f(x, y) − ∇f(x′, y′)| ≤ T    (6)
|α(x, y) − α(x′, y′)| ≤ A_α    (7)
In formula, ▽ f (x, y) is the frontier point Grad in inner-walls of duct neighborhood, and ▽ f (x', y') is to be confirmed some Grad, T for gradient judgment threshold, α (x, y) is the deflection of the frontier point gradient vector in inner-walls of duct neighborhood, and α (x', y') is the deflection of to be confirmed some gradient vector, A αfor the deflection judgment threshold of gradient vector.
When formulas (6) and (7) both hold, the gradient magnitude and direction of the point to be confirmed are similar to those of the boundary point in the inner-wall neighborhood, so the two points are connected — i.e., the point to be confirmed belongs to the inner wall. This process yields a single complete closed inner-wall edge line on the image plane.
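The linking test of formulas (6) and (7) reduces to two threshold comparisons; a minimal sketch (names assumed for the example):

```python
def same_edge(grad_p, dir_p, grad_q, dir_q, T, A_alpha):
    """Local linking test, formulas (6) and (7): two points belong to the
    same edge if their gradient magnitudes differ by at most T and their
    gradient direction angles by at most A_alpha."""
    return abs(grad_p - grad_q) <= T and abs(dir_p - dir_q) <= A_alpha
```

Scanning along the discontinuous edge and connecting every neighboring pair that passes this test closes the gaps left by noise.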
Further, given the imaging characteristics of panoramic images, the closed inner-wall edge line is traversed in an annular manner: with the image center as the circle center, the azimuth β is swept from 0° to 360° at equal angular intervals, and the real-space point-cloud geometry of the pipe interior is computed in cylindrical coordinates per formula (5). Formula (8) then converts the cylindrical point-cloud data into Cartesian point-cloud data P(x, y, z):
x = P_R · sin β,  y = P_R · cos β,  z = 0    (8)
In the formula, P_R is the distance from the wall point to the coordinate origin and β is the azimuth.
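The azimuth traversal combined with formula (8) converts one laser slice into Cartesian points; a sketch under the stated conventions (β taken in degrees here for convenience; names illustrative):

```python
import math

def ring_to_cartesian(distances_by_azimuth):
    """Convert one laser-slice ring, given as (azimuth_deg, P_R) pairs from
    the 0-360 degree traversal, into Cartesian points via formula (8):
        x = P_R * sin(beta), y = P_R * cos(beta), z = 0.
    """
    points = []
    for beta_deg, p_r in distances_by_azimuth:
        beta = math.radians(beta_deg)
        points.append((p_r * math.sin(beta), p_r * math.cos(beta), 0.0))
    return points
```

Each crawl position contributes one such ring at z = 0 in its local laser-plane frame; the motion estimate described below shifts successive rings along the pipe axis to assemble the full point cloud.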
To perform 3D modeling of the long, narrow pipe, the motion of the gecko towing the active 3D stereoscopic panoramic vision sensor must be estimated. The detection coordinate system is established at the intersection of the panoramic laser plane and the axis of symmetry, and the SFM (structure-from-motion) algorithm is used to estimate the gecko's motion and obtain the coordinate transforms of the measurement points.
The concrete pipeline 3D modeling process is as follows. First, the omnidirectional vision sensor captures a panoramic image sequence during its motion. Then the SFM algorithm extracts and tracks feature points to obtain corresponding points across the sequence. Next, the gecko's motion is estimated by linear estimation, mainly using the positions of corresponding points in two images taken at different observation points. Finally, to estimate the motion more accurately, it is re-estimated by nonlinear estimation.
Feature extraction and tracking: to obtain corresponding points between images of the panoramic sequence, feature points are first extracted in the first frame and then tracked along the sequence using the SIFT (scale-invariant feature transform) algorithm. In practice, however, the distortion and projective deformation of panoramic images disturb corresponding-point collection and tracking: standard SIFT is a global algorithm, and panoramic vision cannot guarantee the scale-invariance condition over the whole image, which causes tracking errors. To improve SIFT tracking accuracy, corresponding-point collection and tracking are confined to a local range based on the gecko's motion pattern in the pipe — the sector-partitioned tracking method. The method rests on the assumption that the positions of corresponding points between two frames of the panoramic sequence cannot jump abruptly: as the gecko moves forward, corresponding points move within a given sector either from the outer ring of the panoramic image toward the image center, or from the center toward the outer ring. This sector constraint improves SIFT tracking accuracy. Concretely, feature points are extracted in frame N, and the same feature points are then tracked within the same sector of frame N+1.
Gecko motion estimation: to estimate the gecko's motion, the matrix relating two observation points — i.e., the relative position and orientation between the detection coordinate systems at two different locations — is computed. The essential matrix E is expressed by formula (9):
r′_iᵀ E r_i = 0 (9)
Wherein r_i = [x_i, y_i, z_i]ᵀ and r′_i = [x′_i, y′_i, z′_i]ᵀ are the light vectors of the corresponding points in the two panoramic images; formula (9) is rewritten as formula (10);
u_iᵀ e = 0 (10)
Wherein,
u_i = [x_i x′_i, y_i x′_i, z_i x′_i, x_i y′_i, y_i y′_i, z_i y′_i, x_i z′_i, y_i z′_i, z_i z′_i]ᵀ (11)
e = [e_11, e_12, e_13, e_21, e_22, e_23, e_31, e_32, e_33]ᵀ (12)
The essential matrix E is obtained by solving the simultaneous equations for 8 groups of corresponding light vectors r; the computation is expressed by formula (13);
min_e ||Ue||² (13)
Wherein U = [u_1, u_2, …, u_n]ᵀ; the essential matrix E is obtained from the eigenvector e of UᵀU associated with its smallest eigenvalue;
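The linear eight-point step of formulas (10)–(13) can be sketched as below — a hedged NumPy illustration with the hypothetical name `essential_from_rays`; taking the right singular vector of U for its smallest singular value is numerically equivalent to the eigenvector of UᵀU with the smallest eigenvalue:

```python
import numpy as np

def essential_from_rays(r, rp):
    """Linear estimate of E from n >= 8 corresponding light vectors.
    r, rp: (n, 3) arrays of unit rays satisfying rp_i^T E r_i = 0."""
    # Row u_i holds the products rp_j * r_k, matching the row-major
    # vectorisation e = [e11, e12, ..., e33]^T of formula (12).
    u = np.stack([np.outer(b, a).reshape(9) for a, b in zip(r, rp)])
    # The minimiser of ||U e||^2 with ||e|| = 1 is the right singular
    # vector of U for the smallest singular value.
    _, _, vt = np.linalg.svd(u)
    return vt[-1].reshape(3, 3)
```

The returned E is defined only up to scale, which is why the later scale matching against the panoramic laser section data is needed.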
The rotation matrix R and translation vector t are computed from the essential matrix E; as shown in formula (14), E is expressed by the rotation matrix R and the translation vector t = [t_x, t_y, t_z]ᵀ;
E = RT (14)
Here the matrix T is defined as follows:
T = [ 0, −t_z, t_y ; t_z, 0, −t_x ; −t_y, t_x, 0 ] (15)
One method of computing the rotation matrix R and the matrix T from the essential matrix E is matrix singular value decomposition, i.e. the SVD method, which is a numerical approach; however, the geometric meaning of the four groups of solutions produced by this decomposition is not intuitive, it is difficult to guarantee that the decomposition result is the unique correct solution, and real-time performance is limited; therefore the motion of the gecko also needs to go through a re-estimation and a scale matching process.
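A minimal sketch of the SVD decomposition discussed here, assuming the standard textbook factorization of an essential matrix into a rotation and a skew-symmetric translation part; `decompose_essential` is a hypothetical name, and the function returns exactly the four candidate solutions whose ambiguity the text describes:

```python
import numpy as np

def decompose_essential(E):
    """Return the four (R, t) candidates of an essential matrix via SVD."""
    U, _, Vt = np.linalg.svd(E)
    W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    Rs = []
    for Wk in (W, W.T):
        R = U @ Wk @ Vt
        if np.linalg.det(R) < 0:   # keep proper rotations only
            R = -R
        Rs.append(R)
    t = U[:, 2]                    # translation direction, sign ambiguous
    return [(Rs[0], t), (Rs[0], -t), (Rs[1], t), (Rs[1], -t)]
```

Only one of the four candidates places the scene in front of both viewpoints, which is what the visibility constraint of formula (19) later selects.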
Re-estimation of the gecko's motion: estimating the rotation matrix R and translation vector t from the essential matrix E by the SVD method does not necessarily give good results, because the SVD method ignores the various errors in the images; therefore the measurement error of each feature point in the panoramic image must be accounted for when re-estimating the gecko's motion; bundle adjustment is used here to re-estimate the motion of the gecko; the idea of the method is to minimize the sum of the feature re-projection errors.
Scale matching process: since the SFM algorithm only processes the input panoramic images, it contains no scale information; therefore the distance |t| between two observation points cannot be determined by the SFM algorithm alone; however, the result of the panoramic laser section method contains scaled coordinate information; scale matching is therefore achieved by fusing the results of these two processing methods.
First, the three-dimensional coordinates of a point on the pipe inner wall are measured by the panoramic laser section method; then the three-dimensional coordinates of the same point are measured by the SFM algorithm; finally, scale matching is achieved by bringing the two sets of three-dimensional coordinates of the same point as close as possible.
When the same point is far from the observation point, the deviation between the coordinate values of the same point obtained by the two different algorithms, i.e. the SFM algorithm and the panoramic laser section method, is more sensitive; based on this, the scale s′ is computed here from the logarithmic distance between the coordinate values, as shown in formula (16);
min Σ_{k=1..m} ||log(p_k) − log(s′ p′_k)||² (16)
In the formula, p_k = [x_k, y_k, z_k]ᵀ is the measurement result of the panoramic laser section method and p′_k = [x′_k, y′_k, z′_k]ᵀ is the measurement result of the SFM algorithm;
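The scale estimate of formula (16) has a closed-form least-squares solution in log space; the sketch below (with the hypothetical name `fit_scale`) assumes all matched coordinates are strictly positive, as the logarithm requires:

```python
import math

def fit_scale(laser_pts, sfm_pts):
    """Least-squares scale s' minimising sum ||log p_k - log(s' p'_k)||^2
    over matched point pairs; log s' is the mean of log(p) - log(p')."""
    diffs = [math.log(a) - math.log(b)
             for p, q in zip(laser_pts, sfm_pts)
             for a, b in zip(p, q)]
    return math.exp(sum(diffs) / len(diffs))
```

Multiplying every SFM coordinate by the returned s′ then expresses the SFM reconstruction in the metric units of the laser section measurement.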
Texture mapping: Fig. 8 shows the modeling process for a pipeline; after the pipe inner wall has been measured in 3D at a certain spatial point, the next measurement point of the inner wall is measured in 3D as the gecko crawls on; the 3D measurement results of the individual cross sections are spliced together, and finally texture mapping is applied to realize the automatic 3D modeling of the long narrow pipeline.
Another method of computing the rotation matrix R and the matrix T from the essential matrix E is: first, use the rank-2 property of the essential matrix E to obtain the translation t between the gecko's positions before and after the motion, as shown in formula (17);
t = [ −(e_{j2}Me_{k2} + e_{j3}Me_{k3})/e_{j1}, Me_{k2}, Me_{k3} ]ᵀ for j = 1, k ≠ j; [ Me_{k1}, −(e_{j1}Me_{k1} + e_{j3}Me_{k3})/e_{j2}, Me_{k3} ]ᵀ for j = 2, k ≠ j; [ Me_{k1}, Me_{k2}, −(e_{j1}Me_{k1} + e_{j2}Me_{k2})/e_{j3} ]ᵀ for j = 3, k ≠ j (17)
In the formula, e_{jk} is an element of the essential matrix E and Me_{jk} is the algebraic cofactor of e_{jk};
Then the two translation candidates t_1 = +t and t_2 = −t satisfying the constraint ||t||₂ = 1 are obtained, wherein
t = t / ||t||₂ (18)
Then the rotation matrix R between the gecko's positions before and after the motion is obtained by solving the rotation-matrix system of equations;
Formula (18) is substituted into formula (14) to compute the rotation matrix R, yielding four groups of candidate solutions; finally, the imaging depth of the spatial 3D point is computed directly from the four candidates, so that the unique correct solution satisfying the spatial 3D point visibility constraint is determined quickly; the computation is given by formula (19);
[σ₁, σ₂]ᵀ = [ −K⁻¹x̂ −RᵀK′⁻¹ŷ ]⁺ Rᵀt (19)
In the formula, K⁻¹ and K′⁻¹ are the inverses of the intrinsic and extrinsic parameter matrices of the omnidirectional vision sensor; σ₁ and σ₂ are the imaging depths of the corresponding point in the panoramic images before and after the gecko's motion, obtained from the panoramic laser section measurement; Rᵀ is the transpose of the rotation matrix R; t is the translation vector; x̂ and ŷ are the image points of the corresponding point in the panoramic images before and after the gecko's motion;
As long as σ₁ and σ₂ in formula (19) both satisfy the constraint of being greater than zero, the corresponding R and t are the unique correct solution.
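The visibility constraint of formula (19) — both imaging depths greater than zero — can be checked per candidate by triangulating one corresponding point; the sketch below uses the generic relation X₂ = R·X₁ + t on unit rays rather than the patent's calibrated form, and `depths` is a hypothetical helper name:

```python
import numpy as np

def depths(R, t, x1, x2):
    """Triangulated depths (sigma1, sigma2) of one point seen along unit
    rays x1 (first view) and x2 (second view), for the motion
    X2 = R @ X1 + t.  Both depths positive <=> the point lies in front
    of both viewpoints, selecting the unique correct (R, t)."""
    # sigma1 * (R x1) - sigma2 * x2 = -t, solved in least squares.
    A = np.column_stack((R @ x1, -x2))
    s, *_ = np.linalg.lstsq(A, -t, rcond=None)
    return s[0], s[1]
```

Running this for each of the four (R, t) candidates and keeping the one with both depths positive implements the selection rule stated above.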
After all, the load-carrying capacity of the gecko is limited; in order to obtain panoramic images of the long narrow pipe inner wall in real time and process them, the panoramic images obtained by the omnidirectional vision sensor are here transmitted to the pipeline detection and analysis system over a wireless link; the range of current wireless communication has reached the kilometer level, which basically meets the image transmission requirements of pipeline detection; the pipeline detection and analysis system performs defect detection and 3D modeling of the inspected pipeline from the panoramic video images transmitted over the wireless communication network.
For 3D modeling of the long narrow pipeline, the present invention splices together the 3D measurement results of the cross sections obtained by the panoramic laser section method and finally applies texture mapping to realize automatic 3D modeling of the long narrow pipeline; triangular meshing is a method of building discrete point clouds in space into triangular patches of an object surface, and since the point cloud data of each pipe cross section is obtained by processing each frame of the panoramic section image, the resulting point cloud is regularly arranged; a triangular mesh model is therefore adopted here for the three-dimensional reconstruction, and the texture in the panoramic image is mapped onto the 3D model.
The beneficial effects of the present invention are mainly:
1) providing a cost-effective, lightweight active panoramic vision sensor that obtains the depth information of real objects quickly and with high precision;
2) providing a structural health monitoring means for small-bore pipelines with clear advantages in energy supply, kinematic dexterity, concealment, maneuverability and adaptability;
3) proposing a biological robot suitable for crawling in long narrow pipelines whose crawling behavior can also be guided.
Accompanying drawing explanation
Fig. 1 is a structural diagram of an omnidirectional vision sensor;
Fig. 2 is a schematic diagram of concave mirror imaging;
Fig. 3 is the front view of the concave arc mirror;
Fig. 4 is a structural diagram of the panoramic laser light source;
Fig. 5 is a structural diagram of an active panoramic vision sensor;
Fig. 6 is the laser-projection panoramic image obtained by the active panoramic vision sensor;
Fig. 7 is an overall macroscopic schematic diagram of pipeline detection by the biological-robot-based miniature wireless active panoramic vision sensor;
Fig. 8 is the flow chart of determining the movement track of the gecko in the pipeline by the SFM algorithm and of the 3D modeling;
Fig. 9 is the motion estimation flow chart of the active panoramic vision sensor;
Fig. 10 is a comparison diagram of the angular resolution characteristic curves of the omnidirectional vision sensor within a certain range;
Fig. 11 is the imaging schematic diagram of the omnidirectional vision sensor;
Fig. 12 is an illustration of the spatial point cloud calculation for the pipe inner wall.
Embodiment
The invention is further described below with reference to the accompanying drawings.
Embodiment 1
With reference to Figs. 1–12, the hardware of a biological-robot-based miniature wireless active panoramic vision sensor mainly comprises: a gecko and a pipeline visual detection device; the pipeline visual detection device is strapped onto the gecko, and as the gecko crawls forward in a small-bore long narrow pipeline it tows the pipeline visual detection device, which captures panoramic images of the pipe inner wall.
The pipeline visual detection device mainly comprises a wireless communication unit, the active panoramic vision sensor and a power supply.
The active panoramic vision sensor mainly comprises an omnidirectional vision sensor, an LED band light source and a panoramic laser light source;
The omnidirectional vision sensor comprises a concave arc mirror 2, a concave arc mirror cover 1, transparent glass 3, a fixing screw 4, an outer cover 5 and a camera unit 7; as shown in Fig. 1, the axis of the concave arc mirror has a threaded hole; the center of the transparent glass has a small hole; the outer cover is assembled from two semi-cylindrical halves whose male and female snap fasteners mate with each other; during assembly the transparent glass is first embedded into one semi-cylindrical half of the outer cover, the male and female fasteners of the two halves are then aligned, and external force applied to their outer walls joins them into one outer cover that secures the transparent glass; the bottom of the outer cover has a lens hole; the fixing screw is then passed through the small hole in the transparent glass and screwed into the threaded hole on the concave arc mirror; the lens of the camera unit is fixed in the lens hole of the outer cover; the center of the concave arc mirror cover has a small hole 8.
The panoramic laser light source comprises a conical mirror 11, a transparent housing 12, a ring-shaped laser transmitter 13 and a base 14; the ring-shaped laser transmitter is fixed on the base with its emission axis coincident with the base axis; the conical mirror is fixed to one end of the transparent housing, and the base carrying the ring-shaped laser transmitter is fixed to the other end of the transparent housing; the circular laser emitted by the ring-shaped laser transmitter is reflected by the conical mirror into panoramic laser light perpendicular to the axis; the back of the conical mirror has a threaded hole 15, as shown in Fig. 4.
The omnidirectional vision sensor and the panoramic laser light source are fixedly connected as follows: a screw is passed through the small hole 8 in the concave arc mirror cover, aligned with the threaded hole 15 on the back of the conical mirror, and tightened; the concave arc mirror cover 1 is then snap-connected to the concave arc mirror 2; through the above connections the omnidirectional vision sensor and the panoramic laser light source are assembled into the active panoramic vision sensor, as shown in Fig. 5.
The vertical section of the catadioptric mirror is designed as a concave arc curve, as shown in Fig. 2; according to the principle of optical reflection, the following formulas are obtained:
α = θ + r′ − π/2 (1)
δ = 2θ + r′ − π (2)
θ = arcsin(r/R) (3)
δ = 2arcsin(r/R) + r′ − π (4)
In the formulas, r is the height of the incident beam, r′ is the angle of the incident beam, δ is the angle of the reflected beam, R is the arc radius of the mirror surface, α is the incidence angle of the incident beam, and θ is the grazing angle of the circular mirror curve. For a camera of a chosen fixed focal length, the angle δ of the reflected beam can only be imaged on the imaging chip within a fixed range; that is, the height and angle of the incident beam must satisfy certain conditions to be imaged on the imaging chip.
The catadioptric mirror of the omnidirectional vision sensor is formed by sweeping the concave arc curve one full turn around the axis of symmetry, as shown in Fig. 11; this design is convenient for plastic mold forming, the distortion of the panoramic imaging is small, the vertical imaging range is close to 90°, and the imaging focal length is short; these characteristics are well suited to the visual inspection of small pipe inner walls.
Based on the above panoramic imaging geometry, the calibration problem of the omnidirectional vision sensor is studied below; as shown in Fig. 12, the object of calibration is to find the correspondence between the height r of the incident beam and the angle r′ of the incident beam.
In the present invention, the actual physical space coordinate system is established at the intersection of the axis of the panoramic laser light source and the panoramic laser plane perpendicular to that axis, the coordinate values being denoted X, Y, Z respectively, as shown in Fig. 12; the panoramic image coordinate system is established at the center of the panoramic image, the coordinate values being denoted u, v respectively, as shown in Fig. 11; the coordinate system of the catadioptric mirror is established at the center of the concave arc, the coordinate values being denoted X′, Y′ respectively, as shown in Fig. 12.
The panoramic laser projected onto a point P(x, y, z) on the pipe inner wall is reflected by the concave arc mirror and imaged on the imaging plane through the omnidirectional vision sensor, yielding a panoramic image carrying the panoramic laser information; from the calibration result, the beam height r and angle r′ are obtained for the point p(u′, v′) in the panoramic image; to compute the spatial coordinate values of the point P(x, y, z) on the pipe inner wall, the real space coordinate system is established here at the intersection of the panoramic laser projection plane and the axis of symmetry, and a cylindrical coordinate system is set up; by design, the arc center of the concave arc mirror is O(B, −H), where B is the distance from the arc center of the concave arc mirror to the cylindrical coordinate axis and H is the vertical distance from the arc center of the concave arc mirror to the panoramic laser projection plane; the distance P_R from the point on the pipe inner wall to the coordinate origin is computed from the geometric relation of formula (5),
P_R = [H − r + (B − √(R² − r²)) tan r′] tan r′ (5)
In the formula, H is the vertical distance from the arc center of the concave arc mirror to the panoramic laser projection plane, B is the distance from the arc center of the concave arc mirror to the cylindrical coordinate axis, r is the height at the concave arc mirror of the beam reflected from the panoramic laser point on the pipe inner wall, r′ is the angle of the beam reflected from the panoramic laser point on the pipe inner wall, and R is the radius of curvature of the concave arc mirror.
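Formula (5) as reconstructed above can be evaluated directly; the sketch below (`point_distance` is a hypothetical name) is only as reliable as that reconstruction and treats all arguments as already calibrated quantities:

```python
import math

def point_distance(r, r_prime, H, B, R):
    """Distance P_R of a pipe-wall laser point from the coordinate origin,
    following the reconstructed form of formula (5): the beam height r,
    beam angle r_prime, and mirror geometry (H, B, R) are calibration
    outputs."""
    return (H - r + (B - math.sqrt(R * R - r * r)) * math.tan(r_prime)) \
        * math.tan(r_prime)
```

With the angle r′ = 0 the reflected beam is parallel to the laser plane and the formula degenerates to zero, as the tan factor suggests.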
In the actual detection process, the active panoramic vision sensor is towed by the gecko; as the gecko crawls along the pipe axis, the panoramic laser light source provides panoramic scanning light across the pipe cross section, and the omnidirectional vision sensor acquires the section images of the panoramic laser scan of the pipe inner wall; the laser projection position information then needs to be parsed from these panoramic laser scan section images; the present invention calls this processing the panoramic laser section method.
The laser projection position information is extracted by the inter-frame difference method, which obtains the laser projection points by differencing the panoramic laser scan section images acquired at two adjacent positions; as the gecko crawls along the pipe axis, the two section images acquired at successive positions show a clear frame-to-frame difference; subtracting the two frames gives the absolute value of the brightness difference of the two images, and judging whether it exceeds a threshold extracts the laser projection points in the panoramic section image.
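The inter-frame difference extraction can be sketched as follows — frames as nested lists of brightness values, with the threshold playing the role described in the text; `laser_points` is a hypothetical helper name:

```python
def laser_points(frame_a, frame_b, thresh):
    """Pixels whose brightness changes by more than `thresh` between two
    consecutive panoramic section images (frame-difference extraction)."""
    pts = []
    for y, (row_a, row_b) in enumerate(zip(frame_a, frame_b)):
        for x, (a, b) in enumerate(zip(row_a, row_b)):
            if abs(a - b) > thresh:
                pts.append((x, y))
    return pts
```

In practice the frames would be grayscale image arrays and the changed pixels would be the laser stripe that moved along the pipe axis.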
Since noise is present in the detection process, the obtained section edge information can be discontinuous; discontinuous section edges therefore need to be joined by a local linking method; the idea of the linking algorithm is to decide whether two points belong to the same edge by comparing the response strength and gradient direction of the gradient operator, judged by formula (6) and formula (7):
|∇f(x, y) − ∇f(x′, y′)| ≤ T (6)
|α(x, y) − α(x′, y′)| ≤ A_α (7)
In the formulas, ∇f(x, y) is the gradient value of a boundary point in the pipe inner wall neighborhood, ∇f(x′, y′) is the gradient value of the point to be confirmed, T is the gradient judgment threshold, α(x, y) is the direction angle of the gradient vector of the boundary point in the pipe inner wall neighborhood, α(x′, y′) is the direction angle of the gradient vector of the point to be confirmed, and A_α is the direction-angle judgment threshold of the gradient vector.
When both formula (6) and formula (7) hold, the gradient value and direction angle of the point to be confirmed are similar to those of the boundary point in the pipe inner wall neighborhood, the two points are connected, and the point to be confirmed belongs to the pipe inner wall; the above processing yields one complete closed pipe inner wall edge line on the image plane.
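The local linking test of formulas (6) and (7) reduces to two threshold comparisons; `same_edge` is a hypothetical name for this predicate:

```python
def same_edge(grad1, dir1, grad2, dir2, T, A_alpha):
    """Linking test of formulas (6) and (7): two points belong to the same
    edge when both gradient magnitude and gradient direction are close."""
    return abs(grad1 - grad2) <= T and abs(dir1 - dir2) <= A_alpha
```

Scanning the neighborhood of each confirmed boundary point with this predicate closes the gaps that noise leaves in the section edge line.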
Further, in view of the imaging characteristics of the panoramic image, the obtained closed pipe inner wall edge line is traversed in an annular manner: with the image center as the circle center, the azimuth β is traversed from 0° to 360° at equal angular intervals, and the real spatial point cloud geometric data of the pipe interior, expressed in the cylindrical coordinate system, is computed according to formula (5); formula (8) then converts the point cloud geometric data expressed in the cylindrical coordinate system into the point cloud geometric data P(x, y, z) expressed in the Cartesian coordinate system;
x = P_R × sin β, y = P_R × cos β, z = 0 (8)
In the formula, P_R is the distance from the point on the pipe inner wall to the coordinate origin, and β is the azimuth;
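Formula (8) is a direct cylindrical-to-Cartesian conversion on the laser section plane (z = 0); `to_cartesian` is a hypothetical helper name:

```python
import math

def to_cartesian(P_R, beta):
    """Formula (8): convert (P_R, beta) on the laser section plane to
    Cartesian (x, y, z); the section plane itself is z = 0."""
    return (P_R * math.sin(beta), P_R * math.cos(beta), 0.0)
```

Stacking these points over a full 0–360° sweep, one frame per cross section, gives the regularly arranged point cloud the meshing step relies on.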
In order to perform 3D modeling of the long narrow pipeline, the motion of the gecko towing the active 3D stereo panoramic vision sensor needs to be estimated; here the detection coordinate system is established at the intersection of the panoramic laser projection plane and the axis of symmetry, and the SFM algorithm, i.e. structure from motion, is used to estimate the motion of the gecko and obtain the coordinate transform information between measurement points.
The concrete pipeline 3D modeling process is as follows: first, the omnidirectional vision sensor acquires an omnidirectional image sequence during its motion; then the SFM algorithm extracts and tracks feature points to obtain the corresponding points in the omnidirectional image sequence; next, the motion of the gecko is estimated with a linear estimation method, mainly using the positions of the two images of the corresponding points taken at each observation point; finally, in order to estimate the motion of the gecko more accurately, the motion of the gecko is re-estimated with a nonlinear estimation algorithm.
Feature point extraction and tracking: in order to obtain the corresponding points between images in the omnidirectional image sequence, feature points are first extracted in the first frame and then tracked along the image sequence; feature tracking uses the SIFT algorithm, i.e. the scale-invariant feature transform algorithm; in practice, however, the distortion and projection deformation of omnidirectional images degrade corresponding-point collection and tracking; this is because the existing SIFT algorithm is a global algorithm, and panoramic vision can hardly guarantee the scale-invariance condition over the whole image, which leads to tracking errors; in order to improve the tracking accuracy of the SIFT algorithm, corresponding-point collection and tracking are here confined to a local range according to the motion characteristics of the gecko in the pipe, namely a divided-sector tracking method; the method rests on the assumption that the spatial positions of corresponding points between two frames of the omnidirectional image sequence cannot change abruptly: as the gecko moves forward, a corresponding point moves within a certain sector range either from the outer ring of the panoramic image toward the image center, or from the image center toward the outer ring of the panoramic image; this sector constraint improves the tracking accuracy of the SIFT algorithm; the concrete implementation is: extract feature points in frame N, then track the same feature points within the same sector of frame N+1.
The motion estimation of the gecko: in order to estimate the motion of the gecko, the essential matrix between two observation points is computed here, i.e. the matrix encoding the relative position and orientation between the detection coordinate systems at the two different locations; the essential matrix E is expressed by formula (9);
r′_iᵀ E r_i = 0 (9)
Wherein r_i = [x_i, y_i, z_i]ᵀ and r′_i = [x′_i, y′_i, z′_i]ᵀ are the light vectors of the corresponding points in the two panoramic images; formula (9) is rewritten as formula (10);
u_iᵀ e = 0 (10)
Wherein,
u_i = [x_i x′_i, y_i x′_i, z_i x′_i, x_i y′_i, y_i y′_i, z_i y′_i, x_i z′_i, y_i z′_i, z_i z′_i]ᵀ (11)
e = [e_11, e_12, e_13, e_21, e_22, e_23, e_31, e_32, e_33]ᵀ (12)
In formula (11), x_i x′_i, y_i x′_i, z_i x′_i, x_i y′_i, y_i y′_i, z_i y′_i, x_i z′_i, y_i z′_i and z_i z′_i are the products between the components of the light vectors r_i and r′_i of the corresponding points in the two panoramic images; in formula (12), e_11 through e_33 are the elements of the matrix e.
The essential matrix E is obtained by solving the simultaneous equations for 8 groups of corresponding light vectors r; the computation is expressed by formula (13);
min_e ||Ue||² (13)
Wherein U = [u_1, u_2, …, u_n]ᵀ; the essential matrix E is obtained from the eigenvector e of UᵀU associated with its smallest eigenvalue;
The rotation matrix R and translation vector t are computed from the essential matrix E; as shown in formula (14), E is expressed by the rotation matrix R and the translation vector t = [t_x, t_y, t_z]ᵀ;
E = RT (14)
Here the matrix T is defined as follows:
T = [ 0, −t_z, t_y ; t_z, 0, −t_x ; −t_y, t_x, 0 ] (15)
One method of computing the rotation matrix R and the matrix T from the essential matrix E is matrix singular value decomposition, i.e. the SVD method, which is a numerical approach; however, the geometric meaning of the four groups of solutions produced by this decomposition is not intuitive, it is difficult to guarantee that the decomposition result is the unique correct solution, and real-time performance is limited; therefore the motion of the gecko also needs to go through a re-estimation and a scale matching process.
Re-estimation of the gecko's motion: estimating the rotation matrix R and translation vector t from the essential matrix E by the SVD method does not necessarily give good results, because the SVD method ignores the various errors in the images; therefore the measurement error of each feature point in the panoramic image must be accounted for when re-estimating the gecko's motion; bundle adjustment is used here to re-estimate the motion of the gecko; the idea of the method is to minimize the sum of the feature re-projection errors.
Scale matching process: since the SFM algorithm only processes the input panoramic images, it contains no scale information; therefore the distance |t| between two observation points cannot be determined by the SFM algorithm alone; however, the result of the panoramic laser section method contains scaled coordinate information; scale matching is therefore achieved by fusing the results of these two processing methods.
First, the three-dimensional coordinates of a point on the pipe inner wall are measured by the panoramic laser section method; then the three-dimensional coordinates of the same point are measured by the SFM algorithm; finally, scale matching is achieved by bringing the two sets of three-dimensional coordinates of the same point as close as possible.
When the same point is far from the observation point, the deviation between the coordinate values of the same point obtained by the two different algorithms, i.e. the SFM algorithm and the panoramic laser section method, is more sensitive; based on this, the scale s′ is computed here from the logarithmic distance between the coordinate values, as shown in formula (16);
min Σ_{k=1..m} ||log(p_k) − log(s′ p′_k)||² (16)
In the formula, p_k = [x_k, y_k, z_k]ᵀ is the measurement result of the panoramic laser section method and p′_k = [x′_k, y′_k, z′_k]ᵀ is the measurement result of the SFM algorithm;
Texture mapping: Fig. 8 shows the modeling process for a pipeline; after the pipe inner wall has been measured in 3D at a certain spatial point, the next measurement point of the inner wall is measured in 3D as the gecko crawls on; the 3D measurement results of the individual cross sections are spliced together, and finally texture mapping is applied to realize the automatic 3D modeling of the long narrow pipeline.
After all, the load-carrying capacity of the gecko is limited; in order to obtain panoramic images of the long narrow pipe inner wall in real time and process them, the panoramic images obtained by the omnidirectional vision sensor are here transmitted to the pipeline detection and analysis system over a wireless link; the range of current wireless communication has reached the kilometer level, which basically meets the image transmission requirements of pipeline detection; the pipeline detection and analysis system performs defect detection and 3D modeling of the inspected pipeline from the panoramic video images transmitted over the wireless communication network.
Embodiment 2
The rest is identical to Embodiment 1; the difference lies in the method of computing the rotation matrix R and the matrix T from the essential matrix E; the method is: first, use the rank-2 property of the essential matrix E to obtain the translation t between the gecko's positions before and after the motion, as shown in formula (17);
t = [ −(e_{j2}Me_{k2} + e_{j3}Me_{k3})/e_{j1}, Me_{k2}, Me_{k3} ]ᵀ for j = 1, k ≠ j; [ Me_{k1}, −(e_{j1}Me_{k1} + e_{j3}Me_{k3})/e_{j2}, Me_{k3} ]ᵀ for j = 2, k ≠ j; [ Me_{k1}, Me_{k2}, −(e_{j1}Me_{k1} + e_{j2}Me_{k2})/e_{j3} ]ᵀ for j = 3, k ≠ j (17)
In the formula, e_{jk} is an element of the essential matrix E and Me_{jk} is the algebraic cofactor of e_{jk};
Then the two translation candidates t_1 = +t and t_2 = −t satisfying the constraint ||t||₂ = 1 are obtained, wherein
t = t / ||t||₂ (18)
Then the rotation matrix R between the gecko's positions before and after the motion is obtained by solving the rotation-matrix system of equations;
Formula (18) is substituted into formula (14) to compute the rotation matrix R, yielding four groups of candidate solutions; finally, the imaging depth of the spatial 3D point is computed directly from the four candidates, so that the unique correct solution satisfying the spatial 3D point visibility constraint is determined quickly; the computation is given by formula (19);
[σ₁, σ₂]ᵀ = [ −K⁻¹x̂ −RᵀK′⁻¹ŷ ]⁺ Rᵀt (19)
In the formula, K⁻¹ and K′⁻¹ are the inverses of the intrinsic and extrinsic parameter matrices of the omnidirectional vision sensor; σ₁ and σ₂ are the imaging depths of the corresponding point in the panoramic images before and after the gecko's motion, obtained from the panoramic laser section measurement; Rᵀ is the transpose of the rotation matrix R; t is the translation vector; x̂ and ŷ are the image points of the corresponding point in the panoramic images before and after the gecko's motion;
As long as σ₁ and σ₂ in formula (19) both satisfy the constraint of being greater than zero, the corresponding R and t are the unique correct solution.

Claims (10)

1. A miniature wireless active omnidirectional vision sensor based on a biological robot, characterized in that: said miniature wireless active panoramic vision sensor based on a biological robot comprises a gecko and a pipeline visual detection device; said pipeline visual detection device is strapped onto said gecko, and said gecko, crawling forward in a small-bore long narrow pipeline, tows said pipeline visual detection device to capture panoramic images of the pipe inner wall;
Said pipeline visual detection device mainly comprises a wireless communication unit, an active panoramic vision sensor and a power supply;
Said active panoramic vision sensor mainly comprises an omnidirectional vision sensor, an LED band light source and a panoramic laser light source;
Said omnidirectional vision sensor comprises a concave arc mirror, a concave arc mirror cover, transparent glass, a fixing screw, an outer cover and a camera unit;
The axis of said concave arc mirror has a threaded hole; the center of said transparent glass has a small hole; said outer cover is assembled from two semi-cylindrical halves whose male and female snap fasteners mate with each other; during assembly the transparent glass is first embedded into one semi-cylindrical half of the outer cover, the male and female fasteners of the two halves are then aligned, and external force applied to their outer walls joins them into one outer cover that secures the transparent glass; the bottom of said outer cover has a lens hole; the fixing screw is then passed through the small hole in said transparent glass and screwed into the threaded hole on the concave arc mirror; the lens of said camera unit is fixed in the lens hole of said outer cover; the center of said concave arc mirror cover has a small hole;
Said panoramic laser light source comprises a conical mirror, a transparent housing, a ring-shaped laser transmitter and a base; the ring-shaped laser transmitter is fixed on the base with its emission axis coincident with the base axis; the conical mirror is fixed to one end of the transparent housing, and the base carrying the ring-shaped laser transmitter is fixed to the other end of the transparent housing; the circular laser emitted by the ring-shaped laser transmitter is reflected by the conical mirror into panoramic laser light perpendicular to the axis; the back of said conical mirror has a threaded hole;
Said omnidirectional vision sensor is coaxially and fixedly connected with said panoramic laser light source, and the LED band light source is looped around the lower fixing seat of said omnidirectional vision sensor;
Said pipeline visual detection device is mainly used for disease detection, 3D measurement and 3D panoramic modeling of small-bore long narrow pipelines.
2. The biological robot-based miniature wireless active omnidirectional vision sensor as claimed in claim 1, characterized in that: the design objectives of the omnidirectional vision sensor are a vertical field of view as large as possible and an imaging focal length as short as possible, and the omnidirectional vision sensor is formed by plastic-mold processing;
The vertical section of the catadioptric mirror of the omnidirectional vision sensor is designed as a concave circular-arc curve; according to the principle of optical reflection, the following formulas are obtained:
α = θ + r′ − π/2   (1)
δ = 2θ + r′ − π   (2)
θ = arcsin(r/R)   (3)
δ = 2·arcsin(r/R) + r′ − π   (4)
In the formulas, r is the height of the incident beam, r′ is the angle of the incident beam, δ is the angle of the reflected beam, R is the arc radius of the mirror surface, α is the incidence angle of the incident beam, and θ is the grazing angle on the circular curve of the mirror surface;
The catadioptric mirror of the omnidirectional vision sensor is formed by revolving the concave arc curve one full turn around the axis of symmetry.
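Formulas (1)–(4) can be checked numerically. A minimal sketch, assuming all angles are in radians; the function name and sample values below are illustrative, not from the patent:

```python
import math

def mirror_reflection_angles(r, r_prime, R):
    """For a concave circular-arc mirror of arc radius R, compute the
    grazing angle theta, the incidence angle alpha, and the reflected-beam
    angle delta for an incident beam of height r and angle r_prime,
    following formulas (1)-(4)."""
    theta = math.asin(r / R)               # formula (3)
    alpha = theta + r_prime - math.pi / 2  # formula (1)
    delta = 2 * theta + r_prime - math.pi  # formulas (2)/(4)
    return theta, alpha, delta
```

Substituting (3) into (2) reproduces (4), so `delta` can be computed either way.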
3. The biological robot-based miniature wireless active omnidirectional vision sensor as claimed in claim 1 or 2, characterized in that: for the active 3D panoramic vision sensor, the real-world coordinate system is established at the intersection of the axis of the panoramic laser light source with the panoramic laser sheet perpendicular to that axis, its coordinates denoted X, Y, Z; the panoramic image coordinate system is established at the center of the panoramic image, its coordinates denoted u, v; the coordinate system of the catadioptric mirror is established at the center of the concave arc, its coordinates denoted X′, Y′;
To perform 3D measurement of the pipeline, the omnidirectional vision sensor must be calibrated, i.e. the correspondence between a point p(u′, v′) on the imaging plane and the height r and angle r′ of the incident beam must be found:
r = f(p(u′, v′)),  r′ = g(p(u′, v′))   (20)
In the formula, p(u′, v′) is a point on the panoramic imaging plane, r is the height of the incident beam, r′ is the angle of the incident beam, and f(…) and g(…) denote the two functional relationships.
4. The biological robot-based miniature wireless active omnidirectional vision sensor as claimed in claim 3, characterized in that: for 3D measurement of the inner pipe wall, in order to compute the spatial coordinates of a point P(x, y, z) on the inner pipe wall, let the arc center of the concave circular-arc mirror be O(B, −H), where B is the distance from the arc center to the axis of the cylindrical coordinate system and H is the vertical distance from the arc center to the panoramic laser projection plane; the distance P_R from a point on the inner pipe wall to the origin of the spatial coordinate system is calculated from the geometric relationship in formula (5):
P_R = [H − r + (B − √(R² − r²))·tan r′] / tan r′   (5)
In the formula, H is the vertical distance from the arc center of the concave mirror to the panoramic laser projection plane, B is the distance from the arc center to the axis of the cylindrical coordinate system, r is the height on the concave mirror of the beam reflected from the panoramic laser spot on the inner pipe wall, r′ is the angle of that reflected beam, and R is the radius of curvature of the concave mirror.
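Formula (5) can be evaluated directly. A minimal sketch, assuming the flattened fraction in the source reads P_R = [H − r + (B − √(R² − r²))·tan r′] / tan r′ and angles are in radians; the function name and sample values are illustrative:

```python
import math

def radial_distance(r, r_prime, R, B, H):
    """Distance P_R from the pipe axis to the wall point lit by the
    panoramic laser sheet, per formula (5): the reflection point on the
    arc mirror sits at radial offset B - sqrt(R^2 - r^2) and height
    H - r above the laser plane, and the reflected beam descends
    toward the wall at angle r_prime."""
    radial_offset = B - math.sqrt(R ** 2 - r ** 2)
    return (H - r + radial_offset * math.tan(r_prime)) / math.tan(r_prime)
```

Algebraically this is the reflection point's radial offset plus the horizontal run (H − r)/tan r′ of the beam down to the laser plane.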
5. The biological robot-based miniature wireless active omnidirectional vision sensor as claimed in claim 4, characterized in that: the panoramic laser sectioning method is used to obtain the section image and the spatial coordinates of the entire ring of inner pipe wall illuminated by the panoramic laser projection plane, comprising an inter-frame difference method, a local connection method and an annular traversal;
The inter-frame difference method extracts the laser projection points by differencing the panoramic laser-scan section images obtained at two adjacent positions; as the gecko advances, a marked difference appears along the spatial axis of the inner pipe wall between the two frames of panoramic laser-scan section images obtained at the front and rear positions; the two frames are subtracted, the absolute value of the brightness difference of the two frames is computed, and each pixel is compared against a threshold to extract the laser projection points in the panoramic section image;
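The inter-frame difference step can be sketched with NumPy; the function name and the threshold value are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def laser_points_by_frame_difference(frame_a, frame_b, threshold=30):
    """Extract candidate laser-projection pixels from two consecutive
    panoramic slice images (grayscale uint8 arrays): keep the pixels
    whose absolute brightness difference exceeds `threshold`."""
    # Widen to int16 so the subtraction cannot wrap around at 0/255.
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    return diff > threshold
```

The returned boolean mask marks the laser projection points of the panoramic section image for the later edge-linking step.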
Since noise is present during detection, the extracted section edge information can be discontinuous; the discontinuous section edges therefore need to be connected by the local connection method; the idea of the connection algorithm is to decide whether two points belong to the same edge by comparing the response intensity and gradient direction of a gradient operator, judged with formulas (6) and (7):
|∇f(x, y) − ∇f(x′, y′)| ≤ T_∇   (6)
|α(x, y) − α(x′, y′)| ≤ A_α   (7)
In the formulas, ∇f(x, y) is the gradient magnitude of a boundary point in the neighbourhood on the inner pipe wall, ∇f(x′, y′) is the gradient magnitude of the point to be confirmed, T_∇ is the gradient judgment threshold, α(x, y) is the direction angle of the gradient vector of the boundary point in the neighbourhood on the inner pipe wall, α(x′, y′) is the direction angle of the gradient vector of the point to be confirmed, and A_α is the judgment threshold for the direction angle of the gradient vector;
When both formula (6) and formula (7) hold, the gradient magnitude and direction angle of the point to be confirmed are similar to those of the boundary point in the neighbourhood on the inner pipe wall, so the two points are connected, i.e. the point to be confirmed belongs to the inner pipe wall; repeating this process yields one complete closed inner-wall edge line on the image plane;
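The two-threshold test of formulas (6) and (7) amounts to a short predicate; the function and parameter names below are illustrative:

```python
def same_edge(grad_a, dir_a, grad_b, dir_b, t_grad, t_dir):
    """Decide whether a candidate point belongs to the same edge as a
    known boundary point: both the gradient magnitudes (formula 6) and
    the gradient direction angles (formula 7) must agree within the
    thresholds t_grad (T_grad) and t_dir (A_alpha)."""
    return abs(grad_a - grad_b) <= t_grad and abs(dir_a - dir_b) <= t_dir
```

Applying this predicate between each unconfirmed point and its neighbouring boundary points links the discontinuous section edges into one closed edge line.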
Further, exploiting the imaging characteristics of the panoramic image, the closed inner-wall edge line is processed by annular traversal: with the image center as the origin, the azimuth β is traversed from 0° to 360° at equal angular intervals, and the real-space point cloud of the pipe interior in cylindrical coordinates is computed by formula (5); formula (8) then converts this cylindrical-coordinate point cloud into the pipe-interior real-space point cloud P(x, y, z) in Cartesian coordinates:
x = P_R·sin β,  y = P_R·cos β,  z = 0   (8)
In the formula, P_R is the distance from the point on the inner pipe wall to the origin of the spatial coordinate system, and β is the azimuth.
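The annular traversal plus formula (8) can be sketched as follows; the function name and input format (a list of (azimuth in degrees, P_R) pairs) are illustrative assumptions:

```python
import math

def slice_point_cloud(radii_by_azimuth):
    """Annular traversal of one closed pipe-wall edge line: for each
    azimuth beta (degrees) and measured radial distance P_R, emit the
    Cartesian point of formula (8); z = 0 because every point of one
    slice lies in the laser projection plane."""
    points = []
    for beta_deg, p_r in radii_by_azimuth:
        beta = math.radians(beta_deg)
        points.append((p_r * math.sin(beta), p_r * math.cos(beta), 0.0))
    return points
```

Stepping `beta_deg` from 0° to 360° at equal intervals reproduces the traversal described in the claim.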
6. The biological robot-based miniature wireless active omnidirectional vision sensor as claimed in claim 5, characterized in that: for 3D panoramic modeling of the pipeline, the motion of the gecko carrying the active 3D panoramic vision sensor is estimated; the detection coordinate system is established at the single viewpoint of the omnidirectional vision sensor, and the SFM (structure from motion) algorithm is used to estimate the gecko's motion and obtain the coordinate transformation of the measurement points;
The concrete 3D panoramic modeling process is as follows: first, the omnidirectional vision sensor acquires the omnidirectional image sequence along its motion; then the SFM algorithm extracts and tracks feature points to obtain the corresponding points across the omnidirectional image sequence; next, the gecko's motion is estimated with a linear estimation method, mainly using the positions of the two images taken at each observation point through the corresponding points; finally, to estimate the gecko's motion more accurately, the motion is re-estimated with a nonlinear estimation algorithm.
7. The biological robot-based miniature wireless active omnidirectional vision sensor as claimed in claim 1 or 6, characterized in that: for feature point extraction and tracking, in order to improve the tracking accuracy of the SIFT algorithm, and according to the motion characteristics of the gecko in the pipe, feature correspondence and tracking are confined to a local range, i.e. a sector-division tracking method; concretely, feature points are extracted in frame N, and each feature point is then tracked within the same sector of frame N+1.
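The sector-division idea reduces to assigning each image point an azimuthal sector around the panoramic image center, and searching for a match only within the same sector of the next frame. A minimal sketch; the function name and the sector count of 36 are illustrative assumptions:

```python
import math

def sector_index(u, v, cu, cv, n_sectors=36):
    """Map an image point (u, v) to one of n_sectors equal azimuthal
    sectors around the panoramic image center (cu, cv), so that a
    feature found in frame N is searched only in the matching sector
    of frame N+1."""
    angle = math.atan2(v - cv, u - cu) % (2 * math.pi)
    return int(angle / (2 * math.pi / n_sectors))
```

Two candidate correspondences are considered only when their `sector_index` values agree across the two frames.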
8. The biological robot-based miniature wireless active omnidirectional vision sensor as claimed in claim 7, characterized in that: to estimate the motion of the gecko carrying the active panoramic vision sensor, the essential matrix encoding the relative position and orientation between the two observation points, i.e. between the single viewpoints of the omnidirectional vision sensor at two different locations, is computed; the essential matrix E is expressed by formula (9):
r′_i^T · E · r_i = 0   (9)
where r_i = [x_i, y_i, z_i]^T and r′_i = [x′_i, y′_i, z′_i]^T are the light vectors of the corresponding points in the two panoramic images; formula (9) can be rewritten as formula (10):
u_i^T · e = 0   (10)
where
u_i = [x_i·x′_i, y_i·x′_i, z_i·x′_i, x_i·y′_i, y_i·y′_i, z_i·y′_i, x_i·z′_i, y_i·z′_i, z_i·z′_i]^T   (11)
e = [e_11, e_12, e_13, e_21, e_22, e_23, e_31, e_32, e_33]^T   (12)
The essential matrix E is obtained by solving the simultaneous equations for eight groups of corresponding light vectors r; the computation is expressed by formula (13):
min_e ‖U·e‖²   (13)
where U = [u_1, u_2, …, u_n]^T; the essential matrix E is obtained from the eigenvector e of U^T·U corresponding to its minimal eigenvalue;
The rotation matrix R and the translation vector t are then computed from the essential matrix E; as shown in formula (14), E is expressed through the rotation matrix R and the translation vector t = [t_x, t_y, t_z]^T:
E = RT   (14)
where the matrix T is the skew-symmetric matrix of t:
T = |  0    -t_z   t_y |
    |  t_z   0    -t_x |   (15)
    | -t_y   t_x   0   |
R and T are computed from E using matrix singular value decomposition, i.e. the SVD method;
To obtain a more accurate estimate of the gecko's motion, the motion is re-estimated using bundle adjustment.
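The linear estimation of formulas (9)–(13) can be sketched with NumPy; the function name and the test geometry are illustrative, and E is recovered only up to scale:

```python
import numpy as np

def essential_from_correspondences(rays_a, rays_b):
    """Linear (eight-point style) estimate of the essential matrix E
    from n >= 8 corresponding unit light vectors (n x 3 arrays): build
    the n x 9 matrix U whose rows are the u_i of formula (11), then take
    e as the right singular vector of U for its smallest singular value,
    i.e. the eigenvector of U^T U with minimal eigenvalue (formula 13)."""
    # Row u_i: outer(r'_i, r_i).ravel() matches e = E.ravel() row-major,
    # so that r'_i^T E r_i = u_i^T e (formulas 9-12).
    U = np.array([np.outer(rb, ra).ravel() for ra, rb in zip(rays_a, rays_b)])
    _, _, vt = np.linalg.svd(U)
    return vt[-1].reshape(3, 3)
```

With exact correspondences the residuals r′_i^T·E·r_i of the recovered E vanish to numerical precision; R and t are then extracted from E by a further SVD, as the claim states.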
9. The biological robot-based miniature wireless active omnidirectional vision sensor as claimed in claim 1, characterized in that: scale matching is achieved by fusing the results of the two processing methods, the panoramic laser sectioning method and the SFM algorithm;
The concrete steps of the algorithm are:
STEP 1: measure the three-dimensional coordinates of a point on the inner pipe wall by the panoramic laser sectioning method;
STEP 2: measure the three-dimensional coordinates of the same point with the SFM algorithm;
STEP 3: compute the scale s′ by minimizing the deviation between the two coordinate values, as shown in formula (19):
min_{s′} Σ_{k=1..m} ‖log(p_k) − log(s′·p′_k)‖²   (19)
In the formula, p_k = [x_k, y_k, z_k]^T denotes the measurement result of the panoramic laser sectioning method, and p′_k = [x′_k, y′_k, z′_k]^T denotes the measurement result of the SFM algorithm.
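Formula (19) has a closed-form minimiser. A minimal sketch, under the assumption (not stated explicitly in the claim) that the logarithm is applied to the point distances (norms): the least-squares solution of Σ(log‖p_k‖ − log(s′‖p′_k‖))² is the geometric mean of the ratios ‖p_k‖/‖p′_k‖. The function name is illustrative:

```python
import math

def match_scale(laser_pts, sfm_pts):
    """Scale factor s' aligning the (up-to-scale) SFM results to the
    metric panoramic-laser measurements, per formula (19) under the
    point-norm assumption: s' = geometric mean of |p_k| / |p'_k|."""
    def norm(p):
        return math.sqrt(sum(c * c for c in p))
    logs = [math.log(norm(p) / norm(q)) for p, q in zip(laser_pts, sfm_pts)]
    return math.exp(sum(logs) / len(logs))
```

Multiplying every SFM point by the returned s′ places the two reconstructions on the same metric scale.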
10. The biological robot-based miniature wireless active omnidirectional vision sensor as claimed in claim 1, characterized in that: the 3D measurement results of the panoramic laser sectioning method for the cross sections of the long, narrow pipeline are stitched together, and finally texture mapping is applied, realizing automatic 3D modeling of the long, narrow pipeline; triangular meshing is the method of building the discrete point-cloud triangle patches in space into an object surface; since the point cloud of each pipe cross section is obtained by processing one frame of panoramic section image, the point cloud data are regularly arranged; a triangular mesh model is therefore adopted for the three-dimensional reconstruction, and the textures from the panoramic images are mapped onto the 3D model.
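Because the cross-section point clouds are regularly arranged (one ring of points per slice), the triangle connectivity can be generated by index arithmetic alone. A minimal sketch; the function name and the two-triangles-per-cell split are illustrative choices:

```python
def mesh_from_slices(n_slices, n_azimuths):
    """Build triangle indices over a regular grid of cross-section
    points (one ring of n_azimuths points per slice, rings indexed in
    travel order): each grid cell between ring i and ring i+1 is split
    into two triangles, with the azimuth index wrapping around the ring."""
    tris = []
    for i in range(n_slices - 1):
        for j in range(n_azimuths):
            a = i * n_azimuths + j                  # this ring, this azimuth
            b = i * n_azimuths + (j + 1) % n_azimuths  # this ring, next azimuth
            c = a + n_azimuths                      # next ring, same azimuth
            d = b + n_azimuths                      # next ring, next azimuth
            tris.append((a, b, c))
            tris.append((b, d, c))
    return tris
```

The indices refer to a flat point list ordered slice by slice; pairing each triangle's vertices with panoramic-image coordinates then supports the texture mapping step.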
CN201510391913.5A 2015-07-02 2015-07-02 Biological robot-based miniature wireless active omni-directional vision sensor Pending CN105043351A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510391913.5A CN105043351A (en) 2015-07-02 2015-07-02 Biological robot-based miniature wireless active omni-directional vision sensor

Publications (1)

Publication Number Publication Date
CN105043351A true CN105043351A (en) 2015-11-11

Family

ID=54450089

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107483835A (en) * 2017-09-25 2017-12-15 黄亮清 A kind of detection method of pipeline TV detecting system
CN108734763A (en) * 2018-06-15 2018-11-02 重庆大学 The digitizing solution and system in the micro assemby space of microassembly system
CN108874335A (en) * 2017-05-15 2018-11-23 国立民用航空学院 It is shown by the selectivity in the environment of data set definition
CN110050172A (en) * 2016-12-16 2019-07-23 罗伯特·博世有限公司 For manufacturing the method and laser leveling device of the laser module of laser leveling device
CN111127455A (en) * 2019-12-27 2020-05-08 江苏恒澄交科信息科技股份有限公司 Pipeline measuring method based on video image analysis

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20151111
