CN111707668B - Tunnel detection and image processing method based on sequence images - Google Patents


Info

Publication number
CN111707668B
CN111707668B (application CN202010468183.5A)
Authority
CN
China
Prior art keywords
tunnel
sensor
image data
tunnel section
sequence image
Prior art date
Legal status
Active
Application number
CN202010468183.5A
Other languages
Chinese (zh)
Other versions
CN111707668A (en)
Inventor
张德津
曹民
田霖
王新林
卢毅
Current Assignee
Wuhan Optical Valley Excellence Technology Co ltd
Original Assignee
Wuhan Optical Valley Excellence Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Optical Valley Excellence Technology Co ltd filed Critical Wuhan Optical Valley Excellence Technology Co ltd
Priority to CN202010468183.5A
Publication of CN111707668A
Application granted
Publication of CN111707668B
Status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C22/00 Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers, using pedometers
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8887 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20132 Image cropping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Abstract

The embodiment of the invention provides a tunnel detection and image processing method based on a sequence image, wherein the method comprises the following steps: acquiring sequence image data and point cloud data of any tunnel section based on each sensor; determining a sequence image data processing result of any tunnel section taking a multi-sensor integrated platform as a reference coordinate space; determining a mapping function relation between the position of each sensor when triggering and collecting the section data of each tunnel and the static tunnel coordinate parameter; and determining a tunnel data processing result taking the static tunnel as a reference coordinate space, wherein the tunnel data processing result is effective image data for tunnel detection in the sequence image data. The method provided by the embodiment of the invention realizes the fusion of a plurality of sensor data, the obtained tunnel data processing result can effectively and intuitively reflect the tunnel lining characteristics, and the detection efficiency of the tunnel lining is improved.

Description

Tunnel detection and image processing method based on sequence images
Technical Field
The invention relates to the technical field of tunnel detection, in particular to a tunnel detection and image processing method based on sequence images.
Background
Highway tunnel lining structures commonly develop defects such as water seepage, cracks, deformation and damage to varying degrees, which degrade traffic quality, threaten driving safety, and shorten the maintenance cycle and service life of the tunnel. At present, routine inspection of in-service tunnels still relies on manual methods such as naked-eye identification, scale measurement and photographic recording to detect lining defects, which is inaccurate, inefficient and unsafe.
In recent years, automatic tunnel detection systems have gradually replaced traditional manual inspection and greatly improved detection efficiency. Such systems typically use multiple cameras, lidar or other sensors in place of the naked eye to identify defects. However, detecting defects with a single type of sensor alone is generally unreliable; achieving a better detection effect requires the cooperation of multiple types of sensors, which raises the problems of multi-sensor data acquisition, data positioning and data fusion.
Disclosure of Invention
The embodiment of the invention provides a tunnel detection and image processing method based on a sequence image, which is used for solving the problems of data acquisition, data positioning and data fusion in tunnel defect detection by cooperation of multiple types of sensors.
The embodiment of the invention provides a tunnel detection and image processing method based on a sequence image, which comprises the following steps:
acquiring sequence image data and point cloud data of any tunnel section based on each sensor, wherein each sensor comprises a laser radar with relatively fixed positions and a plurality of image sensors, and each sensor is rigidly connected to a multi-sensor integrated platform; the sequence image data are tunnel image data acquired by the plurality of image sensors; the point cloud data are tunnel surface profile data acquired by the laser radar;
determining a sequence image data processing result of any tunnel section by taking a multi-sensor integrated platform as a reference coordinate space based on the sequence image data and point cloud data of any tunnel section and the mapping function relation among the coordinate parameters of each sensor;
based on the positioning and attitude determination sensors fixed on the multi-sensor integrated platform, solving the position information and the attitude information of each sensor relative to a coordinate space taking a static tunnel as a reference, and determining the mapping function relation between the position of each sensor when triggering and collecting the section data of each tunnel and the coordinate parameters of the static tunnel based on the position information and the attitude information;
And determining a tunnel data processing result by taking the static tunnel as a reference coordinate space based on a sequence image data processing result of each tunnel section and a mapping function relation between the position of each sensor when triggering and collecting the data of each tunnel section and the static tunnel coordinate parameter, wherein the tunnel data processing result is effective image data for tunnel detection in the sequence image data.
Optionally, the determining, based on the sequence image data and the point cloud data of any tunnel section and the mapping function relationship between the coordinate parameters of each sensor, a sequence image data processing result of any tunnel section using the multi-sensor integrated platform as a reference coordinate space specifically includes:
determining a spatial position relation between the sequence image data and the point cloud data of any tunnel section and the corresponding tunnel section based on the sequence image data and the point cloud data of any tunnel section and the mapping function relation among the coordinate parameters of each sensor;
and determining a sequence image data processing result of any tunnel section by taking the multi-sensor integrated platform as a reference coordinate space based on the sequence image data and the point cloud data of any tunnel section and the spatial position relation corresponding to any tunnel section.
Optionally, determining a sequence image data processing result of the any tunnel section with the multi-sensor integrated platform as a reference coordinate space based on the sequence image data and the point cloud data of the any tunnel section and the spatial position relationship corresponding to the any tunnel section, and further includes:
performing image processing on the sequence image data based on the sequence image data and the point cloud data of any tunnel section, the mapping function relation among the coordinate parameters of each sensor and the tunnel profile information of any tunnel section;
the tunnel profile information of any tunnel section is determined based on the point cloud data of any tunnel section.
Optionally, the performing image processing on the sequence image data includes:
image processing is performed on the sequence image data using at least one of positioning, registration and cropping.
Optionally, the determining a tunnel data processing result using the static tunnel as a reference coordinate space based on the sequence image data processing result of each tunnel section and the mapping function relation between the position of each sensor when triggering and collecting the data of each tunnel section and the static tunnel coordinate parameter specifically includes:
Determining a spatial position relation between the sequence image data processing result of each tunnel section and a static tunnel based on the sequence image data processing result of each tunnel section and a mapping function relation between the position of each sensor when triggering and collecting the data of each tunnel section and the static tunnel coordinate parameter;
and determining a tunnel data processing result by taking the static tunnel as a reference coordinate space based on the sequence image data processing result of each tunnel section and the spatial position relation corresponding to the static tunnel.
Optionally, based on the positioning and attitude determination sensor fixed on the multi-sensor integrated platform, the method solves the position information and the attitude information of each sensor relative to the coordinate space taking the static tunnel as a reference, and specifically comprises the following steps:
based on a positioning and attitude determining sensor fixed on a multi-sensor integrated platform, acquiring a moving distance and a course angle of the multi-sensor integrated platform from a previous control point to a current position;
based on the position information of the last control point, the moving distance and the course angle, solving the position information and the attitude information of each sensor relative to a coordinate space taking a static tunnel as a reference;
The control points are physical control points arranged in the tunnel or virtual control points determined by a navigation satellite system.
Optionally, the solving the position information and the attitude information of each sensor relative to the coordinate space taking the static tunnel as a reference based on the position information, the moving distance and the heading angle of the last control point further includes:
correcting the position information of the multi-sensor integrated platform based on the first horizontal offset and the second horizontal offset;
wherein the first horizontal offset is determined based on the position information of the last control point, the moving distance, and the heading angle; the second horizontal offset is determined based on a cross-sectional distance of the multi-sensor integrated platform relative to a static tunnel.
According to the tunnel detection and image processing method based on the sequence image, the sequence image data and the point cloud data of the tunnel section are obtained, the obtained multi-source data are converted into the coordinate space of the multi-sensor integrated platform and then fused to obtain the sequence image data processing result of the tunnel section, the sequence image data processing result is converted into the static tunnel coordinate space and then fused to obtain the tunnel data processing result, fusion of a plurality of sensor data is achieved, the obtained tunnel data processing result can effectively and intuitively reflect tunnel lining characteristics, and the detection efficiency of tunnel lining is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a tunnel detection and image processing method based on a sequence image according to an embodiment of the present invention;
fig. 2 is a schematic diagram of tunnel data acquisition according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a three-dimensional model of a tunnel and an expanded view thereof according to an embodiment of the present invention;
fig. 4 is a schematic diagram of parameter calculation of a tunnel detection and image processing method based on a sequence image according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a result of processing tunnel section sequence image data according to an embodiment of the present invention;
fig. 6 is a schematic diagram of overlapping camera imaging corresponding to adjacent image data according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of overlapping area widths of adjacent image data according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of image structure dislocation in a driving direction according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a positioning method for tunnel detection and image processing based on sequential images according to an embodiment of the present invention;
Reference numerals:
210 - tunnel; 220 - multi-sensor integrated platform; 221 - optical camera one;
222 - optical camera two; 223 - optical camera three; 230 - mobile platform;
301 - tunnel image three-dimensional model; 302 - unfolded panorama of the tunnel image three-dimensional model; 303 - image data;
501 - two-dimensional tunnel section; 502 - image; 503 - image center.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Fig. 1 is a schematic flow chart of a tunnel detection and image processing method based on a sequence image according to an embodiment of the present invention, as shown in fig. 1, the method includes:
Step 101, respectively acquiring sequence image data and point cloud data of any tunnel section based on each sensor, wherein each sensor comprises a laser radar with relatively fixed positions and a plurality of image sensors, and each sensor is rigidly connected to a multi-sensor integrated platform; the sequence image data are tunnel image data acquired by a plurality of image sensors; the point cloud data are tunnel surface profile data acquired by a laser radar;
specifically, the image sensor for acquiring the sequence image data of the tunnel section may be an optical camera or an infrared imager, and the type of the image sensor for acquiring the data is not particularly limited in the embodiment of the present invention.
Each sensor is rigidly connected to a multi-sensor integrated platform, and when each sensor works, the relative position and relative posture relation between the sensors are fixed, and the relative position and relative posture relation between each sensor and the multi-sensor integrated platform are also fixed.
Step 102, determining a sequence image data processing result of any tunnel section by taking a multi-sensor integrated platform as a reference coordinate space based on sequence image data and point cloud data of any tunnel section and a mapping function relation between coordinate parameters of each sensor;
Specifically, since the relative position and the relative posture relationship between the sensors are fixed when the respective sensors are operated, the relative position and the relative posture relationship between the respective sensors with respect to the multi-sensor integrated platform are also fixed. That is, the relative relationship between the sensors does not change during the detection process, and the relative relationship between the sensors and the multi-sensor integrated platform does not change. Therefore, the multi-sensor integrated platform is used as a reference coordinate space, the relative position and the relative posture of each sensor can be obtained in advance through a calibration and resolving mode, and the mapping function relation between the coordinate parameters of each sensor and the coordinate parameters of the multi-sensor integrated platform is determined.
According to the mapping function relation between the coordinate parameters of each sensor and the coordinate parameters of the multi-sensor integrated platform, the spatial position relation between the point cloud data and the sequence image data of any tunnel section respectively acquired by each sensor can be determined, and according to the spatial position relation among the data, the acquired data can be fused to obtain the sequence image data processing result of the corresponding tunnel section.
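Because these relative relationships are fixed, mapping any sensor's data into the platform reference coordinate space reduces to applying a calibrated rigid transform. The following is a minimal sketch (NumPy); the extrinsic calibration values and point data are illustrative assumptions rather than values from the patent:

```python
import numpy as np

def make_extrinsic(rpy, translation):
    """Build a 4x4 rigid transform from calibrated roll/pitch/yaw (radians)
    and the sensor origin expressed in platform coordinates."""
    r, p, y = rpy
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(r), -np.sin(r)],
                   [0, np.sin(r),  np.cos(r)]])
    Ry = np.array([[ np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0],
                   [np.sin(y),  np.cos(y), 0],
                   [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx   # sensor orientation in the platform frame
    T[:3, 3] = translation
    return T

def sensor_to_platform(points, T):
    """Map an Nx3 array of points from a sensor frame into the platform frame."""
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (T @ homogeneous.T).T[:, :3]

# Illustrative: lidar profile points mapped into the platform frame.
T_lidar = make_extrinsic(rpy=(0.0, 0.0, 0.01), translation=(0.5, 0.0, 1.2))
cloud_platform = sensor_to_platform(np.random.rand(100, 3), T_lidar)
```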
Step 103, based on the positioning and attitude determination sensors fixed on the multi-sensor integrated platform, solving the position information and attitude information of each sensor relative to a coordinate space with a static tunnel as a reference, and determining the mapping function relation between the position of each sensor when triggering and collecting the section data of each tunnel and the coordinate parameters of the static tunnel based on the position information and the attitude information;
specifically, during tunnel inspection, the multi-sensor integrated platform may be rigidly affixed to the mobile platform as the mobile platform moves within the tunnel. The method comprises the steps of establishing a coordinate space with a static tunnel as a reference, acquiring the speed and the course angle of the multi-sensor integrated platform by a positioning and attitude determination sensor fixed on the multi-sensor integrated platform, determining the position information and the attitude information of the multi-sensor integrated platform relative to the static tunnel, and further determining the mapping function relation between the position of each sensor when triggering and collecting the section data of each tunnel and the coordinate parameters of the static tunnel.
Step 104, determining a tunnel data processing result by taking the static tunnel as a reference coordinate space based on the sequence image data processing result of each tunnel section and the mapping function relation between the position of each sensor when triggering and collecting the data of each tunnel section and the static tunnel coordinate parameter, wherein the tunnel data processing result is effective image data for tunnel detection in the sequence image data.
Specifically, according to the mapping function relation between the position of each tunnel section data triggered and collected by each sensor and the static tunnel coordinate parameter, the spatial position relation of the sequence image data processing result of each tunnel section in the static tunnel coordinate space can be determined, and then data fusion is carried out according to the spatial position relation, so that the tunnel data processing result taking the static tunnel as the reference coordinate space is obtained. The tunnel data processing result comprises a fusion result of sequence image data and point cloud data of the whole tunnel. The tunnel data processing result can be presented in a three-dimensional model form, and can be unfolded into a corresponding panoramic view, so that detection personnel can conveniently analyze the conditions of water leakage, cracks, deformation, damage and the like on the surface of the tunnel lining.
The following is by way of example. Fig. 2 is a schematic diagram of tunnel data collection provided in the embodiment of the present invention, as shown in fig. 2, a tunnel 210 to be measured is a highway tunnel, and a truck is used as a mobile platform 230 in a tunnel detection system. The mobile platform 230 has the multisensor integrated platform 220 loaded thereon. The first optical camera 221, the second optical camera 222, and the third optical camera 223 are rigidly fixed to the multi-sensor integrated platform 220.
And acquiring sequence image data of any tunnel section through 3 optical cameras. With the multi-sensor integrated platform 220 as a coordinate space, since the 3 optical cameras are rigidly fixed on the multi-sensor integrated platform, the coordinate parameters of each camera relative to the multi-sensor integrated platform can be obtained through a calibration and resolving mode, and the information such as the optical axis and imaging view field of each camera under the reference system can also be obtained. Therefore, a fixed spatial position relationship exists between the sequence image data shot by each camera, the mapping function relationship between the coordinate parameters of each camera and the coordinate parameters of the multi-sensor integrated platform can be calculated through the relative position and the relative posture of each camera, the spatial position relationship between the sequence image data is further obtained, and the acquired sequence image data can be fused according to the spatial position relationship between the sequence image data to obtain the sequence image data processing result of the corresponding tunnel section.
As the multi-sensor integration platform 220 moves from the start end to the end of tunnel detection, the 3 optical cameras collect the image data of the tunnel lining surface, and the sequence image data processing results of a plurality of tunnel sections are continuously obtained.
Fig. 3 is a schematic diagram of a three-dimensional model of a tunnel and an expanded view thereof according to an embodiment of the present invention, as shown in fig. 3, according to a mapping function relationship between a position of a multi-sensor integrated platform when triggering and collecting data of each tunnel section and a static tunnel coordinate parameter, a sequence image data processing result of the tunnel section is fused to obtain a tunnel data processing result of the whole tunnel. The image data 303 is the image data collected by the second optical camera 222 in the embodiment of the invention. The tunnel data processing result of the whole tunnel can be presented in the form of an image three-dimensional model 301, and can be unfolded into a corresponding panorama 302.
According to the tunnel detection and image processing method based on the sequence image, the sequence image data and the point cloud data of the tunnel section are obtained, the obtained multi-source data are converted into the coordinate space of the multi-sensor integrated platform and then fused to obtain the sequence image data processing result of the tunnel section, the sequence image data processing result is converted into the static tunnel coordinate space and then fused to obtain the tunnel data processing result, fusion of a plurality of sensor data is achieved, the obtained tunnel data processing result can effectively and intuitively reflect tunnel lining characteristics, and the detection efficiency of tunnel lining is improved.
Based on the above embodiment, step 102 specifically includes:
determining the spatial position relation between the sequence image data and the point cloud data of any tunnel section and any tunnel section based on the sequence image data and the point cloud data of any tunnel section and the mapping function relation among the coordinate parameters of each sensor;
and determining a sequence image data processing result of any tunnel section by taking the multi-sensor integrated platform as a reference coordinate space based on the sequence image data and the point cloud data of any tunnel section and the spatial position relation corresponding to any tunnel section.
Specifically, the sequence image data processing result of any tunnel section is obtained by splicing a plurality of sequence images of any tunnel section according to the sequence image data and the point cloud data of any tunnel section and the spatial position relation corresponding to any tunnel section.
Fig. 4 is a schematic diagram of parameter calculation of a tunnel detection and image processing method based on a sequential image according to an embodiment of the present invention, where, as shown in fig. 4, a laser radar is rigidly fixed on a multi-sensor integrated platform, a relative coordinate system using the laser radar as an origin is established, a point L is a position of the laser radar, a point C is a position of any camera, a direction of the laser radar horizontally toward a tunnel lining surface is an X-axis, a forward direction of the multi-sensor integrated platform is a Y-axis, and a direction perpendicular to a road is a Z-axis.
The spatial positional relationship of each camera with respect to the lidar is fixed, and each camera and the lidar may be disposed in the same plane. The relative coordinates of each camera in the coordinate system with the lidar as origin can therefore be obtained from this spatial positional relationship. For any camera with relative coordinates (Δx, Δy, Δz), the imaging center (x, y, z) lies a working distance d along the optical axis, which sits at angle α in the X-Z plane:
x = Δx + d*cos α, y = Δy, z = Δz + d*sin α
where α is the angle of the optical axis of the camera and d is its working distance. Because the camera and the laser radar are coplanar, the Y coordinate of the imaging center equals the current position of the multi-sensor integrated platform, i.e., y = Δy.
The working distance d of any camera is solved from the acquisition geometry, where α is the angle of the optical axis of the camera, h is the distance between the laser radar and the tunnel lining, and b is the distance between the camera and the laser radar; with the camera offset the distance b from the lidar along the X axis toward the lining, one consistent form is:
d = (h - b)/cos α
The number N of cameras can be determined according to the size of the tunnel section, and the solving formula is as follows:
N=S/l
where l is the coverage field size of any camera and S is the arc length of the tunnel measured at one time.
l = 2d*tan(γ/2)
where l is the coverage field size of any camera, d is the working distance of the camera, and γ is its field angle.
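A small sketch of these layout calculations follows; the formulas are the ones above, while rounding N up to an integer and all numeric values are added assumptions:

```python
import math

def coverage_length(d, gamma):
    """Footprint l of one camera on the lining: l = 2*d*tan(gamma/2),
    with d the working distance and gamma the camera's field angle."""
    return 2.0 * d * math.tan(gamma / 2.0)

def camera_count(S, l):
    """Number of cameras N = S/l for one measured arc of length S,
    rounded up so the arc is fully covered."""
    return math.ceil(S / l)

def imaging_center(offset, alpha, d):
    """Imaging center of a camera at offset (dx, dy, dz) from the lidar:
    the camera position advanced by d along the optical axis at angle alpha
    in the X-Z plane; y stays equal to dy (coplanar layout)."""
    dx, dy, dz = offset
    return (dx + d * math.cos(alpha), dy, dz + d * math.sin(alpha))

# Illustrative values: 5.2 m working distance, 40-degree field angle,
# 14 m of measured arc.
l = coverage_length(5.2, math.radians(40))
print(camera_count(14.0, l), imaging_center((0.3, 0.0, -0.2), math.radians(30), 5.2))
```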
According to the solving method, imaging centers of the rest cameras can be obtained by sequentially solving. Because the imaging center of the camera is a straight line located at the center of the image and perpendicular to the optical axis of the camera, the image data collected by each camera is a quadrangle centered on the imaging center.
And determining the spatial position relation between the image data collected by each camera and the tunnel section according to the imaging center of each camera. And determining a sequence image data processing result of any tunnel section by taking the multi-sensor integrated platform as a reference coordinate space according to the sequence image data and the point cloud data of any tunnel section and the spatial position relation corresponding to any tunnel section.
Fig. 5 is a schematic diagram of a processing result of sequence image data of a tunnel section, which is provided in an embodiment of the present invention, as shown in fig. 5, the image data 502 collected by each camera is projected to the tunnel section 501 according to the corresponding spatial position relationship, so that a series of collinear sequence images in the center 503 of the same tunnel section can be obtained, and the collinear sequence images in the center of the same tunnel section are fused to obtain a fused image, that is, the sequence image data processing result of the tunnel section.
Based on any of the above embodiments, determining a sequence image data processing result of any tunnel section using the multi-sensor integrated platform as a reference coordinate space based on the sequence image data and the point cloud data of any tunnel section and a spatial position relationship corresponding to any tunnel section, further includes:
performing image processing on the sequence image data based on the sequence image data and the point cloud data of any tunnel section, the mapping function relation among the coordinate parameters of each sensor and the tunnel contour information of any tunnel section;
the tunnel profile information of any tunnel section is determined based on the point cloud data of any tunnel section.
Specifically, the tunnel lining image is a sequence of images acquired independently by a plurality of cameras. To ensure continuity and integrity of the images, redundant overlapping areas exist between adjacent sequence image data. Because of installation errors of the cameras and possible relative displacement among them, the lining sequence images can be mutually displaced, which shows up as misaligned cracks, structures and markers; in such misaligned sequence images, objects may appear duplicated or incomplete.
According to the point cloud data of any tunnel section and the mapping function relation between the coordinate parameters of the laser radar of the point cloud data and the coordinate parameters of the multi-sensor integrated platform, the tunnel profile information of any tunnel section can be obtained.
Because the tunnel section is a curved surface, the curvature of each section is closely related to the tunnel profile information. When the image data are subjected to data fusion, certain picture processing is required to be performed by considering tunnel profile information of the tunnel section, so that the processed sequence image data processing result is closer to the real tunnel lining.
Based on any of the above embodiments, performing image processing on the sequence image data includes:
image processing is performed on the sequence image data using at least one of positioning, registration and cropping.
Specifically, the positioning method for image processing provided by the embodiment of the invention is as follows:
The section of the tunnel is an arc-shaped curved surface. The i-th camera used for tunnel detection is rigidly fixed on the multi-sensor integrated platform at an included angle β_i with the horizontal, which can be obtained through calibration from the fixed relation between the camera and the platform. The intersection (x_i, y_i) of the camera's imaging optical axis with the tunnel curved surface satisfies an equation whose consistent form, given the stated variables, is:
x_i = d_i*cos β_i, y_i = H + d_i*sin β_i
where d_i is the working distance of the i-th camera and H is the ground clearance of its optical axis origin.
From the above, the intersection coordinates (x_i, y_i) of each camera's imaging optical axis with the tunnel curved surface are fixed by the angle β_i and the working distance d_i.
The tunnel section can be regarded as the curved surface unfolded into a plane, whose horizontal coordinate is the angular direction β along the tunnel section and whose vertical coordinate is the running direction of the multi-sensor integrated platform. The images photographed by the N detection cameras are N images arranged side by side in this coordinate system, with imaging centers (β_i, z_0): β_i depends on the angle between the i-th camera and the horizontal, and z_0 depends on the current driving coordinate of the multi-sensor integrated platform.
The registration method for image processing provided by the embodiment of the invention comprises the following steps:
Distinct image features can exist in the overlapping area and can reflect defects such as tunnel cracks. Image feature matching can be performed on the overlapping regions using the SURF (Speeded Up Robust Features) algorithm. Because the overlapping area occupies only a small part of the image data, the SURF algorithm needs to scan only that area, which greatly speeds up feature point detection. Because the matching points obtained by the SURF algorithm may contain false point pairs with large errors, the RANSAC (Random Sample Consensus) algorithm can be used to filter them and retain the valid matching points. The image registration result is determined from the valid matching points.
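A sketch of this registration step with OpenCV follows; it assumes an opencv-contrib build (SURF lives in cv2.xfeatures2d), 8-bit grayscale inputs, and illustrative strip widths and thresholds:

```python
import cv2
import numpy as np

def register_overlap(img_a, img_b, overlap_px):
    """Match SURF features only inside the overlap strips of two adjacent
    lining images (8-bit grayscale), then reject pseudo point pairs with RANSAC."""
    strip_a = img_a[:, -overlap_px:]   # right-edge strip of image A
    strip_b = img_b[:, :overlap_px]    # left-edge strip of image B
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_a, des_a = surf.detectAndCompute(strip_a, None)
    kp_b, des_b = surf.detectAndCompute(strip_b, None)
    matches = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # ratio test
    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC filters the matches; the mask flags the valid point pairs.
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, mask
```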
The method for clipping processing of the image processing provided by the embodiment of the invention comprises the following steps:
due to the different curvature of the tunnel section, the imaging plane of each camera is different, so that the adjacent image data has an overlapping area.
Fig. 6 is a schematic diagram of the overlapping imaging of the cameras corresponding to adjacent image data provided in an embodiment of the present invention. As shown in Fig. 6, the adjacent image data are acquired by camera A and camera B respectively; θ_A and θ_B are half the field angles of camera A and camera B, and D_A and D_B are the object distances of camera A and camera B to their respective actual imaging planes. D is the object distance of camera A and camera B to the tunnel projection plane. The tunnel projection plane is the tunnel image fusion plane and the reference plane for tunnel detection.
According to the camera imaging principle, the physical sizes of the pictures formed by camera A and camera B on the actual imaging planes and on the tunnel projection plane can be formulated as:
L_A = 2*D_A*tan θ_A
L_B = 2*D_B*tan θ_B
L'_A = 2*D*tan θ_A
L'_B = 2*D*tan θ_B
where L_A and L_B are the physical sizes of the pictures formed by camera A and camera B on their actual imaging planes, and L'_A and L'_B are the physical sizes of their pictures on the tunnel projection plane.
Fig. 7 is a schematic diagram of the overlapping area width of adjacent image data provided in an embodiment of the present invention. As shown in Fig. 7, with the adjacent image widths L'_A and L'_B on the tunnel projection plane, the overlap region width L can be formulated as:
L = 0.5*(L'_A + L'_B) - L'
where L' is the distance between the imaging centers of the adjacent image data.
The image corresponding to the overlapping area width L of the adjacent image data is the part that needs cutting. In the above embodiment, the physical resolution of the image data, that is, the physical distance corresponding to each pixel point, may be solved from the pose information of the corresponding camera and the camera imaging principle. Let the physical distance corresponding to each pixel point be p; then:
p = l/q
where l is the coverage field size of any camera and q is the camera resolution factor, the camera resolution being q×q.
In engineering applications, the physical resolution, also referred to as the object physical precision or object resolution, may be determined by the relevant tunnel detection specifications.
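The following sketch ties these quantities together using the formulas above; the function names and numeric values are illustrative:

```python
import math

def overlap_width(D, theta_a, theta_b, center_dist):
    """Overlap width on the tunnel projection plane:
    L = 0.5*(L'_A + L'_B) - L', with L'_X = 2*D*tan(theta_X)
    (theta_X is the half field angle, D the object distance to the plane)."""
    la = 2.0 * D * math.tan(theta_a)
    lb = 2.0 * D * math.tan(theta_b)
    return 0.5 * (la + lb) - center_dist

def overlap_pixels(L, l, q):
    """Convert the physical overlap width L to pixels via p = l/q,
    the physical distance covered by one pixel of a q x q camera."""
    p = l / q
    return int(round(L / p))

# Illustrative: 5 m object distance, 20-degree half angles, 3.2 m center spacing.
L = overlap_width(5.0, math.radians(20), math.radians(20), 3.2)
print(L, overlap_pixels(L, 3.6, 4096))
```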
The image processing method provided by the embodiment of the invention is as follows:
In a dynamic measurement environment, factors such as the robustness of the multi-sensor calibration result, the rigidity of the multi-sensor integrated platform and the positioning and attitude accuracy of the inertial components mean that registration based on geometric parameter calculation usually carries errors: image structures are misaligned at the joints, ghosting appears in the image overlapping areas, and visible seams form at the image edges.
This structural misalignment arises from the geometric parameter calculation; the corresponding registration error mainly comes from errors in the camera field-of-view range and imaging errors at the center of the camera field of view.
And cutting out overlapped parts in the adjacent sequence image data based on the image registration result and the resolution of the adjacent image data, and determining effective image data of the overlapped parts.
According to the physical resolutions of the adjacent image data, the image data with the finer physical resolution can be used to cover the image data with the coarser physical resolution, improving the resolution of the tunnel section's sequence image data processing result.
Thus, the image processing can be completed mainly by translating the sequence image data, with a small amount of scaling.
Furthermore, misalignment of the image structure may also occur in the direction of travel of the multi-sensor integrated platform. The central axis of the ideal image data passes through the image center of the image data and is parallel to the direction of the tunnel section. The central axis of the sequential image data captured by the plurality of cameras on the multi-sensor integrated platform should be exactly collinear with the central axis of detection representing the tunnel section. However, in the actual detection process, due to the installation error of the sensors on the multi-sensor integrated platform, the central axis of the image data and the detection central axis of the tunnel section have errors of several millimeters to tens of millimeters.
In view of the above-described problems, still another clipping method for image processing provided by the embodiments of the present invention is as follows.
Fig. 8 is a schematic diagram of image structure dislocation in the driving direction provided in an embodiment of the present invention. As shown in Fig. 8, the central axes of the n pieces of sequence image data shot by the cameras are not exactly collinear with the detection central axis of the tunnel section. The errors between the central axes of the n pieces of sequence image data and the detection central axis of the tunnel section are denoted δO_1, δO_2, …, δO_n. The driving direction is the moving direction of the multi-sensor integrated platform.
For any image data (image i), the field width in the running direction is E_i and there is a positioning error δO_i between the central axis of the image and the detection section. The actual distances L_i and R_i between the two side edges and the detection section satisfy an equation whose consistent form, given the stated variables, is:
L_i = E_i/2 + δO_i, R_i = E_i/2 - δO_i
where the field width E_i is the coverage field size of the camera corresponding to image i, and the positioning error δO_i is solved by the method described above and applied to the registered image data. The two side boundaries of the images in the driving direction are not collinear and need cutting; the clipping range is the minimum distance from the image edges to the detection section:
L_clip = min(L_1, ..., L_n), R_clip = min(R_1, ..., R_n)
where L_clip is the minimum left-side distance from the sequence image edges to the detection section and R_clip is the minimum right-side distance.
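A sketch of the driving-direction crop computation under the edge-distance form assumed above (all values illustrative):

```python
def crop_bounds(field_widths, axis_errors):
    """Per-image edge distances to the detection axis, assuming
    L_i = E_i/2 + dO_i and R_i = E_i/2 - dO_i; the common crop bounds
    are the minima over all n images."""
    L = [E / 2.0 + dO for E, dO in zip(field_widths, axis_errors)]
    R = [E / 2.0 - dO for E, dO in zip(field_widths, axis_errors)]
    return min(L), min(R)

# Illustrative: three images, 2.0 m field width, millimetre-level axis errors.
L_clip, R_clip = crop_bounds([2.0, 2.0, 2.0], [0.004, -0.007, 0.012])
```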
Cropping is one of the effective means of handling the overlapping areas between images. Because of the different camera parameters and shooting environments, there is a difference in imaging quality of different images in the overlapping region. In principle, higher quality images are more suitable as a data source for the overlapping area. Typically image quality is used to quantify the sharpness of an image in the overlapping region, depending on two factors, the theoretical value of the camera imaging quality and the sharpness of the actual image.
In theory, the imaging quality of a camera is mainly related to its pixels, gray values, signal-to-noise ratio, imaging distance and the like. Pixels form the basic unit of the camera's photosensitive layer; their diameter is on the order of micrometers, and the more pixels per unit area, the finer the formed image and the higher the resolution. The more gray levels, the more grays the camera can resolve and the richer the gray detail. The signal-to-noise ratio is the ratio of the normal optical signal to the abnormal signal; the higher it is, the truer and clearer the image. Imaging distance refers to the working distance from the camera to the lining; generally, the closer the distance, the clearer the image.
Under actual detection conditions, influenced by factors such as ambient light, illumination, material reflection characteristics and airborne impurities, the actual image sharpness differs from its theoretical value and needs correction. In no-reference image quality evaluation, image sharpness is the main index for measuring image quality and corresponds well to subjective human perception. Common, representative sharpness algorithms include the Tenengrad gradient function, Laplacian gradient function, SMD (gray variance) function, energy gradient function, entropy function, Brenner gradient function, Vollath function and EAV point-sharpness function.
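Three of the listed measures, sketched in NumPy as no-reference sharpness scores; the Tenengrad gradient is approximated with central differences rather than the classical Sobel kernel, and these are illustrative implementations, not the patent's code:

```python
import numpy as np

def brenner(gray):
    """Brenner gradient: sum of squared differences two pixels apart."""
    g = gray.astype(np.float64)
    return float(np.sum((g[:, 2:] - g[:, :-2]) ** 2))

def tenengrad(gray):
    """Tenengrad-style score: mean squared gradient magnitude
    (central differences stand in for the Sobel operator here)."""
    g = gray.astype(np.float64)
    gx = np.gradient(g, axis=1)
    gy = np.gradient(g, axis=0)
    return float(np.mean(gx ** 2 + gy ** 2))

def smd(gray):
    """SMD (gray variance): sum of absolute differences with row and
    column neighbours."""
    g = gray.astype(np.float64)
    return float(np.sum(np.abs(g[1:, :] - g[:-1, :])) +
                 np.sum(np.abs(g[:, 1:] - g[:, :-1])))
```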
Any tunnel sequence image is cut to obtain valid tunnel profile data of width L_clip - R_clip for image i, i.e., the sequence image data processing result.
Based on any of the above embodiments, step 104 specifically includes:
determining a spatial position relationship between the sequence image data processing result of each tunnel section and the static tunnel, based on the sequence image data processing result of each tunnel section and the mapping function relation between the position of each sensor when triggered to collect each tunnel section's data and the static tunnel coordinate parameters;
and determining a tunnel data processing result by taking the static tunnel as a reference coordinate space based on the sequence image data processing result of each tunnel section and the spatial position relation corresponding to the static tunnel.
Specifically, according to the sequence image data processing result of each tunnel section and the mapping function relation between the position of each sensor when triggering and collecting the data of each tunnel section and the static tunnel coordinate parameter, the spatial position relation between the sequence image data processing result of each tunnel section and the static tunnel can be determined.
The moving speed of the multi-sensor integrated platform corresponding to the sequence image data processing result of each tunnel section can be determined through the spatial position relation between the sequence image data processing result of each tunnel section and the static tunnel and the acquisition time of the sequence image.
The moving speed of the multi-sensor integrated platform is v. To prevent data omission in the driving direction, the section measurement interval t should satisfy:
v*t ≥ |L_clip - R_clip|
where L_clip is the minimum left-side distance from the sequence image edges to the detection section and R_clip is the minimum right-side distance.
At this time, the sequence image data processing results of the tunnel section can be spliced by taking the step size vt as a unit according to the sequence image data acquisition time tag sequence, and the tunnel data processing result taking the static tunnel as a reference coordinate space is obtained.
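A sketch of this splicing step (the record format with a time tag field is an illustrative assumption):

```python
def splice_sections(section_results, v, t):
    """Order section results by acquisition time tag and place them at
    step v*t along the tunnel axis in static-tunnel coordinates.
    Each record is assumed to look like {"t": time_tag, "data": fused_image}."""
    step = v * t
    ordered = sorted(section_results, key=lambda rec: rec["t"])
    return [(i * step, rec["data"]) for i, rec in enumerate(ordered)]
```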
The tunnel data processing result comprises a fusion result of point cloud data and image data of the whole tunnel. The tunnel data processing result can be presented in a three-dimensional model form, and can be unfolded into a corresponding panoramic view, so that detection personnel can conveniently analyze the conditions of water leakage, cracks, deformation, damage and the like on the surface of the tunnel lining.
The tunnel is a closed space lacking GPS (Global Positioning System) signals. In the prior art, relative coordinate positioning is used: the odometer measures the moving distance of the multi-sensor integrated platform, the inertial measurement unit measures the heading angle of its movement, and the position information of the platform is calculated from the distance and the heading angle.
Based on any of the above embodiments, step 103 specifically includes:
based on a positioning and attitude determining sensor fixed on the multi-sensor integrated platform, acquiring a moving distance and a course angle of the multi-sensor integrated platform from a previous control point to a current position;
based on the position information, the moving distance and the course angle of the last control point, solving the position information and the attitude information of each sensor relative to a coordinate space taking a static tunnel as a reference;
the control point is a physical control point installed in the tunnel or a virtual control point determined by the navigation satellite system.
Specifically, physical control points are installed at intervals in the tunnel, or virtual control points determined by the navigation satellite system are marked in it; each control point is positioned in advance and, combined with the position information of the multi-sensor integrated platform, yields more accurate position information of the platform relative to the coordinate space referenced to the static tunnel.
And taking the control point as a reference starting point, and acquiring the moving distance and the course angle of the multi-sensor integrated platform moving to the current position. The positioning and attitude determination sensor comprises an odometer (DMI) and an Inertial Measurement Unit (IMU). The positioning and attitude determining sensor is fixed on the multi-sensor integrated platform.
The moving distance may be acquired by an odometer (DMI), the heading angle may be acquired by an Inertial Measurement Unit (IMU), and the moving distance and the heading angle may be acquired by other sensors, which is not particularly limited in the embodiment of the present invention.
Based on the position information, the moving distance and the course angle of the last control point, the position information and the attitude information of each sensor relative to a coordinate space taking the static tunnel as a reference are solved.
The following is illustrated by way of example. Fig. 9 is a schematic diagram of the positioning process of the tunnel detection and image processing method based on sequence images provided in an embodiment of the present invention. As shown in Fig. 9, the multi-sensor integrated platform is rigidly fixed on the vehicle-mounted platform, and a relative coordinate system is established with the driving direction as the Y axis and the tunnel transverse direction as the X axis. The reference starting point A is the previous control point, with coordinates (x_0, y_0). After a unit time, the multi-sensor integrated platform moves to point B with coordinates (x_1, y_1); with the heading angle measured from the Y axis, point B is calculated as:
x_1 = x_0 + d*sin α, y_1 = y_0 + d*cos α
where d is the moving distance of the multi-sensor integrated platform from point A to point B, and α is its heading angle from point A to point B.
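A sketch of this dead-reckoning step, assuming as in the form above that the heading angle is measured from the driving direction (the Y axis):

```python
import math

def dead_reckon(x0, y0, d, alpha):
    """Advance the platform from the last control point by the DMI distance d
    along the IMU heading angle alpha: x1 = x0 + d*sin(alpha),
    y1 = y0 + d*cos(alpha)."""
    return x0 + d * math.sin(alpha), y0 + d * math.cos(alpha)

# Illustrative: 12.5 m travelled at a 0.8-degree heading from control point A.
x1, y1 = dead_reckon(0.0, 0.0, 12.5, math.radians(0.8))
```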
Based on any of the above embodiments, based on the position information, the movement distance, and the heading angle of the last control point, solving the position information and the attitude information of each sensor with respect to a coordinate space with reference to the static tunnel, further includes:
Correcting the position information of the multi-sensor integrated platform based on the first horizontal offset and the second horizontal offset;
wherein the first horizontal offset is determined based on the position information, the moving distance and the heading angle of the last control point; the second horizontal offset is determined based on a cross-sectional distance of the multi-sensor integrated platform relative to the static tunnel.
Specifically, the multi-sensor integrated platform drifts horizontally while moving, which affects its positioning accuracy, so its position information needs to be corrected. Based on the position information of the last control point, the moving distance and the heading angle, the first horizontal offset can be determined; consistent with the form above:
Δx = x_1 - x_0 = d*sin α
where Δx is the first horizontal offset, which carries an accumulated error.
The lidar on the multi-sensor integrated platform can be used to acquire the cross-sectional distances L_1 and L_2 of the platform relative to the left and right lining of the tunnel at point A, and the cross-sectional distances L'_1 and L'_2 at point B. One consistent form of the second horizontal offset averages the left- and right-side changes:
ΔL = 0.5*((L'_1 - L_1) + (L_2 - L'_2))
where ΔL is the second horizontal offset of the multi-sensor integrated platform obtained from the lidar.
The position information of the multi-sensor integrated platform may be corrected using a combination of the first horizontal offset and the second horizontal offset.
Based on any of the above embodiments, correcting the position information of the multi-sensor integrated platform based on the first horizontal offset and the second horizontal offset includes:
and if the accumulated error between the first horizontal offset and the second horizontal offset exceeds the constraint value, correcting the position information of the multi-sensor integrated platform.
Specifically, from the measurement sequence |ΔL - Δx| the average error ē can be obtained, and ē serves as the integrated error constraint value between the first horizontal offset Δx and the second horizontal offset ΔL.
If |ΔL - Δx| > ē, the accumulated noise of the odometer (DMI) and inertial measurement unit (IMU) positioning is considered to exceed the error of the lidar measurement, and the accumulated error is calibrated at that point: Δx = ΔL.
Assuming that the measurement noise of the odometer (DMI) and the Inertial Measurement Unit (IMU) and the measurement noise of the lidar both satisfy the assumption of gaussian distribution, the corrected heading angle data α 'and displacement data Δx' are then generated with a filtering algorithm. The filtering algorithm may employ kalman complementary filtering, which is not particularly limited in the embodiments of the present invention.
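A minimal sketch of the correction logic; the simple blend stands in for the Kalman/complementary filter mentioned above, and the weight k is an illustrative assumption:

```python
def correct_offset(dx_imu, dl_lidar, mean_error, k=0.5):
    """Calibrate the dead-reckoned horizontal offset against the lidar one:
    if |dL - dx| exceeds the mean-error constraint, take the lidar value
    (dx = dL); otherwise blend the two estimates with weight k."""
    if abs(dl_lidar - dx_imu) > mean_error:
        return dl_lidar
    return k * dx_imu + (1.0 - k) * dl_lidar
```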
After correction, the position (x′₁, y′₁) of the multi-sensor integrated platform at point B can be expressed as:

x′₁ = x₀ + Δx′,  y′₁ = y₀ + d·cos α′

where (x₀, y₀) is the position of the last control point A.
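Continuing the same illustrative sketch (hypothetical names again; the simple blended heading merely stands in for the Kalman complementary filtering, whose parameters the embodiments leave open), the constraint check and position update could look like:

```python
import math

def correct_position(x0: float, y0: float, d: float, alpha: float,
                     dx: float, dl: float, e_bar: float,
                     blend: float = 0.5) -> tuple[float, float]:
    """Correct the platform position at point B.
    (x0, y0): position of the last control point A; d: moving distance;
    alpha: dead-reckoned heading angle; dx: first (DMI/IMU) horizontal
    offset; dl: second (laser radar) horizontal offset; e_bar:
    average-error constraint value. Assumes d > 0."""
    if abs(dl - dx) > e_bar:
        # Accumulated DMI/IMU noise exceeds the laser radar error: calibrate.
        dx = dl
    # Stand-in for Kalman complementary filtering: blend the dead-reckoned
    # heading with the heading implied by the (possibly calibrated) offset.
    alpha_lidar = math.asin(max(-1.0, min(1.0, dx / d)))
    alpha_corr = blend * alpha + (1.0 - blend) * alpha_lidar
    dx_corr = d * math.sin(alpha_corr)          # corrected displacement Δx′
    return x0 + dx_corr, y0 + d * math.cos(alpha_corr)
```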
According to the tunnel detection and image processing method based on sequence images provided by the embodiments of the present invention, control points are arranged in the tunnel and multi-sensor combined positioning inside the tunnel is realized, so that high-precision positioning in the tunnel can be achieved even without GPS signals and in the absence of reference objects.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by means of hardware. Based on this understanding, the essence of the foregoing technical solution, or the part contributing to the prior art, may be embodied in the form of a software product stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk or an optical disk, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute the method described in the respective embodiments or in some parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (6)

1. A tunnel detection and image processing method based on sequence images, characterized by comprising the following steps:
acquiring sequence image data and point cloud data of any tunnel section based on each sensor, wherein each sensor comprises a laser radar with relatively fixed positions and a plurality of image sensors, and each sensor is rigidly connected to a multi-sensor integrated platform; the sequence image data are tunnel image data acquired by the plurality of image sensors; the point cloud data are tunnel surface profile data acquired by the laser radar;
determining a sequence image data processing result of any tunnel section by taking a multi-sensor integrated platform as a reference coordinate space based on the sequence image data and point cloud data of any tunnel section and the mapping function relation among the coordinate parameters of each sensor;
based on the positioning and attitude determination sensors fixed on the multi-sensor integrated platform, solving the position information and the attitude information of each sensor relative to a coordinate space taking a static tunnel as a reference, and determining the mapping function relation between the position of each sensor when triggering and collecting the section data of each tunnel and the coordinate parameters of the static tunnel based on the position information and the attitude information;
determining a tunnel data processing result by taking a static tunnel as a reference coordinate space based on a sequence image data processing result of each tunnel section and a mapping function relation between a position of each sensor when triggering and collecting data of each tunnel section and a static tunnel coordinate parameter, wherein the tunnel data processing result is effective image data for tunnel detection in the sequence image data;
the determining a sequence image data processing result of any tunnel section by taking the multi-sensor integrated platform as a reference coordinate space based on the sequence image data and the point cloud data of any tunnel section and the mapping function relation among the coordinate parameters of each sensor specifically comprises the following steps:
determining a spatial position relation between the sequence image data and the point cloud data of any tunnel section and the corresponding tunnel section based on the sequence image data and the point cloud data of any tunnel section and the mapping function relation among the coordinate parameters of each sensor;
determining a sequence image data processing result of any tunnel section by taking a multi-sensor integrated platform as a reference coordinate space based on the sequence image data and the point cloud data of any tunnel section and the spatial position relation corresponding to any tunnel section;
the determining a spatial position relation between the sequence image data and the point cloud data of any tunnel section and the corresponding tunnel section based on the sequence image data and the point cloud data of any tunnel section and the mapping function relation among the coordinate parameters of each sensor comprises:
determining the relative coordinates of each image sensor in a relative coordinate system taking the laser radar as an origin based on the spatial position relation between each image sensor and the laser radar;
determining an imaging center of each image sensor based on the relative coordinates;
based on the imaging center of each image sensor, determining the spatial position relation between the tunnel image data acquired by each image sensor and any tunnel section;
wherein the image sensor is a camera; the image sensor and the laser radar are arranged on the same plane;
the determining a sequence image data processing result of any tunnel section by taking the multi-sensor integrated platform as a reference coordinate space based on the sequence image data and the point cloud data of any tunnel section and the spatial position relation corresponding to any tunnel section comprises:
according to the spatial position relation between the tunnel image data collected by each image sensor and any tunnel section, projecting the tunnel image data collected by each image sensor onto any tunnel section to obtain a series of sequence images of the tunnel section with collinear image centers;
fusing the series of sequence images with collinear image centers of any tunnel section to obtain a fused image, wherein the fused image is the sequence image data processing result of the tunnel section.
2. The tunnel detection and image processing method based on sequence images according to claim 1, wherein the determining a sequence image data processing result of any tunnel section by taking the multi-sensor integrated platform as a reference coordinate space based on the sequence image data and the point cloud data of any tunnel section and the spatial position relation corresponding to any tunnel section further comprises:
performing image processing on the sequence image data based on the sequence image data and the point cloud data of any tunnel section, the mapping function relation among the coordinate parameters of each sensor and the tunnel profile information of any tunnel section;
wherein the tunnel profile information of any tunnel section is determined based on the point cloud data of any tunnel section.
3. The tunnel detection and image processing method based on sequence images according to claim 2, wherein the image processing of the sequence image data comprises:
image processing is performed on the sequence image data using at least one of positioning, registration and cropping.
4. The tunnel detection and image processing method based on sequence images according to claim 1, wherein the determining a tunnel data processing result by taking the static tunnel as a reference coordinate space comprises:
determining a spatial position relation between the sequence image data processing result of each tunnel section and a static tunnel based on the sequence image data processing result of each tunnel section and a mapping function relation between the position of each sensor when triggering and collecting the data of each tunnel section and the static tunnel coordinate parameter;
and determining a tunnel data processing result by taking the static tunnel as a reference coordinate space based on the sequence image data processing result of each tunnel section and the spatial position relation corresponding to the static tunnel.
5. The tunnel detection and image processing method based on sequence images according to claim 1, wherein the solving the position information and the attitude information of each sensor relative to the coordinate space with the static tunnel as reference based on the positioning and attitude determination sensors fixed on the multi-sensor integrated platform specifically comprises:
based on the positioning and attitude determination sensors fixed on the multi-sensor integrated platform, acquiring a moving distance and a heading angle of the multi-sensor integrated platform from the previous control point to the current position;
based on the position information of the previous control point, the moving distance and the heading angle, solving the position information and the attitude information of each sensor relative to the coordinate space with the static tunnel as reference;
wherein the control points are physical control points arranged in the tunnel or virtual control points determined by a navigation satellite system.
6. The tunnel detection and image processing method based on sequence images according to claim 5, wherein the solving the position information and the attitude information of each sensor relative to the coordinate space with the static tunnel as reference based on the position information of the previous control point, the moving distance and the heading angle further comprises:
correcting the position information of the multi-sensor integrated platform based on the first horizontal offset and the second horizontal offset;
wherein the first horizontal offset is determined based on the position information of the last control point, the moving distance, and the heading angle; the second horizontal offset is determined based on a cross-sectional distance of the multi-sensor integrated platform relative to a static tunnel.
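By way of illustration of the projection-and-fusion step recited in claim 1, the following minimal sketch assumes a precomputed section-to-pixel lookup table (standing in for the mapping function relation derived from the sensor calibration), grayscale images, and mean fusion of valid samples; it is a sketch under those assumptions, not the claimed implementation:

```python
import numpy as np

def project_to_section(image: np.ndarray, pix_map: np.ndarray) -> np.ndarray:
    """Sample one (grayscale) camera image onto the common tunnel-section
    grid. pix_map: (H, W, 2) integer (row, col) coordinates into `image`
    for every grid cell, with -1 marking cells this camera does not see;
    it is assumed to be precomputed from the sensor calibration."""
    h, w = pix_map.shape[:2]
    out = np.full((h, w), np.nan, dtype=np.float32)
    seen = (pix_map[..., 0] >= 0) & (pix_map[..., 1] >= 0)
    out[seen] = image[pix_map[..., 0][seen], pix_map[..., 1][seen]]
    return out

def fuse_sections(projected: list[np.ndarray]) -> np.ndarray:
    """Fuse the per-camera projections of one tunnel section by averaging
    the valid samples in each grid cell (NaN where no camera sees it)."""
    stack = np.stack(projected)        # (n_cameras, H, W)
    return np.nanmean(stack, axis=0)
```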
CN202010468183.5A 2020-05-28 2020-05-28 Tunnel detection and image processing method based on sequence images Active CN111707668B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010468183.5A CN111707668B (en) 2020-05-28 2020-05-28 Tunnel detection and image processing method based on sequence images

Publications (2)

Publication Number Publication Date
CN111707668A CN111707668A (en) 2020-09-25
CN111707668B (en) 2023-11-17

Family

ID=72538879

Country Status (1)

Country Link
CN (1) CN111707668B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112637594B (en) * 2020-12-23 2023-05-26 青岛大学 No-reference 3D point cloud quality assessment method based on bit stream
CN113376170B (en) * 2021-06-16 2023-01-10 博众精工科技股份有限公司 Calibration method and calibration block of product appearance detection equipment
CN114022370B (en) * 2021-10-13 2022-08-05 山东大学 Galvanometer laser processing distortion correction method and system
CN115343299B (en) * 2022-10-18 2023-03-21 山东大学 Lightweight highway tunnel integrated detection system and method
CN116358492B (en) * 2023-06-01 2023-08-04 辽宁省交通规划设计院有限责任公司 Tunnel intelligent detection device and method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006084385A1 (en) * 2005-02-11 2006-08-17 Macdonald Dettwiler & Associates Inc. 3d imaging system
JP5362639B2 (en) * 2010-04-12 2013-12-11 住友重機械工業株式会社 Image generating apparatus and operation support system
US10254395B2 (en) * 2013-12-04 2019-04-09 Trimble Inc. System and methods for scanning with integrated radar detection and image capture
US10816347B2 (en) * 2017-12-12 2020-10-27 Maser Consulting, Inc. Tunnel mapping system and methods

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008267843A (en) * 2007-04-17 2008-11-06 Tobishima Corp Tunnel face surface measuring system
CN103438823A (en) * 2012-12-27 2013-12-11 广州市地下铁道总公司 Tunnel section outline measuring method and device based on vision measurement
CN106053475A (en) * 2016-05-24 2016-10-26 浙江工业大学 Tunnel disease full-section dynamic rapid detection device based on active panoramic vision
CN106097348A (en) * 2016-06-13 2016-11-09 大连理工大学 A kind of three-dimensional laser point cloud and the fusion method of two dimensional image
CN107492069A (en) * 2017-07-01 2017-12-19 国网浙江省电力公司宁波供电公司 Image interfusion method based on more lens sensors
CN109029277A (en) * 2018-06-27 2018-12-18 常州沃翌智能科技有限公司 A kind of tunnel deformation monitoring system and method
CN109801216A (en) * 2018-12-20 2019-05-24 武汉武大卓越科技有限责任公司 The quick joining method of Tunnel testing image
CN109919839A (en) * 2019-01-18 2019-06-21 武汉武大卓越科技有限责任公司 A kind of tunnel graphic joining method
CN110827199A (en) * 2019-10-29 2020-02-21 武汉大学 Tunnel image splicing method and device based on guidance of laser range finder

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Loupos, K. et al. Autonomous robotic system for tunnel structural inspection and assessment. International Journal of Intelligent Robotics and Applications, Vol. 2, No. 1, 43-66 *
Wang Hengli et al. Discussion of an image stitching algorithm based on spatial mapping. Electro-Optic Technology Application, Vol. 33, No. 3, 35-38, 57 *
Yan Li; Cao Liang; Chen Changjun; Huang Liang. Research on a registration method for vehicle-mounted panoramic images and laser point cloud data. Bulletin of Surveying and Mapping, No. 3, 2015, 32-36 *
Lu Yi et al. Research on a registration method for high-resolution thermal images and visible-light images. Modern Surveying and Mapping, Vol. 41, No. 2, 24-26 *

Similar Documents

Publication Publication Date Title
CN111707668B (en) Tunnel detection and image processing method based on sequence images
JP6543520B2 (en) Survey data processing apparatus, survey data processing method and program for survey data processing
CN108692719B (en) Object detection device
CN111260615B (en) Laser and machine vision fusion-based method for detecting apparent diseases of unmanned aerial vehicle bridge
JP2009008662A (en) Object detection cooperatively using sensor and video triangulation
CN108802043B (en) Tunnel detection device, tunnel detection system and tunnel defect information extraction method
JP6460700B2 (en) Method for diagnosing whether there is a defect on the inner wall of the tunnel and a program for diagnosing the presence of a defect on the inner wall of the tunnel
WO2017119202A1 (en) Structure member specifying device and method
EP3032818B1 (en) Image processing device
JP5418176B2 (en) Pantograph height measuring device and calibration method thereof
CN105953741B (en) System and method for measuring local geometric deformation of steel structure
EP3155369B1 (en) System and method for measuring a displacement of a mobile platform
CA2669973A1 (en) System and method for inspecting the interior surface of a pipeline
JPWO2018042954A1 (en) In-vehicle camera, adjustment method of in-vehicle camera, in-vehicle camera system
JP5388921B2 (en) Three-dimensional distance measuring apparatus and method
CN113310987B (en) Tunnel lining surface detection system and method
JP2023029441A (en) Measuring device, measuring system, and vehicle
EP3648050B1 (en) Image compositing method, image compositing device, and recording medium
Feng et al. Crack assessment using multi-sensor fusion simultaneous localization and mapping (SLAM) and image super-resolution for bridge inspection
JP2009052907A (en) Foreign matter detecting system
JP6725675B2 (en) Self-position estimating device, self-position estimating method, program, and image processing device
CN111121714B (en) Method and system for measuring driving sight distance
Alkaabi et al. Application of A Drone camera in detecting road surface cracks: A UAE testing case study
JP2006317418A (en) Image measuring device, image measurement method, measurement processing program, and recording medium
Jutzi et al. Improved UAV-borne 3D mapping by fusing optical and laserscanner data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address (unchanged): 430223 No.6, 4th Road, Wuda Science Park, Donghu high tech Zone, Wuhan City, Hubei Province

Applicant after: Wuhan Wuda excellence Technology Co.,Ltd.
Applicant before: WUHAN WUDA ZOYON SCIENCE AND TECHNOLOGY Co.,Ltd.

Applicant after: Wuhan Optical Valley excellence Technology Co.,Ltd.
Applicant before: Wuhan Wuda excellence Technology Co.,Ltd.

GR01 Patent grant