CN114529808B - Pipeline detection panoramic shooting processing system and method - Google Patents
- Publication number: CN114529808B (application CN202210418233A)
- Authority
- CN
- China
- Legal status: Active
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8851—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B37/00—Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8851—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
- G01N2021/8887—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/32—Indexing scheme for image data processing or generation, in general involving image mosaicing
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A20/00—Water conservation; Efficient water supply; Efficient water use
- Y02A20/20—Controlling water pollution; Waste water treatment
Abstract
The invention provides a pipeline detection panoramic shooting processing system and method. The system comprises a main controller, a crawler, a camera, and an image processing subsystem: the main controller controls the crawler to advance, the camera is mounted on the crawler, the crawler advances through the pipeline at a constant speed, the camera shoots panoramic photos of the inner wall of the pipeline throughout the advance, and the image processing subsystem stitches the panoramic photos to generate an expanded panorama projected from the bottom to the top of the pipeline. The invention provides clearer and more intuitive picture data for disease snapshot, defect analysis, repair evaluation, and the like during pipeline detection, and reduces omissions caused by manual operation.
Description
Technical Field
The invention relates to the technical field of drainage pipeline detection, in particular to a panoramic shooting processing system and method for pipeline detection.
Background
After decades of rapid development, urban sewage treatment is gradually entering a stage of systematic defect and leakage checking and repair. A series of imbalances from the development process, such as emphasizing above-ground works over underground works, treatment plants over pipe networks, and water treatment over sludge treatment, are being corrected, and improving the quality and efficiency of sewage treatment has become a focus of the industry. However, problems such as leakage into and out of the pipes, misconnection and mixed connection of the pipe network, and unconnected trunk branches still commonly exist in sewage pipe network systems, keeping sewage treatment efficiency low. Optimizing the drainage system and improving the quality of the drainage pipe network are therefore among the keys to this quality and efficiency improvement. At present, a four-in-one investigation method is adopted for the drainage pipe network, combining surveying and mapping, investigation, detection, and evaluation: surveying and mapping checks the basic condition of the drainage pipe network; investigation uncovers problems such as direct sewage discharge, mixed rain and sewage connection, overflow pollution, surface water backflow, and external water infiltration, as well as the connection conditions of household drainage pipes; detection identifies structural defects and functional defects of the pipelines and inspection wells; and evaluation collates the problems found in the drainage pipe network system into a rectification item list, providing a basis for subsequent rectification work.
The detection modes of the drainage pipeline commonly used at present comprise:
1) television detection (a method for detecting a pipeline by adopting a closed circuit television system, which is called CCTV detection for short);
2) sonar detection (a method for detecting the conditions below the water surface in a pipeline by adopting a sound wave detection technology);
3) pipeline periscope detection (a method for detecting a pipeline in an inspection well by adopting the pipeline periscope, which is called QV detection for short).
CCTV detection shooting equipment work flow:
1) After the robot enters the pipeline, it is controlled by a detection operator to crawl along the pipeline. While the robot moves, the camera records the scene inside the pipeline; the camera must stay level and pointed forward, with the shooting angle and focal length unchanged throughout, and the operator watches the video transmitted in real time. The recording must not be paused, captured discontinuously, or spliced.
2) When an internal defect of the pipeline is found through the real-time video, the robot stops moving forward, stays for at least 10 seconds at a position from which the defect can be fully analyzed, and shoots the defect close-up.
3) The operator identifies structural/functional defects and special structures in the pipeline from the real-time video, fills in an original record sheet, and preliminarily judges and records the name, grade, and distance of each defect to form the raw data.
4) After on-site detection is finished, a data processor rechecks the detection video against the raw data, captures high-definition pictures of the structural/functional defect positions in the pipeline, annotates the length of each defect and its clock position on the pipe circumference, and compiles the detection and evaluation report.
Operated by detection personnel skilled in the technique, the existing CCTV detection equipment can produce complete and clear video of the inside of a pipeline section. However, the existing CCTV detection equipment is manually controlled: neither the crawl speed nor the stability of the equipment can be guaranteed to remain steady throughout a detection, lateral capture of in-pipe defects depends on the field operator noticing them visually, and the shooting angle, shooting duration, and image clarity are limited by the operator's shooting technique. Consequently, when field personnel cannot be relied on to use the instruments skillfully and to work rigorously, the quality of the captured video data is uneven, which affects subsequent pipeline defect interpretation, report preparation, and repair evaluation.
Disclosure of Invention
The invention aims to provide a pipeline detection panoramic shooting processing method that addresses the difficulties of the existing pipeline detection process: defects are hard to capture, the shot videos are large and hard to store and transmit, and manual review is labor-intensive. The method improves pipeline detection accuracy and working efficiency while saving labor cost.
The invention discloses a panoramic shooting processing system for pipeline detection, comprising a main controller, a crawler, a camera, and an image processing subsystem. The main controller controls the crawler to advance, the camera is mounted on the crawler, the crawler advances through the pipeline at a constant preset speed, the camera shoots panoramic photos of the inner wall of the pipeline throughout the advance, and the image processing subsystem stitches the panoramic photos to generate an expanded panorama projected from the bottom to the top of the pipeline;
the image processing subsystem comprises an image preprocessing module, a feature extraction module, a target detection and identification module, an enhanced space transformation module and an image splicing module,
the image preprocessing module is used for preprocessing the pictures shot by the camera, reducing noise and enhancing the pictures,
the feature extraction module is used for extracting visual features from the preprocessed pictures,
the object detection and identification module is used for detecting and identifying the same object in the pictures acquired by different cameras,
the enhanced spatial transform module is used for calculating the spatial coordinate mapping relation of the same target in different cameras,
and the image splicing module is used for splicing the images after the enhanced spatial transformation to generate an expanded panoramic image projected from the bottom to the top of the pipeline.
Furthermore, the camera comprises three panoramic cameras mounted at equal angular intervals, with illumination lamps of adjustable light intensity at the front end. When a pipeline is detected, the position of the camera is adjusted according to the pipe diameter so that the camera at the front end of the crawler is always kept on the axis of the pipeline. During shooting, the panoramic cameras automatically adjust the shooting focus according to the environmental factors inside the pipeline; while the crawler advances at a constant speed, the three panoramic cameras each shoot panoramic photos of part of the inner surface of the pipeline wall from a different direction, with a 15-degree overlap between every two adjacent photos.
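The overlap geometry can be checked with simple arithmetic (a sketch under the stated assumptions: three cameras jointly covering the full 360 degrees of the pipe circumference, each adjacent pair of photos overlapping by 15 degrees):

```python
# Three cameras tile the 360-degree circumference; each pair of adjacent
# photos shares a 15-degree overlap, so each camera's field of view is its
# 120-degree sector plus half an overlap (7.5 degrees) on each side.
NUM_CAMERAS = 3
OVERLAP_DEG = 15

sector = 360 / NUM_CAMERAS              # 120 degrees of unique coverage
fov_per_camera = sector + OVERLAP_DEG   # 120 + 7.5 + 7.5 = 135 degrees

# The total captured arc counts every overlap twice:
total_captured = NUM_CAMERAS * fov_per_camera  # 405 degrees
redundancy = total_captured - 360              # 45 degrees of overlap in total

print(fov_per_camera, redundancy)  # 135.0 45.0
```

The 45 degrees of redundant arc (three 15-degree seams) is what gives the stitching module common image content to match across cameras.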
Further, the feature extraction module consists of the feature extraction module RCNNB of Faster RCNN and a channel attention module CHAB. The preprocessed photos are processed separately by RCNNB and by CHAB, obtaining the RCNNB output feature F_R and the CHAB output feature F_C; the two are combined to form a visual feature F with stronger characterization capability.
Further, the channel attention module consists of global average pooling, a convolutional layer, a Mish activation function, and a Sigmoid function; the data processing of the channel attention module CHAB can be written as:
A = Sigmoid(Mish(Conv1D(GAP(F)))),
where A represents the channel attention module output, F is the image block feature, Sigmoid represents the Sigmoid function, Mish represents the Mish activation function, Conv1D represents a 1-dimensional convolution operation, and GAP represents global average pooling.
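As a concrete reading of this data flow, here is a minimal NumPy sketch of a CHAB-style channel attention (an illustrative reconstruction with a placeholder kernel and tensor sizes, not the patented implementation):

```python
import numpy as np

def mish(x):
    # Mish activation: x * tanh(softplus(x))
    return x * np.tanh(np.log1p(np.exp(x)))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def chab(feature, conv_kernel):
    """Channel attention: GAP -> Conv1D -> Mish -> Sigmoid -> channel rescale.

    feature: (C, H, W) image-block feature; conv_kernel: (k,) 1-D kernel
    slid across the channel dimension.
    """
    gap = feature.mean(axis=(1, 2))                    # global average pool -> (C,)
    conv = np.convolve(gap, conv_kernel, mode="same")  # 1-D conv across channels
    attn = sigmoid(mish(conv))                         # per-channel weight in (0, 1)
    return feature * attn[:, None, None]               # channel-wise reweighting

rng = np.random.default_rng(0)
f = rng.standard_normal((8, 4, 4))
out = chab(f, np.array([0.25, 0.5, 0.25]))
print(out.shape)  # (8, 4, 4)
```

Because the attention weights are squashed through the Sigmoid, every channel of the output is a damped copy of the input channel, which is what lets the combined feature emphasize informative channels without changing the feature's shape.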
Further, the target detection and identification module follows the Faster RCNN network structure: the input visual feature (feature) first passes through a 3x3 convolution (conv3, a convolution operation with a convolution kernel of 3) activated by relu, the linear rectification unit activation function; a 1x1 convolution (conv1, a convolution operation with a convolution kernel of 1) branch followed by reshape (a warping operation) and softmax scores candidate regions, while a parallel conv1 branch predicts box offsets; a region candidate (proposal) operation combines these outputs with the image information imginfo; a region-of-interest pooling operation (ROIPOOL) and fully connected (FC) operations then output, via softmax, the probability p that a candidate target belongs to a specific class and, via FC regression, the regression offset t of the target candidate box;
when the probability p of a candidate target is higher than a preset value, the corresponding target is recorded and marked as U, with corresponding visual feature f_U; the target V corresponding to target U is identified according to a cosine similarity criterion, with corresponding visual feature f_V.
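The cosine-similarity matching of target U to its counterpart V in another camera can be sketched as follows (the feature dimensions and the similarity threshold are illustrative assumptions):

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine of the angle between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_target(feat_u, candidate_feats, threshold=0.8):
    """Return the index of the candidate whose feature best matches feat_u,
    or None if no candidate clears the similarity threshold."""
    sims = [cosine_sim(feat_u, f) for f in candidate_feats]
    best = int(np.argmax(sims))
    return best if sims[best] >= threshold else None

u = np.array([1.0, 2.0, 3.0])
candidates = [np.array([-1.0, 0.0, 1.0]),
              np.array([2.0, 4.0, 6.1]),   # nearly parallel to u: best match
              np.array([3.0, 2.0, 1.0])]
print(match_target(u, candidates))  # 1
```

Cosine similarity ignores feature magnitude, so the same physical target seen under different lighting or scale by two cameras can still be matched by feature direction alone.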
Further, the enhanced spatial transform module comprises a perceptual visual feature extraction module and a spatial coordinate transform module, wherein,
the perceptual visual feature extraction module is used for extracting the input visual features for spatial coordinate transformation and constructing effective visual features in combination with channel perception;
the spatial coordinate transform module is used for performing spatial coordinate extraction, coordinate mapping, and pixel sampling on the visual features output by the perceptual visual feature extraction module;
let f_s denote the visual feature of the s-th camera's target U in the target detection and identification module, and let F denote the visual output after f_s passes through three convolution modules, F = ConvB(ConvB(ConvB(f_s))), where ConvB represents a convolution operation;
the channel perception then reweights F channel by channel:
F' = F ⊗ Sigmoid(FC(Relu(FC(GAP(F))))),
where Sigmoid represents the Sigmoid function, Relu represents the linear rectification unit activation function, FC represents a fully connected operation, GAP represents a global average pooling operation, and ⊗ represents channel-level multiplication;
the spatial coordinate transform module extracts spatial coordinates from the visual feature F' and outputs the spatial coordinate parameter θ, which can be expressed as:
θ = FC(Relu(FC(CMRB3(CMRB2(CMRB1(F')))))),
where FC is the fully connected operation, Relu denotes the linear rectification unit activation function, and CMRB1, CMRB2, CMRB3 represent three convolution-pooling-activation modules;
the spatial coordinate transform module then performs coordinate mapping and pixel sampling with the spatial coordinate parameter θ, completing the transformation from the coordinates of target U to the corresponding deformed target V:
V = Sample(U, Map(θ)),
where Map represents the coordinate mapping function and Sample represents the pixel sampling function.
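Conceptually, the Map/Sample pair behaves like the resampling step of a spatial transformer. A minimal sketch with an affine coordinate parameter and nearest-neighbor sampling (an illustrative stand-in for the patented Map and Sample functions):

```python
import numpy as np

def map_coords(theta, height, width):
    """Map every output pixel through a 2x3 affine parameter theta,
    returning the (x, y) source coordinate for each output pixel."""
    ys, xs = np.mgrid[0:height, 0:width]
    ones = np.ones_like(xs)
    coords = np.stack([xs, ys, ones], axis=-1)  # (H, W, 3) homogeneous coords
    return coords @ theta.T                     # (H, W, 2) source coords

def sample(image, mapped):
    """Nearest-neighbor pixel sampling at the mapped source coordinates."""
    h, w = image.shape
    xs = np.clip(np.rint(mapped[..., 0]).astype(int), 0, w - 1)
    ys = np.clip(np.rint(mapped[..., 1]).astype(int), 0, h - 1)
    return image[ys, xs]

img = np.arange(16, dtype=float).reshape(4, 4)
identity = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
out = sample(img, map_coords(identity, 4, 4))
print(np.array_equal(out, img))  # True
```

With the identity parameter the target is reproduced unchanged; a learned θ would instead warp target U's pixels onto the geometry of its deformed counterpart V in the neighboring camera's view.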
The invention also discloses a pipeline detection panoramic shooting processing method based on the above pipeline detection panoramic shooting processing system, comprising the following steps:
step 1: the main controller controls the crawler to enter the pipeline, the initial position is determined, and the distance measuring instrument on the crawler returns to zero;
step 2: the crawler advances at a constant speed in the pipeline at a preset speed, the camera automatically focuses and shoots panoramic pictures of the inner wall of the pipeline during advancing, and the pictures are shot at a preset frequency;
step 3: the image processing subsystem synchronously splices the panoramic photos and, according to the advancing speed of the crawler and the shooting time, automatically calculates and marks position coordinates on the panoramic expansion picture of the inner wall of the pipeline;
step 4: after shooting is finished, the main controller sends a return instruction to the crawler.
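The position marking in step 3 reduces to constant-speed kinematics: the axial coordinate of each photo is the crawler speed times the elapsed time since the rangefinder was zeroed. A minimal sketch (the speed and shooting-frequency values are illustrative assumptions, not from the patent):

```python
# The crawler advances at a constant preset speed and photos are taken at a
# fixed frequency, so the axial position of the n-th photo follows directly.
SPEED_M_PER_S = 0.2   # assumed constant crawl speed
SHOT_FREQ_HZ = 2.0    # assumed shooting frequency

def photo_position_m(shot_index):
    """Axial distance (metres) from the zeroed start for the n-th photo."""
    elapsed_s = shot_index / SHOT_FREQ_HZ
    return round(SPEED_M_PER_S * elapsed_s, 6)  # rounded to avoid float dust

positions = [photo_position_m(n) for n in range(5)]
print(positions)  # [0.0, 0.1, 0.2, 0.3, 0.4]
```

These per-photo positions are what the stitching module writes onto the expanded panorama so that any defect on the output image can be located along the pipe.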
Further, the step 3 comprises:
step 301: the image preprocessing module denoises and enhances the picture shot by each camera into a picture q, where the value of the i-th pixel is a local linear function of the input:
q_i = a_k·I_i + b_k, for every pixel i in the local window w_k,
where I represents the input picture taken by the camera, the parameters a_k and b_k are the linear coefficients of the k-th local window w_k, and i denotes the pixel coordinates of the picture I;
step 302: the picture q is processed separately by the feature extraction module RCNNB of Faster RCNN and by the channel attention module CHAB, obtaining the RCNNB output feature F_R and the CHAB output feature F_C, which are combined into a new visual feature F with strong characterization ability;
step 303: the target detection and identification module extracts visual features from the pictures taken by the different cameras at the same moment and identifies the same target among them; the module follows the Faster RCNN network structure, in which conv1 denotes a convolution operation with a convolution kernel of 1, conv3 denotes a convolution operation with a convolution kernel of 3, relu denotes the linear rectification unit activation function, feature denotes the input visual feature, imginfo denotes the image information, reshape is a warping operation, proposal denotes the region candidate operation, ROIPOOL is the region-of-interest pooling operation, softmax denotes the softmax function, and FC is the fully connected operation; the module outputs the regression offset t of each target candidate box and the probability p that the candidate target belongs to a specific class;
when the probability p of a candidate target is higher than a preset value, the corresponding target is recorded and marked as U, with corresponding visual feature f_U; the target V corresponding to target U is identified according to a cosine similarity criterion, with corresponding visual feature f_V;
Step 304: the enhanced spatial transform module uses the visual features of the same target to calculate the spatial coordinate mapping relation of that target across the different cameras;
step 305: after the target coordinate transformation is completed, the image stitching module stitches and fuses the images collected by the cameras into the panorama according to the corresponding coordinates.
Further, in step 301, the values of the parameters a_k and b_k are obtained by the Lagrange multiplier method:
a_k = σ_k^2 / (σ_k^2 + ε), b_k = (1 - a_k)·μ_k,
where μ_k and σ_k are respectively the mean and the standard deviation of the k-th window w_k, and ε is a constraint parameter;
the effective parameters a_k and b_k are those that make the reconstructed pixels of the whole local region as close as possible to the original pixels, i.e. that minimize the energy sum of the pixel differences over the local region:
E(a_k, b_k) = Σ_{i∈w_k} ((a_k·I_i + b_k - I_i)^2 + ε·a_k^2),
where the term ε·a_k^2 constrains a_k from becoming too large.
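These closed-form coefficients match the classic self-guided (edge-preserving) filter. A compact NumPy sketch over non-overlapping windows (an illustrative simplification; practical guided filtering uses overlapping windows and averages the coefficients per pixel):

```python
import numpy as np

def self_guided_filter(I, win=4, eps=0.04):
    """Edge-preserving smoothing: q = a_k * I + b_k per window, with
    a_k = var_k / (var_k + eps) and b_k = (1 - a_k) * mean_k."""
    h, w = I.shape
    q = np.empty_like(I, dtype=float)
    for y in range(0, h, win):
        for x in range(0, w, win):
            block = I[y:y + win, x:x + win]
            mu, var = block.mean(), block.var()
            a = var / (var + eps)   # near 1 on edges (high variance), near 0 on flat areas
            b = (1.0 - a) * mu
            q[y:y + win, x:x + win] = a * block + b
    return q

flat = np.full((8, 8), 0.5)
print(np.allclose(self_guided_filter(flat), flat))  # True
```

On a flat region the variance is zero, so a_k = 0 and each pixel is replaced by the window mean (here identical to the input), while high-variance edge windows pass through almost unchanged; this is the noise-suppression-with-detail-preservation behavior the preprocessing module relies on.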
Further, the enhanced spatial transform module comprises a perceptual visual feature extraction module and a spatial coordinate transform module, wherein,
the perceptual visual feature extraction module is used for extracting the input visual features for spatial coordinate transformation and constructing effective visual features in combination with channel perception;
the spatial coordinate transform module is used for performing spatial coordinate extraction, coordinate mapping, and pixel sampling on the visual features output by the perceptual visual feature extraction module;
let f_s denote the visual feature of the s-th camera's target U in the target detection and identification module, and let F denote the visual output after f_s passes through three convolution modules, F = ConvB(ConvB(ConvB(f_s))), where ConvB represents a convolution operation;
the channel perception then reweights F channel by channel:
F' = F ⊗ Sigmoid(FC(Relu(FC(GAP(F))))),
where Sigmoid represents the Sigmoid function, Relu represents the linear rectification unit activation function, FC represents a fully connected operation, GAP represents a global average pooling operation, and ⊗ represents channel-level multiplication;
the spatial coordinate transform module extracts spatial coordinates from the visual feature F' and outputs the spatial coordinate parameter θ, which can be expressed as:
θ = FC(Relu(FC(CMRB3(CMRB2(CMRB1(F')))))),
where FC is the fully connected operation, Relu denotes the linear rectification unit activation function, and CMRB1, CMRB2, CMRB3 represent three convolution-pooling-activation modules;
the spatial coordinate transform module then performs coordinate mapping and pixel sampling with the spatial coordinate parameter θ, completing the transformation from the coordinates of target U to the corresponding deformed target V:
V = Sample(U, Map(θ)),
where Map represents the coordinate mapping function and Sample represents the pixel sampling function.
Compared with the prior art, the invention has the beneficial effects that:
1. Panoramic photos of the pipeline inner wall replace video, avoiding defects being missed because of tiny pipeline diseases or carelessness of detection personnel when the detection video is shot with manually operated equipment;
2. misjudgment and missed judgment of pipeline diseases by auditors, caused by short shooting time, unclear images, and the like in the detection video, are avoided;
3. the equipment-operation requirements on field detection personnel are lowered, the difficulty for auditors in interpreting pipeline diseases is reduced, and labor input and personnel training costs are cut;
4. storage cost is reduced and the quality of detection data is improved, facilitating the subsequent compilation of detection reports and repair schemes;
5. regular monitoring and comprehensive maintenance of the urban underground utility tunnel can be completed with a small amount of manpower.
Drawings
FIG. 1 is a diagram of an exemplary operation of the disclosed system;
FIG. 2 is an enlarged schematic view of a camera disclosed herein;
FIG. 3 is a flow chart of an image stitching network architecture disclosed herein;
FIG. 4 is a schematic diagram of a channel attention module network disclosed in the present invention;
FIG. 5 is a schematic diagram of a network structure of an object detection and identification module according to the present disclosure;
FIG. 6 is a diagram of an exemplary network architecture for an enhanced spatial transform module according to the present disclosure;
FIG. 7 is an exemplary illustration of a photo effort disclosed herein.
Detailed Description
The invention is further described below with reference to the accompanying drawings:
as shown in fig. 1, the present invention discloses a pipeline detection panoramic shooting processing system, which comprises: the system comprises a main controller 1, a crawler 3, a camera 2 and an image processing subsystem. The main controller 1 is used for controlling the crawler 3 to advance, the camera 2 is installed on the crawler 3, the crawler 3 advances at a constant speed in a pipeline at a preset speed, the camera 2 shoots panoramic photos of the inner wall of the pipeline in the advancing process, the image processing subsystem carries out splicing processing on the panoramic photos, and an expansion panoramic image projected from the bottom to the top of the pipeline is generated.
In this embodiment, as shown in fig. 2, the camera 2 adopts three panoramic shooting cameras mounted at equal angular intervals, and the front end of the camera 2 is provided with a lighting lamp with adjustable light intensity. When the pipeline is detected, the position of the camera 2 is adjusted according to the pipe diameter so that the camera 2 at the front end of the crawler 3 is always kept on the axis of the pipeline; during shooting, the panoramic camera 2 automatically adjusts the shooting focus according to the environmental factors inside the pipeline. While the crawler 3 advances at a constant speed, the three panoramic cameras 2 each shoot panoramic photos of part of the inner surface of the pipeline wall from a different direction, with a 15-degree overlap between every two adjacent photos. The camera 2 is connected to the crawler 3 through a gimbal stabilizer, so that pictures taken while the crawler 3 is in a bumpy state remain complete and clear.
As shown in fig. 3, the image processing subsystem includes an image preprocessing module, a feature extraction module, a target detection and identification module, an enhanced spatial transform module, and an image stitching module.
The image preprocessing module is used for denoising and enhancing the pictures shot by the cameras. In this embodiment, enhanced guided filtering is used: it both suppresses the noise introduced during shooting and enhances the detail features of the image. The image preprocessing module preprocesses the photo so that the pixel value of the i-th pixel of the output photo q is expressed as a linear function of the i-th pixel of the input image I within each local window $\omega_k$ containing it:

$$q_i = a_k I_i + b_k, \quad \forall i \in \omega_k$$

where the parameters $a_k$ and $b_k$ are the linear coefficients of the k-th local window $\omega_k$, and i denotes the pixel coordinate of the photo I.
In order to obtain effective parameters, the reconstructed pixels of the whole local region must be as close as possible to the original pixels, i.e. the energy of the pixel differences over the window, together with a constraint term, is minimized:

$$E(a_k, b_k) = \sum_{i \in \omega_k} \left[ (a_k I_i + b_k - I_i)^2 + \epsilon a_k^2 \right]$$
In order to obtain the effective parameter values, the Lagrange multiplier method yields the parameters $a_k$ and $b_k$:

$$a_k = \frac{\sigma_k^2}{\sigma_k^2 + \epsilon}, \qquad b_k = (1 - a_k)\,\mu_k$$

where $\mu_k$ and $\sigma_k$ are respectively the mean and standard deviation of the k-th window $\omega_k$, and $\epsilon$ is a constraint parameter.
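The guided-filter preprocessing above can be sketched in a few lines of NumPy. This is a minimal illustration of the standard self-guided formulation ($a_k = \sigma_k^2/(\sigma_k^2+\epsilon)$, $b_k=(1-a_k)\mu_k$), not the patent's exact implementation; the function names and the default r and eps values are assumptions:

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window, edge-padded, via a summed-area table."""
    k = 2 * r + 1
    p = np.pad(img.astype(np.float64), r, mode='edge')
    c = np.pad(np.cumsum(np.cumsum(p, axis=0), axis=1), ((1, 0), (1, 0)))
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def guided_denoise(I, r=2, eps=0.01):
    """Self-guided filter: q_i = a_k*I_i + b_k, with the coefficients of all
    windows covering a pixel averaged, as in the standard guided filter."""
    I = I.astype(np.float64)
    mu = box_mean(I, r)                      # window means  mu_k
    var = box_mean(I * I, r) - mu * mu       # window variances  sigma_k^2
    a = var / (var + eps)                    # a_k = sigma^2 / (sigma^2 + eps)
    b = (1.0 - a) * mu                       # b_k = (1 - a_k) * mu_k
    return box_mean(a, r) * I + box_mean(b, r)
```

A small eps flattens low-variance noise while leaving high-variance edges nearly untouched; a larger eps smooths more aggressively.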
The feature extraction module is used for extracting visual features of the picture after image preprocessing. The feature extraction module FAB consists of a feature extraction module RCNNB in the faster RCNN and a channel attention module CHAB.
The channel attention module consists of Global Average Pooling (GAP), a 1-dimensional convolution layer (Conv1D), the Mish activation function, and the Sigmoid function $\sigma$. The network flow structure is shown in fig. 4, and the formula is:

$$A = \sigma(\mathrm{Mish}(\mathrm{Conv1D}(\mathrm{GAP}(F))))$$

where $A$ is the channel attention module output and $F$ is the image block feature. The Mish function takes the form:

$$\mathrm{Mish}(x) = x \cdot \tanh(\ln(1 + e^x))$$

where x denotes the output feature after the 1-dimensional convolution operation, tanh is the hyperbolic tangent function, and ln is the logarithm with base e.
The channel attention output is combined with the feature extraction output $F_s^{RCNNB}$ of the faster RCNN to form a new visual feature with strong characterization capability, the formula being:

$$F_s = A_s \otimes F_s^{RCNNB}, \quad s = 1, 2, 3$$

where $\otimes$ represents channel-level multiplication and s indexes the three panoramic cameras installed at equal intervals.
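A NumPy sketch of the CHAB branch (GAP, then a 1-D convolution across channels, then Mish, then Sigmoid) and of the channel-level multiplication with the backbone feature; the kernel size, the averaging stand-in for the learned Conv1D weights, and the function names are illustrative assumptions, since the patent does not give them:

```python
import numpy as np

def mish(x):
    """Mish(x) = x * tanh(ln(1 + e^x))."""
    return x * np.tanh(np.log1p(np.exp(x)))

def chab_attention(F, k=3):
    """Per-channel attention weights in (0, 1) for a (C, H, W) feature map."""
    g = F.mean(axis=(1, 2))                          # GAP over spatial dims -> (C,)
    kern = np.full(k, 1.0 / k)                       # stand-in Conv1D weights
    conv = np.convolve(np.pad(g, k // 2, mode='edge'), kern, mode='valid')
    return 1.0 / (1.0 + np.exp(-mish(conv)))         # Sigmoid(Mish(Conv1D(GAP)))

def fab_feature(F_rcnnb, F_block):
    """Channel-level multiplication of backbone features by CHAB attention."""
    w = chab_attention(F_block)
    return F_rcnnb * w[:, None, None]
```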
The target detection and identification module is used for detecting the same target in the visual features and marking it. As shown in fig. 5, the network structure can be represented as:

$$[t_{reg},\, p_{cls}] = \mathrm{FC}(\mathrm{ROIPool}(\mathrm{Proposal}(\mathrm{softmax}(\mathrm{conv1}(\mathrm{conv3}(\mathrm{feature}))),\, img_{info})))$$

where conv1 denotes a convolution operation with kernel size 1, conv3 a convolution operation with kernel size 3, relu the linear rectification unit activation function, feature the input visual feature of the target detection and identification module, $img_{info}$ the image information, reshape a reshaping operation, Proposal the region candidate operation, ROIPool region-of-interest pooling, softmax the softmax function, and FC a fully connected operation; $t_{reg}$ is the regression offset of the target candidate box and $p_{cls}$ the probability that the candidate target belongs to a particular class.
In the present embodiment, when the class probability $p_{cls}$ exceeds a preset threshold, the corresponding target is recorded and marked as U, with corresponding visual feature $F_U$. The target V of another camera corresponding to target U is then identified according to the cosine similarity criterion, with corresponding visual feature $F_V$.
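Cross-camera matching by the cosine similarity criterion can be sketched as follows; the feature vectors are assumed to be flattened target descriptors, and the names are hypothetical:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two feature vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def match_target(feat_u, candidate_feats):
    """Return (index, similarity) of the candidate most similar to target U."""
    sims = [cosine_similarity(feat_u, c) for c in candidate_feats]
    best = int(np.argmax(sims))
    return best, sims[best]
```

In practice a minimum-similarity threshold would also be applied so that targets visible in only one camera are not forced to match.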
The enhanced spatial transformation module comprises a perception visual feature extraction module and a spatial coordinate transformation module. The perception visual feature extraction module is used for extracting the input visual features for spatial coordinate transformation and constructing effective visual features in combination with channel perception. The spatial coordinate transformation module is used for performing spatial coordinate extraction, coordinate mapping, and pixel sampling on the target features output by the perception visual feature extraction module. The perception visual feature extraction can be expressed as:

$$F' = \mathrm{ConvB}_3(\mathrm{ConvB}_2(\mathrm{ConvB}_1(F_s^U)))$$
$$F_{out} = \sigma(\mathrm{FC}(\mathrm{Relu}(\mathrm{FC}(\mathrm{GAP}(F'))))) \otimes F'$$

where $F_s^U$ represents the visual feature of the target of the s-th camera in the target detection and identification module, $F'$ the visual output after the three convolution blocks (ConvB denotes a convolution operation), $\sigma$ the Sigmoid function, Relu the linear rectification unit activation function, FC a fully connected operation, GAP a global average pooling operation, and $\otimes$ channel-level multiplication;
the spatial coordinate transformation module extracts spatial coordinates from the visual feature $F_{out}$ and outputs the spatial coordinate parameters $\theta$, which can be expressed as:

$$\theta = \mathrm{FC}(\mathrm{Relu}(\mathrm{FC}(\mathrm{CMRB}_3(\mathrm{CMRB}_2(\mathrm{CMRB}_1(F_{out}))))))$$

where FC is a fully connected operation, Relu denotes the linear rectification unit activation function, and $\mathrm{CMRB}_1$, $\mathrm{CMRB}_2$, $\mathrm{CMRB}_3$ represent 3 convolution-pooling-activation modules;
the spatial coordinate transformation module performs coordinate mapping and pixel sampling with the spatial coordinate parameters $\theta$ to complete the transformation from the coordinates of target U to those of the corresponding deformed target V, expressed as:

$$V = \mathrm{Sample}(\mathrm{Map}(\theta, U))$$

where Map represents the coordinate mapping function and Sample the pixel sampling function; in this embodiment Sample adopts bilinear interpolation.
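The Map (coordinate mapping) and Sample (bilinear interpolation) steps can be illustrated with an affine warp; the 2x3 matrix theta below is a hypothetical stand-in for the learned spatial coordinate parameters, whose exact form the patent leaves unspecified:

```python
import numpy as np

def warp_affine_bilinear(img, theta):
    """Map: theta (2x3) sends each output pixel (x, y, 1) to a source location;
    Sample: the source is then read with bilinear interpolation."""
    H, W = img.shape
    ys, xs = np.mgrid[0:H, 0:W]
    grid = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])
    src = theta @ grid                                   # (2, H*W) source coords
    sx, sy = src[0].reshape(H, W), src[1].reshape(H, W)
    x0 = np.clip(np.floor(sx).astype(int), 0, W - 2)     # clamp to valid cells
    y0 = np.clip(np.floor(sy).astype(int), 0, H - 2)
    wx = np.clip(sx - x0, 0.0, 1.0)
    wy = np.clip(sy - y0, 0.0, 1.0)
    top = (1 - wx) * img[y0, x0] + wx * img[y0, x0 + 1]
    bot = (1 - wx) * img[y0 + 1, x0] + wx * img[y0 + 1, x0 + 1]
    return (1 - wy) * top + wy * bot
```

With theta equal to the identity `[[1, 0, 0], [0, 1, 0]]` the warp reproduces the input; a trained localization network would supply theta instead.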
As shown in fig. 1 to 7, based on the pipeline detection panoramic photography processing system, the invention also discloses a pipeline detection panoramic photography processing method, which comprises the following steps:
step 1: the main controller 1 controls the crawler 3 to enter a pipeline, determines an initial position, and controls a distance meter on the crawler 3 to return to zero;
Step 2: the crawler 3 advances through the pipeline at a constant preset speed, and the camera 2 automatically focuses and shoots panoramic pictures of the pipeline inner wall while advancing;
Step 3: the image processing subsystem synchronously splices the photos shot by the three panoramic cameras and, according to the travelling speed and shooting time of the crawler, automatically calculates and marks position coordinates on the panoramic expansion image of the pipeline inner wall. As shown in fig. 3, taking the splicing of two of the three panoramic cameras as an example, I1 and I2 represent pictures collected by the cameras from two different directions, and I12 is the picture obtained by splicing the two; the specific steps are as follows:
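Marking position coordinates from the travelling speed and shooting time amounts to a constant-speed distance calculation; a trivial sketch (the function name and units are assumptions):

```python
def axial_positions(speed_m_per_s, shot_times_s, start_offset_m=0.0):
    """Axial position (metres) of each photo along the pipe, assuming the
    crawler moves at constant speed and the rangefinder was zeroed at entry."""
    return [start_offset_m + speed_m_per_s * t for t in shot_times_s]
```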
Step 301: the image preprocessing module denoises the pictures shot by the three cameras and enhances them into q; the value of the i-th pixel is expressed linearly within each local window $\omega_k$ of the input image I:

$$q_i = a_k I_i + b_k, \quad \forall i \in \omega_k$$

where the parameters $a_k$ and $b_k$ are the linear coefficients of the k-th local window $\omega_k$ and i denotes the pixel coordinate of the photo I. To obtain effective parameters, the reconstructed pixels of the whole local region must be as close as possible to the original pixels, i.e. the energy of the pixel differences over the window, together with a constraint term, is minimized:

$$E(a_k, b_k) = \sum_{i \in \omega_k} \left[ (a_k I_i + b_k - I_i)^2 + \epsilon a_k^2 \right]$$

To obtain the effective parameter values, the Lagrange multiplier method yields the parameters $a_k$ and $b_k$:

$$a_k = \frac{\sigma_k^2}{\sigma_k^2 + \epsilon}, \qquad b_k = (1 - a_k)\,\mu_k$$

where $\mu_k$ and $\sigma_k$ are respectively the mean and standard deviation of the k-th window $\omega_k$, and $\epsilon$ is a constraint parameter.
Step 302: the photo q output after image preprocessing respectively outputs visual characteristics through a characteristic extraction module RCNNB in the faster RCNNAnd channel attention module CHAB output visual characteristics。
Wherein the content of the first and second substances,the channel attention module data is represented as,is the characteristic of the image block,representing Sigmoid function, Mish representing Mish activation functionConv1D represents a 1-dimensional convolution operation, GAP represents global average pooling;
wherein x represents an output characteristic after the 1-dimensional convolution operation,is a function of the hyperbolic tangent,representing a logarithmic function based on a constant e.
Characterizing RCNNB module outputsAnd channel attention module CHAB output characteristicsThe two are combined to form visual characteristics with stronger characterization capabilityThe formula is as follows:
whereinRepresenting channel level multiplication. The images collected by the other two cameras adopt the same steps to obtain corresponding visual characteristicsAnd。
Step 303: the target detection and identification module extracts, from the visual features $F_1$, $F_2$ and $F_3$ of step 302, the targets appearing in more than one camera and marks the same targets. As shown in fig. 5, the network structure of the target detection and identification module is:

$$[t_{reg},\, p_{cls}] = \mathrm{FC}(\mathrm{ROIPool}(\mathrm{Proposal}(\mathrm{softmax}(\mathrm{conv1}(\mathrm{conv3}(\mathrm{feature}))),\, img_{info})))$$

where conv1 denotes a convolution operation with kernel size 1, conv3 a convolution operation with kernel size 3, relu the linear rectification unit activation function, feature the input visual feature ($F_1$, $F_2$ or $F_3$), $img_{info}$ the image information, reshape a reshaping operation, Proposal the region candidate operation, ROIPool region-of-interest pooling, softmax the softmax function, and FC a fully connected operation; $t_{reg}$ is the regression offset of the target candidate box and $p_{cls}$ the probability that the candidate target belongs to a particular class.

When $p_{cls}$ exceeds the preset threshold, the corresponding target is recorded and marked as U with visual feature $F_U$ (taken from $F_1$, $F_2$ or $F_3$); the target V of another camera corresponding to target U is identified according to the cosine similarity criterion, with corresponding visual feature $F_V$.
Step 304: the enhanced spatial transform module obtains the visual characteristics of the same target, i.e., the visual characteristics of the first camera target, according to step 303And target visual characteristics of the second cameraThird camera target visionCalculating the space coordinate mapping relation of the same target in different collectors;
taking the first camera target coordinate transformation as an example, the first camera target visual characteristicsOutputting visual features through a perception visual feature extraction moduleCan be expressed as:
wherein, the first and the second end of the pipe are connected with each other,representing the visual output of the visual feature U after three convolution blocks,representing the target visual characteristics of a first camera in the target detection and identification module, wherein a convolution block (ConvB) represents a convolution operation;
represents a Sigmoid function Sigmoid, Relu represents a linear modification unit activation function, FC represents a full-connection operation, GAP represents a global average pooling operation,represents channel level multiplication;
spatial coordinate transformation module for visual featuresExtracting space coordinates and outputting space coordinate parametersCan be expressed as:
where FC is full join operation, Relu denotes the Linear correction Unit activation function, CMRB1 、CMRB2、CMRB33 convolution-pooling-activation modules are represented;
the space coordinate transformation module carries out coordinate mapping and pixel acquisition on space coordinate parameters to complete the transformation of the coordinates of the target U to the coordinates of the corresponding deformed target VExpressed as:
where Map represents a coordinate mapping function and Sample represents a pixel sampling function.
The same method accomplishes the visual characteristics of the targetAndthe target coordinates of (1) are transformed.
Step 305: after the target coordinate transformation is completed, the image splicing module cascades the three images together according to the corresponding coordinates, splicing and fusing the panoramas of the images collected by the three cameras. The specific steps are as follows:
as shown in FIG. 7, a first camera takes a picture of 0-150 degrees, a second camera takes a picture of 120-240 degrees, and a third camera takes a picture of 240-360 degrees. The shading indicates the local area for image stitching. For the fusion of the two images, the leftmost part of the image is completely taken as the left part of the image, and the overlapped part of the right acquisition block of the leftmost image and the left acquisition block of the middle image is the weighted average of the converted acquisition blocks. The overlapping area of the leftmost image and the middle image is completely taken from the information of the image block on the left side of the middle image acquisition, and then the overlapping part of the image block on the right side of the middle image acquisition and the image block on the left side of the rightmost image acquisition is the weighted average after the acquisition block is transformed. After splicing, the panoramic picture is horizontally corrected. Processing the pictures taken by the same camera according to the trained space coordinate parametersAnd carrying out space coordinate transformation on the whole image to obtain the same coordinate image.
Step 4: after the shooting is finished, the main controller 1 issues a return instruction to the crawler 3.
The foregoing illustrates and describes the principal features and principles of the invention, as well as its advantages. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above, which are given only to illustrate its principles; various changes and modifications may be made without departing from the spirit and scope of the invention as expressed in the following claims. The scope of the invention is defined by the appended claims and their equivalents.
Claims (8)
1. A pipeline inspection panorama shooting processing system, comprising: the system comprises a main controller, a crawler, a camera and an image processing subsystem, wherein the controller is used for controlling the crawler to advance, the camera is installed on the crawler, the crawler advances in a pipeline at a constant speed at a preset speed, the camera shoots panoramic pictures of the inner wall of the pipeline in the whole advancing process, and the image processing subsystem carries out splicing processing on the panoramic pictures to generate an expanded panoramic picture projected from the bottom to the top of the pipeline;
the image processing subsystem comprises an image preprocessing module, a feature extraction module, a target detection and identification module, an enhanced space transformation module and an image splicing module,
the image preprocessing module is used for preprocessing the pictures shot by the camera, reducing noise and enhancing the pictures,
the feature extraction module is used for extracting visual features from the preprocessed pictures,
the target detection and identification module is used for detecting and identifying the same target in the pictures acquired by different cameras,
the enhanced spatial transformation module is used for calculating the spatial coordinate mapping relation of the same target in different cameras,
the image splicing module is used for splicing the images after the enhanced spatial transformation to generate an expanded panoramic image projected from the bottom to the top of the pipeline;
the feature extraction module consists of the feature extraction module RCNNB of the faster RCNN and a channel attention module CHAB; the photos after image preprocessing are processed by the feature extraction module RCNNB and the channel attention module CHAB respectively, and the RCNNB output feature $F^{RCNNB}$ and the CHAB output $A$ are combined to form a visual feature with stronger characterization capability, the formula being:

$$F = A \otimes F^{RCNNB}$$

where $\otimes$ denotes channel-level multiplication;
the channel attention module consists of a global average pooling layer, a convolutional layer, a Mish activation function and a Sigmoid function, and the data processing formula of the channel attention module CHAB is:

$$A = \sigma(\mathrm{Mish}(\mathrm{Conv1D}(\mathrm{GAP}(F_{in}))))$$

where $A$ represents the channel attention module output, $F_{in}$ the image block feature, $\sigma$ the Sigmoid function, Mish the Mish activation function $\mathrm{Mish}(x) = x \cdot \tanh(\ln(1+e^x))$, Conv1D a 1-dimensional convolution operation, and GAP a global average pooling operation.
2. The pipeline inspection panorama shooting processing system of claim 1, wherein the camera adopts three panorama shooting cameras installed at equal intervals, the front end of each camera is provided with a lighting lamp of adjustable intensity; when the pipeline is inspected, the mounting position of the camera is adjusted according to the pipe diameter so that the camera at the front end of the crawler is always kept at the axis position in the pipeline; during shooting, the panorama cameras automatically adjust the shooting focus according to the environmental factors inside the pipeline; while the crawler advances at a constant speed, the three panorama cameras respectively shoot panorama photos of partial inner surfaces of the pipeline inner wall from different directions, with a 15-degree overlap between every two adjacent photos.
3. The pipeline inspection panorama shooting processing system of claim 1, wherein the network structure of the target detection and identification module is:

$$[t_{reg},\, p_{cls}] = \mathrm{FC}(\mathrm{ROIPool}(\mathrm{Proposal}(\mathrm{softmax}(\mathrm{conv1}(\mathrm{conv3}(\mathrm{feature}))),\, img_{info})))$$

where conv1 denotes a convolution operation with kernel size 1, conv3 a convolution operation with kernel size 3, relu the linear rectification unit activation function, feature the input visual feature of the target detection and identification module, $img_{info}$ the image information, reshape a reshaping operation, Proposal the region candidate operation, ROIPool region-of-interest pooling, softmax the softmax function, FC a fully connected operation, $t_{reg}$ the regression offset of the target candidate box, and $p_{cls}$ the probability that the candidate target belongs to a particular class;
when the probability $p_{cls}$ of the candidate target belonging to a particular class is higher than a preset value, the corresponding target is recorded and marked as U, with corresponding visual feature $F_U$; the target V of another camera corresponding to target U is identified according to the cosine similarity criterion, with corresponding visual feature $F_V$.
4. The pipeline inspection panorama shooting processing system of claim 1, wherein the enhanced spatial transformation module comprises a perception visual feature extraction module and a spatial coordinate transformation module, wherein
the perception visual feature extraction module is used for extracting the input visual features for spatial coordinate transformation and constructing effective visual features in combination with channel perception:

$$F' = \mathrm{ConvB}_3(\mathrm{ConvB}_2(\mathrm{ConvB}_1(F_s^U)))$$
$$F_{out} = \sigma(\mathrm{FC}(\mathrm{Relu}(\mathrm{FC}(\mathrm{GAP}(F'))))) \otimes F'$$

where $F_s^U$ represents the visual feature of the s-th camera target U in the target detection and identification module, $F'$ the visual output after the three convolution modules (ConvB denotes a convolution operation), $\sigma$ the Sigmoid function, Relu the linear rectification unit activation function, FC a fully connected operation, GAP a global average pooling operation, and $\otimes$ channel-level multiplication;
the spatial coordinate transformation module extracts spatial coordinates from the visual feature $F_{out}$ output by the perception visual feature extraction module and outputs the spatial coordinate parameters $\theta$:

$$\theta = \mathrm{FC}(\mathrm{Relu}(\mathrm{FC}(\mathrm{CMRB}_3(\mathrm{CMRB}_2(\mathrm{CMRB}_1(F_{out}))))))$$

where FC is a fully connected operation, Relu denotes the linear rectification unit activation function, and $\mathrm{CMRB}_1$, $\mathrm{CMRB}_2$, $\mathrm{CMRB}_3$ represent 3 convolution-pooling-activation modules;
the spatial coordinate transformation module performs coordinate mapping and pixel sampling with the spatial coordinate parameters $\theta$ to complete the transformation from the coordinates of target U to those of the corresponding deformed target V:

$$V = \mathrm{Sample}(\mathrm{Map}(\theta, U))$$

where Map represents the coordinate mapping function and Sample the pixel sampling function.
5. A pipeline detection panoramic photography processing method based on the pipeline detection panoramic photography processing system of any one of claims 1 to 4, which is characterized by comprising the following steps:
step 1: the main controller controls the crawler to enter the pipeline, determines the initial position, and controls the distance measuring instrument on the crawler to return to zero;
step 2: the crawler advances at a constant speed in the pipeline at a preset speed, the camera automatically focuses and shoots panoramic pictures of the inner wall of the pipeline during advancing, and the pictures are shot at a preset frequency;
step 3: the image processing subsystem synchronously splices the panoramic photos and automatically calculates and marks position coordinates on the panoramic expansion map of the pipeline inner wall according to the travelling speed and shooting time of the crawler;
step 4: after shooting is finished, the main controller sends a return instruction to the crawler.
6. The pipeline detection panorama shooting processing method of claim 5, wherein the step 3 comprises:
step 301: the image preprocessing module denoises and enhances the picture shot by each camera into q; the value of the i-th pixel is:

$$q_i = a_k I_i + b_k, \quad \forall i \in \omega_k$$

where I represents the input picture taken by the camera, the parameters $a_k$ and $b_k$ are the linear coefficients of the k-th local window $\omega_k$, and i represents the pixel coordinate of the picture I taken by the camera;
step 302: the picture q is processed by the feature extraction module RCNNB and the channel attention module CHAB of the faster RCNN respectively, and the RCNNB output feature $F^{RCNNB}$ and the CHAB output $A$ are combined into a new visual feature with strong characterization capability, the formula being:

$$F = A \otimes F^{RCNNB}$$

where $\otimes$ denotes channel-level multiplication;
step 303: the target detection and identification module extracts and identifies the same target across the visual features of the pictures taken by the different cameras at the same moment, the network structure being:

$$[t_{reg},\, p_{cls}] = \mathrm{FC}(\mathrm{ROIPool}(\mathrm{Proposal}(\mathrm{softmax}(\mathrm{conv1}(\mathrm{conv3}(\mathrm{feature}))),\, img_{info})))$$

where conv1 denotes a convolution operation with kernel size 1, conv3 a convolution operation with kernel size 3, relu the linear rectification unit activation function, feature the input visual feature of the target detection and identification module, $img_{info}$ the image information, reshape a reshaping operation, Proposal the region candidate operation, ROIPool region-of-interest pooling, softmax the softmax function, FC a fully connected operation, $t_{reg}$ the regression offset of the target candidate box, and $p_{cls}$ the probability that the candidate target belongs to a particular class;
when the probability $p_{cls}$ is higher than the preset value, the corresponding target is recorded and marked as U, with visual feature $F_U$; the target V corresponding to target U is identified according to the cosine similarity criterion, with visual feature $F_V$;
step 304: the enhanced spatial transformation module obtains the visual features $F_U$ and $F_V$ of the same target and calculates the spatial coordinate mapping relation of the same target in the different collectors;
step 305: after the target coordinate transformation is completed, the image splicing module splices and fuses the panorama of the images collected by the cameras according to the corresponding coordinates.
7. The pipeline detection panorama shooting processing method of claim 6, wherein in the step 301 the parameters $a_k$ and $b_k$ are obtained by the Lagrange multiplier method, the formula being:

$$a_k = \frac{\sigma_k^2}{\sigma_k^2 + \epsilon}, \qquad b_k = (1 - a_k)\,\mu_k$$

where $\mu_k$ and $\sigma_k$ are respectively the mean and standard deviation of the k-th window $\omega_k$, and $\epsilon$ is a constraint parameter;
to obtain the effective parameters $a_k$ and $b_k$, the reconstructed pixels of the whole local region must be as close as possible to the original pixels, i.e. the energy of the pixel differences over the window, together with the constraint term, is minimized, the formula being:

$$E(a_k, b_k) = \sum_{i \in \omega_k} \left[ (a_k I_i + b_k - I_i)^2 + \epsilon a_k^2 \right]$$
8. The pipeline inspection panorama shooting processing method of claim 6, wherein the enhanced spatial transformation module comprises a perception visual feature extraction module and a spatial coordinate transformation module, wherein
the perception visual feature extraction module is used for extracting the input visual features for spatial coordinate transformation and constructing effective visual features in combination with channel perception:

$$F' = \mathrm{ConvB}_3(\mathrm{ConvB}_2(\mathrm{ConvB}_1(F_s^U)))$$
$$F_{out} = \sigma(\mathrm{FC}(\mathrm{Relu}(\mathrm{FC}(\mathrm{GAP}(F'))))) \otimes F'$$

where $F_s^U$ represents the visual feature of the s-th camera target U in the target detection and identification module, $F'$ the visual output after the three convolution modules (ConvB denotes a convolution operation), $\sigma$ the Sigmoid function, Relu the linear rectification unit activation function, FC a fully connected operation, GAP a global average pooling operation, and $\otimes$ channel-level multiplication;
the spatial coordinate transformation module extracts spatial coordinates from the visual feature $F_{out}$ output by the perception visual feature extraction module and outputs the spatial coordinate parameters $\theta$:

$$\theta = \mathrm{FC}(\mathrm{Relu}(\mathrm{FC}(\mathrm{CMRB}_3(\mathrm{CMRB}_2(\mathrm{CMRB}_1(F_{out}))))))$$

where FC is a fully connected operation, Relu denotes the linear rectification unit activation function, and $\mathrm{CMRB}_1$, $\mathrm{CMRB}_2$, $\mathrm{CMRB}_3$ represent 3 convolution-pooling-activation modules;
the spatial coordinate transformation module performs coordinate mapping and pixel sampling with the spatial coordinate parameters $\theta$ to complete the transformation from the coordinates of target U to those of the corresponding deformed target V:

$$V = \mathrm{Sample}(\mathrm{Map}(\theta, U))$$

where Map represents the coordinate mapping function and Sample the pixel sampling function.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210418233.8A CN114529808B (en) | 2022-04-21 | 2022-04-21 | Pipeline detection panoramic shooting processing system and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210418233.8A CN114529808B (en) | 2022-04-21 | 2022-04-21 | Pipeline detection panoramic shooting processing system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114529808A CN114529808A (en) | 2022-05-24 |
CN114529808B true CN114529808B (en) | 2022-07-19 |
Family
ID=81627869
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210418233.8A Active CN114529808B (en) | 2022-04-21 | 2022-04-21 | Pipeline detection panoramic shooting processing system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114529808B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114827480B (en) * | 2022-06-29 | 2022-11-15 | 武汉中仪物联技术股份有限公司 | Pipeline inner wall panoramic expansion map acquisition method and device and electronic equipment |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111860398A (en) * | 2020-07-28 | 2020-10-30 | 河北师范大学 | Remote sensing image target detection method and system and terminal equipment |
CN112734640A (en) * | 2020-12-30 | 2021-04-30 | 山东大学 | Tunnel surrounding rock image acquisition device, processing system and panoramic image splicing method |
CN113989741A (en) * | 2021-10-29 | 2022-01-28 | 西安热工研究院有限公司 | Method for detecting pedestrians sheltered in plant area of nuclear power station by combining attention mechanism and fast RCNN |
WO2022047828A1 (en) * | 2020-09-07 | 2022-03-10 | 南京翱翔信息物理融合创新研究院有限公司 | Industrial augmented reality combined positioning system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105205785B (en) * | 2015-10-09 | 2019-03-19 | 济南东朔微电子有限公司 | A kind of orientable oversize vehicle operation management system and its operation method |
- 2022-04-21: application CN202210418233.8A granted as patent CN114529808B/en (status: Active)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111860398A (en) * | 2020-07-28 | 2020-10-30 | 河北师范大学 | Remote sensing image target detection method and system and terminal equipment |
WO2022047828A1 (en) * | 2020-09-07 | 2022-03-10 | 南京翱翔信息物理融合创新研究院有限公司 | Industrial augmented reality combined positioning system |
CN112734640A (en) * | 2020-12-30 | 2021-04-30 | 山东大学 | Tunnel surrounding rock image acquisition device, processing system and panoramic image splicing method |
CN113989741A (en) * | 2021-10-29 | 2022-01-28 | 西安热工研究院有限公司 | Method for detecting pedestrians sheltered in plant area of nuclear power station by combining attention mechanism and fast RCNN |
Non-Patent Citations (1)
Title |
---|
Pipeline morphology defect detection system based on active panoramic vision; Tang Yiping et al.; Infrared and Laser Engineering; 20161125 (No. 11); 183-189 *
Also Published As
Publication number | Publication date |
---|---|
CN114529808A (en) | 2022-05-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108918539B (en) | Apparent disease detection device and method for tunnel structure | |
CN115439424B (en) | Intelligent detection method for aerial video images of unmanned aerial vehicle | |
CN111192198B (en) | Pipeline panoramic scanning method based on pipeline robot | |
CN108769578B (en) | Real-time panoramic imaging system and method based on multiple cameras | |
WO2020110576A1 (en) | Information processing device | |
CN102982520B (en) | Robustness face super-resolution processing method based on contour inspection | |
CN105578027A (en) | Photographing method and device | |
KR102170235B1 (en) | State information analysis and modelling method of sewerage pipe | |
CN103902953B (en) | A kind of screen detecting system and method | |
CN103150716B (en) | Infrared image joining method | |
CN111383204A (en) | Video image fusion method, fusion device, panoramic monitoring system and storage medium | |
CN114529808B (en) | Pipeline detection panoramic shooting processing system and method | |
CN114973028B (en) | Aerial video image real-time change detection method and system | |
JP7387261B2 (en) | Information processing device, information processing method and program | |
CN114742797A (en) | Defect detection method for drainage pipeline inner wall panoramic image and image acquisition robot | |
CN111667470A (en) | Industrial pipeline flaw detection inner wall detection method based on digital image | |
CN115578315A (en) | Bridge strain close-range photogrammetry method based on unmanned aerial vehicle image | |
CN112348775A (en) | Vehicle-mounted all-round-looking-based pavement pool detection system and method | |
CN115619623A (en) | Parallel fisheye camera image splicing method based on moving least square transformation | |
CN111476314B (en) | Fuzzy video detection method integrating optical flow algorithm and deep learning | |
CN117214172A (en) | Method and device for detecting defects of inner wall of long barrel cylinder and storage medium | |
CN112037192A (en) | Method for collecting burial depth information in town gas public pipeline installation process | |
CN116342693A (en) | Bridge cable surface damage rapid positioning method based on point cloud and convolutional neural network | |
CN116132636A (en) | Video splicing method and device for fully-mechanized coal mining face | |
JP3589271B2 (en) | Image information analysis apparatus and image information analysis method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||