CN109741271B - Detection method and system - Google Patents

Detection method and system

Info

Publication number: CN109741271B
Application number: CN201811536020.5A
Authority: CN (China)
Legal status: Active
Other versions: CN109741271A (Chinese, zh)
Inventors: 邵永军, 王小雄, 张建, 张士兵, 刘波, 任晓辉
Assignees: Shaanxi Express Xingzhan Technology Co ltd; Shaanxi Expressway Testing & Measuring Co ltd
Application filed by Shaanxi Express Xingzhan Technology Co ltd and Shaanxi Expressway Testing & Measuring Co ltd
Priority to CN201811536020.5A; publication of CN109741271A; application granted; publication of CN109741271B

Landscapes

  • Optical Radar Systems And Details Thereof (AREA)
  • Image Processing (AREA)

Abstract

The disclosure provides a detection method and a detection system, relates to tunnel defect detection, and can solve the problems of low efficiency and surface damage in tunnel defect detection. The specific technical scheme is as follows: at least one piece of image data and the corresponding target position information are acquired; the image data are analyzed against a preset rule, and target image data are generated when the image data conform to the rule; the target image data are marked with the target position information to generate target detection data, which are uploaded to a database; when the image data do not conform to the preset rule, the shooting distance or the shooting parameters are adjusted and at least one piece of image data is acquired again, until image data conforming to the preset rule are obtained, thereby realizing a closed loop in the acquisition of detection data. The method is used for detecting tunnel defects.

Description

Detection method and system
Technical Field
The present disclosure relates to the field of tunnel detection, and in particular, to a detection method and system.
Background
With the rapid economic development of recent decades, China has accelerated the pace of highway construction. When a highway route meets a mountain, bridges and tunnels are built instead of cutting through the mountain, so as to avoid damaging the environment; as a result, the proportion of bridges and tunnels in highway planning has increased greatly. During the construction and service of a tunnel, it is influenced by factors such as geology, water seepage and stress, and deformations such as cracking, inward pressing of the tunnel arch wall, vault sinking and arching of the tunnel bottom are inevitable. Therefore, to ensure the safe operation of the tunnel, deformation detection and early warning are necessary.
In the prior art, tunnels are inspected manually: detection points are embedded at specified positions in the lining and measured with a profiler or other instruments. This method is time-consuming and labor-intensive, the detection period is long, the efficiency is low, and it damages the lining surface. More importantly, manual detection is performed either periodically or only after damage has already occurred; by then the tunnel has usually deteriorated considerably, and the cost of assessment and maintenance is inevitably high.
Disclosure of Invention
The embodiments of the present disclosure provide a detection method and a detection system, which can solve the problems of low detection efficiency and surface damage in tunnel detection. The technical scheme is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided a detection method, including:
acquiring at least one image data and corresponding target position information, wherein the target position information refers to position information when the detection device acquires the image data;
analyzing whether at least one image data accords with a preset rule or not according to the preset rule;
generating target image data when at least one image data accords with a preset rule, marking the target image data with target position information, generating target detection data, and uploading the target detection data to a database;
and when the detected image data does not accord with the preset rule, adjusting the shooting distance or the shooting parameters and then acquiring at least one piece of image data again, wherein the shooting distance refers to the distance between the shooting device and the target object when the image data is acquired.
In one embodiment, obtaining the target location information further comprises:
acquiring initial position information by a satellite positioning method;
after the target movement speed is obtained through a speed-measuring encoder, the target travel distance is generated from the target travel time;
the target position information is generated by combining the starting position information with the target travel distance.
In one embodiment, the preset rule in the method further comprises:
screening the image data, wherein screening refers to deleting image data that do not meet the image-overlap requirement;
and carrying out filtering and denoising according to the quality parameters of the screened image data, wherein the quality parameters comprise parameter information describing image degradation, including image noise and image distortion.
In one embodiment, the method further comprises generating target image data, including:
the image data conforming to the preset rule are stitched to generate the target image data, the stitching comprising: aligning the filtered and denoised image data in spatial position, and fusing the overlapping parts of the aligned image data.
In one embodiment, the spatial position alignment comprises:
acquiring the features of at least two pieces of image data, and determining a feature structure common to them, wherein the feature structure comprises corners, edges and object boundaries;
matching the feature structures of the two pieces of image data by a similarity measure to obtain the spatial geometric transformation relation and coordinate transformation parameters of the images;
performing coordinate transformation and gray-level interpolation according to the spatial geometric transformation relation and the coordinate transformation parameters to complete image registration;
and fusing the overlapping parts of the aligned image data according to their pixels and features, the fusion methods comprising: the Euclidean distance method, the wavelet transform method, the average value method, the linear fade-in/fade-out method and the hat-function weighted average method.
In one embodiment, the acquiring the at least one image data again after adjusting the shooting distance or the shooting parameters includes:
determining quality parameters and corresponding shooting distances of the first image data which do not accord with preset rules;
according to the quality parameter and the shooting distance, acquiring second image data after adjusting the shooting distance or the shooting parameter;
analyzing whether the second image data accords with a preset rule, and generating target image data when the second image data accords with the preset rule; and when the preset rule is not met, continuously adjusting the shooting distance until the acquired image data meets the preset rule.
In one embodiment, the shooting distance includes:
a laser pulse signal is emitted toward the target object; when the signal reaches the target object and returns to the receiving end of the laser ranging equipment, an echo signal and target time data are obtained, wherein the target time data refer to the round-trip time between the laser ranging equipment and the target object;
generating first target distance data by processing pulse data and target time data of the laser pulse signal;
generating second target distance data by processing the phase difference between the laser pulse signal and the echo signal;
and generating a shooting distance through the first target distance data and the second target distance data.
In one embodiment, adjusting the shooting parameters includes:
acquiring the corresponding target illumination intensity according to the target position information, determining a target illumination compensation coefficient, and acquiring at least one piece of image data after adjusting the illumination intensity of the shooting equipment.
According to the detection method provided by the embodiments of the disclosure, at least one piece of image data and the corresponding target position information are acquired; the image data are analyzed against a preset rule, and target image data are generated when the image data conform to the rule; the target image data are marked with the target position information to generate target detection data, which are uploaded to a database; when the image data do not conform to the preset rule, the shooting distance or the shooting parameters are adjusted and image data are acquired again, until conforming image data are obtained, thereby realizing a closed loop in the acquisition of detection data.
According to a second aspect of embodiments of the present disclosure, there is provided a detection system comprising:
the system comprises a positioning subsystem, a distance measuring subsystem and an image processing subsystem;
the positioning subsystem acquires target position information and transmits the target position information to the image processing subsystem, wherein the target position information is used for indicating the position information of the detection device when the image data is acquired;
the distance measurement subsystem acquires a target shooting distance and transmits the target shooting distance to the image processing subsystem, and the distance information is used for indicating the distance between the laser distance measurement equipment and a target object to be measured;
the image processing subsystem analyzes whether the at least one image data accords with a preset rule according to the preset rule, and generates target image data when the at least one image data accords with the preset rule;
marking target image data with target position information to generate target detection data, and uploading the target detection data to a database;
and when the detected image data does not accord with the preset rule, adjusting the shooting distance or the shooting parameters and then acquiring at least one piece of image data again, wherein the shooting distance refers to the distance between the shooting device and the target object when the image data is acquired.
In one embodiment, the ranging subsystem in the detection system further comprises
a laser pulse signal is emitted toward the target object; when the signal reaches the target object and returns to the receiving end of the laser ranging equipment, an echo signal and target time data are obtained, wherein the target time data refer to the round-trip time between the laser ranging equipment and the target object;
generating first target distance data by processing pulse data and target time data of the laser pulse signal;
generating second target distance data by processing the phase difference between the laser pulse signal and the echo signal;
and generating a shooting distance through the first target distance data and the second target distance data.
The detection system provided by the embodiments of the disclosure acquires at least one piece of image data and the corresponding target position information; analyzes the image data against a preset rule and generates target image data when the image data conform to the rule; marks the target image data with the target position information to generate target detection data, which are uploaded to a database; and, when the image data do not conform to the preset rule, adjusts the shooting distance or the shooting parameters and acquires image data again until conforming image data are obtained, thereby realizing a closed loop in the acquisition of detection data. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flow chart of a detection method provided by an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a detection system provided in an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Example one
The embodiment of the present disclosure provides a detection method, as shown in fig. 1, the detection method includes the following steps:
101. at least one image data and corresponding target position information are acquired.
The target position information refers to position information of the detection apparatus at the time of acquiring the image data.
In an alternative embodiment, obtaining the target location information includes:
acquiring initial position information by a satellite positioning method;
after the target movement speed is obtained through a speed measuring encoder, a target driving distance is generated according to the target driving time;
target position information is generated by calculating the starting position information and the target travel distance.
In an optional embodiment, the positioning method obtains position information through China's BeiDou satellite navigation system and the Global Positioning System (GPS), which enables all-weather, continuous, real-time three-dimensional navigation, positioning and speed measurement worldwide; in addition, high-precision time data and high-precision positioning data can be acquired with this method.
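The dead-reckoning step above (satellite fix plus encoder-derived travel distance) can be sketched as follows. This is a minimal illustration assuming a one-dimensional "mileage" coordinate along the tunnel axis; the function and parameter names are not taken from the patent.

```python
# Illustrative sketch: combine a satellite starting fix with the travel
# distance derived from the speed-measuring encoder reading.

def target_position(start_mileage_m: float, speed_mps: float, travel_time_s: float) -> float:
    """Return the detector's mileage along the tunnel axis."""
    travel_distance_m = speed_mps * travel_time_s  # target travel distance
    return start_mileage_m + travel_distance_m

# A detector that entered the tunnel at mileage 1250 m and has moved at
# 2.5 m/s for 8 s is now at mileage 1270 m.
print(target_position(1250.0, 2.5, 8.0))  # 1270.0
```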
102. And analyzing whether the at least one image data conforms to the preset rule or not according to the preset rule.
In an alternative embodiment, the preset rules include:
screening the image data, wherein the screening refers to deleting the image data which do not meet the requirement of the image overlapping degree;
and carrying out filtering and denoising according to the quality parameters of the screened image data, wherein the quality parameters comprise parameter information describing image degradation, including image noise and image distortion.
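The preset rule above (overlap screening followed by a quality check) can be sketched as below. The thresholds and field names are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch of the preset rule: screen frames by overlap with the
# previous frame, then flag the survivors that still need filtering/denoising.

def apply_preset_rule(frames, min_overlap=0.3, max_noise=0.05):
    """Return (kept, needs_denoising) per the screening + quality check."""
    kept = [f for f in frames if f["overlap"] >= min_overlap]  # screening step
    needs_denoising = [f for f in kept if f["noise"] > max_noise]
    return kept, needs_denoising

frames = [
    {"id": 1, "overlap": 0.45, "noise": 0.02},
    {"id": 2, "overlap": 0.10, "noise": 0.01},  # rejected: overlap too low
    {"id": 3, "overlap": 0.50, "noise": 0.08},  # kept, but must be denoised
]
kept, noisy = apply_preset_rule(frames)
```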
103. And generating target image data when at least one image data accords with a preset rule, marking the target image data with target position information, generating target detection data, and uploading the target detection data to a database.
In an optional embodiment, because the use scene of the disclosure includes tunnel detection and the top of the tunnel is arched, a single photograph cannot continuously and completely display the defect information; therefore multiple photographs must be combined to generate a single image, that is, the target image data.
In an alternative embodiment, an implementation of generating target image data includes image registration and image fusion.
Image registration refers to aligning at least two pieces of image data in spatial position by calculating the best match between the images. The accuracy of image registration determines the quality of image stitching.
The registered image data are then subjected to fusion processing.
Image fusion means processing image data of the same target collected from multiple source channels through image processing and computer technology, extracting the favorable information of each channel to the maximum extent, and finally synthesizing a high-quality image in which the stitching traces are eliminated.
In an alternative embodiment, an implementation of generating target image data includes:
after the image data which accord with the preset rule are spliced, target image data are generated, and the splicing comprises the following steps: and aligning the spatial position of the filtered and denoised image data, and fusing the overlapped part of the aligned image data.
In an alternative embodiment, a method of achieving spatial position alignment includes:
acquiring the characteristics of at least two image data, and determining a common characteristic structure of the at least two image data, wherein the characteristic structure comprises an angular point, an edge and a boundary of an object;
matching the feature structures of the two image data by similarity measurement to obtain a space geometric transformation relation and a coordinate transformation parameter of the image;
performing transformation gray interpolation of coordinates according to the space geometric transformation relation and the coordinate transformation parameters to complete image registration;
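The registration steps above can be sketched under a simplifying assumption: that the geometric transformation between adjacent frames is a pure translation, estimated by least squares from matched corner points. All names and coordinates here are illustrative.

```python
# Registration sketch: estimate the coordinate-transformation parameters
# (here, a pure translation) from matched feature points of two images.

def estimate_translation(points_a, points_b):
    """Least-squares translation (dx, dy) mapping points_a onto points_b."""
    n = len(points_a)
    dx = sum(b[0] - a[0] for a, b in zip(points_a, points_b)) / n
    dy = sum(b[1] - a[1] for a, b in zip(points_a, points_b)) / n
    return dx, dy

# Matched corner features from two overlapping tunnel-wall photographs,
# where the second frame is shifted by (+20, +5) pixels.
pts_a = [(10, 12), (40, 80), (66, 30)]
pts_b = [(30, 17), (60, 85), (86, 35)]
print(estimate_translation(pts_a, pts_b))  # (20.0, 5.0)
```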
In an alternative embodiment, the fusion processing of the overlapping parts of the aligned image data fuses the image data according to their pixels and features, the fusion methods comprising: the Euclidean distance method, the wavelet transform method, the average value method, the linear fade-in/fade-out method and the hat-function weighted average method.
The average value method averages the pixels of the overlapping area of the two images and uses the average as the pixel value of the overlapping area after stitching, while the non-overlapping areas retain the pixel values of the original images.
The linear fade-in/fade-out method makes the overlapping part of one image transition linearly into the overlapping part of the other: the gray value of the overlapping area is the weighted sum of the gray values of the two original images, with weights varying linearly across the overlap.
The hat-function weighted average method is so called because the pixel weight function of each point in the overlapping region is triangular. Its basic idea is similar to the linear fade method, but the pixel value of each point in the overlapping region is weighted according to the distance of the point from the image center: pixels closer to the center receive larger weights, and the weight is smallest at the edges.
The Euclidean distance method assigns each pixel a weight based on its distance to the nearest invisible point, i.e. the image edge: the larger the distance, the larger the weight. The distance to the nearest transparent point is computed using the Euclidean distance or the block distance, so that pixels near the edge contribute less light intensity.
The wavelet transform method decomposes each of the N images into M levels by the discrete wavelet transform, fuses the M x N sub-images level by level to obtain M levels of fused images, and then applies the inverse transform to obtain the fusion result.
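Two of the fusion methods above, the average value method and the linear fade-in/fade-out method, can be sketched on a one-dimensional row of gray values from the overlapping region; this pure-Python illustration is a simplification of the 2-D case.

```python
# Fusion sketches on a single row of overlapping gray values.

def fuse_average(row_a, row_b):
    """Average value method: the mean of the two overlapping pixels."""
    return [(a + b) / 2 for a, b in zip(row_a, row_b)]

def fuse_linear_fade(row_a, row_b):
    """Linear fade-in/fade-out: weight slides linearly from image A to B."""
    n = len(row_a)
    out = []
    for i, (a, b) in enumerate(zip(row_a, row_b)):
        w = i / (n - 1)  # 0 at A's edge of the overlap, 1 at B's edge
        out.append((1 - w) * a + w * b)
    return out

row_a = [100, 100, 100, 100, 100]
row_b = [200, 200, 200, 200, 200]
print(fuse_average(row_a, row_b))      # [150.0, 150.0, 150.0, 150.0, 150.0]
print(fuse_linear_fade(row_a, row_b))  # [100.0, 125.0, 150.0, 175.0, 200.0]
```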
In an optional embodiment, after the target image data are marked with the target position information to generate the target detection data, the user can obtain both the image information and the position information of tunnel defects from the target detection data, which makes it convenient to locate a defect quickly and improves detection efficiency.
104. And when the detected image data does not accord with the preset rule, adjusting the shooting distance or the shooting parameters and then acquiring at least one piece of image data again.
The shooting distance is a distance between the shooting device and the target object when the image data is acquired.
In an alternative embodiment, the acquiring the at least one image data again after adjusting the shooting distance or the shooting parameter includes:
determining quality parameters and corresponding shooting distances of the first image data which do not accord with preset rules;
according to the quality parameter and the shooting distance, acquiring second image data after adjusting the shooting distance or the shooting parameter;
analyzing whether the second image data accords with a preset rule, and generating target image data when the second image data accords with the preset rule; and when the preset rule is not met, continuously adjusting the shooting distance until the acquired image data meets the preset rule.
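The adjust-and-reacquire loop described above can be sketched with stub callbacks: capture a frame, check it against the preset rule, adjust the shooting distance or parameters, and repeat until a conforming frame is obtained. All callables and the attempt budget are hypothetical stand-ins for the real subsystems.

```python
# Closed-loop acquisition sketch: keep adjusting until the image conforms.

def acquire_compliant_image(capture, conforms, adjust, max_attempts=10):
    for _ in range(max_attempts):
        image = capture()
        if conforms(image):
            return image
        adjust(image)  # tweak shooting distance or shooting parameters
    raise RuntimeError("no conforming image within the attempt budget")

# Stub camera whose quality improves each time `adjust` is called.
state = {"quality": 0.2}
capture = lambda: dict(state)
conforms = lambda img: img["quality"] >= 0.5
adjust = lambda img: state.__setitem__("quality", state["quality"] + 0.2)

image = acquire_compliant_image(capture, conforms, adjust)
```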
In an optional embodiment, the shooting distance is obtained by pulse-phase laser ranging. The distance calculation includes a rough measurement and a precise measurement: the rough measurement is realized by counting high-frequency pulses, and the precise measurement is realized entirely from the phase difference between the transmitted signal and the echo signal. The specific method includes:
a laser pulse signal is emitted toward the target object; when the signal reaches the target object and returns to the receiving end of the laser ranging equipment, an echo signal and target time data are obtained, wherein the target time data refer to the round-trip time between the laser ranging equipment and the target object;
generating first target distance data by processing pulse data and target time data of the laser pulse signal;
generating second target distance data by processing the phase difference between the laser pulse signal and the echo signal;
and generating a shooting distance through the first target distance data and the second target distance data.
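The combination of the coarse (time-of-flight) and fine (phase) measurements above can be sketched as follows: the coarse range resolves the whole-cycle ambiguity of the phase measurement, and the phase supplies the sub-wavelength precision. The modulation frequency and all names are assumptions for illustration.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def coarse_distance(round_trip_s):
    """First target distance data: rough range from round-trip time."""
    return C * round_trip_s / 2

def fine_distance(phase_rad, mod_freq_hz):
    """Second target distance data: sub-wavelength range from the phase shift."""
    half_wavelength = C / (2 * mod_freq_hz)
    return (phase_rad / (2 * math.pi)) * half_wavelength

def combined_distance(round_trip_s, phase_rad, mod_freq_hz):
    """Resolve the phase ambiguity with the coarse range, then refine."""
    half_wavelength = C / (2 * mod_freq_hz)
    fine = fine_distance(phase_rad, mod_freq_hz)
    cycles = round((coarse_distance(round_trip_s) - fine) / half_wavelength)
    return cycles * half_wavelength + fine

# Example: true range 10.3 m, 100 MHz modulation, coarse range off by 0.2 m;
# the combined result recovers the true range.
half_wl = C / (2 * 100e6)
phase = (10.3 % half_wl) / half_wl * 2 * math.pi
print(combined_distance(2 * 10.5 / C, phase, 100e6))
```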
In an alternative embodiment, acquiring the shooting distance allows the distance between the shooting equipment and the tunnel side wall to be measured, so that the motion track of the device during image data acquisition can be monitored.
In an alternative embodiment, a method of adjusting the shooting parameters comprises:
acquiring the corresponding target illumination intensity according to the target position information, determining a target illumination compensation coefficient, and acquiring at least one piece of image data after adjusting the illumination intensity of the shooting equipment.
In an optional embodiment, the method provided by the disclosure analyzes the acquired image data: according to the illumination intensity of the scene corresponding to the target position information, the image histogram is analyzed, an illumination compensation coefficient is calculated, the required illumination intensity is determined, and the light-source intensity of the flash lamp is adjusted through its switch so that the shooting device reaches the optimal illumination intensity. Through the orderly coordination of image acquisition, analysis and feedback control, image data shot in all weather conditions are guaranteed to conform to the preset rule; this effectively solves the problems of reflection and direct strong light on tunnel walls of various kinds, and helps users detect crack information on the walls.
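The histogram-based compensation step above can be sketched as computing the mean gray level from the image histogram and deriving a multiplicative compensation coefficient. The target mean of 128 and all names are illustrative assumptions.

```python
# Illumination-compensation sketch: derive a flash-intensity coefficient
# from the mean gray level of a 256-bin image histogram.

def illumination_coefficient(histogram, target_mean=128.0):
    """histogram[i] = number of pixels with gray value i (0..255)."""
    total = sum(histogram)
    mean = sum(i * count for i, count in enumerate(histogram)) / total
    return target_mean / mean

# An under-exposed frame whose pixels all sit at gray level 64 needs its
# flash intensity doubled.
hist = [0] * 256
hist[64] = 10_000
print(illumination_coefficient(hist))  # 2.0
```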
According to the detection method provided by the embodiments of the disclosure, at least one piece of image data and the corresponding target position information are acquired; the image data are analyzed against a preset rule, and target image data are generated when the image data conform to the rule; the target image data are marked with the target position information to generate target detection data, which are uploaded to a database; when the image data do not conform to the preset rule, the shooting distance or the shooting parameters are adjusted and image data are acquired again, until conforming image data are obtained, thereby realizing a closed loop in the acquisition of detection data.
Through the acquisition and analysis of image data, the disclosure realizes high-speed, high-density, high-precision data sampling of the tunnel, and thereby accurately calculates tunnel defect data such as: cracking, inward pressing of the tunnel arch wall, vault sinking and arching of the tunnel bottom. Accurate image-based detection of tunnel defects effectively removes the human error of traditional inspection. Compared with prior-art detection methods, this approach has the advantages of strong anti-interference capability, accurate detection, a short construction period, convenient maintenance, and insensitivity to weather conditions.
Example two
Based on the detection method provided by the embodiment corresponding to fig. 1, another embodiment of the present disclosure provides a detection system, which may be applied to detection of tunnel diseases, and as shown in fig. 2, the detection system provided by this embodiment includes: a positioning subsystem 201, a ranging subsystem 202 and an image processing subsystem 203;
the positioning subsystem 201 acquires target position information and transmits the target position information to the image processing subsystem, wherein the target position information is used for indicating the position information of the detection device when image data are acquired;
the distance measurement subsystem 202 is used for acquiring a target shooting distance and transmitting the target shooting distance to the image processing subsystem, and the distance information is used for indicating the distance between the laser distance measurement equipment and a target object to be measured;
the image processing subsystem 203 analyzes whether the at least one image data accords with a preset rule according to the preset rule, and generates target image data when the at least one image data accords with the preset rule;
marking target image data with target position information to generate target detection data, and uploading the target detection data to a database;
and when the detected image data does not accord with the preset rule, adjusting the shooting distance or the shooting parameters and then acquiring at least one piece of image data again, wherein the shooting distance refers to the distance between the shooting device and the target object when the image data is acquired.
In an alternative embodiment, the ranging subsystem 202 further includes:
a laser pulse signal is emitted toward the target object; when the signal reaches the target object and returns to the receiving end of the laser ranging equipment, an echo signal and target time data are obtained, wherein the target time data refer to the round-trip time between the laser ranging equipment and the target object;
generating first target distance data by processing pulse data and target time data of the laser pulse signal;
generating second target distance data by processing the phase difference between the laser pulse signal and the echo signal;
and generating a shooting distance through the first target distance data and the second target distance data.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (8)

1. A method of detection, the method comprising:
acquiring at least one piece of image data and corresponding target position information, wherein the target position information refers to position information when a detection device acquires the image data, and the image data comprise images acquired in a tunnel;
analyzing whether the at least one image data accords with a preset rule or not according to the preset rule;
when the at least one image data accords with a preset rule, generating target image data after splicing the at least one image data, marking the target image data with target position information, generating target detection data, and uploading the target detection data to a database, wherein the splicing process comprises the following steps: carrying out spatial position alignment on the filtered and denoised image data, and carrying out fusion processing on the overlapped part of the aligned image data;
when the at least one piece of image data does not accord with the preset rule, the at least one piece of image data is obtained again after the shooting distance or the shooting parameters are adjusted, wherein the shooting distance refers to the distance between the shooting device and the target object when the image data is obtained;
wherein, the preset rule comprises:
screening the image data, wherein the screening refers to deleting the image data which do not meet the requirement of the image overlapping degree;
filtering and denoising according to quality parameters of the screened image data, wherein the quality parameters comprise: parameter information of the image distortion, image noise, and image distortion caused.
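The screening step above can be sketched as follows. This is a minimal illustration, not the patented implementation: the geometric overlap estimate (overlap from vehicle advance versus imaged footprint) and the 30% threshold are assumptions introduced for the example.

```python
# Screening per the preset rule: frames whose estimated overlap with the
# previous frame falls below the required threshold are deleted.
# The geometry-based overlap model and all numeric values are illustrative.

def overlap_degree(speed_m_s: float, frame_interval_s: float,
                   footprint_m: float) -> float:
    """Fraction of the imaged footprint shared by two consecutive frames."""
    advance = speed_m_s * frame_interval_s      # distance travelled between shots
    return max(0.0, 1.0 - advance / footprint_m)

def screen_frames(frames, speeds, frame_interval_s=0.1,
                  footprint_m=2.0, min_overlap=0.3):
    """Keep only frames whose overlap with the previous frame is sufficient."""
    kept = []
    for frame, speed in zip(frames, speeds):
        if overlap_degree(speed, frame_interval_s, footprint_m) >= min_overlap:
            kept.append(frame)
    return kept

# At 10 m/s, a 2 m footprint and 0.1 s interval give 50% overlap (kept);
# at 18 m/s the overlap drops to 10% and the frame is deleted.
print(screen_frames(["img_a", "img_b"], [10.0, 18.0]))  # ['img_a']
```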
2. The method of claim 1, wherein obtaining the target position information comprises:
acquiring initial position information by satellite positioning;
acquiring a target movement speed through a speed-measuring encoder, and generating a target travel distance from the target travel time;
and generating the target position information from the initial position information and the target travel distance.
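Claim 2 amounts to dead reckoning from a satellite fix. A minimal sketch, assuming a one-dimensional chainage along the tunnel axis (the coordinate model and all values are illustrative, not from the patent):

```python
# Claim 2 sketch: position = satellite fix + encoder-derived travel distance.
# The 1-D chainage model is an illustrative assumption.

def target_position(initial_pos_m: float, speed_m_s: float,
                    travel_time_s: float) -> float:
    """Combine a satellite fix with encoder dead reckoning along the tunnel axis."""
    travel_distance = speed_m_s * travel_time_s   # from the speed-measuring encoder
    return initial_pos_m + travel_distance

# Satellite fix at chainage 1200 m, then 8 m/s for 15 s inside the tunnel.
print(target_position(1200.0, 8.0, 15.0))  # 1320.0
```

Inside the tunnel the satellite signal is lost, which is why the encoder carries the position forward from the last fix at the portal.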
3. The method of claim 1, wherein the spatial position alignment comprises:
acquiring features of at least two image data and determining a common feature structure of the at least two image data, wherein the feature structure comprises: corners, edges, and boundaries of objects;
matching the feature structures of the two image data by a similarity measure to obtain a spatial geometric transformation relation and coordinate transformation parameters of the images;
performing coordinate transformation and gray-level interpolation according to the spatial geometric transformation relation and the coordinate transformation parameters to complete image registration;
wherein fusing the overlapping portion of the aligned image data comprises fusing the image data according to pixels and features thereof, using methods comprising: the Euclidean distance method, the wavelet transform method, the average value method, the linear fade-in/fade-out method, and the hat-function weighted average method.
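Of the fusion methods listed, the hat-function weighted average is the simplest to illustrate: across the overlap, each image's weight ramps linearly from 1 at its own side to 0 at the other, so the seam fades smoothly. The sketch below blends two already-registered overlap strips; the strip shapes and values are illustrative assumptions.

```python
# Hat-function weighted average over the overlap region of two aligned strips.
import numpy as np

def fuse_overlap(left_strip: np.ndarray, right_strip: np.ndarray) -> np.ndarray:
    """Blend two aligned overlap strips of equal shape (H, W) column-wise."""
    h, w = left_strip.shape
    ramp = np.linspace(1.0, 0.0, w)            # left image's weight: 1 -> 0
    weights = np.tile(ramp, (h, 1))
    return weights * left_strip + (1.0 - weights) * right_strip

left = np.full((2, 5), 100.0)   # overlap as seen in the left image
right = np.full((2, 5), 200.0)  # overlap as seen in the right image
fused = fuse_overlap(left, right)
print(fused[0])  # fades from 100.0 at the left edge to 200.0 at the right edge
```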
4. The method according to claim 1, wherein the re-acquiring at least one image data after adjusting the shooting distance or the shooting parameters comprises:
determining the quality parameters and the corresponding shooting distance of first image data that does not conform to the preset rule;
adjusting the shooting distance or the shooting parameters according to the quality parameters and the shooting distance, and acquiring second image data;
analyzing whether the second image data conforms to the preset rule, and generating target image data when it does; when it does not, continuing to adjust the shooting distance until the acquired image data conforms to the preset rule.
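The adjust-and-re-acquire loop of claim 4 can be sketched as follows. The quality model (a blur score that grows with distance from an assumed 3 m focus sweet spot), the threshold, and the step size are all illustrative assumptions standing in for the real camera and preset rule.

```python
# Claim 4 sketch: closed loop that re-shoots with an adjusted distance
# until the quality parameter satisfies the preset rule.

def capture(distance_m: float) -> float:
    """Stand-in for the camera: returns a blur score, lower is better."""
    return abs(distance_m - 3.0)               # assumed focus sweet spot at 3 m

def acquire_until_ok(distance_m: float, max_blur=0.5, step=0.5, max_tries=20):
    """Adjust the shooting distance until the acquired image passes the rule."""
    for _ in range(max_tries):
        blur = capture(distance_m)
        if blur <= max_blur:                   # conforms to the preset rule
            return distance_m, blur
        # move toward the sweet spot and re-acquire
        distance_m += step if distance_m < 3.0 else -step
    raise RuntimeError("no conforming image within max_tries")

print(acquire_until_ok(5.0))  # (3.5, 0.5)
```

The `max_tries` bound keeps the closed loop from cycling forever if no conforming image is reachable.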
5. The method of claim 1, wherein adjusting the shooting parameters comprises:
acquiring the corresponding target illumination intensity according to the target position information, determining a target illumination compensation coefficient, and acquiring at least one image data after adjusting the illumination intensity of the shooting device.
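One plausible reading of the compensation coefficient in claim 5 is the ratio of the required illumination to the illumination measured at the target position, used to scale the lamp output. The linear model, the clamp, and all values are assumptions introduced for the sketch.

```python
# Claim 5 sketch (assumed linear model): scale lamp output by the ratio of
# target to measured illumination, clamped to the lamp's capability.

def compensation_coefficient(target_lux: float, measured_lux: float) -> float:
    """Coefficient by which the shooting device's lamp output is scaled."""
    return target_lux / measured_lux

def adjusted_lamp_power(current_power_w: float, coeff: float,
                        max_power_w: float = 100.0) -> float:
    """Apply the coefficient, clamped to the lamp's maximum output."""
    return min(current_power_w * coeff, max_power_w)

coeff = compensation_coefficient(target_lux=400.0, measured_lux=100.0)
print(coeff, adjusted_lamp_power(20.0, coeff))  # 4.0 80.0
```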
6. The method of claim 1, wherein acquiring the shooting distance comprises:
emitting a laser pulse signal to the target object, and acquiring an echo signal and target time data when the laser reaches the target object and returns to the receiving end of the laser ranging device, wherein the target time data refers to the round-trip time between the laser ranging device and the target object;
generating first target distance data by processing pulse data of the laser pulse signal and the target time data;
generating second target distance data by processing the phase difference between the laser pulse signal and the echo signal;
and generating the shooting distance from the first target distance data and the second target distance data.
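The two range estimates in claim 6 follow standard laser-ranging formulas: time-of-flight gives d = c·t/2, and phase-shift ranging gives d = c·Δφ/(4πf) within one ambiguity interval of the modulation frequency f. How the patent combines the two is not specified; the plain average below, and the 10 MHz modulation frequency, are assumptions for the sketch.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_s: float) -> float:
    """First target distance: d = c * t / 2 from the round-trip time."""
    return C * round_trip_s / 2.0

def phase_distance(phase_rad: float, mod_freq_hz: float) -> float:
    """Second target distance: d = c * delta_phi / (4 * pi * f),
    valid within one ambiguity interval of the modulation frequency."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

def shooting_distance(round_trip_s: float, phase_rad: float,
                      mod_freq_hz: float) -> float:
    """Fuse the two estimates; here, an assumed plain average."""
    return 0.5 * (tof_distance(round_trip_s) + phase_distance(phase_rad, mod_freq_hz))

# A 20 ns round trip corresponds to roughly 3 m of range.
print(round(tof_distance(20e-9), 4))  # 2.9979
```

In practice the coarse time-of-flight value resolves the phase ambiguity, while the phase measurement refines the precision; the average above only illustrates that both estimates feed the final shooting distance.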
7. A detection system, comprising: the system comprises a positioning subsystem, a distance measuring subsystem and an image processing subsystem;
the positioning subsystem acquires target position information and transmits the target position information to the image processing subsystem, wherein the target position information indicates the position of the detection device when the image data is acquired;
the ranging subsystem acquires a target shooting distance and transmits the target shooting distance to the image processing subsystem, wherein the target shooting distance indicates the distance between the laser ranging device and the target object to be measured;
the image processing subsystem acquires at least one image data and analyzes, according to a preset rule, whether the at least one image data conforms to the preset rule, wherein the image data comprises an image acquired in a tunnel;
when the at least one image data conforms to the preset rule, the at least one image data is spliced to generate target image data, wherein the splicing process comprises: spatially aligning the filtered and denoised image data, and fusing the overlapping portions of the aligned image data;
the target image data is marked with the target position information to generate target detection data, and the target detection data is uploaded to a database;
when the at least one image data does not conform to the preset rule, at least one image data is re-acquired after the shooting distance or the shooting parameters are adjusted, wherein the shooting distance refers to the distance between the shooting device and the target object when the image data is acquired;
wherein the preset rule comprises:
screening the image data, wherein screening refers to deleting image data that do not meet the image overlap requirement;
filtering and denoising the screened image data according to quality parameters, wherein the quality parameters comprise: parameter information characterizing image blur, image noise, and image distortion.
8. The detection system of claim 7, wherein the ranging subsystem is further configured to:
emit a laser pulse signal to the target object, and acquire an echo signal and target time data when the laser reaches the target object and returns to the receiving end of the laser ranging device, wherein the target time data refers to the round-trip time between the laser ranging device and the target object;
generate first target distance data by processing pulse data of the laser pulse signal and the target time data;
generate second target distance data by processing the phase difference between the laser pulse signal and the echo signal;
and generate the shooting distance from the first target distance data and the second target distance data.
CN201811536020.5A 2018-12-14 2018-12-14 Detection method and system Active CN109741271B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811536020.5A CN109741271B (en) 2018-12-14 2018-12-14 Detection method and system


Publications (2)

Publication Number Publication Date
CN109741271A CN109741271A (en) 2019-05-10
CN109741271B true CN109741271B (en) 2021-11-19

Family

ID=66359465

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811536020.5A Active CN109741271B (en) 2018-12-14 2018-12-14 Detection method and system

Country Status (1)

Country Link
CN (1) CN109741271B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110086999B (en) * 2019-05-28 2021-01-08 维沃移动通信有限公司 Image information feedback method and terminal equipment
CN110493507A (en) * 2019-05-30 2019-11-22 福建知鱼科技有限公司 A kind of grasp shoot method
CN110505383B (en) * 2019-08-29 2021-07-23 重庆金山医疗技术研究院有限公司 Image acquisition method, image acquisition device and endoscope system
CN110533015A (en) * 2019-08-30 2019-12-03 Oppo广东移动通信有限公司 Verification method and verifying device, electronic equipment, computer readable storage medium
CN110491797B (en) * 2019-09-29 2021-10-22 云谷(固安)科技有限公司 Line width measuring method and apparatus
CN116576792B (en) * 2023-07-12 2023-09-26 佳木斯大学 Intelligent shooting integrated device based on Internet of things

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105550995A (en) * 2016-01-27 2016-05-04 武汉武大卓越科技有限责任公司 Tunnel image splicing method and system
CN105979141A (en) * 2016-06-03 2016-09-28 北京奇虎科技有限公司 Image shooting method, device and mobile terminal
CN108702448A (en) * 2017-09-27 2018-10-23 深圳市大疆创新科技有限公司 Unmanned plane image-pickup method and unmanned plane
CN108802744A (en) * 2017-05-04 2018-11-13 四川医达科技有限公司 A kind of method and apparatus of remote laser ranging
CN108875625A (en) * 2018-06-13 2018-11-23 联想(北京)有限公司 A kind of recognition methods and electronic equipment


Also Published As

Publication number Publication date
CN109741271A (en) 2019-05-10

Similar Documents

Publication Publication Date Title
CN109741271B (en) Detection method and system
CN111855664B (en) Adjustable three-dimensional tunnel defect detection system
CN104005325B (en) Based on pavement crack checkout gear and the method for the degree of depth and gray level image
CN106978774B (en) A kind of road surface pit slot automatic testing method
KR101235815B1 (en) Imaging position analyzing device, imaging position analyzing method, recording medium, and image data acquiring device
KR101097119B1 (en) Method of inspecting tunnel inner part damage by vision sensor system
CN104502990A (en) Geological survey method of tunnel face based on digital image
CN104749187A (en) Tunnel lining disease detection device based on infrared temperature field and gray level image
CN108020825A (en) Laser radar, Laser video camera head, the fusion calibration system of video camera and method
CN105809668A (en) Object surface deformation feature extraction method based on line scanning three-dimensional point cloud
CN105548197A (en) Non-contact steel rail surface flaw inspection method and device
CN103938531B (en) Laser road faulting of slab ends detecting system and method
CN109060820B (en) Tunnel disease detection method and tunnel disease detection device based on laser detection
US11671574B2 (en) Information processing apparatus, image capture apparatus, image processing system, and method of processing a plurality of captured images of a traveling surface where a moveable apparatus travels
CN107102004A (en) A kind of tunnel detector
CN104776977A (en) Coastal engineering silt physical model test bottom bed dynamic and comprehensive observation method
CN109544607A (en) A kind of cloud data registration method based on road mark line
CN210052283U (en) Detection device
CN117538861A (en) Unmanned aerial vehicle detection method combining vision, laser SLAM and ground penetrating radar
Gorodnichev et al. Technical vision for monitoring and diagnostics of the road surface quality in the smart city program
JP7206726B2 (en) Measuring devices, measuring systems and vehicles
Alkaabi et al. Application of a drone camera in detecting road surface cracks: a UAE testing case study
Zhang et al. Visual Odometry and 3D Point Clouds Under Low-Light Conditions
Kage et al. Method of rut detection using lasers and in-vehicle stereo camera
CN112325791A (en) Road surface structure depth testing device based on photogrammetry technology and testing method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant