CN116188348A - Crack detection method, device and equipment - Google Patents
- Publication number: CN116188348A
- Application number: CN202111417479.5A
- Authority
- CN
- China
- Prior art keywords: image, camera, pixel, pixels, dimensional point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis; G06T7/0002—Inspection of images, e.g. flaw detection
- G06T3/00—Geometric image transformations in the plane of the image; G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting; G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
- G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/10—Image acquisition modality; G06T2207/10028—Range image; Depth image; 3D point clouds
- G06T2207/20—Special algorithmic details; G06T2207/20021—Dividing image into blocks, subimages or windows
Abstract
A crack detection method, apparatus, and device are disclosed to reduce the cost of crack detection. The method comprises the following steps: a camera acquires N frames of first images, the N frames being obtained by photographing different areas of a target object through a preset operation, where the preset operation is rotating the lens of the camera and adjusting the focal length of the lens; the camera sends the N frames of first images to a computing device; the computing device stitches a part of each of the N frames of first images to obtain a second image; and the computing device detects a crack in the target object based on the second image. Because no manual detection is needed, manpower and material resources are saved, and the cost of crack detection is reduced.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, and a device for detecting a crack.
Background
Cracks in structures such as roads and walls can shorten their service life and even endanger the people who use them. It is therefore desirable to detect cracks in a structure so that it can be repaired, reducing the risk of further damage.
Traditional crack detection relies mainly on manual inspection. In recent years, with the continued adoption of technological means, intelligent equipment such as road-inspection vehicles and unmanned aerial vehicles has also been used for crack detection.
However, both manual inspection and detection by means of intelligent equipment require substantial manpower and material resources, so the detection cost is high.
Disclosure of Invention
The present application provides a crack detection method, a crack detection apparatus, and crack detection equipment that reduce the cost of crack detection.
A first aspect provides a crack detection method. The method is performed jointly by a camera and a computing device: the camera, connected to the computing device, acquires the first images, and the computing device extracts cracks based on them. Specifically, the method comprises the following steps. The camera acquires N frames of first images, where N is an integer greater than or equal to 2. The N frames of first images are obtained by photographing different areas of a target object (for example, a road or a wall) through a preset operation, namely rotating the lens of the camera and adjusting its focal length. Different areas of the target object lie at different distances from the camera, and the focal length used for an area is proportional to its distance from the camera: rotating the lens selects the area to photograph, and the focal length is then set larger for areas farther from the camera and smaller for areas closer to it, so that the target object is presented clearly in each first image. After collecting the N frames of first images, the camera sends them to the computing device, which stitches a part of each of the N frames into a second image. The computing device then detects a crack in the target object based on the second image. Since no manual detection is needed, manpower and material resources are saved and the cost of crack detection is reduced.
With reference to the first aspect, in a first implementation manner of the first aspect of the present application, the second image is an image without perspective effect. Because of the perspective of the camera lens, objects farther from the camera appear smaller and objects nearer appear larger, so details of distant parts of the target object are lost and cracks there cannot be detected accurately. Stitching the N frames of first images into a second image without perspective effect presents the distant details of the target object clearly, making crack detection based on the second image more accurate.
With reference to the first aspect or the first implementation manner of the first aspect, in a second implementation manner of the first aspect of the present application, the computing device stitching a part of each of the N frames of first images to obtain the second image includes: the computing device cuts M rows of pixels from each of the N frames of first images, where M is an integer greater than or equal to 1, and stitches the M rows of pixels cut from the N frames to obtain the second image. The second image can thus be obtained quickly.
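The M-row stitching described above can be sketched as follows. This is a minimal illustration: the band of M rows is taken from the center of each frame (the row offset within each frame is an assumption, as the method does not fix it), and the bands are concatenated top to bottom.

```python
import numpy as np

def stitch_rows(frames, m):
    """Stitch a band of m rows cut from each frame into one image.

    frames: list of H x W uint8 arrays (the N first images).
    m: number of rows to cut from each frame (M >= 1).
    Returns an (N*m) x W array (the second image).
    """
    bands = []
    for frame in frames:
        center = frame.shape[0] // 2           # band position is an assumption
        bands.append(frame[center:center + m, :])
    return np.vstack(bands)                    # concatenate bands top to bottom
```

With equal-height frames and a fixed M, the result has a predictable size, which is what makes this variant fast.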
With reference to the first aspect or the first implementation manner of the first aspect, in a third implementation manner of the first aspect of the present application, the computing device stitching a part of each of the N frames of first images to obtain the second image includes: the computing device cuts some pixels from each of the N frames of first images, the number of pixels cut differing from image to image, and stitches the cut pixels of the N frames to obtain the second image. The cut pixels of each first image correspond to one sub-area of the target object, and the second image contains the complete target object composed of the N sub-areas. Because the pose and focal length of the camera differ for each first image, the number of pixels cut from each first image may differ.
With reference to the first aspect or the first implementation manner of the first aspect, in a fourth implementation manner of the first aspect of the present application, the computing device stitching a part of each of the N frames of first images to obtain the second image includes: the computing device cuts P pixels from each of the N frames of first images, where P is an integer greater than or equal to 1, and stitches the P pixels cut from the N frames to obtain the second image. Because the same number of pixels is cut from each first image, the second image can be stitched quickly.
With reference to the first aspect or any one of the first to fourth implementation manners of the first aspect, in a fifth implementation manner of the first aspect of the present application, the detecting, by the computing device, of the crack of the target object based on the second image includes: the computing device preprocesses the second image to obtain J first image blocks, each first image block containing a part of the target object, where J is an integer greater than or equal to 1. The computing device processes each first image block to obtain J second image blocks, in which target pixels corresponding to at least part of the crack are marked. The computing device merges the J second image blocks to obtain a third image, and extracts the target pixels in the third image to obtain the crack. Segmenting the second image and processing the image blocks separately reduces the influence of the external environment on the extraction of cracks in the target object and so improves the accuracy of crack detection.
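The block-process-merge pipeline of this implementation manner can be sketched as below. The square blocks of fixed side length are an assumption (the method only requires J ≥ 1 blocks containing parts of the target object), and the hypothetical `process_block` callable stands in for the per-block crack-marking step.

```python
import numpy as np

def detect_cracks(second_image, block_size, process_block):
    """Split the second image into blocks, process each, and merge the results.

    second_image: H x W uint8 array (the stitched second image).
    block_size: side length of the square first image blocks (an assumption).
    process_block: callable mapping a first image block to a second image
                   block of the same shape with crack pixels marked.
    Returns the merged third image.
    """
    h, w = second_image.shape
    third = np.empty_like(second_image)
    for y in range(0, h, block_size):
        for x in range(0, w, block_size):
            block = second_image[y:y + block_size, x:x + block_size]
            third[y:y + block_size, x:x + block_size] = process_block(block)
    return third
```

Processing blocks independently is what localizes environmental effects such as uneven illumination to a single block rather than letting them skew a global threshold.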
With reference to the fifth implementation manner of the first aspect, in a sixth implementation manner of the first aspect of the present application, the processing, by the computing device, of each first image block to obtain the J second image blocks includes: the computing device classifies the pixels in the first image block using a region growing algorithm to obtain K pixel sets, each containing at least one pixel, where K is an integer greater than or equal to 1. The computing device obtains the average gray value of each of the K pixel sets, namely the average of the gray values of the pixels in the set. The computing device takes the average gray value of the pixel set with the most pixels as a gray threshold, sets the gray value of the pixels in every set whose average gray value is smaller than the threshold to a first gray value, and sets the gray value of the pixels in every set whose average gray value is greater than or equal to the threshold to a second gray value, thereby obtaining the second image block. The first gray value may be 0 and the second gray value 255, or vice versa. Screening out the pixels corresponding to the crack in the image block and assigning them the first gray value completes the binarization of the image block, that is, marks the crack pixels.
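The thresholding step above can be illustrated as follows, assuming the K pixel sets are given as lists of (row, column) indices, the largest set is the background, and crack pixels receive the first gray value 0 while all others receive the second gray value 255 (the reverse assignment is equally valid per the text).

```python
import numpy as np

def binarize_block(block, pixel_sets):
    """Binarize an image block given K pixel sets from region growing.

    block: H x W uint8 array (a first image block).
    pixel_sets: list of K lists of (row, col) index pairs.
    The gray threshold is the average gray value of the largest set;
    darker sets are marked as crack pixels (0), the rest as background (255).
    """
    means = [block[tuple(zip(*s))].astype(float).mean() for s in pixel_sets]
    largest = max(range(len(pixel_sets)), key=lambda i: len(pixel_sets[i]))
    threshold = means[largest]
    out = np.empty_like(block)
    for s, mean in zip(pixel_sets, means):
        value = 0 if mean < threshold else 255   # first / second gray value
        for r, c in s:
            out[r, c] = value
    return out
```

Using the largest set's average as the threshold assumes the crack occupies a minority of the block, which holds for thin cracks but is an assumption worth noting.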
With reference to the sixth implementation manner of the first aspect, in a seventh implementation manner of the first aspect of the present application, the classifying, by the computing device, of the pixels in the image block using the region growing algorithm to obtain the K pixel sets includes: the computing device converts the image block into a three-dimensional point cloud. The computing device obtains a neighborhood radius for each three-dimensional point in the point cloud, namely the average Euclidean distance between the target three-dimensional point and its L neighborhood three-dimensional points, which are the three-dimensional points corresponding to the L pixels adjacent to the pixel of the target point; L is an integer greater than or equal to 4. The computing device obtains the search radius of the point cloud, namely the average of the neighborhood radii of all three-dimensional points in the cloud, and classifies the three-dimensional points according to the search radius to obtain K pixel sets corresponding to K three-dimensional point sets. Because each image block derives its own search radius from the characteristics of its point cloud, the three-dimensional points in the block can be classified more accurately, segmenting out the three-dimensional points corresponding to the crack.
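A sketch of the neighborhood-radius and search-radius computation follows. The mapping of a pixel (r, c) with gray value g to the 3D point (c, r, g) and the use of the 4-neighborhood (L = 4) are assumptions: the method only requires a conversion to a point cloud and L ≥ 4.

```python
import numpy as np

def search_radius(block, offsets=((-1, 0), (1, 0), (0, -1), (0, 1))):
    """Compute the region-growing search radius for one image block.

    Each pixel (r, c) with gray value g becomes the 3D point (c, r, g).
    A point's neighborhood radius is the mean Euclidean distance to the 3D
    points of its in-bounds 4-neighbors; the block's search radius is the
    mean neighborhood radius over all points.
    """
    h, w = block.shape
    radii = []
    for r in range(h):
        for c in range(w):
            p = np.array([c, r, float(block[r, c])])
            dists = [np.linalg.norm(p - np.array([c + dc, r + dr,
                                                  float(block[r + dr, c + dc])]))
                     for dr, dc in offsets
                     if 0 <= r + dr < h and 0 <= c + dc < w]
            radii.append(np.mean(dists))
    return float(np.mean(radii))
```

On a uniform block the radius is exactly the pixel pitch; sharp gray-value steps at a crack enlarge local distances, which is why a per-block radius adapts the segmentation to each block's texture.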
With reference to the seventh implementation manner of the first aspect, in an eighth implementation manner of the first aspect of the present application, classifying the three-dimensional points in the three-dimensional point cloud according to the search radius to obtain the K pixel sets corresponding to K three-dimensional point sets includes: determining an unclassified three-dimensional point in the point cloud as a seed point; classifying, with the seed point as the center, every three-dimensional point within the search radius of the seed point into the target three-dimensional point set containing the seed point; taking a point in the target set that has not yet served as a seed point as the new seed point and repeating the previous step, until every point in the target set has served as a seed point; and then returning to the step of determining an unclassified point as a seed point, until no unclassified three-dimensional points remain.
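The seed-growing loop above can be sketched as a brute-force implementation over an (n, 3) array of points; a practical implementation would use a spatial index such as a k-d tree for the radius search.

```python
import numpy as np

def region_grow(points, radius):
    """Group 3D points into sets by iterative seed growing.

    points: (n, 3) float array of 3D points.
    radius: the search radius of the point cloud.
    Returns an array of set labels, one per point.
    """
    n = len(points)
    labels = np.full(n, -1)       # -1 marks an unclassified point
    current = -1
    for start in range(n):
        if labels[start] != -1:
            continue
        current += 1              # open a new target set
        labels[start] = current
        seeds = [start]
        while seeds:              # grow until no point in the set is an unused seed
            seed = seeds.pop()
            near = np.linalg.norm(points - points[seed], axis=1) <= radius
            for i in np.nonzero(near & (labels == -1))[0]:
                labels[i] = current
                seeds.append(i)   # each newly classified point becomes a seed
    return labels
```

Each pass of the outer loop corresponds to "determine an unclassified point as a seed point"; the inner loop exhausts the target set before a new set is opened.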
A second aspect provides a crack detection method applied to a camera. The method comprises the following steps. The camera acquires N frames of first images, where N is an integer greater than or equal to 2. The N frames of first images are obtained by photographing different areas of a target object (for example, a road or a wall) through a preset operation, namely the camera rotating its lens and adjusting the focal length of the lens. Different areas of the target object lie at different distances from the camera, and the focal length used for an area is proportional to its distance from the camera: rotating the lens selects the area to photograph, and the focal length is then set larger for areas farther from the camera and smaller for areas closer to it, so that the target object is presented clearly in each first image. The camera stitches a part of each of the N frames of first images to obtain a second image, and detects a crack in the target object based on the second image. Since no manual detection is needed, manpower and material resources are saved and the cost of crack detection is reduced.
With reference to the second aspect, in a first implementation manner of the second aspect of the present application, the second image is an image without perspective effect. Because of the perspective of the camera lens, objects farther from the camera appear smaller and objects nearer appear larger, so details of distant parts of the target object are lost and cracks there cannot be detected accurately. Stitching the N frames of first images into a second image without perspective effect presents the distant details of the target object clearly, making crack detection based on the second image more accurate.
With reference to the second aspect or the first implementation manner of the second aspect, in a second implementation manner of the second aspect of the present application, the camera stitching a part of each of the N frames of first images to obtain the second image includes: the camera cuts M rows of pixels from each of the N frames of first images, where M is an integer greater than or equal to 1, and stitches the M rows of pixels cut from the N frames to obtain the second image. The second image can thus be obtained quickly.
With reference to the second aspect or the first implementation manner of the second aspect, in a third implementation manner of the second aspect of the present application, the camera stitching a part of each of the N frames of first images to obtain the second image includes: the camera cuts some pixels from each of the N frames of first images, the number of pixels cut differing from image to image, and stitches the cut pixels of the N frames to obtain the second image. The cut pixels of each first image correspond to one sub-area of the target object, and the second image contains the complete target object composed of the N sub-areas. Because the pose and focal length of the camera differ for each first image, the number of pixels cut from each first image may differ.
With reference to the second aspect or the first implementation manner of the second aspect, in a fourth implementation manner of the second aspect of the present application, the camera stitching a part of each of the N frames of first images to obtain the second image includes: the camera cuts P pixels from each of the N frames of first images, where P is an integer greater than or equal to 1, and stitches the P pixels cut from the N frames to obtain the second image. Because the same number of pixels is cut from each first image, the second image can be stitched quickly.
With reference to the second aspect or any one of the first to fourth implementation manners of the second aspect, in a fifth implementation manner of the second aspect of the present application, the detecting, by the camera, of the crack of the target object based on the second image includes: the camera preprocesses the second image to obtain J first image blocks, each first image block containing a part of the target object, where J is an integer greater than or equal to 1. The preprocessing may be blocking, filtering, or cutting. The camera processes each first image block to obtain J second image blocks, in which target pixels corresponding to a sub-crack are marked, a sub-crack being a part of the crack. The camera merges the J second image blocks to obtain a third image, and extracts the target pixels in the third image to obtain the crack. Segmenting the second image and processing the image blocks separately reduces the influence of the external environment on the extraction of cracks in the target object and so improves the accuracy of crack detection.
With reference to the fifth implementation manner of the second aspect, in a sixth implementation manner of the second aspect of the present application, the processing, by the camera, of each first image block to obtain the J second image blocks includes: the camera classifies the pixels in the first image block using a region growing algorithm to obtain K pixel sets, each containing at least one pixel, where K is an integer greater than or equal to 1. The camera obtains the average gray value of each of the K pixel sets, namely the average of the gray values of the pixels in the set. The camera takes the average gray value of the pixel set with the most pixels as a gray threshold, sets the gray value of the pixels in every set whose average gray value is smaller than the threshold to a first gray value, and sets the gray value of the pixels in every set whose average gray value is greater than or equal to the threshold to a second gray value, thereby obtaining the second image block. The first gray value may be 0 and the second gray value 255, or vice versa. Screening out the pixels corresponding to the crack in the image block and assigning them the first gray value completes the binarization of the image block, that is, marks the crack pixels.
With reference to the sixth implementation manner of the second aspect, in a seventh implementation manner of the second aspect of the present application, the classifying, by the camera, of the pixels in the image block using the region growing algorithm to obtain the K pixel sets includes: the camera converts the image block into a three-dimensional point cloud. The camera obtains a neighborhood radius for each three-dimensional point in the point cloud, namely the average Euclidean distance between the target three-dimensional point and its L neighborhood three-dimensional points, which are the three-dimensional points corresponding to the L pixels adjacent to the pixel of the target point; L is an integer greater than or equal to 4. The camera obtains the search radius of the point cloud, namely the average of the neighborhood radii of all three-dimensional points in the cloud, and classifies the three-dimensional points according to the search radius to obtain K pixel sets corresponding to K three-dimensional point sets. Because each image block derives its own search radius from the characteristics of its point cloud, the three-dimensional points in the block can be classified more accurately, segmenting out the three-dimensional points corresponding to the crack.
With reference to the seventh implementation manner of the second aspect, in an eighth implementation manner of the second aspect of the present application, classifying the three-dimensional points in the three-dimensional point cloud according to the search radius to obtain the K pixel sets corresponding to K three-dimensional point sets includes: determining an unclassified three-dimensional point in the point cloud as a seed point; classifying, with the seed point as the center, every three-dimensional point within the search radius of the seed point into the target three-dimensional point set containing the seed point; taking a point in the target set that has not yet served as a seed point as the new seed point and repeating the previous step, until every point in the target set has served as a seed point; and then returning to the step of determining an unclassified point as a seed point, until no unclassified three-dimensional points remain.
A third aspect provides a crack detection device. The device comprises an acquisition module, a stitching module, and a detection module. The acquisition module is configured to acquire N frames of first images, obtained by photographing different areas of a target object through a preset operation, namely the camera rotating its lens and adjusting the focal length of the lens, the different areas lying at different distances from the camera; N is an integer greater than or equal to 2. The stitching module is configured to stitch a part of each of the N frames of first images to obtain a second image. The detection module is configured to detect a crack in the target object based on the second image.
With reference to the third aspect, in a first implementation manner of the third aspect of the present application, the second image is an image without perspective effect. Because of the perspective of the camera lens, objects farther from the camera appear smaller and objects nearer appear larger, so details of distant parts of the target object are lost and cracks there cannot be detected accurately. Stitching the N frames of first images into a second image without perspective effect presents the distant details of the target object clearly, making crack detection based on the second image more accurate.
With reference to the third aspect or the first implementation manner of the third aspect, in a second implementation manner of the third aspect of the present application, the stitching module is specifically configured to: cut M rows of pixels from each of the N frames of first images, where M is an integer greater than or equal to 1, and stitch the M rows of pixels cut from the N frames to obtain the second image.
With reference to the third aspect or the first implementation manner of the third aspect, in a third implementation manner of the third aspect of the present application, the stitching module is specifically configured to: cut some pixels from each of the N frames of first images, the number of pixels cut differing from image to image, and stitch the cut pixels of the N frames to obtain the second image.
With reference to the third aspect or the first implementation manner of the third aspect, in a fourth implementation manner of the third aspect of the present application, the stitching module is specifically configured to: cut P pixels from each of the N frames of first images, where P is an integer greater than or equal to 1, and stitch the P pixels cut from the N frames to obtain the second image.
With reference to the third aspect or any one of the first to fourth implementation manners of the third aspect, in a fifth implementation manner of the third aspect of the present application, the detection module is specifically configured to: preprocess the second image to obtain J first image blocks, each containing a part of the target object, where J is an integer greater than or equal to 1; process each first image block to obtain J second image blocks, in which target pixels corresponding to at least part of the crack are marked; merge the J second image blocks to obtain a third image; and extract the target pixels in the third image to obtain the crack.
With reference to the fifth implementation manner of the third aspect, in a sixth implementation manner of the third aspect of the present application, the detection module is further configured to: classify the pixels in the first image block using a region growing algorithm to obtain K pixel sets, each containing at least one pixel, where K is an integer greater than or equal to 1; obtain the average gray value of each of the K pixel sets, namely the average of the gray values of the pixels in the set; take the average gray value of the pixel set with the most pixels as a gray threshold; and set the gray value of the pixels in every set whose average gray value is smaller than the threshold to a first gray value and the gray value of the pixels in every set whose average gray value is greater than or equal to the threshold to a second gray value, thereby obtaining the second image block.
With reference to the sixth implementation manner of the third aspect, in a seventh implementation manner of the third aspect of the present application, the detection module is further configured to: convert the image block into a three-dimensional point cloud; obtain a neighborhood radius for each three-dimensional point in the point cloud, namely the average Euclidean distance between the target three-dimensional point and its L neighborhood three-dimensional points, which are the three-dimensional points corresponding to the L pixels adjacent to the pixel of the target point, where L is an integer greater than or equal to 4; obtain the search radius of the point cloud, namely the average of the neighborhood radii of all three-dimensional points in the cloud; and classify the three-dimensional points according to the search radius to obtain K pixel sets corresponding to K three-dimensional point sets.
With reference to the seventh implementation manner of the third aspect, in an eighth implementation manner of the third aspect of the present application, the detection module is specifically further configured to: determine one unclassified three-dimensional point in the three-dimensional point cloud as a seed point; classify, with the seed point as the center, the three-dimensional points located within the search radius of the seed point into the target three-dimensional point set where the seed point is located; determine a three-dimensional point in the target three-dimensional point set that has not yet served as a seed point as a new seed point, and return to the step of classifying the three-dimensional points located within the search radius of the seed point into the target three-dimensional point set; and when no three-dimensional point in the target three-dimensional point set remains that has not served as a seed point, return to the step of determining one unclassified three-dimensional point in the three-dimensional point cloud as a seed point.
A fourth aspect provides a crack detection device. The crack detection device comprises a processor and a memory, the processor being coupled to the memory, the processor being configured to perform the crack detection method of the first aspect or any implementation thereof described above based on instructions stored in the memory.
A fifth aspect provides a computer-readable storage medium. The computer readable storage medium stores a computer program that is executed by a processor to implement the crack detection method of the first aspect or any implementation manner thereof, or to implement the crack detection method of the second aspect or any implementation manner thereof.
Drawings
Fig. 1 is a schematic flow chart of a first embodiment of a crack detection method provided in the present application;
fig. 2 is a schematic diagram of a scene in which a camera provided in the present application photographs a target object;
FIG. 3 is a schematic diagram showing the relationship between the first images of two adjacent frames;
FIG. 4 is a schematic illustration of a second image provided herein;
FIG. 5 is a schematic flow chart of a second embodiment of a crack detection method provided in the present application;
FIG. 6 is a schematic structural diagram of an embodiment of a crack detection device provided in the present application;
FIG. 7 is a schematic diagram of a crack detection system according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of an embodiment of a crack detection device provided in the present application.
Detailed Description
The embodiment of the application provides a crack detection method, a crack detection device and crack detection equipment, so as to reduce the crack detection cost.
Embodiments of the present application are described below with reference to the accompanying drawings. The terminology used in the description of the embodiments is for the purpose of describing particular embodiments only and is not intended to be limiting. As one of ordinary skill in the art can appreciate, with the development of technology and the emergence of new scenarios, the technical solutions provided in the embodiments of the present application are likewise applicable to similar technical problems.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Building objects such as roads, bridges, tunnels and walls may develop surface cracks under the influence of factors such as improper construction processes, overloading, temperature changes, foundation settlement and material aging. Cracks reduce the bearing capacity of a building object and accelerate its deterioration, affecting its serviceability and safety. Cracks on building objects therefore need to be detected and inspected in a timely manner.
Existing crack detection means for building objects rely to a large extent on manual labor, which is inefficient and costly. The present application therefore provides the following embodiments to accomplish crack detection of a building object at low cost and with high efficiency.
The embodiment of the crack detection method can be independently completed by a camera or can be completed by the cooperation of the camera and the computing equipment. These two cases are described separately below.
As shown in fig. 1, fig. 1 is a schematic flow chart of a first embodiment of the crack detection method provided in the present application. The execution bodies of this embodiment are a camera and a computing device. The camera is connected to the computing device; it collects N frames of first images and sends them to the computing device, and after receiving the N frames of first images, the computing device detects cracks of the target object based on them. The computing device is a device with computing power, such as a cloud server, a computer or an intelligent terminal.
Specifically, the present embodiment includes the steps of:
s101: the camera collects N frames of first images, the N frames of first images are obtained by shooting different areas of a target object through a preset operation, the preset operation is that the camera rotates a lens of the camera and adjusts the focal length of the lens, and distances between the different areas and the camera are different.
In the present application, the camera is a camera having pan/tilt control and zoom functions, for example a dome camera, specifically, for example, a pan-tilt-zoom (PTZ) camera. Of course, the camera may also be a gun-type camera, a barrel-type camera, or the like, as long as it has the pan/tilt control and zoom functions described above; the specific form of the camera is not limited in this application.
The camera includes a pan/tilt head and a lens. The pan/tilt head can rotate up and down and/or left and right, and its rotation drives the lens to rotate, so that the lens of the camera can photograph different areas of the target object.
The target object may be a building object such as a road, bridge, wall or tunnel within the visible range of the camera, or a part of such a building object within the visible range. For example, as shown in fig. 2, fig. 2 is a schematic view of a scene in which a camera provided in the present application photographs a target object. In fig. 2 the target object is a road, and the camera is typically suspended above the road. Because the focal length range of the camera is limited and the lens is subject to perspective, in the part of the road beyond the distance corresponding to the maximum focal length of the lens, the image shot by the camera may be unclear or missing details; in that case the target object is the part of the road within the range that the camera can photograph clearly.
The camera needs to aim at at least part of the area of the target object before photographing it. The camera may automatically recognize and aim at the target object; for example, the camera recognizes the target object through a trained neural network, determines the position of the target object relative to the camera, and then rotates the pan/tilt head to align the lens with the target object. Alternatively, the target object may be selected manually, and the camera controlled to rotate the pan/tilt head so that the lens is aligned with the target object.
The distance between different areas of the target object and the camera is different. It will be appreciated that there may be overlapping portions between different regions. The distance between the area of the target object and the camera may be the distance between the intersection of the optical axis of the camera lens and the target object and the camera. In this embodiment, the camera shoots different areas of the target object with different focal lengths to obtain N frames of first images of the target object, and the distance between the different areas and the camera is proportional to the focal length of the lens of the camera. That is, the farther the target object is from the camera, the larger the focal length; the closer the target object is to the camera, the smaller the focal length. Thus, the area of the target object further from the camera can be clearly and in detail presented in the first image.
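For illustration, the proportional relationship between region distance and lens focal length can be sketched as follows (a minimal sketch under a pinhole-camera assumption; the base distance and base focal length values are hypothetical, chosen only to show the scaling):

```python
def focal_length_for_region(distance_m, base_distance_m=10.0, base_focal_mm=12.0):
    """Scale the lens focal length linearly with the distance to the
    photographed region, so that each region occupies a similar extent in
    its first image (pinhole model: magnification is roughly f / distance)."""
    return base_focal_mm * distance_m / base_distance_m

# Regions farther from the camera are shot with a longer focal length.
focals = [focal_length_for_region(d) for d in (10.0, 20.0, 40.0)]
print(focals)  # [12.0, 24.0, 48.0]
```

With such a rule, an area twice as far away is imaged at twice the focal length, which keeps distant areas clear and detailed in their first images.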
In this embodiment, the camera may photograph the target object in a line-scan manner. Specifically, the camera rotates the lens along a fixed direction so that the lens photographs different areas of the target object from far to near or from near to far, adjusts the focal length while or after rotating the lens, and shoots the N frames of first images based on the different camera poses and focal lengths. Depending on the position and extension direction of the target object relative to the camera, the camera may rotate in only one direction, e.g. only up or down, or only left or right. For example, as shown in fig. 2, taking the case in which the camera photographs the target object from near to far: the road is located in front of and below the camera, and the camera changes the shooting angle by rotating the lens upward (i.e., increasing the pitch angle of the lens) so that the shooting range falls on an area of the target object farther away, increases the focal length at the same time, and then shoots, repeating until the photographing of the road is completed.
For the same target object, the larger the amplitude of each rotation of the camera, the smaller the number N of collected first images, the more pixels of each first image are used for stitching the second image, the larger the area of the target object corresponding to those pixels, and the larger the influence of the perspective effect. Conversely, the smaller the amplitude of each rotation, the larger N, the fewer pixels of each first image are used for stitching the second image, the smaller the corresponding area of the target object, and the smaller the influence of the perspective effect. Therefore, in this embodiment each rotation of the camera may be small, with the focal length correspondingly adjusted by a small amount, so that the target object is recorded more accurately in the N frames of first images. In particular:
When the camera rotates the lens along a fixed direction to collect the first images, two adjacent frames of first images are related as follows: a first image a_x is obtained by shooting based on a certain pose p_x and focal length f_x of the camera; a first image a_(x-1) is obtained by shooting based on the previous pose p_(x-1) and previous focal length f_(x-1), from which the camera changed directly to pose p_x and focal length f_x; and a first image a_(x+1) is obtained by shooting based on the next pose p_(x+1) and next focal length f_(x+1), to which the camera changed directly from pose p_x and focal length f_x. The first image a_(x-1) is the previous frame of the first image a_x, the first image a_x is the previous frame of the first image a_(x+1), and the first image a_(x+1) is the next frame of the first image a_x; the first images a_x and a_(x-1) are two adjacent frames of first images, as are the first images a_x and a_(x+1).
Because of lens distortion, the closer an area of the first image is to the edge, the more obvious the distortion; the center of the first image is least affected. Optionally, to further reduce the influence of lens distortion, the camera can rotate each time by an amount such that the area corresponding to the middle M rows/columns of pixels of the previous frame first image is adjacent to the area corresponding to the middle M rows/columns of pixels of the next frame first image. This reduces the influence of lens distortion at the edges of the first images and ensures the completeness and accuracy of the second image subsequently stitched from the N frames of first images.
For example, as shown in fig. 3, fig. 3 is a schematic diagram of the relationship between two adjacent frames of first images, illustrated with M equal to 1. The camera photographs a first region based on a first pitch angle and a first focal length to obtain the k-th frame first image; the camera then rotates to a second pitch angle, adjusts the focal length to a second focal length (larger than the first focal length), and photographs based on the second pitch angle and second focal length to obtain the (k+1)-th frame first image. The first sub-region and the second sub-region are two adjacent sub-regions on the road. In the k-th frame first image, the first sub-region of the road is located in the middle row of the image, and the second sub-region is located in the row above the middle row. In the (k+1)-th frame first image, the second sub-region is located in the middle row of the image, and the first sub-region is located in the row below the middle row.
Of course, the adjacent areas may instead correspond to the bottom or top M rows (or columns) of pixels of the previous and next frame first images each time the camera rotates; this application is not limited in this respect.
M may be a positive integer such as 1, 3, 5, 8 or 10. Its value may be determined according to the actual situation; for example, when the target object is longer, or the requirement on detection efficiency is higher, M may be adjusted appropriately. This application is not limited in this respect.
The N frames of first images may be images in a world coordinate system. Specifically, the camera is calibrated before photographing the target object, i.e., its intrinsic and extrinsic parameter matrices are known. When shooting each first image, the camera records the rotation of the lens relative to the previous pose and the focal length used for that image. Substituting the rotation into the extrinsic matrix as the extrinsic parameters and the focal length into the intrinsic matrix as an intrinsic parameter yields the first image in the world coordinate system. In this way, photos shot with different poses and focal lengths are unified into one absolute coordinate system (the world coordinate system), which facilitates subsequent processing based on the N frames of first images. The first images may also be distortion corrected: distortion parameters are obtained during camera calibration and substituted into a distortion correction formula, which is applied to the original image collected by the camera to output a distortion-corrected first image.
Generally, public places such as traffic roads and building clusters are already covered by deployed dome cameras. The present application uses these already-deployed dome cameras to image building objects such as roads and walls, thereby reducing the material resources that need to be invested.
S102: the camera sends N frames of first images to the computing device.
After the camera collects the N frames of first images, it sends them to the computing device. Alternatively, during collection the camera transmits each first image to the computing device as it is acquired, which improves the efficiency with which the computing device obtains the first images and reduces the storage requirement of the camera.
S103: the computing device stitches part of each of the N frames of first images to obtain a second image.
Because lens distortion distorts the edge areas of each first image, after the camera collects the N frames of first images the computing device intercepts part of the pixels of each of the N frames of first images and stitches these partial pixels into a second image. The area of the target object corresponding to the intercepted partial pixels of each first image is adjacent to the area corresponding to the intercepted partial pixels of the adjacent first image, which ensures that the target object in the stitched second image is complete.
In some embodiments, the partial pixels may be M rows of pixels of each first image. The M rows intercepted from each first image only need to ensure that the target object in the stitched second image is complete; their positions within the first image need not be limited.
In order to further improve the accuracy of the second image and the stitching efficiency, the middle M rows of pixels of each first image may be intercepted and stitched into the second image. Whether or not the first image has been distortion corrected, the effect of camera distortion on the middle M rows of pixels is minimal, so the second image obtained by stitching the middle M rows of each first image is more accurate. Moreover, intercepting a fixed number of pixels from a fixed location of each first image is more efficient.
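As a sketch, stitching the middle M rows of each first image might look like this (NumPy is assumed; the synthetic frames stand in for distortion-corrected first images already expressed in a common coordinate system):

```python
import numpy as np

def stitch_middle_rows(first_images, m=1):
    """Cut the middle m rows from each first image (the rows least affected
    by lens distortion) and stack them in acquisition order to form the
    second image."""
    strips = []
    for img in first_images:
        h = img.shape[0]
        top = (h - m) // 2           # start row of the middle m-row band
        strips.append(img[top:top + m])
    return np.vstack(strips)

# Three 5-row synthetic "first images"; the stitched second image gets one
# middle row from each frame, so its height equals the number of frames.
frames = [np.full((5, 4), k, dtype=np.uint8) for k in (10, 20, 30)]
second = stitch_middle_rows(frames, m=1)
print(second.shape)  # (3, 4)
```

The same slicing works column-wise for a camera that scans left/right by taking the middle M columns instead.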
In other embodiments, the second image may also be obtained by cropping, from each first image, the pixels corresponding to an equal-sized area of the target object. For example, if the target object is a road 5 m wide and 100 m long and there are 200 first images, the pixels corresponding to a road patch 5 m wide and 0.5 m long may be cropped from each first image, and the patches from the 200 first images stitched into the second image. Because the first images are shot with different camera poses and focal lengths, equal-sized areas of the target object may occupy different numbers of pixels in different first images, so the number of pixels cropped from each first image may differ.
In still other embodiments, the second image may also be obtained by intercepting P pixels from each first image. The P pixels intercepted from each first image only need to ensure that the target object in the stitched second image is complete; their positions within the first image need not be limited. P is an integer greater than or equal to 1.
Since the first images are images in the world coordinate system, the second image stitched from them is also an image in the world coordinate system. And because every area of the target object, whether far from or near to the camera, is clearly recorded in at least one frame of first image as the focal length is adjusted, the second image is a single high-definition frame with no (or minimal) distortion and no perspective effect.
As shown in fig. 4, fig. 4 is a schematic diagram of a second image provided in the present application; the target object is again a road. Because the N frames of first images are shot with different poses and focal lengths, the second image stitched from partial pixels of each of them is an image free of perspective effect and distortion.
S104: the computing device detects a crack of the target object based on the second image.
After the N frames of first images are spliced into the second image with high definition, no distortion/minimal distortion and no perspective effect, the computing equipment can accurately detect whether the target object has cracks or not based on the second image.
In particular, to improve the accuracy of crack detection and to improve detection efficiency, the computing device may pre-process the second image.
In general, there may be green belts or other building objects in or around a road or wall, and these are captured in the first images and stitched into the second image along with the target object. Objects other than the target object in the image reduce detection efficiency and interfere with crack detection of the target object. Therefore, only the region of the second image where the target object is located may be detected, improving both crack detection efficiency and accuracy. The preprocessing may then include extracting the target object from the second image to obtain a target image containing only the target object. The target object may be extracted by the computing device recognizing it with a trained neural network, or according to a manually selected region of interest (the region corresponding to the target object). Of course, the preprocessing may also omit this extraction step; for convenience of description, the following takes the case in which the target image is extracted as an example.
Optionally, due to differences in the material quality, illumination variation and aging degree of the building object, the preprocessing may further include noise reduction processing on the target image to reduce errors introduced by differences in the material quality and illumination, and differences in brightness of the target object due to wear aging. The noise reduction process is specifically, for example, a median filter process or a gaussian filter process.
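A 3x3 median filter, one of the noise-reduction options mentioned above, can be sketched without external image libraries as follows (a minimal NumPy implementation; a real pipeline might call an equivalent library routine instead):

```python
import numpy as np

def median_filter_3x3(img):
    """3x3 median filter with edge replication: a simple noise-reduction
    step for suppressing isolated brightness outliers caused by material,
    lighting and wear differences on the target surface."""
    padded = np.pad(img.astype(np.float64), 1, mode="edge")
    h, w = img.shape
    # Stack the 9 shifted views of the image and take the per-pixel median.
    stack = np.stack([padded[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0).astype(img.dtype)

noisy = np.full((5, 5), 100, dtype=np.uint8)
noisy[2, 2] = 255                      # single salt-noise pixel
print(median_filter_3x3(noisy)[2, 2])  # 100
```

A Gaussian filter would replace the median with a weighted average; the choice trades edge preservation (median) against smoothness (Gaussian).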
The preprocessing may further include performing a blocking process on the target image to obtain J first image blocks. J may be an integer greater than or equal to 1. When J is equal to 1, the first image block is the target image. When J is greater than or equal to 2, the target image is divided into a plurality of first image blocks, so that the influence caused by uneven brightness of the non-target object in the target image can be further reduced. Hereinafter, an example in which J is 2 or more will be described.
The blocking processing of the target image may specifically be that the computing device blocks the target image according to a preset size and a size of the target image. The predetermined size is, for example, 64 pixels by 64 pixels, 128 pixels by 128 pixels, 128 pixels by 256 pixels, 256 pixels by 256 pixels, or 512 pixels by 512 pixels. This application is not limited thereto. Taking the preset size of 64 pixels by 64 pixels and the target image size of 3200 pixels by 6400 pixels as an example for illustration, the target image can be divided into 5000 first image blocks.
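The blocking step can be sketched with NumPy as follows (assuming, for simplicity, that the target image dimensions are exact multiples of the preset block size, as in the 3200 x 6400 example above):

```python
import numpy as np

def split_into_blocks(target_image, block=64):
    """Split the target image into non-overlapping block x block first
    image blocks (image dimensions assumed to be multiples of `block`)."""
    h, w = target_image.shape
    return (target_image
            .reshape(h // block, block, w // block, block)
            .swapaxes(1, 2)            # group the two block-grid axes first
            .reshape(-1, block, block))

img = np.zeros((3200, 6400), dtype=np.uint8)
blocks = split_into_blocks(img, block=64)
print(len(blocks))  # 5000
```

For sizes that are not exact multiples, the image would first be padded or the last partial blocks handled separately; the text does not specify which.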
The computing device classifies and binarizes each first image block to obtain J second image blocks. When a sub-crack exists in a second image block, the target pixels of the sub-crack in that block are marked. A sub-crack in a second image block is part of a crack, because a crack is relatively slender while a second image block is small. The target pixels are, for example, pixels with a first gray value, while the other pixels of the second image block have a second gray value different from the first. For example, the first gray value may be 0 and the second gray value 255, or the first 255 and the second 0.
Specifically, the computing device classifies the pixels in the first image block using a region growing algorithm to obtain K pixel sets. Each pixel set includes at least one pixel, and K is an integer greater than or equal to 1. The region growing algorithm combines pixels with similar features into one set. For example, when a first image block contains both a sub-crack and non-cracked target object surface, some of its pixels correspond to the sub-crack and some to the surface, and the region growing algorithm can generally separate these into at least two pixel sets, one representing the sub-crack and one representing the target object surface.
The computing device converts the first image block into a three-dimensional point cloud. The XY plane of the point cloud coordinate system carries the pixel coordinates of the first image block, and the Z axis carries the gray value of the pixel; for ease of calculation, the gray value is normalized to [0,1] as the Z coordinate. The computing device traverses the three-dimensional point cloud to obtain the neighborhood radius of each three-dimensional point. The neighborhood radius is the average of the Euclidean distances between a target three-dimensional point and its L neighborhood three-dimensional points, which are the three-dimensional points corresponding to the L pixels adjacent to the pixel corresponding to the target three-dimensional point. L is an integer greater than or equal to 4, for example 4, 8, 10 or 12. The computing device then calculates the search radius of the three-dimensional point cloud from the neighborhood radii of the individual points; the search radius is the average of the neighborhood radii of all three-dimensional points in the point cloud. Each first image block thus determines its search radius from its own characteristics, so that when the point cloud is searched with this radius, the three-dimensional points can be classified more accurately.
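A minimal sketch of the neighborhood-radius and search-radius computation, assuming L = 4 (edge pixels simply use their available neighbors, one possible boundary treatment that the text does not specify):

```python
import numpy as np

def search_radius(block):
    """Convert an image block to a 3-D point cloud (x, y, gray/255) and
    compute the search radius: the mean over all points of each point's
    neighborhood radius, i.e. the mean Euclidean distance to the points of
    its L = 4 adjacent pixels."""
    h, w = block.shape
    z = block.astype(np.float64) / 255.0   # normalized gray as Z coordinate
    radii = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            dists = []
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    dists.append(np.sqrt((ny - y) ** 2 + (nx - x) ** 2
                                         + (z[ny, nx] - z[y, x]) ** 2))
            radii[y, x] = np.mean(dists)
    return radii.mean()

# A uniform block has zero gray differences, so every neighbor distance
# is exactly 1 pixel and the search radius is 1.0.
flat = np.full((4, 4), 128, dtype=np.uint8)
print(round(search_radius(flat), 3))  # 1.0
```

Blocks containing cracks have large Z jumps between dark crack pixels and bright surface pixels, which enlarges the search radius adaptively.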
The computing device classifies the three-dimensional points in the three-dimensional point cloud according to the search radius to obtain K three-dimensional point sets corresponding to K pixel sets. Specifically, one unclassified three-dimensional point in the point cloud is determined as a seed point. With the seed point as the center, the three-dimensional points located within the search radius of the seed point are classified into the target three-dimensional point set where the seed point is located. A three-dimensional point in the target set that has not yet served as a seed point is then determined as a new seed point, and the three-dimensional points within its search radius are likewise classified into the target set. When every three-dimensional point in the target set has served as a seed point, that set is complete; another unclassified three-dimensional point is then determined as a seed point, and it and the three-dimensional points grown from it are classified into a new target three-dimensional point set. The classification of the point cloud is finished when no unclassified three-dimensional points remain.
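The seed-point growth described above can be sketched as follows (a brute-force version that checks distances against all points; the labels returned identify the K three-dimensional point sets):

```python
import numpy as np

def grow_regions(points, radius):
    """Seed-point region growing: repeatedly pick an unclassified 3-D point
    as a seed, pull every unclassified point within `radius` of any point
    of the growing set into that set, and start a new set once growth
    stops. Returns a label (set index) per point."""
    points = np.asarray(points, dtype=np.float64)
    n = len(points)
    labels = np.full(n, -1)
    current = 0
    for start in range(n):
        if labels[start] != -1:
            continue                       # already classified
        labels[start] = current
        queue = [start]
        while queue:                       # every popped point acts as a seed
            seed = queue.pop()
            d = np.linalg.norm(points - points[seed], axis=1)
            for idx in np.nonzero((d <= radius) & (labels == -1))[0]:
                labels[idx] = current
                queue.append(idx)
        current += 1
    return labels

# Two clusters separated by more than the search radius -> two sets.
pts = [(0, 0, 0.1), (0, 1, 0.1), (5, 5, 0.9), (5, 6, 0.9)]
print(grow_regions(pts, radius=1.5))  # [0 0 1 1]
```

For large blocks, a spatial index (e.g. a k-d tree) would replace the all-pairs distance check, but the growth logic is unchanged.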
After the K pixel sets are obtained by classification, the computing device obtains the average gray value of each of the K pixel sets, where the average gray value is the average of the gray values of the pixels in the pixel set. The computing device determines the average gray value of the pixel set with the largest number of pixels among the K pixel sets as the gray threshold. Typically, the cracks on the surface of a building object occupy only a small fraction of the surface, and the non-cracked surface area is much larger than the cracked area. In the first image block, the larger the area, the more pixels it occupies, so the number of pixels corresponding to the target object surface is greater than the number corresponding to the crack, and the pixel set with the largest number of pixels can be taken as the set of pixels corresponding to the target object surface. Because a crack is deeper than the surface of the target object, thin and narrow in shape, and lets in little light, it appears darker than the surface; in the first image block, the gray values of the pixels corresponding to the sub-crack are therefore lower than those of the pixels corresponding to the target object surface. According to this feature, the pixels corresponding to cracks in the first image block can be distinguished from the pixels corresponding to the target object surface. Specifically, the computing device sets the gray value of the pixels in each pixel set whose average gray value is smaller than the gray threshold to the first gray value, and the gray value of the pixels in each pixel set whose average gray value is greater than or equal to the gray threshold to the second gray value, obtaining the binarized second image block.
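A sketch of the thresholding and binarization step, taking first gray value 0 and second gray value 255 as in the example above (the pixel sets are assumed to come from the region growing step, given here as coordinate lists):

```python
import numpy as np

def binarize_block(block, pixel_sets):
    """pixel_sets: K lists of (row, col) coordinates from region growing.
    The mean gray value of the largest set (assumed to be the non-crack
    surface) is the gray threshold; every set whose mean gray value falls
    below the threshold is marked as crack (0), all other sets as
    background (255)."""
    means = []
    for s in pixel_sets:
        rows, cols = zip(*s)
        means.append(block[list(rows), list(cols)].mean())
    threshold = means[int(np.argmax([len(s) for s in pixel_sets]))]
    out = np.empty_like(block)
    for s, m in zip(pixel_sets, means):
        rows, cols = zip(*s)
        out[list(rows), list(cols)] = 0 if m < threshold else 255
    return out

# Three bright surface pixels and one dark crack pixel.
block = np.array([[200, 200], [200, 40]], dtype=np.uint8)
sets = [[(0, 0), (0, 1), (1, 0)], [(1, 1)]]
print(binarize_block(block, sets))  # [[255 255] [255   0]]
```

Because the threshold is derived per block, uneven illumination across the target image affects only the block it occurs in.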
After the J second image blocks are obtained, the computing device merges them into a third image, which is the binarized target image. The computing device extracts the target pixels in the third image, thereby extracting the cracks in the third image.
In order to join the sub-cracks in the third image into continuous cracks, a closing operation may be applied to the third image before the cracks are extracted. The closing operation fuses target pixels that are nearly connected, i.e., converts some non-target pixels into target pixels, so that sub-cracks distributed across different second image blocks are connected into complete crack contours.
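The closing operation can be sketched as a 3x3 dilation followed by a 3x3 erosion (hand-rolled here for self-containment; a library morphology routine would normally be used):

```python
import numpy as np

def _shift_stack(mask, pad_value):
    """All nine 3x3-shifted views of a boolean mask, padded with pad_value."""
    padded = np.pad(mask, 1, constant_values=pad_value)
    h, w = mask.shape
    return np.stack([padded[i:i + h, j:j + w]
                     for i in range(3) for j in range(3)])

def close_binary(mask):
    """Morphological closing (3x3 dilation, then 3x3 erosion) on a boolean
    crack mask, bridging one-pixel breaks between sub-cracks."""
    dilated = _shift_stack(mask, False).any(axis=0)   # grow crack pixels
    return _shift_stack(dilated, True).all(axis=0)    # shrink back

mask = np.zeros((5, 5), dtype=bool)
mask[2] = [True, True, False, True, True]   # crack with a one-pixel gap
closed = close_binary(mask)
print(closed[2, 2])  # True  (gap bridged)
```

Padding the erosion with True keeps crack pixels touching the image border from being eroded away, one reasonable boundary convention.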
Optionally, to reduce the false detection rate, the computing device further calculates the length, in pixels, of each crack extracted from the third image and compares it with a length threshold. A crack shorter than the length threshold is determined to be a false detection and is not reported, and a crack whose length is greater than or equal to the length threshold is determined to be a true crack.
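This screening might be sketched as follows; the particular length measure (the larger of a crack's row and column extents, in pixels) and all names are illustrative assumptions:

```python
def crack_length(pixels):
    """Rough pixel length of a crack: the larger of its row and column extents.

    A simple stand-in for whatever length measure an implementation uses;
    `pixels` is a list of (row, col) coordinates belonging to one crack.
    """
    rows, cols = zip(*pixels)
    return max(max(rows) - min(rows), max(cols) - min(cols)) + 1

def filter_cracks(cracks, length_threshold):
    # Cracks shorter than the threshold are treated as false detections.
    return [c for c in cracks if crack_length(c) >= length_threshold]
```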
Optionally, after detecting the crack in the third image, geographical location information of the crack in the real world may also be acquired. Specifically, the computing device obtains a reference image carrying geographical location information. The reference image is also an image in the world coordinate system, and pixels in the reference image have a mapping relationship with geographical location information in the real world. The reference image is, for example, a high-precision city map in a digital twin city system or a smart city system. The digital twin city is a digital city matched with the physical world, and its building geographical structure comprises geographical data such as the heights and coordinates of important facilities such as roads and buildings. Because the third image is obtained through binarization based on the second image, the third image is also an image in the world coordinate system, and pixels in the third image have a one-to-one correspondence with pixels in the reference image. The computing device acquires the target geographical location information of the target pixels according to the reference image, so as to acquire the geographical location information of the crack. A worker can thus accurately know the position of the crack, and can then quickly confirm it and carry out repair and maintenance.
In this embodiment, the building object is photographed by cameras already deployed around it, and its cracks are detected based on those pictures, so the crack detection cost can be reduced: no manual inspection is needed and no additional material resources are required. The camera can be called at any time to shoot the target object, and the computing device can likewise run detection at any time, so the current condition of the building object can be obtained in time and cracks can be found promptly. A plurality of cameras can also be called simultaneously to collect first images of cracks of a plurality of building objects, after which the computing device performs stitching and detection based on the first images, improving detection efficiency.
Because the computing device completes the image stitching and crack detection of the target object, the computational power required of the camera is reduced, so more cameras can be used to measure cracks of building objects, further reducing the crack detection cost.
In the above embodiment, crack detection is completed by the camera and the computing device in cooperation. In some other embodiments, crack detection may also be completed by the camera independently.
Fig. 5 is a schematic flow chart of a second embodiment of the crack detection method provided in the present application. The embodiment comprises the following steps:
S501: the camera collects N frames of first images, the N frames of first images are obtained by shooting different areas of a target object through a preset operation, the preset operation is that the camera rotates a lens of the camera and adjusts the focal length of the lens, and distances between the different areas and the camera are different.
This step is similar to S101, and will not be described here again.
S502: and the camera performs stitching on partial pixels of each image in the N frames of first images to obtain a second image.
The step is similar to S102, except that the execution body is a camera, but the specific stitching method is consistent, so that the description is omitted here.
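Assuming the stitching intercepts the same band of M rows from each of the N frames (as in the row-interception implementation described in this application) and concatenates the bands, a minimal sketch is:

```python
import numpy as np

def stitch_rows(frames, row_start, m):
    """Build the second image by taking M rows from each of N first images.

    Each frame is assumed to be an H x W gray image covering a different
    strip of the target object; the band `row_start:row_start + m` is cut
    from every frame and the bands are concatenated vertically. The fixed
    band position is an illustrative simplification.
    """
    bands = [f[row_start:row_start + m] for f in frames]
    return np.vstack(bands)
```

With N frames and M rows each, the second image has N×M rows, each band contributed by the frame focused on that strip of the target.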
S503: the camera detects a crack of the target object based on the second image.
The step is similar to S103, except that the execution body is a camera, but the specific detection method is consistent, so that the description is omitted here.
The above first embodiment of the crack detection method is implemented by a crack detection device. Fig. 6 is a schematic structural diagram of an embodiment of a crack detection device provided in the present application. As shown in fig. 6, the crack detection device 600 comprises an acquisition module 601, a stitching module 602 and a detection module 603.
The acquisition module 601 is configured to acquire N frames of first images, where the N frames of first images are obtained by photographing different areas of a target object by a camera through a preset operation, the preset operation is to rotate a lens of the camera and adjust a focal length of the lens, distances between the different areas and the camera are different, and N is an integer greater than or equal to 2.
And the stitching module 602 is configured to stitch a portion of each image in the N frames of first images to obtain a second image.
A detection module 603 for detecting a crack of the target object based on the second image.
In some possible implementations, the second image is an image without perspective effect. Because the perspective phenomenon of the camera lens makes distant parts of the photographed target object appear small and nearby parts appear large, details of the distant parts of the target object are lost and cracks there cannot be accurately detected. Stitching the N frames of first images into a second image without perspective effect allows the distant details of the target object to be presented clearly, so crack detection based on the second image is more accurate.
In some possible implementations, the stitching module 602 is specifically configured to: m rows of pixels of the first image of N frames are respectively intercepted, wherein M is an integer greater than or equal to 1. And splicing M rows of pixels of the N frames of the first image to obtain a second image.
In some possible implementations, the stitching module 602 is specifically configured to: and respectively cutting out partial pixels of each image in the N frames of first images, wherein the number of the cut pixels of each image is different. And stitching partial pixels of each image in the N frames of first images to obtain a second image.
In some possible implementations, the stitching module 602 is specifically configured to: p pixels of the first image of the N frames are respectively intercepted, wherein P is an integer greater than or equal to 1. And splicing P pixels of the N frames of the first image to obtain a second image.
In some possible implementations, the detection module 603 is specifically configured to: preprocess the second image to obtain J first image blocks. The first image block includes a portion of the target object, and J is an integer greater than or equal to 1. The preprocessing may be, for example, blocking or filtering. The detection module 603 processes each first image block to obtain J second image blocks, where target pixels corresponding to sub-cracks are marked in the second image blocks. A sub-crack is a portion of the crack. The detection module 603 merges the J second image blocks to obtain a third image, and extracts the target pixels in the third image to obtain the crack. Segmenting the second image and processing the image blocks separately can reduce the influence of the external environment on the extraction of cracks in the target object and improve the accuracy of crack detection.
In one possible implementation, the target pixel is a pixel whose gray value is a first gray value. The detection module 603 is specifically configured to classify the pixels in the first image block by using a region growing algorithm to obtain K pixel sets, where each pixel set comprises at least one pixel and K is an integer greater than or equal to 1. The detection module 603 respectively obtains average gray values of the K pixel sets, where the average gray value is the average of the gray values of the pixels in a pixel set. The detection module 603 determines the average gray value corresponding to the pixel set with the largest number of pixels in the K pixel sets as a gray threshold. The detection module 603 sets the gray value of the pixels in the pixel sets whose average gray value is smaller than the gray threshold to a first gray value, and sets the gray value of the pixels in the pixel sets whose average gray value is greater than or equal to the gray threshold to a second gray value, so as to obtain the second image block. The first gray value may be 0 and the second gray value may be 255; alternatively, the first gray value is 255 and the second gray value is 0. The pixels corresponding to the cracks in the image block are thus screened out and assigned the first gray value, completing the binarization of the image block, that is, marking the pixels corresponding to the cracks.
In one possible implementation, the detection module 603 is specifically further configured to convert the first image block into a three-dimensional point cloud. The detection module 603 obtains a neighborhood radius of each three-dimensional point in the three-dimensional point cloud. The neighborhood radius is the average of the Euclidean distances between a target three-dimensional point in the three-dimensional point cloud and its L neighborhood three-dimensional points, where the L neighborhood three-dimensional points are the three-dimensional points corresponding to the L pixels adjacent to the pixel corresponding to the target three-dimensional point, and L is an integer greater than or equal to 4. The detection module 603 obtains a search radius of the three-dimensional point cloud, where the search radius is the average of the neighborhood radii of all three-dimensional points in the three-dimensional point cloud. The detection module 603 classifies the three-dimensional points in the three-dimensional point cloud according to the search radius to obtain K pixel sets corresponding to K three-dimensional point sets. Each image block obtains its own search radius according to the characteristics of its three-dimensional point cloud, so the three-dimensional points in the image block can be classified more accurately and the three-dimensional points corresponding to the cracks in the image block can be segmented.
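The neighborhood-radius and search-radius computation might be sketched as follows. For simplicity this sketch takes each point's L nearest points by Euclidean distance as its neighborhood, rather than the points corresponding to the L pixel-adjacent pixels, so it is an approximation of the described method, and all names are illustrative:

```python
import numpy as np

def search_radius(points, l=4):
    """Search radius of a point cloud: mean of per-point neighbourhood radii.

    A point's neighbourhood radius is approximated here as the mean
    Euclidean distance to its L nearest points (a stand-in for the L
    pixel-adjacent points of the text).
    """
    pts = np.asarray(points, dtype=float)
    # Full pairwise distance matrix; fine for small image blocks,
    # a KD-tree would be used for large clouds.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    d.sort(axis=1)
    # Column 0 is the zero distance to the point itself.
    neighborhood_radii = d[:, 1:1 + l].mean(axis=1)
    return neighborhood_radii.mean()
```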
In one possible implementation, the detection module 603 is specifically further configured to determine one unclassified three-dimensional point in the three-dimensional point cloud as a seed point. Centered on the seed point, the detection module 603 classifies the three-dimensional points located within the search radius of the seed point into the target three-dimensional point set where the seed point is located. The detection module 603 then determines a three-dimensional point in the target three-dimensional point set that has not yet served as a seed point as a new seed point, and returns to the step of classifying, centered on the seed point, the three-dimensional points within the search radius of the seed point into the target three-dimensional point set. When no three-dimensional point in the target three-dimensional point set remains that has not served as a seed point, the detection module 603 returns to the step of determining an unclassified three-dimensional point as a seed point, until all three-dimensional points in the three-dimensional point cloud are classified.
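The seeded region growing described above might be sketched as follows; the brute-force distance computation and all names are illustrative assumptions:

```python
import numpy as np

def region_grow(points, radius):
    """Classify 3-D points into sets by seeded region growing.

    Starting from an unclassified point as seed, every point within
    `radius` of a seed joins that seed's set and becomes a seed itself;
    when a set stops growing, a fresh seed starts the next set.
    Returns one integer set label per point.
    """
    pts = np.asarray(points, dtype=float)
    labels = np.full(len(pts), -1)   # -1 means not yet classified
    current = 0
    for i in range(len(pts)):
        if labels[i] != -1:
            continue
        labels[i] = current
        seeds = [i]
        while seeds:
            s = seeds.pop()
            dist = np.linalg.norm(pts - pts[s], axis=1)
            # Unclassified points within the search radius join the set
            # and later act as seeds themselves.
            for j in np.nonzero((dist <= radius) & (labels == -1))[0]:
                labels[j] = current
                seeds.append(j)
        current += 1
    return labels
```

Points whose mutual gaps exceed the search radius end up in different sets, which is how crack points separate from surface points.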
In one possible implementation, the pixels in the third image have a mapping relationship with the pixels in a reference image, and the pixels in the reference image carry geographical location information. The crack detection device 600 further comprises an obtaining module 604. The obtaining module 604 obtains the target geographical location information of the target pixels according to the reference image, so as to obtain the geographical location information of the crack. After the crack is detected, its geographical location information is further acquired in order to locate the crack.
The crack detection device 600 may be a camera, specifically a camera provided with a pan-tilt and zoom lens, such as a ball-type camera. Alternatively, the acquisition module 601 belongs to a camera, and the stitching module 602 and the detection module 603 belong to a computing device.
The above first embodiment of the crack detection method may be implemented by a crack detection system. Fig. 7 is a schematic structural diagram of an embodiment of a crack detection system provided in the present application. As shown in fig. 7, the crack detection system 700 includes a camera 701 and a computing device 702, and the camera 701 is connected to the computing device 702.
The camera 701 is configured to collect N frames of first images, where the N frames of first images are obtained by photographing different areas of a target object by the camera through a preset operation, the preset operation is that the camera rotates a lens of the camera and adjusts a focal length of the lens, distances between the different areas and the camera are different, and N is an integer greater than or equal to 2.
The camera 701 is also configured to send the N frames of the first image to the computing device 702.
The computing device 702 is configured to stitch a portion of each of the N frames of the first images to obtain a second image.
The computing device 702 is further configured to detect a crack of the target object based on the second image.
In some possible implementations, the computing device 702 is specifically configured to: m rows of pixels of the first image of N frames are respectively intercepted, wherein M is an integer greater than or equal to 1. M rows of pixels of the N frames of the first image are stitched into a second image.
In some possible implementations, the computing device 702 is specifically configured to: preprocess the second image to obtain J first image blocks. The first image block includes a portion of the target object, and J is an integer greater than or equal to 1. The preprocessing may be, for example, blocking or filtering. The computing device 702 processes each first image block to obtain J second image blocks, where target pixels corresponding to sub-cracks are marked in the second image blocks. A sub-crack is a portion of the crack. The computing device 702 merges the J second image blocks to obtain a third image, and extracts the target pixels in the third image to obtain the crack. Segmenting the second image and processing the image blocks separately can reduce the influence of the external environment on the extraction of cracks in the target object and improve the accuracy of crack detection.
In one possible implementation, the target pixel is a pixel whose gray value is a first gray value. The computing device 702 is specifically configured to classify the pixels in the first image block by using a region growing algorithm to obtain K pixel sets, where each pixel set comprises at least one pixel and K is an integer greater than or equal to 1. The computing device 702 respectively obtains average gray values of the K pixel sets, where the average gray value is the average of the gray values of the pixels in a pixel set. The computing device 702 determines the average gray value corresponding to the pixel set with the largest number of pixels in the K pixel sets as a gray threshold. The computing device 702 sets the gray value of the pixels in the pixel sets whose average gray value is smaller than the gray threshold to a first gray value, and sets the gray value of the pixels in the pixel sets whose average gray value is greater than or equal to the gray threshold to a second gray value, so as to obtain the second image block. The first gray value may be 0 and the second gray value may be 255; alternatively, the first gray value is 255 and the second gray value is 0. The pixels corresponding to the cracks in the image block are thus screened out and assigned the first gray value, completing the binarization of the image block, that is, marking the pixels corresponding to the cracks.
In one possible implementation, the computing device 702 is specifically further configured to convert the first image block into a three-dimensional point cloud. The computing device 702 obtains a neighborhood radius of each three-dimensional point in the three-dimensional point cloud. The neighborhood radius is the average of the Euclidean distances between a target three-dimensional point in the three-dimensional point cloud and its L neighborhood three-dimensional points, where the L neighborhood three-dimensional points are the three-dimensional points corresponding to the L pixels adjacent to the pixel corresponding to the target three-dimensional point, and L is an integer greater than or equal to 4. The computing device 702 obtains a search radius of the three-dimensional point cloud, where the search radius is the average of the neighborhood radii of all three-dimensional points in the three-dimensional point cloud. The computing device 702 classifies the three-dimensional points in the three-dimensional point cloud according to the search radius to obtain K pixel sets corresponding to K three-dimensional point sets. Each image block obtains its own search radius according to the characteristics of its three-dimensional point cloud, so the three-dimensional points in the image block can be classified more accurately and the three-dimensional points corresponding to the cracks in the image block can be segmented.
In one possible implementation, the computing device 702 is specifically further configured to determine one unclassified three-dimensional point in the three-dimensional point cloud as a seed point. Centered on the seed point, the computing device 702 classifies the three-dimensional points located within the search radius of the seed point into the target three-dimensional point set where the seed point is located. The computing device 702 then determines a three-dimensional point in the target three-dimensional point set that has not yet served as a seed point as a new seed point, and returns to the step of classifying, centered on the seed point, the three-dimensional points within the search radius of the seed point into the target three-dimensional point set. When no three-dimensional point in the target three-dimensional point set remains that has not served as a seed point, the computing device 702 returns to the step of determining an unclassified three-dimensional point as a seed point, until all three-dimensional points in the three-dimensional point cloud are classified.
In one possible implementation, the pixels in the third image have a mapping relationship with the pixels in the reference image, the pixels in the reference image carrying the geographical location information. The computing device 702 also obtains target geographic location information for the target pixel from the reference image to obtain geographic location information for the crack. After the crack is detected, the geographical position information of the crack is further acquired to locate the crack.
Fig. 8 is a schematic structural diagram of an embodiment of a crack detection device provided in the present application. The crack detection device 800 in this embodiment comprises a processor 801 and a memory 802. The processor 801 is coupled to the memory 802, and the processor 801 is configured to execute the first embodiment or the second embodiment of the crack detection method described above based on instructions stored in the memory 802.
The embodiment of the application also provides a computer readable storage medium, on which a computer program is stored, which when executed by a computer, implements the crack detection method flow of any of the above method embodiments.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Claims (17)
1. A method of crack detection, the method comprising:
the method comprises the steps that a camera collects N frames of first images, wherein the N frames of first images are obtained by shooting different areas of a target object through a preset operation, the preset operation is to rotate a lens of the camera and adjust the focal length of the lens, the distances between the different areas and the camera are different, and N is an integer greater than or equal to 2;
The camera sending the N frames of first images to a computing device;
the computing equipment splices part of each image in the N frames of first images to obtain a second image;
the computing device detects a crack of the target object based on the second image.
2. The method of claim 1, wherein the computing device stitching portions of each of the N frames of first images to obtain a second image comprises:
the computing equipment intercepts M rows of pixels of the N frames of first images respectively, wherein M is an integer greater than or equal to 1;
the computing device stitches the M rows of pixels of the N frames of first images to obtain the second image.
3. The method of claim 1 or 2, wherein the computing device detecting a crack of the target object based on the second image comprises:
the computing equipment pre-processes the second image to obtain J first image blocks, wherein the first image blocks comprise parts of the target object, and J is an integer greater than or equal to 1;
the computing equipment processes each first image block to obtain J second image blocks, target pixels corresponding to sub-cracks are marked in the second image blocks, and the sub-cracks are part of the cracks;
The computing equipment merges the J second image blocks to obtain a third image;
the computing device extracts the target pixel in the third image, resulting in the crack.
4. A method according to claim 3, wherein the target pixel is a pixel having a gray value of a first gray value, and wherein the computing device processing each of the first image blocks to obtain J second image blocks comprises:
the computing equipment classifies pixels in the first image block by utilizing a region growing algorithm to obtain K pixel sets, wherein each pixel set comprises at least one pixel, and K is an integer greater than or equal to 1;
the computing equipment respectively acquires average gray values of the K pixel sets, wherein the average gray values are average values of gray values of pixels in the pixel sets;
the computing equipment determines the average gray value corresponding to the pixel set with the largest number of pixels in the K pixel sets as a gray threshold value;
the computing device sets the gray value of the pixel in the pixel set with the average gray value smaller than the gray threshold value as a first gray value, and sets the gray value of the pixel in the pixel set with the average gray value larger than or equal to the gray threshold value as a second gray value, so that the second image block is obtained.
5. The method of claim 4, wherein the computing device classifying pixels in the image block using the region growing algorithm to obtain K sets of pixels comprises:
the computing device converting the image block into a three-dimensional point cloud;
the computing equipment acquires a neighborhood radius of each three-dimensional point in the three-dimensional point cloud, wherein the neighborhood radius is an average value of Euclidean distances between a target three-dimensional point in the three-dimensional point cloud and L neighborhood three-dimensional points, the L neighborhood three-dimensional points are three-dimensional points corresponding to L pixels adjacent to the pixel corresponding to the target three-dimensional point, and L is an integer greater than or equal to 4;
the computing equipment acquires a search radius of the three-dimensional point cloud, wherein the search radius is an average value of the neighborhood radiuses of all three-dimensional points in the three-dimensional point cloud;
and classifying the three-dimensional points in the three-dimensional point cloud according to the search radius by the computing equipment to obtain K pixel sets corresponding to the K three-dimensional point sets.
6. A method of crack detection, the method comprising:
the method comprises the steps that a camera collects N frames of first images, wherein the N frames of first images are obtained by shooting different areas of a target object through a preset operation, the preset operation is to rotate a lens of the camera and adjust the focal length of the lens, the distances between the different areas and the camera are different, and N is an integer greater than or equal to 2;
The camera splices the part of each image in the N frames of first images to obtain a second image;
the camera detects a crack of the target object based on the second image.
7. The method of claim 6, wherein the stitching the portion of each of the N frames of first images into a second image by the camera comprises:
the camera intercepts M rows of pixels of the N frames of first images respectively, wherein M is an integer greater than or equal to 1;
and the camera splices M rows of pixels of the N frames of first images to obtain the second image.
8. The method of claim 6 or 7, wherein the camera detecting a crack of the target object based on the second image comprises:
the camera pre-processes the second image to obtain J first image blocks, wherein the first image blocks comprise parts of the target object, and J is an integer greater than or equal to 1;
the camera processes each first image block to obtain J second image blocks, target pixels corresponding to sub-cracks are marked in the second image blocks, and the sub-cracks are part of the cracks;
the camera merges the J second image blocks to obtain a third image;
And the camera extracts the target pixel in the third image to obtain the crack.
9. The method of claim 8, wherein the target pixel is a pixel having a gray value of a first gray value, and wherein the processing each of the first image blocks by the camera to obtain J second image blocks comprises:
the camera classifies pixels in the first image block by using a region growing algorithm to obtain K pixel sets, wherein each pixel set comprises at least one pixel, and K is an integer greater than or equal to 1;
the camera respectively acquires average gray values of the K pixel sets, wherein the average gray values are average values of gray values of pixels in the pixel sets;
the camera determines the average gray value corresponding to the pixel set with the largest number of pixels in the K pixel sets as a gray threshold value;
and the camera sets the gray value of the pixel in the pixel set with the average gray value smaller than the gray threshold value as the first gray value, and sets the gray value of the pixel in the pixel set with the average gray value larger than or equal to the gray threshold value as the second gray value, so as to obtain the second image block.
10. The method of claim 9, wherein the camera classifying pixels in the image block using the region growing algorithm to obtain K sets of pixels comprises:
the camera converts the first image block into a three-dimensional point cloud;
the camera acquires a neighborhood radius of each three-dimensional point in the three-dimensional point cloud, wherein the neighborhood radius is an average value of Euclidean distances between a target three-dimensional point in the three-dimensional point cloud and L neighborhood three-dimensional points, the L neighborhood three-dimensional points are three-dimensional points corresponding to L pixels adjacent to the pixel corresponding to the target three-dimensional point, and L is an integer greater than or equal to 4;
the camera acquires the searching radius of the three-dimensional point cloud, wherein the searching radius is the average value of the neighborhood radiuses of all three-dimensional points in the three-dimensional point cloud;
and classifying the three-dimensional points in the three-dimensional point cloud by the camera according to the search radius to obtain K pixel sets corresponding to the K three-dimensional point sets.
11. A crack detection device, the device comprising:
the acquisition module is used for acquiring N frames of first images, wherein the N frames of first images are obtained by shooting different areas of a target object through a preset operation, the preset operation is to rotate a lens of the camera and adjust the focal length of the lens, the distances between the different areas and the camera are different, and N is an integer greater than or equal to 2;
The splicing module is used for splicing the part of each image in the N frames of first images to obtain a second image;
and the detection module is used for detecting the crack of the target object based on the second image.
12. The apparatus of claim 11, wherein the stitching module is specifically configured to:
intercepting M rows of pixels from each of the N frames of first images, wherein M is an integer greater than or equal to 1;
and splicing the M rows of pixels of the N frames of first images to obtain the second image.
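The row-stitching step of claim 12 can be sketched as follows, assuming the frames arrive as NumPy arrays and the M rows are taken from the center of each frame (the patent does not fix which rows are intercepted; `stitch_center_rows` is an illustrative name):

```python
import numpy as np

def stitch_center_rows(frames, m):
    """Stitch m rows from each frame into one second image.

    frames: list of N grayscale images (H x W arrays), one per scanned
    region of the target object; m: number of rows taken per frame.
    Returns an (N*m) x W image formed by stacking the strips in order.
    """
    strips = []
    for frame in frames:
        h = frame.shape[0]
        start = (h - m) // 2              # take the m rows around the center
        strips.append(frame[start:start + m, :])
    return np.vstack(strips)
```

Taking only a narrow band from each frame keeps just the region that was in focus at that lens position, which is why the stitched second image can cover areas at different distances from the camera.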
13. The apparatus according to claim 11 or 12, wherein the detection module is specifically configured to:
preprocessing the second image to obtain J first image blocks, wherein each first image block comprises a part of the target object, and J is an integer greater than or equal to 1;
processing each first image block to obtain J second image blocks, wherein target pixels corresponding to sub-cracks are marked in the second image blocks, and each sub-crack is a part of the crack;
combining the J second image blocks to obtain a third image;
and extracting the target pixel in the third image to obtain the crack.
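The block-wise pipeline of claim 13 (split, process per block, recombine, extract) can be sketched as follows. The per-block marking rule here is a deliberate placeholder (pixels darker than the block mean), standing in for the region-growing thresholding of claims 14 and 15; `detect_crack` and `block_size` are illustrative names:

```python
import numpy as np

def detect_crack(second_image, block_size):
    """Block-wise crack detection sketch: split the stitched image into
    blocks, binarize each block (0 = candidate crack pixel, 255 =
    background), reassemble into a third image, and extract the marked
    pixel coordinates."""
    h, w = second_image.shape
    third = np.empty_like(second_image)
    for r in range(0, h, block_size):
        for c in range(0, w, block_size):
            block = second_image[r:r + block_size, c:c + block_size]
            # placeholder rule: pixels darker than the block mean are
            # marked as candidate crack pixels
            marked = np.where(block < block.mean(), 0, 255)
            third[r:r + block_size, c:c + block_size] = marked
    crack_pixels = np.argwhere(third == 0)    # (row, col) of target pixels
    return third, crack_pixels
```

Processing per block rather than over the whole stitched image keeps the threshold local, so uneven illumination across regions shot at different focal lengths does not swamp the thin dark crack.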
14. The apparatus of claim 13, wherein the detection module is further specifically configured to:
classifying pixels in the first image block by using a region growing algorithm to obtain K pixel sets, wherein each pixel set comprises at least one pixel, and K is an integer greater than or equal to 1;
respectively obtaining the average gray values of the K pixel sets, wherein the average gray value of a pixel set is the average of the gray values of the pixels in the pixel set;
determining the average gray value corresponding to the pixel set with the largest number of pixels in the K pixel sets as a gray threshold value;
and setting the gray values of the pixels in each pixel set whose average gray value is smaller than the gray threshold to a first gray value, and setting the gray values of the pixels in each pixel set whose average gray value is greater than or equal to the gray threshold to a second gray value, so as to obtain the second image block.
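The thresholding rule of claim 14 can be sketched as follows, assuming the K pixel sets have already been produced by the region-growing step and are represented as arrays of (row, col) indices; `binarize_by_dominant_set` is an illustrative name:

```python
import numpy as np

def binarize_by_dominant_set(image, pixel_sets, low=0, high=255):
    """Binarize an image block following the claim-14 rule.

    pixel_sets: list of K arrays, each of shape (n_i, 2) holding the
    (row, col) coordinates of one region-grown pixel set. The gray
    threshold is the mean gray value of the largest set (assumed to be
    background); sets darker than the threshold become `low` (candidate
    crack pixels), all other sets become `high`.
    """
    means = [image[idx[:, 0], idx[:, 1]].mean() for idx in pixel_sets]
    largest = max(range(len(pixel_sets)), key=lambda i: len(pixel_sets[i]))
    threshold = means[largest]
    out = np.empty_like(image)
    for idx, mean in zip(pixel_sets, means):
        value = low if mean < threshold else high
        out[idx[:, 0], idx[:, 1]] = value
    return out
```

Using the largest set to define the threshold is what makes the rule self-calibrating: the background dominates any block, so sets darker than it are flagged as crack candidates regardless of the block's absolute brightness.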
15. The apparatus of claim 14, wherein the detection module is further specifically configured to:
converting the first image block into a three-dimensional point cloud;
obtaining a neighborhood radius of each three-dimensional point in the three-dimensional point cloud, wherein the neighborhood radius of a target three-dimensional point is the average of the Euclidean distances between the target three-dimensional point and its L neighborhood three-dimensional points, the L neighborhood three-dimensional points are the three-dimensional points corresponding to the L pixels adjacent to the pixel corresponding to the target three-dimensional point, and L is an integer greater than or equal to 4;
acquiring a search radius of the three-dimensional point cloud, wherein the search radius is the average of the neighborhood radii of all three-dimensional points in the three-dimensional point cloud;
and classifying the three-dimensional points in the three-dimensional point cloud according to the search radius to obtain K three-dimensional point sets, the K pixel sets being the pixel sets corresponding to the K three-dimensional point sets.
16. A crack detection device, characterized in that the device comprises a processor and a memory, the processor being coupled to the memory, the processor being configured to perform the crack detection method according to any of claims 1-5 based on instructions stored in the memory.
17. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the crack detection method of any one of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111417479.5A CN116188348A (en) | 2021-11-25 | 2021-11-25 | Crack detection method, device and equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116188348A true CN116188348A (en) | 2023-05-30 |
Family
ID=86433127
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111417479.5A Pending CN116188348A (en) | 2021-11-25 | 2021-11-25 | Crack detection method, device and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116188348A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117115665A (en) * | 2023-10-17 | 2023-11-24 | 深圳市城市交通规划设计研究中心股份有限公司 | Static influence parameter analysis method based on pavement crack analysis method |
CN117115665B (en) * | 2023-10-17 | 2024-02-27 | 深圳市城市交通规划设计研究中心股份有限公司 | Static influence parameter analysis method based on pavement crack analysis method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104951775B (en) | Railway highway level crossing signal region security intelligent identification Method based on video technique | |
CN107704844B (en) | Power transmission line icing thickness identification method based on binocular parallax images of unmanned aerial vehicle | |
CN108919838B (en) | Binocular vision-based automatic tracking method for power transmission line of unmanned aerial vehicle | |
CN110142785A (en) | A kind of crusing robot visual servo method based on target detection | |
CN108038415B (en) | Unmanned aerial vehicle automatic detection and tracking method based on machine vision | |
CN111476314B (en) | Fuzzy video detection method integrating optical flow algorithm and deep learning | |
CN111476785B (en) | Night infrared light-reflecting water gauge detection method based on position recording | |
EP2124194B1 (en) | Method of detecting objects | |
CN110634138A (en) | Bridge deformation monitoring method, device and equipment based on visual perception | |
CN113378754B (en) | Bare soil monitoring method for construction site | |
CN109544535B (en) | Peeping camera detection method and system based on optical filtering characteristics of infrared cut-off filter | |
CN112419261B (en) | Visual acquisition method and device with abnormal point removing function | |
CN114972177A (en) | Road disease identification management method and device and intelligent terminal | |
CN109492647A (en) | A kind of power grid robot barrier object recognition methods | |
CN110288623A (en) | The data compression method of unmanned plane marine cage culture inspection image | |
CN116612192A (en) | Digital video-based pest and disease damage area target positioning method | |
CN116188348A (en) | Crack detection method, device and equipment | |
CN112197705A (en) | Fruit positioning method based on vision and laser ranging | |
CN109635679A (en) | A kind of real-time target sheet positioning and loop wire recognition methods | |
CN112488022B (en) | Method, device and system for monitoring panoramic view | |
CN110826364A (en) | Stock position identification method and device | |
CN113096016A (en) | Low-altitude aerial image splicing method and system | |
CN109902607B (en) | Urban automatic optimization modeling system based on oblique camera | |
CN116071323A (en) | Rain intensity measuring method based on camera parameter normalization | |
CN115097836A (en) | Power transmission line inspection method and system based on image registration and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||