CN117392068A - Ladder detection method for ladder compartment in coal mine air shaft - Google Patents
Ladder detection method for ladder compartment in coal mine air shaft
- Publication number: CN117392068A
- Application number: CN202311273515.4A
- Authority: CN (China)
- Legal status: Granted
Abstract
The invention relates to the field of unmanned detection of coal mine air shafts, and in particular to a ladder detection method for a ladder compartment in a coal mine air shaft. According to the invention, filtering is performed to suit the large amount of salt-and-pepper noise in images shot in the ladder compartment, so that the image feature boundaries remain sharp while the filtering function is realized. A close-range image of the ladder surface is then obtained through a filtering window and converted into a binary image, and the emission distance of the pixels in the foreground region of the binary image is calculated so as to distinguish crack features from coal-ash features. The coal-ash pixels are then removed, and crack features that were covered and broken by coal ash are filled in, so that the size of the complete crack feature is obtained and the accuracy of the crack features in the detection result is ensured; finally, a crack image containing the crack features and the crack width is output. This solves the problem that coal-ash regions in the ladder compartment of a coal mine air shaft easily prevent the system from accurately identifying crack sizes from images.
Description
Technical Field
The invention relates to the field of unmanned detection of coal mine air shafts, in particular to a ladder detection method for a ladder compartment in a coal mine air shaft.
Background
After a coal mine air shaft is formally put into production, it serves the dedicated purpose of returning all of the mine's air. Its environment is extremely harsh and people rarely enter it in daily operation, so the condition of the safety facilities inside cannot be monitored. However, the ladder compartment in the air shaft is the final evacuation route in a major safety accident, so observation and maintenance of the shaft-wall structure and of the condition of the internal ladder compartment must be strengthened.
Because almost no one has set foot in the ladder compartment of a coal mine air shaft for years, black dust particles from the coal seam or rock wall readily adhere to the ladder surface and form black coal-ash regions (shown in Fig. 1). When images are later used to judge whether cracks have appeared on the ladder surface, crack features are generally extracted from pixel values. However, because the pixel values of coal ash are extremely close to those of cracks, a black region formed by coal ash is easily mistaken for a crack on the ladder surface, or, when such a region happens to adhere over a crack, the identification of the crack size is distorted, causing maintenance personnel to misjudge the degree of damage to the ladder.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a ladder detection method for a ladder compartment in a coal mine air shaft, which solves the prior-art problem that coal-ash regions in the ladder compartment of a coal mine air shaft easily prevent the system from accurately identifying crack sizes from images.
In order to achieve the above purpose, the present invention provides the following technical solutions:
a ladder detection method for a ladder compartment in a coal mine wind shaft comprises the following steps:
s1, acquiring a current image of a ladder compartment through inspection equipment, sequentially establishing a filtering window of each pixel point to be filtered according to the gradient of pixel values among the pixel points, and carrying out bilateral filtering by taking the pixel median of the filtering window as the pixel value of the pixel point to be filtered so as to obtain a filtering image;
s2, setting a filter window according to the shooting distance of a shooting lens of the inspection equipment, and intercepting image features in the filter window in the filter image to obtain a standard image;
s3, judging whether crack features exist in the standard image;
if yes, enter step S4;
if not, outputting prompt information of no crack on the surface of the ladder;
s4, carrying out binary processing on the standard image to obtain a binary image, calculating the emission distance from each pixel point on the foreground region of the binary image to the plane where the background region is located along the normal vector of the pixel point, and obtaining a crack image according to the emission distance;
s5, determining a crack boundary and a crack skeleton of the crack characteristics in the crack image, and calculating the crack width of the crack characteristics along the crack skeleton;
s6, selecting crack images with a plurality of crack characteristics, calculating filling frameworks among the crack frameworks with the plurality of crack characteristics, and acquiring filling boundaries corresponding to the filling frameworks according to the crack widths;
and S7, outputting a crack image with only one crack characteristic and actual position coordinates of the crack characteristic as a ladder detection result.
Preferably, in step S1, the method specifically includes the steps of:
s11, selecting any pixel point to be filtered on the current image as a window point, calculating gradient among pixel values of all pixel points around the window point, and selecting a plurality of pixel points with the lowest gradient to establish a filtering window;
s12, obtaining pixel values of all pixel points in a filtering window, and selecting a pixel median value from all pixel values;
s13, taking a pixel median value of the pixel point to be filtered as a pixel value of the pixel point to be filtered, and carrying out bilateral filtering treatment on the pixel point to be filtered; the formula for carrying out bilateral filtering on the pixel points to be filtered is as follows:
g(i) = (1/μ(i)) · Σ_{j∈Q} e^(−D(i,j)²/(2A²)) · e^(−Δ(M_i,X_j)²/(2B²)) · X_j
wherein,
μ(i) = Σ_{j∈Q} e^(−D(i,j)²/(2A²)) · e^(−Δ(M_i,X_j)²/(2B²))
In the above formulas, g(i) is the pixel value of the filtered point i, μ(i) is the correlation (normalization factor), i and j respectively represent the positions of two pixel points in the image, where point j belongs to the filtering window Q, D(i,j) represents the Euclidean distance between pixel point i and pixel point j, M_i is the pixel median of the pixel values of the pixel points in the filtering window Q containing pixel point i, X_j is the pixel value of pixel point j, Δ(M_i,X_j) represents the gray-level difference between the pixel median M_i of point i and the pixel value X_j of pixel point j, A and B are two constants used to control the amount of filtering of the image, and e is the natural constant;
s14, repeating the steps S11-S13 to sequentially finish the filtering processing of each pixel point to be filtered on the current image so as to obtain a filtered image;
preferably, in step S2, the method specifically includes the steps of:
s21, setting a filtering radius broad value of a filtering window of a shooting lens of the inspection equipment;
s22, calculating the actual distance between the actual position corresponding to each pixel point on the filtered image and the camera of the inspection equipment;
s23, eliminating pixel points with actual distances larger than the filtered radius broad value to obtain a standard image.
Preferably, in step S4, the method specifically includes the steps of:
s41, performing binarization processing on the standard image, dividing the image subjected to the binarization processing into a plurality of binary images only comprising a foreground region, and acquiring coordinates of a plurality of pixel points of a background region on the binary image;
s42, fitting by taking the plurality of points as a reference to obtain a reference plane, arbitrarily selecting two vertical straight lines on the reference plane to respectively establish an X axis and a Y axis, and positively establishing a Z axis intersecting the X axis and the Y axis at one end which is vertical to the reference plane and faces the camera to obtain a binary coordinate system;
s43, building a foreground three-dimensional model according to the binary coordinate system and pixel point coordinates of a foreground region on the binary image;
s44, calculating the emission distance from each pixel point on the foreground three-dimensional model to the reference plane along the normal vector of the point;
s45, setting a wide range of the emission distance, and selecting pixel points of a foreground region with the emission distance in the wide range of the emission distance on the binary image to obtain a crack image.
Preferably, in step S44, the method specifically includes the steps of:
s441, selecting any pixel point on a foreground three-dimensional model as a sampling point, acquiring a plurality of adjacent pixel points around the sampling point as base points, and acquiring coordinates of the plurality of base points according to a binary coordinate system;
s442, fitting a plurality of base point coordinates to obtain a vertical plane;
s443, vertically projecting the sampling point onto a vertical plane to obtain a projection point, and obtaining a normal vector of the sampling point;
s444, calculating the vertical distance from the sampling point to the reference plane;
s445, calculating the emission distance from the sampling point to the reference plane along the normal vector of the point; the calculation formula of the emission distance is as follows:
D = (d × |P|) / (O_z − O'_z)
In the above formula, D represents the emission distance from the sampling point along its normal vector to the reference plane, d represents the vertical distance from the sampling point to the reference plane, |P| is the module length of the normal vector P of the sampling point, and O_z − O'_z represents the height difference between the sampling point and the projection point;
s446, repeating the steps S441 to S445 to sequentially calculate the emission distance of each pixel point on the foreground three-dimensional model.
Preferably, in step S5, the method specifically includes the steps of:
s51, obtaining a crack boundary and a crack skeleton of a crack characteristic in a crack image;
s52, sequentially obtaining coordinates of any pixel point from the crack skeleton, and obtaining two adjacent pixel points from the crack skeleton on two sides of the pixel point to obtain a vertical plane passing through the pixel point and perpendicular to the connecting line of the two adjacent pixel points;
s53, acquiring a plurality of intersection points of the vertical plane and the crack skeleton;
s54, respectively calculating the distance and the vector between the pixel point and each intersection point;
s55, selecting two intersection points with opposite vectors and minimum distance between the intersection points and the pixel points, wherein the width of the crack at the pixel points is the sum of the distance;
preferably, in step S6, the method specifically includes the steps of:
s61, selecting crack images containing a plurality of crack characteristics, establishing a plane coordinate system by using a plane where the crack frameworks are located, calculating distances between endpoints of the crack frameworks of the plurality of crack characteristics, and selecting two crack frameworks with minimum endpoint spacing by a comparison method;
s62, sequentially acquiring Y coordinates of pixel points on the crack frameworks by taking the unit length of the pixel points as a step distance, and respectively establishing data sequences of two crack frameworks according to the X-axis direction;
s63, fitting according to the data sequences of the two crack frameworks to obtain a first track curve, calculating corresponding Y values according to the first track curve and X-axis coordinate values in the data sequences, and sequentially arranging the Y values to obtain a predicted sequence;
s64, sequentially differentiating the predicted sequence and the data sequence, and sequentially arranging the differential values to obtain a difference sequence;
s65, selecting the maximum value and the minimum value in the difference value sequence according to a comparison method, subtracting the minimum value from the maximum value to obtain a difference value interval, and equally dividing the difference value interval into a plurality of state intervals;
s66, establishing a difference probability transition matrix according to the state interval and the difference sequence; the expression of the difference probability transition matrix is:
P = [P_mn]
In the above formula, P represents the difference probability transition matrix, and its element P_mn represents the probability of a difference value transitioning from the m-th state interval to the n-th state interval;
s67, calculating pixel point coordinates between two crack frameworks according to the first track curve, the difference probability transition matrix and the state interval to obtain a filling framework; the calculation formula of the pixel point coordinates between the two crack frameworks is as follows:
Y = f_1(X) + Σ_R P_τR · Ē_R
wherein Y represents the Y-axis coordinate value calculated from the X-axis coordinate value X of a pixel point between the two crack skeletons, f_1 represents the first trajectory curve, P_τR represents the probability of the residual data transitioning from the τ-th state interval to the R-th state interval, and Ē_R is the average value of the upper and lower limits of the R-th state interval;
s68, repeating the steps S61-S68 until only one crack characteristic exists on the crack image;
and S69, calculating the average width of the cracks on each crack image, and acquiring a filling boundary corresponding to the filling skeleton according to the filling skeleton.
S7, outputting a crack image with crack characteristics and crack widths corresponding to the crack characteristics.
Compared with the prior art, the invention provides the ladder detection method for the ladder compartment in the coal mine air shaft, which has the following beneficial effects:
1. according to the invention, filtering is firstly carried out according to the characteristics of images shot by a ladder room and a large amount of salt and pepper noise, then a close-range image of the surface of the ladder is obtained through a filtering window, the close-range image containing crack characteristics is selected as a standard image, the standard image is converted into a binary image, then the emission distance of foreground region pixel points in the binary image is calculated, so that the crack characteristics and the coal ash characteristics are distinguished, then the pixel points of the coal ash part are removed, and finally the crack characteristics which are covered by the coal ash and are broken are filled, so that the size of the complete crack characteristics is obtained, and the accuracy of the crack characteristics output by a detection result is ensured.
2. According to the invention, the selection of the filtering window is carried out according to the gradient among pixel vectors of the pixel points, and the pixel median of the filtering window is further used for replacing the pixel value of the pixel point to be filtered to carry out bilateral filtering, so that the defect that the conventional bilateral filtering cannot filter salt and pepper noise during filtering is overcome, meanwhile, the definition of the image characteristic boundary can be ensured by acquiring the filtering window in a gradient manner, and the image characteristic boundary is ensured not to change after filtering.
3. According to the invention, by introducing the concept of the emission distance, when the height difference between the crack feature and the coal ash feature in the image is extremely small, the normal vector of the pixel point on the image is greatly different, and the distance that the point reaches the reference plane along the normal vector is calculated, so that the difference between the crack feature and the coal ash feature is further enlarged, and the separation of the crack feature and the coal ash feature is realized.
4. According to the invention, a plurality of sections of crack frameworks with crack features segmented by coal ash features are placed on a plane coordinate system in a function curve mode, and the curve rule of the crack frameworks is counted according to the discrete form of pixel points on the crack frameworks, so that the function of recovering the crack frameworks is realized, and the recovery of the crack features is realized according to the crack width.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1 is a schematic illustration of ladder surface soot features and crack features;
FIG. 2 is a flow chart of a ladder detection method of the present invention;
FIG. 3 is a flow chart of a method for calculating the emission distance from a pixel point to a reference plane along a normal vector according to the present invention;
FIG. 4 is a flow chart of a method of calculating a filler boundary according to the crack width and filler skeleton of the present invention;
FIG. 5 is a schematic view of the emission distance of pixel points on cracks and soot according to the present invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description. Therefore, the implementation process of how to apply the technical means to solve the technical problems and achieve the technical effects can be fully understood and implemented.
Those of ordinary skill in the art will appreciate that all or a portion of the steps in a method of implementing the following embodiments may be implemented by a program to instruct related hardware and thus the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The ladders in the ladder compartment of a coal mine air shaft are generally made of glass-fiber-reinforced plastic, which is light but brittle, and the sections are fixed to one another with rivets, bolt buckles and the like. After long-term use in a coal mine air shaft, the ladder develops cracks because of its brittleness. The environment inside the air shaft is complex: environmental factors such as noise, water spray, dust fog, swirling air currents and large temperature differences introduce a large amount of salt-and-pepper noise into the captured images and reduce their clarity. At the same time, coal dust attached to the ladder surface interferes with the system's accurate identification of cracks: a region covered by coal dust is easily misjudged as a crack, or, when coal dust adheres over a crack, the identification of the crack's length and width is affected.
Figs. 1 to 5 show an embodiment of the present invention. In order to reduce the interference of the various complex factors in a coal mine air shaft with ladder image recognition, to accurately recognize cracks on the ladder, and to reduce the interference of coal dust with crack recognition, this embodiment provides a ladder detection method for a ladder compartment in a coal mine air shaft, which includes:
s1, acquiring a current image of a ladder compartment through inspection equipment, sequentially establishing a filtering window of each pixel point to be filtered according to the gradient of pixel values among the pixel points, and carrying out bilateral filtering by taking the pixel median of the filtering window as the pixel value of the pixel point to be filtered so as to obtain a filtering image; the inspection device may be a drone or other mobile device mounted on a fixed track and carrying a camera device.
Because images shot in the mine contain a large amount of salt-and-pepper noise, and to prevent the bilateral filtering from failing when the point to be filtered happens to be a salt-and-pepper noise point, a window area is selected from the surroundings according to the position of the noise, and the pixel value of the point to be filtered is replaced by the pixel median of that window area before filtering, so as to guarantee the bilateral filtering effect. In step S1, the method specifically includes the following steps:
s11, selecting any pixel point to be filtered on the current image as a window point, calculating gradient among pixel values of all pixel points around the window point, and selecting a plurality of pixel points with the lowest gradient to establish a filtering window;
when the pixel to be filtered is just at the feature boundary, in order to avoid that the selected filtering window containing the pixel to be filtered just crosses the feature boundary, the boundary after filtering is blurred or changed, in step S11, the method specifically includes the following steps:
s111, any pixel point to be filtered on the current image is taken as a window point, and the pixel value of the window point is obtained;
s112, traversing all adjacent pixel points around by taking the window point as a starting point as nodes, and acquiring pixel values of the nodes;
s113, respectively establishing pixel vectors of all pixel points by using the window points and pixel values of all nodes;
M_{x,y} = [R_{x,y}, G_{x,y}, B_{x,y}, G_{x,y} − R_{x,y}, B_{x,y} − R_{x,y}, G_{x,y} − B_{x,y}]
M_{x',y'} = [R_{x',y'}, G_{x',y'}, B_{x',y'}, G_{x',y'} − R_{x',y'}, B_{x',y'} − R_{x',y'}, G_{x',y'} − B_{x',y'}]
In the above formulas, M_{x,y} and M_{x',y'} represent the pixel vectors at the window point (x, y) and at the node (x', y') on the current image, respectively; R_{x,y}, G_{x,y} and B_{x,y} are the red, green and blue pixel values of the window point (x, y) on the current image; and (x', y') are the coordinates of a pixel point adjacent to the window point coordinates (x, y). A window point at a non-edge position of the current image therefore typically has 8 adjacent pixel points, i.e. nodes, a window point on an edge of the current image typically has 5 adjacent nodes, and a window point at a corner of the current image typically has 3 adjacent nodes;
s114, respectively calculating gradients between pixel vectors of the window points and pixel vectors of all adjacent nodes; the calculation formula of the gradient is as follows:
f[(x, y), (x', y')] = √( Σ_{i=1}^{6} D²(M^i_{x,y}, M^i_{x',y'}) )
In the above formula, f[(x, y), (x', y')] represents the magnitude of the gradient between the pixel vectors at the window point (x, y) and the adjacent node (x', y') on the current image, D²(·,·) is the square of the difference between the two elements in brackets, and M^i_{x,y} and M^i_{x',y'} represent the values of the i-th element of the pixel vectors M_{x,y} and M_{x',y'}, respectively. A higher gradient indicates a larger color difference between the two pixel points, which makes the node unsuitable as sample data for selecting the pixel median.
S115, screening out a node with the minimum gradient with the window point by using a comparison method, and adding the node into the window point;
s116, repeating the steps S112-S115 to obtain a plurality of connected window points, and taking the window points as filtering windows of initial window points.
According to this method, the pixel median of the several lowest-gradient pixel points around the pixel point to be filtered is obtained in turn and used to replace that point's pixel value. When the pixel point to be filtered lies on a feature boundary and is not a salt-and-pepper noise point, the pixel median replaces its pixel value almost perfectly. If it is a salt-and-pepper noise point, the median of pixel points on one or the other side of the feature boundary is, in effect, randomly used to replace its value; in practice such a noise point cannot be assigned with certainty to either side of the boundary, but the method still preserves the boundary character of the pixel points adjacent to the feature boundary. When several salt-and-pepper noise points are adjacent, the noise can be eliminated by taking the filtering window repeatedly; at the same time, the accuracy of the pixel values obtained after filtering the noise points cannot be achieved by conventional filtering.
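To illustrate steps S111-S116, the following Python sketch grows a filtering window around a window point by repeatedly adding the unvisited neighbouring pixel whose pixel-vector gradient to the window point is smallest; the six-element pixel vector and the gradient follow the definitions above, while the target window size k and the helper names are illustrative assumptions rather than values fixed by the patent.

```python
import numpy as np

def pixel_vector(img, x, y):
    # 6-element pixel vector [R, G, B, G-R, B-R, G-B], as defined for M_{x,y}
    r, g, b = img[x, y].astype(float)
    return np.array([r, g, b, g - r, b - r, g - b])

def gradient(img, p, q):
    # magnitude of the gradient between the pixel vectors at points p and q
    return float(np.sqrt(np.sum((pixel_vector(img, *p) - pixel_vector(img, *q)) ** 2)))

def build_filter_window(img, x, y, k=9):
    """Grow a filtering window of k connected pixels around the window point (x, y)
    (row, column indexing) by repeatedly adding the unvisited neighbour whose
    gradient to the window point is smallest."""
    h, w, _ = img.shape
    window = {(x, y)}
    while len(window) < k:
        candidates = set()
        for (px, py) in window:                 # nodes adjacent to the current window
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nx, ny = px + dx, py + dy
                    if 0 <= nx < h and 0 <= ny < w and (nx, ny) not in window:
                        candidates.add((nx, ny))
        if not candidates:
            break
        window.add(min(candidates, key=lambda q: gradient(img, (x, y), q)))
    return window
```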
S12, obtaining pixel values of all pixel points in a filtering window, and selecting a pixel median value from all pixel values; a median function may be employed to obtain a median pixel value from a series of pixel data;
s13, taking a pixel median value of the pixel point to be filtered as a pixel value of the pixel point to be filtered, and carrying out bilateral filtering treatment on the pixel point to be filtered;
the formula for carrying out bilateral filtering on the pixel points to be filtered is as follows:
g(i) = (1/μ(i)) · Σ_{j∈Q} e^(−D(i,j)²/(2A²)) · e^(−Δ(M_i,X_j)²/(2B²)) · X_j
wherein,
μ(i) = Σ_{j∈Q} e^(−D(i,j)²/(2A²)) · e^(−Δ(M_i,X_j)²/(2B²))
In the above formulas, g(i) is the pixel value of the filtered point i, μ(i) is the correlation (normalization factor), i and j respectively represent the positions of two pixel points in the image, where point j belongs to the filtering window Q, D(i,j) represents the Euclidean distance between pixel point i and pixel point j, M_i is the pixel median of the pixel values of the pixel points in the filtering window Q containing pixel point i, X_j is the pixel value of pixel point j, Δ(M_i,X_j) represents the gray-level difference between the pixel median M_i of point i and the pixel value X_j of pixel point j, A and B are two constants used to control the amount of filtering of the image, and e is the natural constant.
Thus, when pixel point i happens to be a noise point, the pixel median M_i obtained in the filtering window Q replaces the original pixel value X_i at pixel point i, which guarantees the filtering effect at pixel point i, so that the noise is smoothed well while the feature edges in the image are protected.
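The sketch below illustrates the median-substituted bilateral filtering of steps S12-S13 for a single pixel of a grayscale image. The Gaussian kernels e^(−D²/(2A²)) and e^(−Δ²/(2B²)) are an assumed parametrization consistent with the constants A and B described above; the exact kernel of the patent is not preserved in the text.

```python
import numpy as np

def bilateral_at_point(gray, i, window, A=3.0, B=20.0):
    """Median-substituted bilateral filtering of pixel i of a grayscale image.
    `window` is the list of (row, col) coordinates of the filtering window Q."""
    m_i = float(np.median([gray[p] for p in window]))    # pixel median M_i of the window
    num, mu = 0.0, 0.0
    for j in window:
        d_ij = np.hypot(i[0] - j[0], i[1] - j[1])        # Euclidean distance D(i, j)
        delta = m_i - float(gray[j])                     # gray-level difference Delta(M_i, X_j)
        w = np.exp(-d_ij ** 2 / (2 * A ** 2)) * np.exp(-delta ** 2 / (2 * B ** 2))
        num += w * float(gray[j])                        # weighted pixel values X_j
        mu += w                                          # correlation / normalization term mu(i)
    return num / mu

# usage: g_i = bilateral_at_point(gray_img, (r, c), sorted(window_coords))
```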
S14, repeating the steps S11-S13 to sequentially finish the filtering processing of each pixel point to be filtered on the current image so as to obtain a filtered image.
S2, setting a filter window according to the shooting distance of a shooting lens of the inspection equipment, and intercepting image features in the filter window in the filter image to obtain a standard image;
in order to reduce the interference of complex background images in a ladder compartment, avoid misjudgment caused by feature recognition in the images in the later stage, and improve the accuracy of recognizing the features in the images, in step S2, the method specifically comprises the following steps:
s21, setting a filtering radius broad value of a filtering window of a shooting lens of the inspection equipment;
s22, calculating the actual distance between the actual position corresponding to each pixel point on the filtered image and the camera of the inspection equipment, wherein the calculation formula of the actual distance is as follows:
S_i = f · D(n, m)
In the above formula, S_i is the actual distance between the actual position represented by pixel point i and the camera of the inspection equipment, f is the focal length of the inspection camera when it shoots the actual position represented by pixel point i, n and m are two moments at which the inspection camera shoots the actual position represented by pixel point i, and D(n, m) represents the distance the inspection camera moves between moments n and m.
S23, eliminating pixel points with actual distances larger than the filtered radius broad value to obtain a standard image.
Only the inspection track of the inspection equipment needs to be set to guarantee that the images it shoots are close-range images of the ladder, which reduces background interference; this in turn reduces post-processing and misjudgment of the images, so that the feature information of the ladder surface can be obtained more reliably.
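A minimal sketch of steps S21-S23, assuming a per-pixel distance map has already been estimated (for example with the relation S_i = f·D(n, m) above): pixels whose actual distance exceeds the filtering-radius broad value are simply masked out, leaving the close-range ladder surface as the standard image. The threshold value and its units are illustrative.

```python
import numpy as np

def crop_to_standard_image(filtered_img, distance_map, radius_threshold=1.5):
    """Keep only pixels whose estimated actual distance to the inspection camera
    lies within the filtering-radius broad value (here in metres, illustrative)."""
    mask = distance_map <= radius_threshold
    standard = filtered_img.copy()
    standard[~mask] = 0          # eliminate pixels that belong to the far background
    return standard, mask
```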
S3, judging whether crack features exist in the standard image; the standard image can be identified through a convolutional neural network, the OTSU algorithm or the like, and whether crack features exist in the standard image is judged based on the difference between the pixel values of the crack features and the pixel values of other features.
If yes, enter step S4;
if not, outputting prompt information of no crack on the surface of the ladder;
however, since the pixel values of the soot and the crack are close, the soot feature may be contained in the standard image containing the crack feature while the image is kept, and thus the step S4 is required to be performed, and the soot feature is further removed by other methods to obtain the complete crack feature.
S4, carrying out binary processing on the standard image to obtain a binary image, calculating the emission distance from each pixel point on the foreground region of the binary image to the plane where the background region is located along the normal vector of the pixel point, and obtaining a crack image according to the emission distance;
Because a large amount of dust or coal dust settles on the ladder surface when the ladder compartment goes unentered for a long time, and because the dust and coal dust agglomerate on the ladder surface with pixel values very close to those of ladder cracks, crack identification is disturbed; the dust and coal-dust blocks therefore need to be removed accurately to obtain an image of the crack. In step S4, the method specifically includes the following steps:
s41, performing binarization processing on the standard image, dividing the image subjected to the binarization processing into a plurality of binary images only comprising a foreground region, and acquiring coordinates of a plurality of pixel points of a background region on the binary image; each foreground region is a coal mine dust block or a crack feature or a feature of fusion of the coal mine dust block and the crack feature, and the foreground region or other features in the image can be marked and extracted through MATLAB or LabelImg and other software;
s42, fitting by taking the plurality of points as a reference to obtain a reference plane, arbitrarily selecting two vertical straight lines on the reference plane to respectively establish an X axis and a Y axis, and positively establishing a Z axis intersecting the X axis and the Y axis at one end which is vertical to the reference plane and faces the camera to obtain a binary coordinate system;
s43, building a foreground three-dimensional model according to the binary coordinate system and pixel point coordinates of a foreground region on the binary image;
s44, calculating the emission distance from each pixel point on the foreground three-dimensional model to the reference plane along the normal vector of the point;
the crack is different from the coal mine dust block in that the two are convex and concave, but the change of the coordinates of the pixel point is not obvious, the distance between the pixel point and the reference plane is smaller, the XY plane of the binary coordinate system is difficult to be just positioned on the plane of a normal ladder, so that whether the pixel point is in a convex state or a concave state is difficult to judge simply through the height of the Z axis of the pixel point, the flatness of the surface of the ladder is relatively poor, the XY plane of the established binary coordinate system is difficult to be completely flush with the surface of the ladder, and the fact that whether the pixel point is positioned on convex coal ash or in concave cracks is more difficult to judge through the Z value of the coordinates of the pixel point.
Careful study of the characteristics of coal-ash blocks and cracks shows that the normal vector of a pixel point on a crack is nearly parallel to the XY plane, while the normal vector of a pixel point on a coal-dust block image is nearly perpendicular to the XY plane. The concept of the emission distance is therefore introduced: for each pixel point on the coal-dust block image and on the crack image, the distance from the point to the reference plane along the point's normal vector is calculated as the emission distance. Because the emission distance of a pixel point on the crack image is far greater than that of a pixel point on the coal-dust block image, the coal-dust block image and the crack image can be distinguished. In step S44, the method specifically includes the following steps:
s441, selecting any pixel point on a foreground three-dimensional model as a sampling point, acquiring a plurality of adjacent pixel points around the sampling point as base points, and acquiring coordinates of the plurality of base points according to a binary coordinate system;
s442, fitting a plurality of base point coordinates to obtain a vertical plane; the purpose of fitting a plurality of base points into a vertical plane can be achieved through MATLAB and other software or RANSAC and other algorithms.
S443, vertically projecting the sampling point onto a vertical plane to obtain a projection point, and obtaining a normal vector of the sampling point; the expression of the normal vector of the sampling point is:
P = [O_x − O'_x, O_y − O'_y, O_z − O'_z]
In the above formula, P is the normal vector of the sampling point, O_x, O_y and O_z are the X-, Y- and Z-axis coordinates of the sampling point, and O'_x, O'_y and O'_z are the X-, Y- and Z-axis coordinates of the projection point;
s444, calculating the vertical distance from the sampling point to the reference plane; the sampling point can be projected onto the reference plane first, and then the vertical distance between the sampling point and the projection point can be obtained by calculating the distance between the sampling point and the reference plane, wherein the calculation formula of the vertical distance is as follows:
S = D(O_z, O'_z)
In the above formula, S represents the vertical distance between the sampling point and the reference plane, O_z denotes the sampling point, O'_z denotes the projection of the sampling point onto the reference plane, and D(·,·) represents the Euclidean distance between the elements in brackets.
S445, calculating the emission distance from the sampling point to the reference plane along the normal vector of the point; the calculation formula of the emission distance is as follows:
D = (d × |P|) / (O_z − O'_z)
In the above formula, D represents the emission distance from the sampling point along its normal vector to the reference plane, d represents the vertical distance from the sampling point to the reference plane, |P| is the module length of the normal vector P of the sampling point, and O_z − O'_z represents the height difference between the sampling point and the projection point;
s446, repeating the steps S441 to S445 to sequentially calculate the emission distance of each pixel point on the foreground three-dimensional model.
S45, setting a wide range of the emission distance, and selecting pixel points of a foreground region with the emission distance within the wide range of the emission distance on the binary image to obtain a crack image; the expression of the emission distance broad range is:
D_0 = [−∞, D_1] ∪ [D_2, +∞]
In the above formula, D_0 represents the broad range of the emission distance, and D_1 and D_2 are two constants; the value of D_1 is generally not greater than −2, and the value of D_2 is generally not less than 2.
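The sketch below illustrates steps S441-S446 and S45 for a single sampling point: a local plane is fitted to the neighbouring base points by least squares, the normal vector P of the sampling point is obtained from its projection onto that plane, and the emission distance is computed as D = d·|P|/(O_z − O'_z), the reconstruction of the formula given above; the reference plane is assumed to be the XY plane of the binary coordinate system, and the SVD-based plane fit is an implementation assumption.

```python
import numpy as np

def fit_local_plane(points):
    """Least-squares plane through `points` (N x 3); returns (unit normal, centroid)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    return vt[-1], centroid                      # smallest singular direction = plane normal

def emission_distance(sample, base_points):
    """Emission distance of a sampling point (x, y, z) in the binary coordinate
    system, taking the reference plane to be the XY plane (z = 0)."""
    sample = np.asarray(sample, dtype=float)
    normal, centroid = fit_local_plane(base_points)
    offset = np.dot(sample - centroid, normal)
    projection = sample - offset * normal        # projection of the sampling point
    P = sample - projection                      # normal vector P of the sampling point
    d = abs(sample[2])                           # vertical distance to the reference plane
    dz = sample[2] - projection[2]               # height difference O_z - O'_z
    if abs(dz) < 1e-9:                           # normal parallel to XY plane: crack-like point
        return np.inf
    return d * np.linalg.norm(P) / dz            # D = d * |P| / (O_z - O'_z)

def in_emission_broad_range(D, D1=-2.0, D2=2.0):
    # broad range D_0 = [-inf, D1] U [D2, +inf] selects crack pixels
    return D <= D1 or D >= D2
```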
S5, obtaining a crack boundary and a crack skeleton of the crack characteristics in the crack image, and calculating the crack width of the crack characteristics along the crack skeleton;
In order to calculate the crack width along the crack skeleton more accurately, a slope-approximation method is used: the tangent at a point on the crack skeleton is obtained, the perpendicular to that tangent is constructed, the intersection points of the perpendicular with the crack boundaries on both sides of the skeleton are found, and the crack width at that skeleton point is obtained from the two intersection points. In step S5, the method specifically includes the following steps:
s51, obtaining a crack boundary and a crack skeleton of a crack characteristic in a crack image; the boundary of the crack characteristic can be rapidly extracted through cracking maltab;
s52, sequentially obtaining coordinates of any pixel point from the crack skeleton, and obtaining two adjacent pixel points from the crack skeleton on two sides of the pixel point to obtain a vertical plane passing through the pixel point and perpendicular to the connecting line of the two adjacent pixel points; the homeotropic point set expression is as follows:
T = {t | K(t, z_i) × K(z_{i+1}, z_{i−1}) = −1, z_i ∈ X_i}
In the above formula, T is the set of points of the vertical plane that passes through the pixel point and is perpendicular to the line connecting its two adjacent pixel points, z_i is any pixel point on the crack skeleton, z_{i+1} and z_{i−1} are the two pixel points on the crack skeleton adjacent to z_i at a distance of one pixel on either side, K(·,·) computes the slope of the line through the two elements in brackets, and t is any pixel point in the crack image;
s53, acquiring a plurality of intersection points of the vertical plane and the crack skeleton; the line and the surface in the image are formed by one pixel point, and the intersection point coordinates can be obtained by respectively obtaining pixel point sets forming a homeotropic surface and a crack skeleton and obtaining the intersection of the two sets.
S54, respectively calculating the distance and the vector between the pixel point and each intersection point; the distance is the Euclidean distance between the two points, obtained by summing the squares of the corresponding coordinate differences and then taking the square root of the sum.
The calculation formula of the distance from the pixel point to the intersection point is as follows:
D = √((x_i − x_j)² + (y_i − y_j)² + (z_i − z_j)²)
The expression of the vector from the pixel point to the intersection point is:
F = [x_i − x_j, y_i − y_j, z_i − z_j]
In the above formulas, D represents the distance between the pixel point and the intersection point, F represents the vector from the pixel point to the intersection point, x_i, y_i and z_i represent the X-, Y- and Z-axis coordinates of pixel point i, and x_j, y_j and z_j represent the X-, Y- and Z-axis coordinates of intersection point j.
S55, selecting the two intersection points whose vectors are opposite and whose distances to the pixel point are smallest; the crack width at that pixel point is the sum of these two distances;
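As a simplified two-dimensional sketch of steps S52-S55, the code below estimates the crack width at one skeleton pixel: the local skeleton direction is taken from the two adjacent skeleton pixels, the perpendicular direction is walked in both opposite senses until the crack boundary is crossed, and the width is the sum of the two distances. Working on a binary crack mask and the sub-pixel step size are implementation assumptions.

```python
import numpy as np

def crack_width_at(crack_mask, skeleton, idx, step=0.25):
    """Crack width at the interior skeleton pixel skeleton[idx].
    `crack_mask` is a boolean image of the crack feature and `skeleton` an
    ordered list of (row, col) skeleton pixels."""
    z_prev, z, z_next = skeleton[idx - 1], skeleton[idx], skeleton[idx + 1]
    # local tangent from the two adjacent skeleton pixels, then its perpendicular
    t = np.array([z_next[0] - z_prev[0], z_next[1] - z_prev[1]], dtype=float)
    t /= np.linalg.norm(t)
    n = np.array([-t[1], t[0]])                  # unit vector perpendicular to the skeleton
    width = 0.0
    for direction in (n, -n):                    # the two opposite intersection directions
        p, dist = np.array(z, dtype=float), 0.0
        while True:
            p = p + direction * step
            r, c = int(round(p[0])), int(round(p[1]))
            if (r < 0 or c < 0 or r >= crack_mask.shape[0]
                    or c >= crack_mask.shape[1] or not crack_mask[r, c]):
                break                            # the crack boundary has been crossed
            dist += step
        width += dist
    return width
```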
s6, selecting crack images with a plurality of crack characteristics, calculating filling frameworks among the crack frameworks with the plurality of crack characteristics, and acquiring filling boundaries corresponding to the filling frameworks according to the crack widths;
after only one foreground region exists on one binary image, but after the pixels of the coal mine dust block image in the foreground region are screened out, if the coal mine dust block just covers one section of the crack characteristics, the crack image after the coal mine dust block is removed becomes two ends, and at the moment, the two ends of the crack image need to be butted into one section to restore the original shape, so that maintenance personnel can conveniently and accurately judge the service condition of the ladder, and in the step S6, the method specifically comprises the following steps:
s61, selecting crack images containing a plurality of crack characteristics, establishing a plane coordinate system by using a plane where the crack frameworks are located, calculating distances between endpoints of the crack frameworks of the plurality of crack characteristics, and selecting two crack frameworks with minimum endpoint spacing by a comparison method;
s62, sequentially acquiring Y coordinates of pixel points on the crack frameworks by taking the unit length of the pixel points as a step distance, and respectively establishing data sequences of two crack frameworks according to the X-axis direction; the expression of the data sequence can be expressed as:
C = {Y_i, Y_{i+1}, Y_{i+2}, ..., Y_{i+n}}
In the above formula, C represents the data sequence of a crack skeleton and Y_i represents the Y-axis coordinate value of the crack-skeleton pixel point at X-axis coordinate value i;
s63, fitting according to the data sequences of the two crack frameworks to obtain a first track curve, calculating corresponding Y values according to the first track curve and X-axis coordinate values in the data sequences, and sequentially arranging the Y values to obtain a predicted sequence;
s64, sequentially differentiating the predicted sequence and the data sequence, and sequentially arranging the differential values to obtain a difference sequence;
s65, selecting the maximum value and the minimum value in the difference value sequence according to a comparison method, subtracting the minimum value from the maximum value to obtain a difference value interval, and equally dividing the difference value interval into a plurality of state intervals;
s66, establishing a difference probability transition matrix according to the state interval and the difference sequence; the expression of the difference probability transition matrix is:
P = [P_mn]
In the above formula, P represents the difference probability transition matrix, and its element P_mn represents the probability of a difference value transitioning from the m-th state interval to the n-th state interval; the following relationship also holds:
Σ_R P_τR = 1
In the above formula, P_τR represents the probability of the residual data transitioning from the τ-th state interval to the R-th state interval, and Σ_R P_τR, the total probability of the residual data transitioning from the τ-th state interval to all R state intervals, is 1.
S67, calculating pixel point coordinates between two crack frameworks according to the first track curve, the difference probability transition matrix and the state interval to obtain a filling framework; the calculation formula of the pixel point coordinates on the filling skeleton is as follows:
Y = f_1(X) + Σ_R P_τR · Ē_R
In the above formula, Y represents the Y-axis coordinate value calculated from the X-axis coordinate value X of a pixel point between the two crack skeletons, f_1 represents the first trajectory curve, P_τR represents the probability of the residual data transitioning from the τ-th state interval to the R-th state interval, and Ē_R is the average value of the upper and lower limits of the R-th state interval;
s68, repeating the steps S61-S68 until only one crack characteristic exists on the crack image;
s69, calculating the average width of the cracks on each crack image, obtaining a filling boundary corresponding to the crack image according to the filling skeleton, reversely pushing the filling skeleton serving as the center according to the calculation mode of the crack width to obtain the filling boundary corresponding to the filling skeleton, and obtaining the crack width by adopting an approximate alternative method because the crack width of the glass fiber reinforced plastic ladder is relatively average, so that the filling boundary can be conveniently pushed out, and the judgment of the crack on the surface condition of the ladder is not influenced.
S7, outputting a crack image with crack characteristics and crack widths corresponding to the crack characteristics.
When the standard image selected in steps S4 to S6 contains only coal ash that was mistaken for a crack feature, the coal-ash pixel points on the standard image are removed in step S4 by means of the emission distance, so the processed crack image no longer contains the suspected crack feature; crack images that contain no crack feature can then be removed by re-identifying the standard image with a convolutional neural network or the OTSU algorithm. When the standard image selected in steps S4 to S6 contains both coal ash and a crack, the pixel points on the coal ash are removed first, and then whether the crack is broken is detected; if it is broken, the crack is approximately restored through steps S5 and S6, and its width is obtained at the same time. When the standard image selected in steps S4 to S6 contains only a crack, the image is not changed, but the crack width is still calculated.
In summary, in order to ensure the accuracy of crack-feature extraction, the method first filters the salt-and-pepper noise with an improved bilateral-filtering algorithm. To further preserve the sharpness of crack-feature boundaries during filtering, the concept of the gradient is used to select several pixel points of similar color as the filtering window, and the pixel median of that window replaces the pixel value of the pixel point to be filtered in the bilateral filtering. The concept of the filtering window is then used to remove the background from the image, so that a close-range image of the ladder surface is obtained and the interference of the complex background between ladders is reduced. Next, the concept of the emission distance is introduced to separate the coal-dust image from the crack-feature image. Finally, crack features that have been split into segments by coal-dust images are filled in, so that the crack features are restored as far as possible and external personnel can accurately judge the state of the ladder.
The foregoing is a detailed description of the invention in which specific examples are used to explain its principles and embodiments; the description of the above embodiments is intended only to help in understanding the method of the invention and its core concepts. Meanwhile, a person skilled in the art may make changes to the specific embodiments and the scope of application in accordance with the ideas of the invention. In view of the above, the contents of this description should not be construed as limiting the invention.
Claims (7)
1. The ladder detection method for the ladder compartment in the coal mine air shaft is characterized by comprising the following steps of:
s1, acquiring a current image of a ladder compartment through inspection equipment, sequentially establishing a filtering window of each pixel point to be filtered according to the gradient of pixel values among the pixel points, and carrying out bilateral filtering by taking the pixel median of the filtering window as the pixel value of the pixel point to be filtered so as to obtain a filtering image;
s2, setting a filter window according to the shooting distance of a shooting lens of the inspection equipment, and intercepting image features in the filter window in the filter image to obtain a standard image;
s3, judging whether crack features exist in the standard image;
if yes, enter step S4;
if not, outputting prompt information of no crack on the surface of the ladder;
s4, carrying out binary processing on the standard image to obtain a binary image, calculating the emission distance from each pixel point on the foreground region of the binary image to the plane where the background region is located along the normal vector of the pixel point, and obtaining a crack image according to the emission distance;
s5, determining a crack boundary and a crack skeleton of the crack characteristics in the crack image, and calculating the crack width of the crack characteristics along the crack skeleton;
s6, selecting crack images with a plurality of crack characteristics, calculating filling frameworks among the crack frameworks with the plurality of crack characteristics, and acquiring filling boundaries corresponding to the filling frameworks according to the crack widths;
s7, outputting a crack image with crack characteristics and crack widths corresponding to the crack characteristics.
2. The ladder detection method according to claim 1, characterized in that in step S1, it specifically comprises the following steps:
s11, selecting any pixel point to be filtered on the current image as a window point, calculating gradient among pixel values of all pixel points around the window point, and selecting a plurality of pixel points with the lowest gradient to establish a filtering window;
s12, obtaining pixel values of all pixel points in a filtering window, and selecting a pixel median value from all pixel values;
s13, taking a pixel median value of the pixel point to be filtered as a pixel value of the pixel point to be filtered, and carrying out bilateral filtering treatment on the pixel point to be filtered; the formula for carrying out bilateral filtering on the pixel points to be filtered is as follows:
g(i) = (1/μ(i)) · Σ_{j∈Q} e^(−D(i,j)²/(2A²)) · e^(−Δ(M_i,X_j)²/(2B²)) · X_j
wherein,
μ(i) = Σ_{j∈Q} e^(−D(i,j)²/(2A²)) · e^(−Δ(M_i,X_j)²/(2B²))
In the above formulas, g(i) is the pixel value of the filtered point i, μ(i) is the correlation (normalization factor), i and j respectively represent the positions of two pixel points in the image, where point j belongs to the filtering window Q, D(i,j) represents the Euclidean distance between pixel point i and pixel point j, M_i is the pixel median of the pixel values of the pixel points in the filtering window Q containing pixel point i, X_j is the pixel value of pixel point j, Δ(M_i,X_j) represents the gray-level difference between the pixel median M_i of point i and the pixel value X_j of pixel point j, A and B are two constants used to control the amount of filtering of the image, and e is the natural constant;
s14, repeating the steps S11-S13 to sequentially finish the filtering processing of each pixel point to be filtered on the current image so as to obtain a filtered image.
3. The ladder detection method according to claim 1, characterized in that in step S2, it specifically comprises the following steps:
s21, setting a filtering radius broad value of a filtering window of a shooting lens of the inspection equipment;
s22, calculating the actual distance between the actual position corresponding to each pixel point on the filtered image and the camera of the inspection equipment;
s23, eliminating pixel points with actual distances larger than the filtered radius broad value to obtain a standard image.
4. The ladder detection method according to claim 1, characterized in that in step S4, it specifically comprises the following steps:
s41, performing binarization processing on the standard image, dividing the image subjected to the binarization processing into a plurality of binary images only comprising a foreground region, and acquiring coordinates of a plurality of pixel points of a background region on the binary image;
s42, fitting by taking the plurality of points as a reference to obtain a reference plane, arbitrarily selecting two vertical straight lines on the reference plane to respectively establish an X axis and a Y axis, and positively establishing a Z axis intersecting the X axis and the Y axis at one end which is vertical to the reference plane and faces the camera to obtain a binary coordinate system;
s43, building a foreground three-dimensional model according to the binary coordinate system and pixel point coordinates of a foreground region on the binary image;
s44, calculating the emission distance from each pixel point on the foreground three-dimensional model to the reference plane along the normal vector of the point;
s45, setting a threshold range of the emission distance, and selecting the pixel points of the foreground region whose emission distance falls within the threshold range on the binary image to obtain a crack image.
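A sketch of the reference-plane fit and emission-distance gating of steps S42 and S45, assuming a least-squares plane of the form z = a·x + b·y + c and point coordinates already expressed in the binary coordinate system; the function names are hypothetical.

```python
import numpy as np

def fit_reference_plane(background_pts: np.ndarray) -> np.ndarray:
    """Least-squares plane z = a*x + b*y + c through the background points
    (step S42).  background_pts is an (N, 3) array; returns (a, b, c)."""
    X = np.c_[background_pts[:, 0], background_pts[:, 1],
              np.ones(len(background_pts))]
    coeffs, *_ = np.linalg.lstsq(X, background_pts[:, 2], rcond=None)
    return coeffs

def select_crack_pixels(emission_dist: np.ndarray,
                        lo: float, hi: float) -> np.ndarray:
    """Keep only foreground points whose emission distance lies inside the
    configured threshold range (step S45).  Returns a boolean mask."""
    return (emission_dist >= lo) & (emission_dist <= hi)
```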
5. The ladder detection method according to claim 4, characterized in that step S44 specifically comprises the following steps:
s441, selecting any pixel point on a foreground three-dimensional model as a sampling point, acquiring a plurality of adjacent pixel points around the sampling point as base points, and acquiring coordinates of the plurality of base points according to a binary coordinate system;
s442, fitting a plurality of base point coordinates to obtain a vertical plane;
s443, vertically projecting the sampling point onto a vertical plane to obtain a projection point, and obtaining a normal vector of the sampling point;
s444, calculating the vertical distance from the sampling point to the reference plane;
s445, calculating the emission distance from the sampling point to the reference plane along the normal vector of the point; the calculation formula of the emission distance is as follows:

D = \frac{d \cdot P}{O_z - O'_z}

in the above formula, D represents the emission distance from the sampling point along its normal vector to the reference plane, d represents the vertical distance from the sampling point to the reference plane, P is the module length of the normal vector of the sampling point, and O_z − O'_z represents the height difference between the sampling point and the projection point;
s446, repeating the steps S441 to S445 to sequentially calculate the emission distance of each pixel point on the foreground three-dimensional model.
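A sketch of the per-point emission distance of steps S441-S445, using the relation D = d·P/(O_z − O'_z) reconstructed above as an assumption, with the reference plane taken as z = 0 and the local plane fitted to the base points by least squares; the function name is hypothetical.

```python
import numpy as np

def emission_distance(sample: np.ndarray, base_points: np.ndarray) -> float:
    """Emission distance of one sampling point (steps S441-S445).

    sample      : (3,) coordinates of the sampling point, reference plane z = 0.
    base_points : (N, 3) coordinates of the surrounding base points.
    """
    sample = np.asarray(sample, dtype=float)

    # Fit a local plane z = a*x + b*y + c through the base points (S442).
    X = np.c_[base_points[:, 0], base_points[:, 1], np.ones(len(base_points))]
    a, b, c = np.linalg.lstsq(X, base_points[:, 2], rcond=None)[0]

    # Orthogonally project the sampling point onto that plane (S443).
    n = np.array([a, b, -1.0])                        # plane normal direction
    residual = a * sample[0] + b * sample[1] + c - sample[2]
    projection = sample - (residual / np.dot(n, n)) * n

    d = sample[2]                                     # vertical distance to z = 0 (S444)
    normal_vec = sample - projection                  # normal vector of the sampling point
    P = np.linalg.norm(normal_vec)                    # module length of the normal vector
    dz = sample[2] - projection[2]                    # O_z - O'_z (assumed non-zero)
    return d * P / dz                                 # reconstructed D = d*P / (O_z - O'_z)
```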
6. The ladder detection method according to claim 1, characterized in that step S5 specifically comprises the following steps:
s51, determining crack boundaries and crack frameworks of crack characteristics in the crack image;
s52, sequentially taking the coordinates of each pixel point on the crack skeleton, obtaining the two adjacent pixel points of the crack skeleton on either side of that pixel point, and constructing a vertical plane passing through the pixel point and perpendicular to the line connecting the two adjacent pixel points;
s53, acquiring a plurality of intersection points of the vertical plane and the crack boundary;
s54, respectively calculating the distance and the vector between the pixel point and each intersection point;
s55, selecting the two intersection points whose vectors are opposite in direction and whose distances to the pixel point are minimum, wherein the crack width at the pixel point is the sum of these two distances.
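A sketch of the width measurement of steps S52-S55, treated in the 2-D image plane, assuming the intersections are taken with the crack boundary and that "opposite vectors" means boundary points on opposite sides of the skeleton; the function name and the tolerance parameter are hypothetical.

```python
import numpy as np

def crack_width_at(skeleton_pt: np.ndarray,
                   prev_pt: np.ndarray, next_pt: np.ndarray,
                   boundary_pts: np.ndarray, tol: float = 0.7) -> float:
    """Crack width at one skeleton pixel: sum of the distances to the
    nearest boundary point on each side of the skeleton (steps S52-S55)."""
    direction = (next_pt - prev_pt).astype(float)
    direction /= np.linalg.norm(direction)            # local skeleton direction

    rel = boundary_pts.astype(float) - skeleton_pt
    along = rel @ direction                           # component along the skeleton
    on_normal = np.abs(along) <= tol                  # approx. on the perpendicular line
    rel_n = rel[on_normal]

    # Signed perpendicular component: sign tells which side of the skeleton.
    signed = direction[0] * rel_n[:, 1] - direction[1] * rel_n[:, 0]
    dists = np.linalg.norm(rel_n, axis=1)
    left, right = dists[signed > 0], dists[signed < 0]
    if left.size == 0 or right.size == 0:
        return float("nan")
    return left.min() + right.min()                   # sum of the two minimal distances
```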
7. The ladder detection method according to claim 1, characterized in that step S6 specifically comprises the following steps:
s61, selecting a crack image containing a plurality of crack characteristics, establishing a plane coordinate system on the plane where the crack skeletons are located, calculating the distances between the endpoints of the crack skeletons of the plurality of crack characteristics, and selecting the two crack skeletons with the minimum endpoint spacing by a comparison method;
s62, sequentially acquiring the Y coordinates of the pixel points on the two crack skeletons by taking the unit pixel length as the step distance, and respectively establishing the data sequences of the two crack skeletons along the X-axis direction;
s63, fitting the data sequences of the two crack skeletons to obtain a first trajectory curve, calculating the corresponding Y values according to the first trajectory curve and the X-axis coordinate values in the data sequences, and arranging the Y values in order to obtain a predicted sequence;
s64, taking the difference between the predicted sequence and the data sequence term by term, and arranging the difference values in order to obtain a difference sequence;
s65, selecting the maximum value and the minimum value in the difference value sequence according to a comparison method, subtracting the minimum value from the maximum value to obtain a difference value interval, and equally dividing the difference value interval into a plurality of state intervals;
s66, establishing a difference probability transition matrix according to the state intervals and the difference sequence; the expression of the difference probability transition matrix is:

P = \begin{pmatrix} P_{11} & P_{12} & \cdots & P_{1n} \\ P_{21} & P_{22} & \cdots & P_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ P_{m1} & P_{m2} & \cdots & P_{mn} \end{pmatrix}

in the above expression, P represents the difference probability transition matrix, and P_{mn} represents the probability of a difference value transitioning from the m-th state interval to the n-th state interval;
s67, calculating the pixel point coordinates between the two crack skeletons according to the first trajectory curve, the difference probability transition matrix and the state intervals to obtain a filling skeleton; the calculation formula of the pixel point coordinates between the two crack skeletons is as follows:

Y = f_1(X) + \sum_{R} P_{\tau R} \cdot \bar{E}_R

wherein Y represents the Y-axis coordinate value calculated from the X-axis coordinate value X of a pixel point between the two crack skeletons, f_1 represents the first trajectory curve, P_{\tau R} represents the probability of the difference data transitioning from the τ-th state interval to the R-th state interval, and \bar{E}_R represents the average value of the upper and lower limits of the R-th state interval;
s68, repeating the steps S61 to S67 until only one crack characteristic remains on the crack image;
and S69, calculating the average crack width on each crack image, and acquiring the filling boundary corresponding to the filling skeleton according to the average width.
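A sketch of the filling-skeleton prediction of steps S62-S67, assuming the first trajectory curve is a low-order polynomial fit and that the predicted Y is the trajectory value plus the expected state-interval midpoint under the difference probability transition matrix (the reconstruction used above); the polynomial degree, the number of state intervals, and all names are assumptions.

```python
import numpy as np

def filling_skeleton_y(x_fill: np.ndarray,
                       x_data: np.ndarray, y_data: np.ndarray,
                       n_states: int = 5, poly_deg: int = 3) -> np.ndarray:
    """Predict the Y coordinates of the filling skeleton between two crack
    skeletons (steps S62-S67).

    x_data, y_data : concatenated data sequences of the two crack skeletons.
    x_fill         : X coordinates of the gap pixels to be filled.
    """
    # First trajectory curve f1 (S63) and the difference sequence (S64).
    coeffs = np.polyfit(x_data, y_data, poly_deg)
    f1 = np.poly1d(coeffs)
    diff = y_data - f1(x_data)

    # Equal state intervals over [min, max] of the differences (S65).
    edges = np.linspace(diff.min(), diff.max(), n_states + 1)
    states = np.clip(np.digitize(diff, edges) - 1, 0, n_states - 1)

    # Difference probability transition matrix P (S66).
    P = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1
    row_sums = P.sum(axis=1, keepdims=True)
    P = np.divide(P, row_sums, out=np.zeros_like(P), where=row_sums > 0)

    # Predicted correction: expected interval midpoint from the last state (S67).
    centres = (edges[:-1] + edges[1:]) / 2
    tau = states[-1]
    correction = P[tau] @ centres
    return f1(x_fill) + correction
```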
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311273515.4A CN117392068B (en) | 2023-09-28 | 2023-09-28 | Ladder detection method for ladder compartment in coal mine air shaft |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117392068A (en) | 2024-01-12
CN117392068B (en) | 2024-09-20
Family
ID=89435325
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311273515.4A Active CN117392068B (en) | 2023-09-28 | 2023-09-28 | Ladder detection method for ladder compartment in coal mine air shaft |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117392068B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104700056A (en) * | 2015-02-05 | 2015-06-10 | 合肥工业大学 | Method for detecting uniqueness of person entering coal mineral well |
WO2019026287A1 (en) * | 2017-08-04 | 2019-02-07 | 株式会社ソニー・インタラクティブエンタテインメント | Imaging device and information processing method |
CN109658398A (en) * | 2018-12-12 | 2019-04-19 | 华中科技大学 | A kind of surface defects of parts identification and appraisal procedure based on three-dimensional measurement point cloud |
CN109886921A (en) * | 2019-01-16 | 2019-06-14 | 新而锐电子科技(上海)有限公司 | Crack size measure, device and electronic equipment based on digital picture |
CN111256594A (en) * | 2020-01-18 | 2020-06-09 | 中国人民解放军国防科技大学 | Method for measuring physical characteristics of surface state of aircraft skin |
GB2610449A (en) * | 2021-09-06 | 2023-03-08 | Harbin Inst Technology | Efficient high-resolution non-destructive detecting method based on convolutional neural network |
Also Published As
Publication number | Publication date |
---|---|
CN117392068B (en) | 2024-09-20 |
Similar Documents
Publication | Title |
---|---|
CN107679520B (en) | Lane line visual detection method suitable for complex conditions | |
CN107253485B (en) | Foreign matter invades detection method and foreign matter invades detection device | |
CN105373135B (en) | A kind of method and system of aircraft docking guidance and plane type recognition based on machine vision | |
CN111179232A (en) | Steel bar size detection system and method based on image processing | |
CN111292321B (en) | Transmission line insulator defect image identification method | |
CN111126184B (en) | Post-earthquake building damage detection method based on unmanned aerial vehicle video | |
CN107330373A (en) | A kind of parking offense monitoring system based on video | |
CN106156752B (en) | A kind of model recognizing method based on inverse projection three-view diagram | |
CN108364282B (en) | Image mosaic detection method and image mosaic detection system | |
CN110276747B (en) | Insulator fault detection and fault rating method based on image analysis | |
CN112529875A (en) | Photovoltaic module glass burst early warning method and system based on artificial intelligence | |
CN111292228A (en) | Lens defect detection method | |
CN113313107A (en) | Intelligent detection and identification method for multiple types of diseases on cable surface of cable-stayed bridge | |
CN115841633A (en) | Power tower and power line associated correction power tower and power line detection method | |
CN110705553B (en) | Scratch detection method suitable for vehicle distant view image | |
CN111524121A (en) | Road and bridge fault automatic detection method based on machine vision technology | |
CN110826364B (en) | Library position identification method and device | |
CN107301388A (en) | A kind of automatic vehicle identification method and device | |
CN114235814A (en) | Crack identification method for building glass curtain wall | |
CN114332739A (en) | Smoke detection method based on moving target detection and deep learning technology | |
CN117392068B (en) | Ladder detection method for ladder compartment in coal mine air shaft | |
CN117808789A (en) | Method and system for detecting cracks of existing building glass curtain wall | |
CN111160115B (en) | Video pedestrian re-identification method based on twin double-flow 3D convolutional neural network | |
CN114155518B (en) | Highway light shield inclination recognition method based on depth semantic segmentation network and image correction | |
CN110837775A (en) | Underground locomotive pedestrian and distance detection method based on binarization network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||