CN118154615A - Intelligent detection method for the plate body quality of an excavator track shoe

Intelligent detection method for the plate body quality of an excavator track shoe

Info

Publication number: CN118154615A
Application number: CN202410586321.8A
Authority: CN (China)
Prior art keywords: pixel point, edge pixel, corrected, target, edge
Other languages: Chinese (zh)
Inventors: 孙杨 (Sun Yang), 张连伟 (Zhang Lianwei), 焦开航 (Jiao Kaihang)
Assignee (current and original): Shandong Juning Machinery Co., Ltd.
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Image Analysis (AREA)

Abstract

The application relates to the field of image data processing, and in particular to an intelligent detection method for the plate body quality of an excavator track shoe. The method first performs brightness clustering on the image to be detected of the excavator track shoe to determine an initial paint-off region; it then screens out first edge pixels to be corrected and normal edge pixels based on the gray values of the edge pixels of the initial paint-off region and of their preset neighborhood pixels, updates the positions of the first edge pixels to be corrected to confirm second edge pixels to be corrected, and confirms the edge of the final paint-off region; finally, it calculates the area of the final paint-off region to judge whether the excavator track shoe meets the standard. Compared with traditional quality detection methods, the method overcomes inaccurate detection of the paint-off region caused by illumination and by blurred paint-off edges, improves the accuracy of paint-off region detection, and thereby improves the reliability of quality detection of the excavator track shoe.

Description

Intelligent detection method for the plate body quality of an excavator track shoe
Technical Field
The application relates to the field of image data processing, and in particular to an intelligent detection method for the plate body quality of an excavator track shoe.
Background
An excavator track shoe is a track element of excavating machinery. It is generally made of high-strength, wear-resistant material, protects the drive wheels and running gear of the excavator, and at the same time provides good traction and stability. The design of an excavator track shoe can be customized for different working environments and requirements; for example, a highly wear-resistant track shoe suits an excavator that works for long periods in a harsh environment, while a softer track shoe suits an excavator working on loose or muddy ground. The track shoe is an important component of the excavator's track system, and quality problems have a great influence on the performance and safety of the excavator: a defective track shoe can damage the track system, affect the normal operation of the excavator, and even cause accidents. Detecting paint-off regions on the plate body of the excavator track shoe is therefore important, because exposed metal parts risk corrosion in harsh environments.
Traditional detection of the paint-off region on the plate body of an excavator track shoe relies on image detection: an image of the plate body is acquired and the paint-off region is then detected from that image. When determining the paint-off region on the plate body, illumination and the blurring of the paint-off edges make it difficult to detect the complete paint-off region, so the accuracy of quality detection is low.
Disclosure of Invention
In view of the above, it is necessary to provide an intelligent detection method for the plate body quality of an excavator track shoe that improves the accuracy of paint-off region detection and thereby the accuracy of quality detection, which is low in traditional methods.
The application provides an intelligent detection method for the plate body quality of an excavator track shoe, applied to the field of excavator track shoe quality detection, which includes the following steps: first, brightness clustering is performed on the image to be detected corresponding to the excavator track shoe to determine an initial paint-off region; then, first edge pixels to be corrected and normal edge pixels are screened out based on the gray values of the edge pixels of the initial paint-off region and of their preset neighborhood pixels; the positions of the first edge pixels to be corrected are updated according to the gradient magnitude differences between each first edge pixel to be corrected and its corresponding surrounding pixels, and second edge pixels to be corrected are confirmed; the edge of the final paint-off region is then confirmed based on the normal edge pixels and the second edge pixels to be corrected; finally, the area of the final paint-off region is calculated from its edge to judge whether the excavator track shoe meets the standard.
In one embodiment, performing brightness clustering on the image to be detected corresponding to the excavator track shoe to determine the initial paint-off region specifically includes: dividing the image to be detected with a preset clustering algorithm according to the brightness values of different regions in the image, and confirming a suspected paint-off region; and filtering the suspected paint-off region with preset area thresholds, removing irrelevant normal regions, and confirming the initial paint-off region.
In one embodiment, screening the first edge pixels to be corrected and the normal edge pixels based on the gray values of the edge pixels of the initial paint-off region and of their preset neighborhood pixels specifically includes: calculating a correction judgment value for each target edge pixel of the initial paint-off region according to the gray value of the target edge pixel and the gray values of its preset neighborhood pixels; and comparing the correction judgment value of the target edge pixel with a preset judgment threshold to confirm whether the target edge pixel is a first edge pixel to be corrected or a normal edge pixel.
In one embodiment, calculating the correction judgment value of the target edge pixel according to the gray value of the target edge pixel of the initial paint-off region and the gray values of its preset neighborhood pixels specifically includes:

where $R_i$ is the correction judgment value corresponding to the target edge pixel $i$, $g_i$ is the gray value of the target edge pixel $i$, $g_{i,1}$ is the gray value of the preset neighborhood pixel adjacent to one side of the target edge pixel $i$, and $g_{i,2}$ is the gray value of the preset neighborhood pixel adjacent to its other side.
In one embodiment, comparing the correction judgment value of the target edge pixel with the preset judgment threshold to confirm whether the target edge pixel is a first edge pixel to be corrected or a normal edge pixel specifically includes: when the correction judgment value of the target edge pixel is greater than or equal to the preset judgment threshold, confirming that the target edge pixel is a normal edge pixel; and when the correction judgment value of the target edge pixel is smaller than the preset judgment threshold, confirming that the target edge pixel is a first edge pixel to be corrected.
In one embodiment, updating the position of the first edge pixel to be corrected according to the gradient magnitude differences between the first edge pixel to be corrected and its corresponding surrounding pixels, and confirming the second edge pixel to be corrected, specifically includes: calculating the gradient magnitude difference between the target first edge pixel to be corrected and each of its corresponding surrounding pixels from their gradient magnitudes, where the surrounding pixels corresponding to the target first edge pixel to be corrected are the pixels lying in the direction perpendicular to the edge on which that pixel is located and along which the gray value gradually decreases; inputting the gradient magnitude differences between the target first edge pixel to be corrected and its corresponding surrounding pixels into a preset likelihood evaluation formula, and calculating the likelihood evaluation value of each surrounding pixel corresponding to the target first edge pixel to be corrected;
and comparing the likelihood evaluation value of the surrounding pixel corresponding to the target first edge pixel to be corrected with a preset standard evaluation threshold, so as to update the position of the target first edge pixel to be corrected and confirm the second edge pixel to be corrected.
In one embodiment, inputting the gradient magnitude differences between the target first edge pixel to be corrected and its corresponding surrounding pixels into the preset likelihood evaluation formula, and calculating the likelihood evaluation value of each surrounding pixel corresponding to the target first edge pixel to be corrected, specifically includes:

where $F_{i,j}$ is the likelihood evaluation value of the $j$-th surrounding pixel corresponding to the target first edge pixel to be corrected $i$, $G_i$ is the gradient magnitude of the target first edge pixel to be corrected $i$, $G_{i,j}$ is the gradient magnitude of the $j$-th surrounding pixel corresponding to $i$, and $\Delta G_{i,j}$ is the gradient magnitude difference between $i$ and its $j$-th surrounding pixel.
In one embodiment, comparing the likelihood evaluation value of the surrounding pixel corresponding to the target first edge pixel to be corrected with the preset standard evaluation threshold to update the position of the target first edge pixel to be corrected and confirm the second edge pixel to be corrected specifically includes: when the likelihood evaluation value of the surrounding pixel corresponding to the target first edge pixel to be corrected is smaller than the preset standard evaluation threshold, determining that surrounding pixel to be a second edge pixel to be corrected; and when the likelihood evaluation value of the surrounding pixel corresponding to the target first edge pixel to be corrected is greater than or equal to the preset standard evaluation threshold, updating that surrounding pixel to be a new first edge pixel to be corrected and returning to the initial step for a new round of the operation.
In one embodiment, confirming the edge of the final paint-off region based on the normal edge pixels and the second edge pixels to be corrected specifically includes: judging whether the target second edge pixel to be corrected is a difference pixel according to the pixel distances between the target second edge pixel to be corrected and its corresponding adjacent pixels; and removing the difference pixels, merging the remaining normal edge pixels with the second edge pixels to be corrected, and confirming the edge of the final paint-off region. Correspondingly, judging whether the target second edge pixel to be corrected is a difference pixel according to the pixel distances between the target second edge pixel to be corrected and its corresponding adjacent pixels specifically includes:

where $D_k$ is the distance evaluation index corresponding to the target second edge pixel to be corrected $k$, $\mathrm{Norm}(\cdot)$ is a normalization function, $d_{k,1}$ is the distance between the target second edge pixel to be corrected $k$ and its corresponding adjacent pixel $x_1$, $d_{k,2}$ is the distance between $k$ and its corresponding adjacent pixel $x_2$, and $d_{1,2}$ is the distance between the adjacent pixels $x_1$ and $x_2$; when the distance evaluation index $D_k$ corresponding to the target second edge pixel to be corrected $k$ is greater than a preset distance evaluation threshold, the target second edge pixel to be corrected $k$ is confirmed to be a difference pixel.
In one embodiment, calculating the area of the final paint-off region from its edge to judge whether the excavator track shoe meets the standard specifically includes: calculating the area of the final paint-off region according to its edge; when the ratio of the area of the final paint-off region to the area of the image to be detected is greater than or equal to a preset area-ratio threshold, confirming that the excavator track shoe does not meet the standard; and when the ratio is smaller than the preset area-ratio threshold, confirming that the excavator track shoe meets the standard.
The embodiment of the application provides an intelligent detection method for the plate body quality of an excavator track shoe. The method first performs brightness clustering on the image to be detected corresponding to the excavator track shoe to determine an initial paint-off region; it then screens out first edge pixels to be corrected and normal edge pixels based on the gray values of the edge pixels of the initial paint-off region and of their preset neighborhood pixels, updates the positions of the first edge pixels to be corrected according to the gradient magnitude differences between each first edge pixel to be corrected and its corresponding surrounding pixels, and confirms second edge pixels to be corrected; the edge of the final paint-off region is then confirmed based on the normal edge pixels and the second edge pixels to be corrected; finally, the area of the final paint-off region is calculated from its edge to judge whether the excavator track shoe meets the standard. Compared with traditional quality detection methods, correcting the positions of the edge pixels of the initial paint-off region and removing noise pixels overcomes inaccurate detection of the paint-off region caused by illumination and by blurred paint-off edges, improves the accuracy of paint-off region detection on the excavator track shoe, and thereby improves the reliability of quality detection of the excavator track shoe.
Drawings
Fig. 1 is a schematic flowchart of an intelligent detection method for the plate body quality of an excavator track shoe according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a first sub-flow of the intelligent detection method for the plate body quality of an excavator track shoe according to an embodiment of the present application.
Fig. 3 is a schematic diagram of a second sub-flow of the intelligent detection method for the plate body quality of an excavator track shoe according to an embodiment of the present application.
Fig. 4 is a schematic diagram of a third sub-flow of the intelligent detection method for the plate body quality of an excavator track shoe according to an embodiment of the present application.
Fig. 5 is a schematic diagram of a fourth sub-flow of the intelligent detection method for the plate body quality of an excavator track shoe according to an embodiment of the present application.
Fig. 6 is a schematic diagram of a fifth sub-flow of the intelligent detection method for the plate body quality of an excavator track shoe according to an embodiment of the present application.
Fig. 7 is a schematic diagram of a sixth sub-flow of the intelligent detection method for the plate body quality of an excavator track shoe according to an embodiment of the present application.
Fig. 8 is a schematic diagram of a seventh sub-flow of the intelligent detection method for the plate body quality of an excavator track shoe according to an embodiment of the present application.
Detailed Description
In describing embodiments of the present application, words such as "exemplary," "or," "such as," and the like are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary," "or," "such as," and the like are intended to present related concepts in a concrete fashion.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. It is to be understood that, unless otherwise indicated, a "/" means or. For example, A/B may represent A or B. The "and/or" in the present application is merely one association relationship describing the association object, indicating that three relationships may exist. For example, a and/or B may represent: a exists alone, A and B exist simultaneously, and B exists alone. "at least one" means one or more. "plurality" means two or more than two. For example, at least one of a, b or c may represent: seven cases of a, b, c, a and b, a and c, b and c, a, b and c.
It should be further noted that the terms "first" and "second" in the description, claims and drawings of the present application are used to distinguish between similar objects and do not describe a specific order or sequence. In the methods disclosed in the embodiments of the present application, or shown in the flowcharts, which include one or more steps for implementing the method, the execution order of the steps may be interchanged and some steps may be deleted without departing from the scope of the claims.
The embodiment of the application first provides an intelligent detection method for the plate body quality of an excavator track shoe, applied to the field of excavator track shoe quality detection. Referring to Fig. 1, the method includes the following steps:
S101, performing brightness clustering on the image to be detected corresponding to the excavator track shoe, and determining an initial paint-off region.
In detecting the plate body quality of the excavator track shoe, whether the plate body quality meets the standard is judged mainly by the area of the paint-off region on the track shoe surface. Because of illumination and the blurring of the paint-off edges, the paint-off region is difficult to detect accurately with threshold segmentation, which affects the judgment of the plate body quality.
The image to be detected corresponding to the excavator track shoe refers to image information acquired by a pre-arranged camera and converted into a gray-scale image. Specifically, the acquired image is converted from the RGB color space to the LAB color space, the luminance channel is extracted to obtain a brightness image, and the brightness range of the brightness image is stretched to [0, 255] by a gray stretching algorithm to obtain the image to be detected. Brightness clustering refers to dividing the image to be detected into regions of different brightness with a preset clustering method and then denoising according to region size. The preset clustering method may be the K-means clustering algorithm. The initial paint-off region is one of a preset number of regions obtained by clustering the image to be detected by brightness; preferably, three regions are obtained by the brightness clustering, namely a normal region, a drilled-hole region, and a suspected paint-off region, and the suspected paint-off region is denoised by a preset area-based denoising method to obtain the final initial paint-off region.
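As an illustration of the preprocessing just described, the following is a minimal sketch; it assumes OpenCV and NumPy are available and that the camera delivers an 8-bit BGR image, and the function name is illustrative rather than taken from the patent.

```python
# A minimal preprocessing sketch, assuming OpenCV and NumPy are available and the
# camera delivers an 8-bit BGR image; the function name is illustrative.
import cv2
import numpy as np

def to_image_to_be_detected(bgr: np.ndarray) -> np.ndarray:
    """Convert to LAB, keep the luminance channel, stretch it to [0, 255]."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    luminance = lab[:, :, 0].astype(np.float32)
    lo, hi = float(luminance.min()), float(luminance.max())
    if hi - lo < 1e-6:                      # flat image, nothing to stretch
        return np.zeros_like(luminance, dtype=np.uint8)
    stretched = (luminance - lo) / (hi - lo) * 255.0
    return stretched.astype(np.uint8)
```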
Specifically, referring to Fig. 2, performing brightness clustering on the image to be detected corresponding to the excavator track shoe to determine the initial paint-off region specifically includes:
S201, dividing the image to be detected with a preset clustering algorithm according to the brightness values of different regions in the image to be detected corresponding to the excavator track shoe, and confirming a suspected paint-off region;
When the image to be detected is divided by the preset clustering algorithm, the region with the highest average gray value among the three resulting regions is determined to be the suspected paint-off region.
In some preferred embodiments, the preset clustering algorithm is the K-means clustering algorithm. The pixels of the image to be detected are clustered with K-means, the number of clusters is set to 3, pixels with similar brightness values are grouped into the same cluster, the average brightness of the pixels in each cluster is calculated, and the cluster with the largest average is taken as the suspected paint-off region. The specific clustering procedure is not further limited here and can be implemented with reference to the prior art.
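A corresponding sketch of this clustering step is given below; it assumes OpenCV's K-means implementation and the preferred cluster count of 3, and simply keeps the cluster with the largest mean brightness as the suspected paint-off region. The iteration criteria, number of attempts and function name are illustrative choices, not requirements of the patent.

```python
# A sketch of the brightness clustering, assuming OpenCV's K-means with the
# preferred cluster count of 3; parameter choices are illustrative.
import cv2
import numpy as np

def suspected_paint_off_mask(image_to_be_detected: np.ndarray, k: int = 3) -> np.ndarray:
    samples = image_to_be_detected.reshape(-1, 1).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(samples, k, None, criteria, 5,
                                    cv2.KMEANS_PP_CENTERS)
    brightest = int(np.argmax(centers))     # cluster with the largest mean brightness
    mask = labels.reshape(image_to_be_detected.shape) == brightest
    return mask.astype(np.uint8) * 255
```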
S202, filtering the suspected paint-off region with preset area thresholds, removing irrelevant normal regions, and confirming the initial paint-off region.
Within the suspected paint-off region there may be sub-regions whose areas are too large or too small; with high probability these are not part of the paint-off region and can be identified as unwanted normal regions. After the suspected paint-off region is obtained, its sub-regions are screened by area with a maximum area threshold and a minimum area threshold: sub-regions larger than the maximum area threshold or smaller than the minimum area threshold are confirmed to be irrelevant normal regions and removed, and the remaining sub-regions are confirmed to be the initial paint-off region. Preferably, the minimum area threshold and the maximum area threshold may be 0.001 times and 0.5 times the area of the image to be detected, respectively.
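The area screening can be sketched as follows, assuming connected-component analysis is used to delimit the sub-regions; the preferred thresholds of 0.001 and 0.5 times the image area are used, and the helper name is illustrative.

```python
# A sketch of the area screening, assuming the sub-regions of the suspected mask
# are delimited by connected-component analysis.
import cv2
import numpy as np

def initial_paint_off_mask(suspected_mask: np.ndarray) -> np.ndarray:
    h, w = suspected_mask.shape
    min_area, max_area = 0.001 * h * w, 0.5 * h * w
    n, labels, stats, _ = cv2.connectedComponentsWithStats(suspected_mask, connectivity=8)
    kept = np.zeros_like(suspected_mask)
    for label in range(1, n):               # label 0 is the background
        area = stats[label, cv2.CC_STAT_AREA]
        if min_area <= area <= max_area:    # otherwise: irrelevant normal region
            kept[labels == label] = 255
    return kept
```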
S102, screening first edge pixels to be corrected and normal edge pixels based on the gray values of the edge pixels of the initial paint-off region and of their preset neighborhood pixels.
It should be noted that the edge of the initial paint-off region is blurred and the paint-off is discontinuous, so the gray value of a single pixel or a local position deviates; that is, this special condition of the edge prevents the area of the paint-off region from being described accurately, and the edge of the initial paint-off region therefore needs further processing.
First, the pixels corresponding to the edge of the initial paint-off region, i.e. its edge pixels, are obtained. A preset neighborhood size is then applied, and the pixels adjacent to an edge pixel on its two sides within this neighborhood are defined as its preset neighborhood pixels. The gray values of the edge pixels of the initial paint-off region and of their preset neighborhood pixels are obtained, and according to the relation between these gray values, the current edge pixel is confirmed to be either a first edge pixel to be corrected or a normal edge pixel. A first edge pixel to be corrected is an edge pixel whose position may need to be updated, and a normal edge pixel is an edge pixel of the final paint-off region.
S103, updating the position of the first edge pixel to be corrected according to the gradient magnitude differences between the first edge pixel to be corrected and its corresponding surrounding pixels, and confirming a second edge pixel to be corrected.
The surrounding pixels corresponding to a first edge pixel to be corrected are the pixels lying in the direction perpendicular to the edge on which the current first edge pixel to be corrected is located and along which the gray value gradually decreases. The gradient magnitude differences between the first edge pixel to be corrected and its corresponding surrounding pixels are calculated from their gray values, and from these differences the specific position of the current first edge pixel to be corrected is confirmed, i.e. whether its position needs to be updated, so as to confirm the second edge pixel to be corrected. It should be noted that if the gradient magnitude difference between the current first edge pixel to be corrected and a surrounding pixel is large, the current first edge pixel to be corrected may be a real edge, whereas if the difference is small, the edge may be erroneous due to blurring.
S104, confirming the edge of the final paint-off region based on the normal edge pixels and the second edge pixels to be corrected.
After the normal edge pixels and the second edge pixels to be corrected are obtained, difference pixels among the second edge pixels to be corrected are screened out according to the positional relation between each second edge pixel to be corrected and the surrounding normal edge pixels and other second edge pixels to be corrected. A difference pixel among the second edge pixels to be corrected is a noise pixel, i.e. a pixel caused by error or mis-screening that is not an edge pixel of the final paint-off region. After the difference pixels are removed, the remaining normal edge pixels and second edge pixels to be corrected form the edge of the final paint-off region.
S105, calculating the area of the final paint-off region from its edge, so as to judge whether the excavator track shoe meets the standard.
After the edge of the final paint-off region is obtained, the area enclosed by it, i.e. the paint-off area of the excavator track shoe, can be calculated from the positions of the edge pixels. The ratio of this paint-off area to the total area of the image to be detected of the excavator track shoe is then determined, and whether the excavator track shoe meets the standard is confirmed according to this ratio.
The embodiment of the application provides an intelligent detection method for the plate body quality of an excavator track shoe. The method first performs brightness clustering on the image to be detected corresponding to the excavator track shoe to determine an initial paint-off region; it then screens out first edge pixels to be corrected and normal edge pixels based on the gray values of the edge pixels of the initial paint-off region and of their preset neighborhood pixels, updates the positions of the first edge pixels to be corrected according to the gradient magnitude differences between each first edge pixel to be corrected and its corresponding surrounding pixels, and confirms second edge pixels to be corrected; the edge of the final paint-off region is then confirmed based on the normal edge pixels and the second edge pixels to be corrected; finally, the area of the final paint-off region is calculated from its edge to judge whether the excavator track shoe meets the standard. Compared with traditional quality detection methods, correcting the positions of the edge pixels of the initial paint-off region and removing noise pixels overcomes inaccurate detection of the paint-off region caused by illumination and by blurred paint-off edges, improves the accuracy of paint-off region detection on the excavator track shoe, and thereby improves the reliability of quality detection of the excavator track shoe.
In one embodiment of the present application, referring to Fig. 3, step S102, screening the first edge pixels to be corrected and the normal edge pixels based on the gray values of the edge pixels of the initial paint-off region and of their preset neighborhood pixels, specifically includes:
S301, calculating the correction judgment value of the target edge pixel according to the gray value of the target edge pixel of the initial paint-off region and the gray values of its preset neighborhood pixels.
The preset neighborhood pixels corresponding to a target edge pixel of the initial paint-off region are the pixels located on its left and right sides within the preset neighborhood. Whether the target edge pixel is a final edge pixel of the paint-off region can be judged from the gray-value differences between the target edge pixel and its preset neighborhood pixels, because the two sides of a true paint-off edge pixel correspond to the paint-off region on one side and the normal region or the drilled-hole region on the other, and the gray-value difference between these regions is relatively large. The correction judgment value corresponding to the target edge pixel is a value indicating whether the current target edge pixel needs further correction.
Specifically, calculating the correction judgment value of the target edge pixel according to the gray value of the target edge pixel of the initial paint-off region and the gray values of its preset neighborhood pixels specifically includes:

where $R_i$ is the correction judgment value corresponding to the target edge pixel $i$, $g_i$ is the gray value of the target edge pixel $i$, $g_{i,1}$ is the gray value of the preset neighborhood pixel adjacent to one side of the target edge pixel $i$, and $g_{i,2}$ is the gray value of the preset neighborhood pixel adjacent to its other side.
S302, comparing the correction judgment value of the target edge pixel with a preset judgment threshold, and confirming whether the target edge pixel is a first edge pixel to be corrected or a normal edge pixel.
A normal edge pixel is an edge pixel of the final paint-off region, and a first edge pixel to be corrected is an edge pixel that may need to be corrected. Once the correction judgment value of the target edge pixel is obtained, it is compared with the preset judgment threshold, and according to the comparison result the target edge pixel is confirmed to be either a first edge pixel to be corrected or a normal edge pixel.
Specifically, referring to Fig. 4, comparing the correction judgment value of the target edge pixel with the preset judgment threshold and confirming whether the target edge pixel is a first edge pixel to be corrected or a normal edge pixel includes:
S401, when the correction judgment value of the target edge pixel is greater than or equal to the preset judgment threshold, confirming that the target edge pixel is a normal edge pixel;
S402, when the correction judgment value of the target edge pixel is smaller than the preset judgment threshold, confirming that the target edge pixel is a first edge pixel to be corrected.
The preset judgment threshold is preferably 0. When the correction judgment value of the target edge pixel is greater than or equal to 0, the larger it is, the larger the gray-value deviation between the two sides of the target edge pixel and the higher the probability that it is an edge pixel of the paint-off region; such a pixel needs no correction and is a normal edge pixel. When the correction judgment value of the target edge pixel is smaller than 0, the target edge pixel is a first edge pixel to be corrected.
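The classification logic of steps S401 and S402 can be sketched as follows. The correction judgment value itself is defined by the formula of step S301 and is passed in here as a caller-supplied function, so nothing in this sketch stands for the patent's formula; the threshold of 0 follows the preferred value above, and the function names are illustrative.

```python
# A sketch of the screening logic of steps S401 and S402 only. `judgment_value`
# is supplied by the caller and stands in for the patent's formula.
from typing import Callable, Iterable, List, Tuple

Pixel = Tuple[int, int]

def screen_edge_pixels(edge_pixels: Iterable[Pixel],
                       judgment_value: Callable[[Pixel], float],
                       threshold: float = 0.0) -> Tuple[List[Pixel], List[Pixel]]:
    normal_edge, first_to_correct = [], []
    for p in edge_pixels:
        if judgment_value(p) >= threshold:  # large two-sided gray deviation: real edge
            normal_edge.append(p)
        else:                               # ambiguous edge pixel, position may need updating
            first_to_correct.append(p)
    return normal_edge, first_to_correct
```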
In one embodiment of the present application, referring to Fig. 5, step S103, updating the position of the first edge pixel to be corrected according to the gradient magnitude differences between the first edge pixel to be corrected and its corresponding surrounding pixels, and confirming the second edge pixel to be corrected, specifically includes:
S501, calculating the gradient magnitude difference between the target first edge pixel to be corrected and each of its corresponding surrounding pixels from their gradient magnitudes, where the surrounding pixels corresponding to the target first edge pixel to be corrected are the pixels lying in the direction perpendicular to the edge on which that pixel is located and along which the gray value gradually decreases.
The gradient magnitude of the target first edge pixel to be corrected and of its corresponding surrounding pixels refers to the magnitude of the change of the gray value in a given direction; the surrounding pixels corresponding to the target first edge pixel to be corrected are the pixels lying in the direction perpendicular to the edge on which that pixel is located and along which the gray value gradually decreases.
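The patent does not prescribe how the gradient magnitudes are obtained; the following sketch uses Sobel derivatives as one common choice, purely as an assumption for illustration.

```python
# An assumed way to obtain per-pixel gradient magnitudes (Sobel derivatives).
import cv2
import numpy as np

def gradient_magnitude(image_to_be_detected: np.ndarray) -> np.ndarray:
    gx = cv2.Sobel(image_to_be_detected, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(image_to_be_detected, cv2.CV_32F, 0, 1, ksize=3)
    return cv2.magnitude(gx, gy)            # per-pixel sqrt(gx^2 + gy^2)
```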
S502, inputting the gradient magnitude differences between the target first edge pixel to be corrected and its corresponding surrounding pixels into a preset likelihood evaluation formula, and calculating the likelihood evaluation value of each surrounding pixel corresponding to the target first edge pixel to be corrected;
After the gradient magnitude differences between the target first edge pixel to be corrected and its corresponding surrounding pixels are obtained, they are input into the preset likelihood evaluation formula to calculate the likelihood evaluation value of each surrounding pixel corresponding to the target first edge pixel to be corrected. The likelihood evaluation value of a surrounding pixel corresponding to the target first edge pixel to be corrected is the probability that this surrounding pixel is an edge pixel of the paint-off region.
Specifically, inputting the gradient magnitude differences between the target first edge pixel to be corrected and its corresponding surrounding pixels into the preset likelihood evaluation formula, and calculating the likelihood evaluation value of each surrounding pixel corresponding to the target first edge pixel to be corrected, includes:

where $F_{i,j}$ is the likelihood evaluation value of the $j$-th surrounding pixel corresponding to the target first edge pixel to be corrected $i$, $G_i$ is the gradient magnitude of the target first edge pixel to be corrected $i$, $G_{i,j}$ is the gradient magnitude of the $j$-th surrounding pixel corresponding to $i$, and $\Delta G_{i,j}$ is the gradient magnitude difference between $i$ and its $j$-th surrounding pixel. The likelihood evaluation value $F_{i,j}$ is the probability that the $j$-th surrounding pixel is an edge pixel of the final paint-off region. Because the gray values of the pixels of the final paint-off region are larger than those of other regions, and the surrounding pixels lie in the direction perpendicular to the edge on which the target first edge pixel to be corrected is located and along which the gray value gradually decreases, comparing the gradient magnitude $G_i$ of the target first edge pixel to be corrected $i$ with the gradient magnitude $G_{i,j}$ of its $j$-th surrounding pixel makes it possible to judge whether $i$ is an edge pixel of the final paint-off region, and hence whether the $j$-th surrounding pixel can serve as an edge pixel of the final paint-off region.
S503, comparing the likelihood evaluation value of the surrounding pixel corresponding to the target first edge pixel to be corrected with a preset standard evaluation threshold, so as to update the position of the target first edge pixel to be corrected and confirm the second edge pixel to be corrected.
After the likelihood evaluation value of a surrounding pixel corresponding to the target first edge pixel to be corrected is obtained, it is compared with the preset standard evaluation threshold to judge whether the position of the target first edge pixel to be corrected needs to be updated, and the second edge pixel to be corrected is confirmed.
Specifically, referring to Fig. 6, comparing the likelihood evaluation value of the surrounding pixel corresponding to the target first edge pixel to be corrected with the preset standard evaluation threshold to update the position of the target first edge pixel to be corrected and confirm the second edge pixel to be corrected includes:
S601, when the likelihood evaluation value of the surrounding pixel corresponding to the target first edge pixel to be corrected is smaller than the preset standard evaluation threshold, determining that surrounding pixel to be a second edge pixel to be corrected;
S602, when the likelihood evaluation value of the surrounding pixel corresponding to the target first edge pixel to be corrected is greater than or equal to the preset standard evaluation threshold, updating that surrounding pixel to be a new first edge pixel to be corrected and returning to the initial step for a new round of the operation.
In some preferred embodiments, the preset standard evaluation threshold is preferably 1. When the likelihood evaluation value of the surrounding pixel corresponding to the target first edge pixel to be corrected is smaller than 1, that surrounding pixel is proved to be a final paint-off edge pixel. When the likelihood evaluation value of the surrounding pixel corresponding to the target first edge pixel to be corrected is greater than or equal to 1, that surrounding pixel is proved not to be a final paint-off edge pixel; it is then taken as a new target first edge pixel to be corrected and the initial step is returned to, and a new round of the operation is performed until the likelihood evaluation value is smaller than 1, whereupon the second edge pixel to be corrected is confirmed.
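The iterative update of steps S601 and S602 can be sketched as the loop below. The likelihood evaluation value and the rule for stepping to the next surrounding pixel are supplied by the caller, since they are defined by the patent's formula and neighborhood construction; the threshold of 1 follows the preferred value, and the step limit is an added safeguard, not part of the method.

```python
# A sketch of the position-update loop of steps S601 and S602: walk outward along
# the direction of decreasing gray value and stop at the first surrounding pixel
# whose likelihood evaluation value falls below the standard evaluation threshold.
from typing import Callable, Optional, Tuple

Pixel = Tuple[int, int]

def update_position(first_to_correct: Pixel,
                    likelihood_value: Callable[[Pixel, Pixel], float],
                    next_surrounding_pixel: Callable[[Pixel], Optional[Pixel]],
                    threshold: float = 1.0,
                    max_steps: int = 50) -> Optional[Pixel]:
    current = first_to_correct
    for _ in range(max_steps):
        candidate = next_surrounding_pixel(current)
        if candidate is None:               # ran off the image or the search direction
            return None
        if likelihood_value(current, candidate) < threshold:
            return candidate                # second edge pixel to be corrected
        current = candidate                 # becomes the new first edge pixel to be corrected
    return None
```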
In an embodiment of the present application, referring to Fig. 7, step S104, confirming the edge of the final paint-off region based on the normal edge pixels and the second edge pixels to be corrected, specifically includes:
S701, judging whether the target second edge pixel to be corrected is a difference pixel according to the pixel distances between the target second edge pixel to be corrected and its corresponding adjacent pixels.
The adjacent pixels corresponding to the target second edge pixel to be corrected are the normal edge pixels and other second edge pixels to be corrected adjacent to it along the extension direction of the edge on which it is located. From the pixel distances between the target second edge pixel to be corrected and its corresponding adjacent pixels, it can be judged whether the target second edge pixel to be corrected is continuous with the normal edge pixels or with the other second edge pixels to be corrected, and hence whether it is a difference pixel. A difference pixel is a noise pixel caused by noise or error and is not an edge pixel of the final paint-off region.
S702, removing the difference pixels, merging the remaining normal edge pixels with the second edge pixels to be corrected, and confirming the edge of the final paint-off region. Correspondingly:
Judging whether the target second edge pixel to be corrected is a difference pixel according to the pixel distances between the target second edge pixel to be corrected and its corresponding adjacent pixels specifically includes:

where $D_k$ is the distance evaluation index corresponding to the target second edge pixel to be corrected $k$, $\mathrm{Norm}(\cdot)$ is a normalization function, $d_{k,1}$ is the distance between the target second edge pixel to be corrected $k$ and its corresponding adjacent pixel $x_1$, $d_{k,2}$ is the distance between $k$ and its corresponding adjacent pixel $x_2$, and $d_{1,2}$ is the distance between the adjacent pixels $x_1$ and $x_2$. The distance evaluation index $D_k$ relates the average of the distances from the target second edge pixel to be corrected $k$ to the two adjacent pixels beside it to the average of the three distances $d_{k,1}$, $d_{k,2}$ and $d_{1,2}$; that is, it judges, from the average distance between the target second edge pixel to be corrected $k$ and its adjacent pixels, whether $k$ is continuous with those adjacent pixels, and if it is not, $k$ is proved to be a noise pixel, i.e. a difference pixel.
When the distance evaluation index $D_k$ corresponding to the target second edge pixel to be corrected $k$ is greater than a preset distance evaluation threshold, the target second edge pixel to be corrected $k$ is confirmed to be a difference pixel. When the distance evaluation index $D_k$ corresponding to the target second edge pixel to be corrected $k$ is smaller than or equal to the preset distance evaluation threshold, the target second edge pixel to be corrected $k$ is confirmed not to be a difference pixel. In some preferred embodiments, the preset distance evaluation threshold is 0.5.
Further, a difference pixel that is to be corrected is corrected according to the gray values of its adjacent pixels. Specifically, a first pixel and a second pixel adjacent to the difference pixel in the left-right direction are taken, the average of their gray values is calculated, and this average is used as the corrected gray value of the difference pixel.
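This gray-value correction can be sketched as follows, assuming the gray image is held as a NumPy array and the two adjacent pixels are the immediate left and right neighbors in the same row; the indexing convention and function name are assumptions for illustration.

```python
# A small sketch of the gray-value correction of a difference pixel: it takes the
# mean gray value of its immediate left and right neighbors (assumed indexing).
import numpy as np

def correct_difference_pixel(gray: np.ndarray, row: int, col: int) -> None:
    left, right = int(gray[row, col - 1]), int(gray[row, col + 1])
    gray[row, col] = (left + right) // 2    # mean gray value of the two neighbors
```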
In one embodiment of the present application, referring to Fig. 8, step S105, calculating the area of the final paint-off region from its edge to judge whether the excavator track shoe meets the standard, specifically includes:
S801, calculating the area of the final paint-off region according to its edge;
S802, when the ratio of the area of the final paint-off region to the area of the image to be detected is greater than or equal to a preset area-ratio threshold, confirming that the excavator track shoe does not meet the standard;
S803, when the ratio of the area of the final paint-off region to the area of the image to be detected is smaller than the preset area-ratio threshold, confirming that the excavator track shoe meets the standard.
After the edge of the final paint-off region is obtained, the area of the final paint-off region is calculated from the positions of its edge pixels. The ratio of this area to the area of the image to be detected is then calculated; when the ratio is greater than or equal to the preset area-ratio threshold, it is confirmed that the excavator track shoe does not meet the standard, and when it is smaller than the preset area-ratio threshold, it is confirmed that the excavator track shoe meets the standard. In some preferred implementations, the preset area-ratio threshold is 0.01.
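The final decision can be sketched as follows, assuming the edge of the final paint-off region is available as an OpenCV contour and that the enclosed area is measured by filling that contour; the preferred area-ratio threshold of 0.01 is used, and the function name is illustrative.

```python
# A sketch of the pass/fail decision, assuming the final paint-off edge is an
# OpenCV contour (an int32 array of shape (N, 1, 2)).
import cv2
import numpy as np

def meets_standard(final_edge: np.ndarray, image_shape: tuple,
                   area_ratio_threshold: float = 0.01) -> bool:
    h, w = image_shape[:2]
    filled = np.zeros((h, w), dtype=np.uint8)
    cv2.drawContours(filled, [final_edge], -1, color=255, thickness=cv2.FILLED)
    paint_off_area = float(cv2.countNonZero(filled))
    return paint_off_area / (h * w) < area_ratio_threshold   # smaller ratio: meets the standard
```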
The embodiment of the application provides an intelligent detection method for the plate body quality of an excavator track shoe. The method first performs brightness clustering on the image to be detected corresponding to the excavator track shoe to determine an initial paint-off region; it then screens out first edge pixels to be corrected and normal edge pixels based on the gray values of the edge pixels of the initial paint-off region and of their preset neighborhood pixels, updates the positions of the first edge pixels to be corrected according to the gradient magnitude differences between each first edge pixel to be corrected and its corresponding surrounding pixels, and confirms second edge pixels to be corrected; the edge of the final paint-off region is then confirmed based on the normal edge pixels and the second edge pixels to be corrected; finally, the area of the final paint-off region is calculated from its edge to judge whether the excavator track shoe meets the standard. Compared with traditional quality detection methods, correcting the positions of the edge pixels of the initial paint-off region and removing noise pixels overcomes inaccurate detection of the paint-off region caused by illumination and by blurred paint-off edges, improves the accuracy of paint-off region detection on the excavator track shoe, and thereby improves the reliability of quality detection of the excavator track shoe.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. In the description corresponding to the flowcharts and block diagrams in the figures, operations or steps corresponding to different blocks may also occur in different orders than that disclosed in the description, and sometimes no specific order exists between different operations or steps. For example, two consecutive operations or steps may actually be performed substantially in parallel, they may sometimes be performed in reverse order, which may be dependent on the functions involved. Each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It will be evident to those skilled in the art that the application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The above-described embodiments of the application are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (10)

1. An intelligent detection method for the plate body quality of an excavator track shoe, applied to the field of excavator track shoe quality detection, characterized in that the method comprises the following steps:
performing brightness clustering on an image to be detected corresponding to the excavator track shoe, and determining an initial paint-off region;
screening first edge pixels to be corrected and normal edge pixels based on gray values of edge pixels of the initial paint-off region and of their preset neighborhood pixels;
updating the position of the first edge pixel to be corrected according to gradient magnitude differences between the first edge pixel to be corrected and its corresponding surrounding pixels, and confirming a second edge pixel to be corrected;
confirming an edge of a final paint-off region based on the normal edge pixels and the second edge pixels to be corrected;
and calculating the area of the final paint-off region from the edge of the final paint-off region, so as to judge whether the excavator track shoe meets the standard.
2. The intelligent detection method for the quality of the plate body of the excavator track plate according to claim 1, wherein the carrying out of brightness clustering processing on the image to be detected corresponding to the excavator track plate and the determining of the initial paint dropping area specifically comprises:
dividing the image to be detected by a preset clustering algorithm according to the brightness values of different areas in the image to be detected corresponding to the excavator track plate, and confirming suspected paint dropping areas;
filtering the suspected paint dropping areas through a preset area threshold, removing irrelevant normal areas, and confirming the initial paint dropping area.
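The area filtering step of claim 2 can be pictured with the following sketch, which assumes the suspected paint dropping regions have already been rendered as a binary mask by the clustering step; connected regions below an illustrative minimum area are discarded. The minimum-area value and the use of OpenCV's connected-component analysis are assumptions, not details taken from the patent.

```python
import cv2
import numpy as np


def filter_suspected_regions(suspected_mask: np.ndarray,
                             min_area: int = 50) -> np.ndarray:
    """Keep only suspected regions whose pixel area reaches min_area.

    suspected_mask: 8-bit binary mask (255 = suspected paint dropping pixel).
    """
    num, labels, stats, _ = cv2.connectedComponentsWithStats(suspected_mask,
                                                             connectivity=8)
    initial_mask = np.zeros_like(suspected_mask)
    for label in range(1, num):  # label 0 is the background
        if stats[label, cv2.CC_STAT_AREA] >= min_area:
            initial_mask[labels == label] = 255
    return initial_mask
```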
3. The intelligent detection method for the quality of the plate body of the excavator track plate according to claim 1, wherein the screening of the first edge pixel point to be corrected and the normal edge pixel point based on the gray values of the edge pixel point of the initial paint dropping area and of its preset neighborhood pixel points specifically comprises:
calculating a correction judgment value corresponding to a target edge pixel point according to the gray value of the target edge pixel point of the initial paint dropping area and the gray values of its preset neighborhood pixel points;
comparing the correction judgment value corresponding to the target edge pixel point with a preset judgment threshold, and confirming whether the target edge pixel point is a first edge pixel point to be corrected or a normal edge pixel point.
4. The intelligent detection method for the quality of the plate body of the excavator track plate according to claim 3, wherein the correction judgment value corresponding to the target edge pixel point is calculated, according to the gray value of the target edge pixel point of the initial paint dropping area and the gray values of its preset neighborhood pixel points, by a correction judgment formula defined over the following quantities:
P_i, the correction judgment value corresponding to the target edge pixel point i; g_i, the gray value of the target edge pixel point i; g_(i,1), the gray value of the preset neighborhood pixel point adjacent to the target edge pixel point i on one side; and g_(i,2), the gray value of the preset neighborhood pixel point adjacent to the target edge pixel point i on the other side.
5. The intelligent detection method for the quality of the plate body of the excavator track plate according to claim 3, wherein the comparing of the correction judgment value corresponding to the target edge pixel point with the preset judgment threshold, and the confirming of whether the target edge pixel point is a first edge pixel point to be corrected or a normal edge pixel point, specifically comprises:
when the correction judgment value corresponding to the target edge pixel point is greater than or equal to the preset judgment threshold, confirming that the target edge pixel point is a normal edge pixel point;
when the correction judgment value corresponding to the target edge pixel point is smaller than the preset judgment threshold, confirming that the target edge pixel point is a first edge pixel point to be corrected.
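Claims 3 to 5 screen edge pixels with a correction judgment value and a threshold. The exact formula of claim 4 appears only as an image in the original filing, so the sketch below substitutes an assumed judgment value (the gray contrast across the two neighborhood sides, normalized by how far the edge pixel deviates from a monotone transition); the threshold is likewise illustrative.

```python
def classify_edge_pixel(g_edge: float, g_side1: float, g_side2: float,
                        judgment_threshold: float = 0.8) -> str:
    """Classify one edge pixel of the initial paint dropping area.

    g_edge:           gray value of the target edge pixel point.
    g_side1, g_side2: gray values of the preset neighborhood pixel points
                      adjacent to it on either side of the edge.
    """
    # Assumed correction judgment value: close to 1 when the edge pixel sits on
    # a monotone gray transition between its two sides (a genuine edge), and
    # smaller when illumination or blur has displaced it off that transition.
    judgment = abs(g_side1 - g_side2) / (abs(g_side1 - g_edge)
                                         + abs(g_edge - g_side2) + 1e-6)
    # Claim 5: at or above the threshold -> normal, below -> to be corrected.
    return "normal" if judgment >= judgment_threshold else "first_to_correct"


# Example: a clean transition (200 -> 120 -> 40) versus a displaced pixel (230).
print(classify_edge_pixel(120, 200, 40))   # normal
print(classify_edge_pixel(230, 200, 40))   # first_to_correct
```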
6. The intelligent detection method for the quality of the plate body of the excavator track plate according to claim 1, wherein the updating of the position of the first edge pixel point to be corrected according to the gradient magnitude difference between the first edge pixel point to be corrected and its corresponding surrounding pixel points, and the confirming of the second edge pixel point to be corrected, specifically comprises:
calculating the gradient magnitude differences between a target first edge pixel point to be corrected and its corresponding surrounding pixel points according to the gradient magnitudes of the target first edge pixel point to be corrected and of its corresponding surrounding pixel points, wherein the surrounding pixel points corresponding to the target first edge pixel point to be corrected are the pixel points lying in the direction perpendicular to the edge on which the target first edge pixel point to be corrected is located and along which the gray value gradually decreases;
inputting the gradient magnitude differences between the target first edge pixel point to be corrected and its corresponding surrounding pixel points into a preset likelihood evaluation calculation formula, and calculating the likelihood evaluation values of the surrounding pixel points corresponding to the target first edge pixel point to be corrected;
comparing the likelihood evaluation values of the surrounding pixel points corresponding to the target first edge pixel point to be corrected with a preset standard evaluation threshold, so as to update the position of the target first edge pixel point to be corrected and confirm the second edge pixel point to be corrected.
7. The intelligent detection method for the quality of the plate body of the excavator track plate according to claim 6, wherein the likelihood evaluation value of each surrounding pixel point corresponding to the target first edge pixel point to be corrected is calculated by inputting the gradient magnitude difference between the target first edge pixel point to be corrected and that surrounding pixel point into a preset likelihood evaluation calculation formula defined over the following quantities:
F_(i,j), the likelihood evaluation value of the j-th surrounding pixel point corresponding to the target first edge pixel point to be corrected i; G_i, the gradient magnitude corresponding to the target first edge pixel point to be corrected i; G_(i,j), the gradient magnitude of the j-th surrounding pixel point corresponding to the target first edge pixel point to be corrected i; and ΔG_(i,j), the gradient magnitude difference of the j-th surrounding pixel point corresponding to the target first edge pixel point to be corrected i.
8. The intelligent detection method for the quality of the plate body of the excavator track plate according to claim 7, wherein the comparing of the likelihood evaluation values of the surrounding pixel points corresponding to the target first edge pixel point to be corrected with the preset standard evaluation threshold, so as to update the position of the target first edge pixel point to be corrected and confirm the second edge pixel point to be corrected, specifically comprises:
when the likelihood evaluation value of a surrounding pixel point corresponding to the target first edge pixel point to be corrected is smaller than the preset standard evaluation threshold, determining that surrounding pixel point as a second edge pixel point to be corrected;
when the likelihood evaluation value of a surrounding pixel point corresponding to the target first edge pixel point to be corrected is greater than or equal to the preset standard evaluation threshold, updating that surrounding pixel point to be a new first edge pixel point to be corrected, and returning to the preceding step to perform a new round of the operation.
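Claims 6 to 8 move a first edge pixel to be corrected across the edge, scoring each surrounding pixel with a likelihood evaluation value until one falls below the standard evaluation threshold. Claim 7's formula is an image in the original filing, so the scoring expression below is an assumed stand-in; only the control flow (below threshold: accept as second edge pixel; otherwise: keep walking) follows the claims, and the Sobel operator, step count, and threshold are illustrative.

```python
import cv2
import numpy as np


def gradient_magnitude(gray: np.ndarray) -> np.ndarray:
    """Sobel gradient magnitude used when scoring candidate edge positions."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return np.hypot(gx, gy)


def update_edge_position(magnitude: np.ndarray, start: tuple[int, int],
                         step: tuple[int, int], max_steps: int = 10,
                         eval_threshold: float = 0.3) -> tuple[int, int]:
    """Walk from a first edge pixel to be corrected toward the corrected edge.

    step: unit (drow, dcol) direction across the edge along which the gray
          value gradually decreases (claim 6's surrounding pixel points).
    """
    h, w = magnitude.shape
    r, c = start
    for _ in range(max_steps):
        nr, nc = r + step[0], c + step[1]
        if not (0 <= nr < h and 0 <= nc < w):
            break
        diff = abs(magnitude[nr, nc] - magnitude[r, c])
        # Assumed likelihood evaluation: candidates whose gradient magnitude
        # differs strongly from the current pixel's score low.
        likelihood = 1.0 / (1.0 + diff / (magnitude[r, c] + 1e-6))
        if likelihood < eval_threshold:
            return (nr, nc)  # claim 8: below threshold -> second edge pixel
        r, c = nr, nc        # claim 8: otherwise it becomes the new first edge pixel
    return (r, c)
```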
9. The intelligent detection method for the quality of the plate body of the excavator track plate according to claim 1, wherein the confirming of the edge of the final paint dropping area based on the normal edge pixel point and the second edge pixel point to be corrected specifically comprises:
judging whether a target second edge pixel point to be corrected is a difference pixel point according to the pixel distances between the target second edge pixel point to be corrected and its corresponding adjacent pixel points;
removing the difference pixel points, combining the remaining second edge pixel points to be corrected with the normal edge pixel points, and confirming the edge of the final paint dropping area;
wherein the judging of whether the target second edge pixel point to be corrected is a difference pixel point according to the pixel distances between the target second edge pixel point to be corrected and its corresponding adjacent pixel points uses a distance evaluation index defined over the following quantities:
D_i, the distance evaluation index corresponding to the target second edge pixel point to be corrected i; Norm(·), a normalization function; d(i,a), the distance between the target second edge pixel point to be corrected i and its corresponding adjacent pixel point a; d(i,b), the distance between the target second edge pixel point to be corrected i and its corresponding adjacent pixel point b; and d(a,b), the distance between the adjacent pixel point a and the adjacent pixel point b;
when the distance evaluation index D_i corresponding to the target second edge pixel point to be corrected i is greater than a preset distance evaluation threshold, confirming that the target second edge pixel point to be corrected i is a difference pixel point.
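Claim 9 removes difference pixels using a distance evaluation index computed from the distances between a second edge pixel to be corrected and its two adjacent pixel points. The index below (a normalized detour ratio that is near 0 for a point lying between its neighbours and approaches 1 for an outlier) is an assumption standing in for the formula image in the original filing, as are the closed-contour treatment of the edge and the threshold value.

```python
import numpy as np


def remove_difference_pixels(edge_points: list[tuple[int, int]],
                             distance_threshold: float = 0.5) -> list[tuple[int, int]]:
    """Drop difference pixels from an ordered, closed sequence of edge points."""
    kept = []
    n = len(edge_points)
    for i in range(n):
        p = np.asarray(edge_points[i], dtype=float)
        a = np.asarray(edge_points[i - 1], dtype=float)        # previous neighbour
        b = np.asarray(edge_points[(i + 1) % n], dtype=float)  # next neighbour
        d_ia = np.linalg.norm(p - a)
        d_ib = np.linalg.norm(p - b)
        d_ab = np.linalg.norm(a - b)
        # Assumed distance evaluation index: 0 for a point lying between its
        # neighbours, approaching 1 for a point far off the local edge path.
        index = 1.0 - d_ab / (d_ia + d_ib + 1e-6)
        if index <= distance_threshold:  # claim 9: above threshold -> difference pixel
            kept.append(edge_points[i])
    return kept
```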
10. The intelligent detection method for the quality of the plate body of the excavator track plate according to any one of claims 1 to 9, wherein the calculating, through the edge of the final paint dropping area, of the paint dropping surface area corresponding to the final paint dropping area so as to judge whether the excavator track plate meets the standard specifically comprises:
calculating the paint dropping surface area corresponding to the final paint dropping area according to the edge of the final paint dropping area;
when the ratio of the paint dropping surface area corresponding to the final paint dropping area to the area of the image to be detected is greater than or equal to a preset area threshold, confirming that the excavator track plate does not meet the standard;
when the ratio of the paint dropping surface area corresponding to the final paint dropping area to the area of the image to be detected is smaller than the preset area threshold, confirming that the excavator track plate meets the standard.
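Claim 10's final judgment reduces to an area-ratio test. A minimal sketch, assuming the final edge is available as an ordered point set and using an illustrative 5% area threshold:

```python
import cv2
import numpy as np


def meets_standard(final_edge: np.ndarray, image_shape: tuple[int, int],
                   area_ratio_threshold: float = 0.05) -> bool:
    """final_edge: (N, 2) array of (x, y) points along the final edge, in order."""
    contour = final_edge.reshape(-1, 1, 2).astype(np.int32)
    paint_dropping_area = cv2.contourArea(contour)  # area enclosed by the edge
    image_area = float(image_shape[0] * image_shape[1])
    # Claim 10: ratio at or above the threshold -> does not meet the standard.
    return paint_dropping_area / image_area < area_ratio_threshold
```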
CN202410586321.8A 2024-05-13 2024-05-13 Intelligent detection method for quality of plate body of track plate of excavator Pending CN118154615A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410586321.8A CN118154615A (en) 2024-05-13 2024-05-13 Intelligent detection method for quality of plate body of track plate of excavator

Publications (1)

Publication Number Publication Date
CN118154615A true CN118154615A (en) 2024-06-07

Family

ID=91299396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410586321.8A Pending CN118154615A (en) 2024-05-13 2024-05-13 Intelligent detection method for quality of plate body of track plate of excavator

Country Status (1)

Country Link
CN (1) CN118154615A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1762975A2 (en) * 2005-09-09 2007-03-14 Delphi Technologies, Inc. Histogram equalization method for a vision-based occupant sensing system
JP2012247597A (en) * 2011-05-27 2012-12-13 Seiko Epson Corp Image processing method, image processing device, electro-optic device, and electronic equipment
JP2013061446A (en) * 2011-09-13 2013-04-04 Seiko Epson Corp Image display apparatus
CN110009601A (en) * 2019-02-28 2019-07-12 南京航空航天大学 Large-Scale Equipment irregular contour detection method of surface flaw based on HOG
CN111667467A (en) * 2020-05-28 2020-09-15 江苏大学附属医院 Clustering algorithm-based lower limb vascular calcification index multi-parameter accumulation calculation method
CN112686911A (en) * 2020-12-30 2021-04-20 北京爱奇艺科技有限公司 Control area generation method and device, electronic equipment and storage medium
CN115496760A (en) * 2022-11-17 2022-12-20 澳润(山东)药业有限公司 Donkey-hide gelatin quality identification method
CN116934787A (en) * 2023-07-27 2023-10-24 国信宝威(北京)科技有限公司 Image processing method based on edge detection
CN117197141A (en) * 2023-11-07 2023-12-08 山东远盾网络技术股份有限公司 Method for detecting surface defects of automobile parts
CN117853722A (en) * 2023-12-15 2024-04-09 中铁检验认证(常州)机车车辆配件检验站有限公司 Steel metallographic structure segmentation method integrating superpixel information

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Dou Shengchang; Zhao Hai; Zhu Hongbo; Wang Bin: "Lung segmentation algorithm with correction and fusion", Journal of Northeastern University (Natural Science), no. 11, 15 November 2018 (2018-11-15) *
Yan Qi; Li Hui; Jing Linhai; Tang Yunwei; Ding Haifeng: "Research on remote sensing information extraction combining saliency detection and superpixel segmentation", Application Research of Computers, no. 07, 27 July 2017 (2017-07-27) *

Similar Documents

Publication Publication Date Title
RU2540849C2 (en) Device for detecting three-dimensional object and method of detecting three-dimensional object
EP2919189B1 (en) Pedestrian tracking and counting method and device for near-front top-view monitoring video
CN104469345B (en) A kind of video method for diagnosing faults based on image processing
CN101694718B (en) Method for detecting remote sensing image change based on interest areas
US20160098637A1 (en) Automated Data Analytics for Work Machines
CN101030256A (en) Method and apparatus for cutting vehicle image
JP2006268199A (en) Vehicular image processing system, method, and program, and vehicle
US11662272B2 (en) Tire wear estimation method
CN107392139A (en) A kind of method for detecting lane lines and terminal device based on Hough transformation
CN102279973A (en) Sea-sky-line detection method based on high gradient key points
CN107730521B (en) Method for rapidly detecting ridge type edge in image
CN107563331B (en) Road sign line detection method and system based on geometric relationship
CN110163270B (en) Multi-sensor data fusion method and system
JP2019029897A (en) Image monitor, image monitoring method and image monitoring program
CN112528868A (en) Illegal line pressing judgment method based on improved Canny edge detection algorithm
CN101751669B (en) Static object detection method and device
CN116542969A (en) Road asphalt adhesion detection method based on vision technology
CN107392216B (en) Method for quickly identifying circumferential seams of shield tunnel segments based on gray data
CN118154615A (en) Intelligent detection method for quality of plate body of track plate of excavator
CN110869261A (en) Road surface state estimation method and road surface state estimation device
CN103455985A (en) Road crack enhancement method based on Hessian structural analysis
CN108961288B (en) Intelligent identification method for rail web plug pin and lead detection image
CN117036358B (en) Method and system for detecting tool wear of numerical control machine tool
CN113793384A (en) Petroleum drill pipe diameter-changing positioning method and system based on image processing
CN102509265B (en) Digital image denoising method based on gray value difference and local energy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination