CN111630563A - Edge detection method of image, image processing apparatus, and computer storage medium - Google Patents

Edge detection method of image, image processing apparatus, and computer storage medium

Info

Publication number
CN111630563A
CN111630563A (application CN201880087301.9A)
Authority
CN
China
Prior art keywords
edge
edge point
neighborhood
point
detection area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201880087301.9A
Other languages
Chinese (zh)
Other versions
CN111630563B (en)
Inventor
阳光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen A&E Intelligent Technology Institute Co Ltd
Original Assignee
Shenzhen A&E Intelligent Technology Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen A&E Intelligent Technology Institute Co Ltd
Publication of CN111630563A
Application granted
Publication of CN111630563B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an edge detection method for an image, an image processing device, and a computer storage medium. The method includes: acquiring the neighborhood difference of each pixel in a first detection area of the image and extracting first edge points based on the neighborhood difference; determining a second detection area according to the first edge points, the second detection area being smaller than the first detection area; amplifying the neighborhood difference of each pixel in the second detection area and extracting second edge points based on the amplified neighborhood difference; and determining the edge of the image according to the first and second edge points. With this method, weak edges in the image can be detected accurately.

Description

Edge detection method of image, image processing apparatus, and computer storage medium
[ technical field ]
The present disclosure relates to the field of image detection, and in particular, to an edge detection method for an image, an image processing apparatus, and a computer storage medium.
[ background of the invention ]
Edge detection is a basic operation in image processing and is applied in many fields. In the industrial field, for example, the surface quality of a workpiece is inspected using edge detection: an image of the workpiece surface is acquired, and edge detection is then performed on that image to determine whether the surface is scratched. In practice, surface scratches are often shallow, so the corresponding edges in the image of the workpiece surface have weak contrast and are not easy to detect.
[ summary of the invention ]
The present application provides an image edge detection method, an image processing device, and a computer storage medium to address the prior-art difficulty of detecting weak edges in an image.
In order to solve the above technical problem, the present application provides an edge detection method for an image, including: acquiring neighborhood difference of each pixel point in a first detection area of the image, and extracting a first edge point based on the neighborhood difference; determining a second detection area according to the first edge point, wherein the second detection area is smaller than the first detection area; amplifying the neighborhood difference of each pixel point in the second detection area, and extracting a second edge point based on the amplified neighborhood difference; and determining the edge of the image according to the first edge point and the second edge point.
In order to solve the technical problem, the present application provides an image processing apparatus, which includes a processor and a memory, wherein the memory stores a computer program, and the processor is used for executing the computer program to realize the method.
In order to solve the above technical problem, the present application provides a computer storage medium for storing a computer program, which can be executed to implement the above method.
When the method detects the edge of an image, the neighborhood difference of each pixel in a first detection area of the image is first obtained, and first edge points are extracted based on that difference. A second detection area, smaller than the first, is then determined from the first edge points so that edge points can be further extracted locally: the neighborhood difference of each pixel in the second detection area is amplified, and second edge points are extracted based on the amplified difference. The overall extraction is thus divided into two passes: for the larger first detection area, first edge points are extracted directly from the neighborhood difference; because the edge points not yet extracted have weaker differences, a smaller second detection area is determined and the difference is amplified there to extract the second edge points. The edge of the image is finally determined from the first and second edge points, so weak edges in the image can be detected accurately.
[ description of the drawings ]
FIG. 1 is a schematic flowchart of an embodiment of an edge detection method for an image according to the present application;
FIG. 2 is a schematic flowchart of another embodiment of an edge detection method for an image according to the present application;
FIG. 3 is a schematic diagram of obtaining first edge point blocks on the image in one manner in the embodiment of FIG. 2;
FIG. 4 is a schematic diagram of obtaining first edge point blocks on the image in another manner in the embodiment of FIG. 2;
FIG. 5 is a flowchart illustrating the second edge point extraction in the embodiment shown in FIG. 2;
FIG. 6 is a schematic illustration of the weighting window in the embodiment of FIG. 2;
FIG. 7 is a schematic diagram of extracting second edge points on the image in the embodiment of FIG. 2;
FIG. 8 is a schematic flow chart illustrating the extraction of the third edge point in the embodiment shown in FIG. 2;
FIG. 9 is a schematic view of the third edge point extracted on the image in the embodiment of FIG. 2;
FIG. 10 is a schematic flow chart diagram illustrating an embodiment of a method for inspecting a surface of a workpiece according to the present application;
FIG. 11 is a schematic structural diagram of an embodiment of an image processing apparatus according to the present application;
FIG. 12 is a schematic structural diagram of an embodiment of a computer storage medium according to the present application.
[ detailed description ]
The technical solutions of the present application will be described clearly and completely with reference to the embodiments of the present application and the accompanying drawings, and it is to be understood that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. Based on the embodiments in the present application, all other embodiments obtained by a person of ordinary skill in the art without any creative effort belong to the protection scope of the present application.
The method mainly comprises two extraction passes: edge points are first extracted based on the neighborhood difference; the neighborhood difference is then amplified and edge points are extracted based on the amplified difference, so that weak edges in the image can be detected.
Referring to fig. 1 in detail, fig. 1 is a schematic flow chart of an embodiment of an image edge detection method according to the present application, and the image edge detection method of the present embodiment includes the following steps.
S11: and acquiring neighborhood difference of each pixel point in a first detection area of the image, and extracting a first edge point based on the neighborhood difference.
In this embodiment, step S11 detects a detection area on the image; the detection area may be the entire image or a certain region of it, and it is referred to as the first detection area to distinguish it from the detection areas in the following steps.
In step S11, the neighborhood difference of each pixel in the first detection area is obtained. The neighborhood difference represents how much a pixel differs from its adjacent pixels and may be a difference of pixel values, a first-order gradient value, a second-order gradient value, or the like. Edge points are then extracted based on the neighborhood difference: pixels that differ markedly from their adjacent pixels are extracted as edge points. This is the first extraction of edge points in this embodiment, and the edge points extracted in step S11 are referred to as first edge points.
Extracting first edge points from the neighborhood difference in step S11 accurately captures edge points whose difference is pronounced, but edge points whose difference is not pronounced are generally difficult to extract from the difference alone, so this embodiment uses the following steps to extract edge points further.
S12: and determining a second detection area according to the first edge point.
After some of the edge points have been obtained from the neighborhood difference in step S11, a second detection area is determined according to the first edge points. The determined first edge points roughly indicate where the corresponding edge line lies in the image, and the second detection pass examines the region of the image where that edge line lies. In step S12, the second detection area, i.e., the region where the edge line may exist, is therefore determined from the first edge points; the second detection area is smaller than the first detection area and lies within it.
Specifically, for example, after the entire image has been used as the first detection area and the first edge points obtained, the smallest area containing all first edge points is taken as the second detection area; its boundary follows that of the first detection area and has the same shape, such as the dashed box shown in FIG. 3.
S13: and amplifying the neighborhood difference of each pixel point in the second detection area, and extracting a second edge point based on the amplified neighborhood difference.
Step S13 further extracts the edge points not found in step S11. After the second detection area is determined, the neighborhood difference of each pixel in it is amplified; the amplification enlarges the difference between a pixel and its adjacent pixels and thereby highlights edge points whose difference is not pronounced. Second edge points are then extracted based on the amplified neighborhood difference; amplifying the difference first makes their extraction more accurate.
S14: and determining the edge of the image according to the first edge point and the second edge point.
After the first and second edge points have been extracted through the above steps, the edge of the image can be determined from the edge points of the two extractions, for example by performing a fitting operation on all edge points to determine the edge line in the image.
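As a concrete illustration of the fitting operation, the sketch below fits a straight edge line through the extracted edge points by least squares. It is a minimal example under stated assumptions: the text only speaks of a "fitting operation", so the straight-line model, the function name fit_edge_line, and the (row, column) point format are choices made for the example, not part of the disclosure.

```python
import numpy as np

def fit_edge_line(edge_points):
    """Fit a straight line y = slope * x + intercept through edge points.

    edge_points: iterable of (row, col) coordinates. A straight,
    non-vertical edge is assumed; curved edges would need another model.
    """
    pts = np.asarray(edge_points, dtype=float)
    ys, xs = pts[:, 0], pts[:, 1]
    slope, intercept = np.polyfit(xs, ys, deg=1)  # least-squares line fit
    return slope, intercept
```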
In this embodiment, the edge points are extracted for the first time according to the neighborhood difference, then the neighborhood difference is amplified, the edge points are extracted for the second time according to the amplified neighborhood difference, the edge points with weak difference are found out, and the edges of the image are determined according to the edge points extracted for the two times, so that the detection of the weak edges in the image is realized.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating another embodiment of the method for detecting an edge of an image according to the present application.
S21: and acquiring neighborhood difference of each pixel point in a first detection area of the image, and extracting a first edge point based on the neighborhood difference.
This step S21 is similar to step S11 in the above embodiment, and the common parts are not repeated. In this embodiment, the neighborhood difference may be a difference of pixel values, a first-order gradient value, or a second-order gradient value; when the first edge points are extracted, a threshold is set to screen the difference or gradient values and thereby extract the first edge points. This process can be implemented with existing edge detection algorithms such as the Canny or Sobel operators.
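For illustration, the following Python sketch implements this first extraction pass with the Sobel operator as one possible neighborhood-difference measure; the function name, the 3 x 3 kernel size, and the threshold value are assumptions made for the example, not values fixed by the disclosure.

```python
import cv2
import numpy as np

def extract_first_edge_points(gray_image, threshold=80.0):
    """First extraction pass (S21): take the first-order gradient magnitude
    as the neighborhood difference and keep pixels above a global threshold."""
    gx = cv2.Sobel(gray_image, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray_image, cv2.CV_64F, 0, 1, ksize=3)
    difference = np.hypot(gx, gy)          # per-pixel neighborhood difference
    ys, xs = np.nonzero(difference > threshold)
    # First edge points as (row, col) pairs, plus the difference map for reuse.
    return list(zip(ys, xs)), difference
```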
S22: and classifying the first edge points according to the neighborhood difference and/or the position relation of the first edge points to obtain at least two first edge point blocks.
In step S22, the first edge points are classified, and each resulting first edge point block contains at least one first edge point. During classification, edge points that cannot be classified may be discarded, removing interference points from the first edge points extracted in step S21. The classification also allows the second detection area to be determined from the first edge point blocks in step S23 below, making that determination more targeted and more efficient.
There are various classification methods, all of which classify the first edge points according to their features. In this embodiment, the first edge points are classified according to their neighborhood difference and/or positional relationship, specifically in the following ways.
The first classification mode: compare the neighborhood difference of the first edge points with a plurality of preset difference threshold segments, take the first edge points whose neighborhood difference falls within the same difference threshold segment as one class to form a primary classification block, and take the primary classification blocks as the first edge point blocks.
This classification may be performed after the first edge points have been extracted in step S21, classifying them against the preset difference threshold segments to obtain the first edge point blocks. It may also be performed within step S21: when the first edge points are extracted from the neighborhood difference of the pixels, several difference threshold segments are set, the first edge points are screened against them, and the first edge points whose neighborhood difference falls within the same segment form one first edge point block.
The first classification mode can be understood in conjunction with FIG. 3, a schematic diagram of obtaining first edge point blocks on the image in one manner in the embodiment of FIG. 2. Classifying the first edge points against the difference threshold segments yields 4 first edge point blocks: (a1, a2, a3), (b1, b2), (c1, c2), (d1, d2, d3, d4).
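A minimal sketch of this first classification mode follows, assuming the difference threshold segments are given as (low, high) intervals; the segment values and function name are illustrative assumptions.

```python
def primary_classification(edge_points, differences, segments):
    """Group first edge points by the difference threshold segment that
    their neighborhood difference falls into (first classification mode).

    edge_points: list of (row, col); differences: matching difference values;
    segments: list of (low, high) intervals, e.g. [(80, 120), (120, 160)].
    """
    blocks = [[] for _ in segments]
    for point, d in zip(edge_points, differences):
        for i, (low, high) in enumerate(segments):
            if low <= d < high:        # a point belongs to at most one segment
                blocks[i].append(point)
                break
        # points falling in no segment are discarded as interference
    return [b for b in blocks if b]    # each non-empty class is one block
```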
The second classification mode: compare the neighborhood difference of the first edge points with a plurality of preset difference threshold segments, and take the first edge points whose neighborhood difference falls within the same segment as one class to form a primary classification block; then further classify into a secondary classification block those primary classification blocks whose inter-segment difference between corresponding difference threshold segments is smaller than a preset inter-segment difference threshold and whose shortest distance to an adjacent primary classification block is smaller than a preset distance threshold, and take the secondary classification blocks as the first edge point blocks. The shortest distance here means the shortest distance between primary classification blocks.
In this classification mode, the first edge points whose neighborhood difference falls within the same difference threshold segment are first taken as one class to form a primary classification block, for example the 4 primary classification blocks formed in FIG. 3: (a1, a2, a3), (b1, b2), (c1, c2), (d1, d2, d3, d4).
Then the primary classification blocks are further classified into secondary classification blocks. The criterion for deciding whether several primary classification blocks fall into one class is given below; note that if a primary classification block has several adjacent primary classification blocks, i.e., there is a shortest distance to each of them, the minimum of these distances is used as the shortest distance for the judgment.
The inter-segment difference is the difference between the maximum values, the minimum values, or the central values of the two difference threshold segments. The shortest distance is the shortest distance between two adjacent primary classification blocks, i.e., the distance between their nearest pixels. For example, the shortest distance between primary classification block (a1, a2, a3) and its adjacent block (b1, b2) is the distance between pixels a3 and b1, and the shortest distance between block (c1, c2) and its adjacent block (b1, b2) is the distance between pixels c1 and b2.
Primary classification blocks whose inter-segment difference is smaller than the preset inter-segment difference threshold and whose shortest distance to an adjacent primary classification block is smaller than the distance threshold are then merged into a secondary classification block.
This classification can be understood in conjunction with FIG. 4, a schematic diagram of another way of obtaining first edge point blocks on the image in the embodiment of FIG. 2. The primary classification blocks (a1, a2, a3), (b1, b2), (c1, c2), (d1, d2, d3, d4) are further classified into the secondary classification blocks (a1, a2, a3, b1, b2, c1, c2) and (d1, d2). The shortest distance between primary block (c1, c2) and primary block (d1, d2) is the distance from c2 to d1, which is greater than the threshold, so (d1, d2) forms its own secondary classification block. Taking the secondary classification blocks as the first edge point blocks yields 2 first edge point blocks: (a1, a2, a3, b1, b2, c1, c2) and (d1, d2). In addition, compared with FIG. 3, the pixels d3 and d4 are discarded in FIG. 4; that is, the second classification mode can also remove interference points from the extracted first edge points.
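The second classification mode can be sketched as below. For simplicity, the sketch compares every pair of primary blocks rather than only spatially adjacent ones, and it represents each block's difference threshold segment by its central value, one of the options the text allows; these simplifications, like the function names, are assumptions of the example.

```python
import numpy as np

def shortest_block_distance(block_a, block_b):
    """Distance between the two nearest pixels of two primary blocks."""
    a, b = np.asarray(block_a, float), np.asarray(block_b, float)
    diffs = a[:, None, :] - b[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=2)).min()

def secondary_classification(blocks, centers, seg_diff_thresh, dist_thresh):
    """Merge primary blocks whose segment center values differ by less than
    seg_diff_thresh and whose shortest pixel distance is below dist_thresh."""
    merged, cents = [list(b) for b in blocks], list(centers)
    changed = True
    while changed:
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                if (abs(cents[i] - cents[j]) < seg_diff_thresh and
                        shortest_block_distance(merged[i], merged[j]) < dist_thresh):
                    merged[i].extend(merged[j])           # fuse block j into i
                    cents[i] = (cents[i] + cents[j]) / 2  # rough merged center (assumption)
                    del merged[j], cents[j]
                    changed = True
                    break
            if changed:
                break
    return merged
```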
In step S22, after the first edge points have been classified into first edge point blocks, the second detection area is determined from those blocks in the following step.
S23: and determining a second detection area according to the first edge point packet block.
In this step, a second detection area may be determined between two adjacent first edge points within a first edge point block. Specifically, the distances between the first edge points in a block are computed first, and the region between two first edge points whose distance exceeds a set range is determined as a second detection area: for example, two relatively distant first edge points are taken as end points of the second detection area, and the area is then determined from those end points and the shape of the first detection area, the second detection area having the same shape as the first.
A second detection area may also be determined between two adjacent first edge points of adjacent first edge point blocks: for two adjacent blocks, the region between their two adjacent first edge points is determined as a second detection area, with those two points as its end points and its shape again following the first detection area.
If the second detection area were determined directly from the first edge points, the distances between all first edge points would have to be computed. In step S23, only the distances between first edge points within a block (and between adjacent blocks) are computed, so step S23 is more efficient.
For example, in FIG. 4 the first edge point blocks are (a1, a2, a3, b1, b2, c1, c2) and (d1, d2). The second detection areas determined in step S23 are the regions between adjacent first edge points within a block whose distance exceeds the set range, namely the region between pixels a1 and a2, the region between pixels b1 and b2, and the region between pixels c1 and c2, together with the region between two adjacent first edge points of adjacent blocks, namely the region between pixels c2 and d1. After the second detection areas are determined, extraction of edge points within them begins.
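A sketch of this step, assuming the points of a block are already ordered along the edge line (as a1, a2, a3 are in FIG. 4); only the end-point pairs are returned, since shaping each area like the first detection area depends on that area's geometry.

```python
import numpy as np

def second_detection_areas(ordered_block, gap_threshold):
    """Return (start, end) point pairs whose separation exceeds the set
    range; each pair bounds one second detection area (step S23)."""
    areas = []
    for p, q in zip(ordered_block, ordered_block[1:]):
        if np.hypot(p[0] - q[0], p[1] - q[1]) > gap_threshold:
            areas.append((p, q))   # p and q become end points of the area
    return areas
```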
S24: and amplifying the neighborhood difference of each pixel point in the second detection area, and extracting a second edge point based on the amplified neighborhood difference.
In step S24, edge points not detected in step S21 are further extracted. In this step the neighborhood difference of each pixel is amplified to further highlight the difference between a pixel and its adjacent pixels, and second edge points are then extracted based on the amplified neighborhood difference.
Referring to fig. 5, fig. 5 is a flow chart illustrating the second edge point extraction process in the embodiment shown in fig. 2, wherein the second edge point extraction process includes the following steps.
S241: a weighting window is set in the second detection area.
In step S241, setting a weighting window on the second detection area means that detection proceeds window by window: in each analysis pass only the pixels within the coverage of the weighting window are analyzed, so the window defines the size and range of the analyzed data. If the window is large, each pass analyzes more comprehensive data but takes longer; correspondingly, if the window is small, each pass is faster but its data are less comprehensive.
The weighting window may be rectangular, circular, fan-shaped, and so on. Because the pixels in the second detection area are analyzed point by point, a rectangular window is generally used. In this embodiment, the weighting window may be a rectangular window covering (2n+1) × (2n+1) pixels, where n is an integer greater than or equal to 1; such a window has a central point, and among the covered pixels there is a corresponding central pixel.
S242: and weighting the pixel points covered by the weighting window at the current position.
After the weighting window is set in the second detection area, the pixels covered by the window at its current position are weighted, the weight assigned to the central pixel being larger than the weights assigned to the other covered pixels; once the weighting of step S242 is done, the difference between the central pixel and the other pixels is therefore enlarged.
Further, the weight can be made smaller the farther a pixel is from the central pixel. An edge line in an image generally corresponds not to a single row of pixels but to several rows; that is, the farther a pixel is from an edge point, the larger its difference from the edge point. When detecting an edge point it is therefore undesirable to over-amplify the difference between the edge point and its immediately adjacent pixels: excessive amplification would no longer reflect the actual difference and would distort the judgment. Hence, within the weighting window, pixels farther from the central pixel are given smaller weights.
Take the weighting window shown in FIG. 6, a schematic illustration of the weighting window in the embodiment of FIG. 2. It is a 5 × 5 rectangular window in which the central pixel has weight 5, the pixels farthest from the center have weight 1, and the pixels closer to the center have weight 2 or 3. Weighting the covered pixels in this step means multiplying the central pixel by 5, the nearer pixels by 2 or 3, and the farthest pixels by 1. The weights of the window are fixed in this embodiment: as the window moves over the detection area in the subsequent steps, whichever pixel becomes the central pixel is multiplied by 5, and the surrounding pixels are multiplied by 3, 2, and 1 in order of increasing distance from the center.
S243: and calculating the neighborhood difference between the weighted central pixel point and other pixel points to serve as the neighborhood difference after the amplification processing of the central pixel point.
Weighting a pixel means multiplying its pixel value by the weight. A pixel on an edge line has a relatively large pixel value while its adjacent pixels have smaller values; when an edge-line pixel serves as the central pixel of the weighting window, it is multiplied by a large weight and its neighbors by smaller weights, so the difference between the edge-line pixel and its neighbors is amplified. After the pixels covered by the window have been weighted, the neighborhood difference between the central pixel and the other pixels is calculated; the neighborhood difference calculated in step S243 is the amplified neighborhood difference of the central pixel. The calculation itself is similar to that in step S21 and is not repeated. In addition, the maximum of the differences between the weighted central pixel and the other pixels can be used as the neighborhood difference.
S244: and relatively moving the second detection area and the weighting window, and returning to the step S242 until the weighting window traverses the second detection area.
Through steps S241 to S243, one placement of the weighting window amplifies the neighborhood difference of only one central pixel. Since every pixel in the second detection area must be examined, the neighborhood difference of every pixel there must be amplified before the second edge points are extracted. Step S244 therefore moves the second detection area and the weighting window relative to each other and returns to steps S242 and S243 until the window has traversed the whole second detection area, completing the amplification for all its pixels.
In this embodiment, the relative movement may step the weighting window one pixel at a time along the row or column direction of the pixels. Specifically, the window moves along a row of the second detection area, amplifying the neighborhood difference of each pixel in that row in turn; it then shifts by one pixel in the column direction and moves along the next row, so that the window and the second detection area move relative to each other along an S-shaped path. When the window has traversed the second detection area, the neighborhood-difference amplification of its pixels is finished.
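The sketch below puts steps S241 to S244 together: it builds a (2n+1) × (2n+1) weighting window in the spirit of the FIG. 6 example and slides it over the second detection area one pixel at a time. The concrete ring weights (5 at the center, 3 on the inner ring, 1 on the outer ring) and the use of the maximum weighted difference from S243 are assumptions chosen for the example.

```python
import numpy as np

def make_weight_kernel(n=2):
    """Weighting window whose center weight is largest and whose weights
    shrink with distance from the center (ring values are assumptions)."""
    size = 2 * n + 1
    kernel = np.empty((size, size))
    for dy in range(-n, n + 1):
        for dx in range(-n, n + 1):
            ring = max(abs(dy), abs(dx))   # Chebyshev ring index
            kernel[dy + n, dx + n] = 5 if ring == 0 else (3 if ring == 1 else 1)
    return kernel

def amplified_difference_map(region, n=2):
    """Traverse the second detection area (S241-S244): weight the covered
    pixels at each window position and record, for the center pixel, the
    largest weighted difference to the other covered pixels."""
    kernel = make_weight_kernel(n)
    h, w = region.shape
    out = np.zeros((h, w))
    for y in range(n, h - n):              # row by row, one-pixel steps
        for x in range(n, w - n):
            patch = region[y - n:y + n + 1, x - n:x + n + 1] * kernel
            center = patch[n, n]
            others = np.delete(patch.ravel(), patch.size // 2)
            out[y, x] = np.abs(center - others).max()
    return out
```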
Second edge points can then be extracted from the amplified neighborhood difference: pixels with a larger amplified difference are considered second edge points. The extraction can be done in several ways, for example as in step S245 or step S246 below.
S245: and comparing the amplified neighborhood difference with a preset difference threshold, and screening out pixel points corresponding to the neighborhood difference larger than the difference threshold as the second edge points.
In step S245, the second edge point is extracted by setting a difference threshold, and if the neighborhood disparity after the amplification is greater than the difference threshold, the corresponding pixel point is considered as the second edge point.
S246: and sorting the amplified neighborhood difference degrees from large to small, and screening out pixel points corresponding to the neighborhood difference degrees sorted in the front by a preset number as second edge points.
In step S246, after the neighborhood difference values of all the pixel points in the second detection area are amplified, the amplified neighborhood difference values are sorted from large to small, and pixel points corresponding to a preset number of neighborhood difference values sorted in the front are screened out as second edge points, where the preset number can be set according to specific situations, and if the preset number is too large, the edge points cannot be screened out; if the setting is too small, the edge points are easily lost.
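Both screening modes can be sketched together; exactly one of the two arguments is supplied, and the function and parameter names are illustrative assumptions.

```python
import numpy as np

def screen_edge_points(difference_map, threshold=None, top_k=None):
    """Pick second edge points either by a preset difference threshold
    (S245) or by keeping the top-k largest differences (S246)."""
    if threshold is not None:
        ys, xs = np.nonzero(difference_map > threshold)
        return list(zip(ys, xs))
    order = np.argsort(difference_map, axis=None)[::-1][:top_k]  # largest first
    ys, xs = np.unravel_index(order, difference_map.shape)
    return list(zip(ys, xs))
```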
Steps S245 and S246 need not be executed only after the neighborhood differences of all pixels in the second detection area have been amplified. Step S245 may instead be executed each time the weighting window finishes amplifying one pixel: the amplified neighborhood difference of that pixel is compared with the preset difference threshold to decide whether it is an edge point, and the second detection area and the weighting window are then moved relative to each other to amplify and judge the next pixel.
As for the step S24 of extracting the second edge point, it can be understood in conjunction with fig. 7, and fig. 7 is a schematic diagram of extracting the second edge point on the image in the embodiment shown in fig. 2.
Based on the second detection areas determined in step S23 and in connection with the example of FIG. 4, 4 second detection areas are determined: the region between pixels a1 and a2, the region between pixels b1 and b2, the region between pixels c1 and c2, and the region between pixels c2 and d1.
In step S24, second edge points are extracted from these 4 areas: e1 from the region between a1 and a2, e2 from the region between b1 and b2, e3 from the region between c1 and c2, and e4 from the region between c2 and d1.
After step S24 more edge points have been extracted, and the first and second edge points may be fitted to determine the edge in the image. If, for a given image, the edge points are still insufficient to determine the edge accurately, the following steps may be used to extract further edge points.
S25: and determining a third detection area according to the second edge point.
Step S25 may determine a third detection area from the second edge points following the principle described for steps S12 and S23 above, the third detection area being smaller than the second detection area. For example, corresponding to FIG. 7, 3 third detection areas may be determined: the region between pixels a1 and e1, the region between pixels e2 and b2, and the region between pixels e4 and d1. After the third detection areas are determined, extraction of edge points within them begins.
S26: and carrying out mean value processing on the neighborhood difference of each pixel point in the third detection area, and extracting a third edge point based on the neighborhood difference after the mean value processing.
In step S26, edge points not detected in steps S21 and S24 are further extracted. In this step the neighborhood difference of each pixel is mean-processed, and third edge points are then extracted based on the mean-processed neighborhood difference.
Referring to fig. 8, fig. 8 is a schematic flow chart illustrating the third edge point extraction in the embodiment shown in fig. 2, wherein the third edge point extraction includes the following steps.
S261: and setting a mean value window in the third detection area.
Step S261 is substantially similar to step S241, and is not described in detail.
S262: and carrying out mean processing on the neighborhood difference of the pixel points covered by the mean window at the current position, and taking the neighborhood difference as the neighborhood difference after mean processing of the central pixel point in the pixel points covered by the mean window.
And after setting a mean value window in the third detection area, carrying out mean value processing on the neighborhood difference of the pixel points covered by the mean value window at the current position, namely, averaging the neighborhood difference of all the pixel points in the mean value window, and then taking the calculated mean value as the processed neighborhood difference of the central pixel point in the mean value window.
S263: and relatively moving the third detection area and the mean value window, and returning to the step S262 until the mean value window traverses the third detection area.
Similarly to step S244, the third detection area and the mean window are moved relative to each other, returning to step S262, so that the neighborhood differences of all pixels in the third detection area are mean-processed. Third edge points are then extracted according to the mean-processed neighborhood difference, pixels with a larger difference being considered third edge points. As in step S245 or S246, either of the following two ways may be used to extract the third edge points from the mean-processed neighborhood difference.
S264: and comparing the neighborhood difference degree after the average processing with a preset difference threshold value, and screening out pixel points corresponding to the neighborhood difference degree larger than the difference threshold value as third edge points.
S265: and sorting the neighborhood difference degrees after the average processing from large to small, and screening out pixel points corresponding to the neighborhood difference degrees sorted in the front by a preset number as third edge points.
Likewise, steps S264 and S265 need not be executed only after the neighborhood differences of all pixels in the third detection area have been mean-processed. Step S264 may instead be executed each time the mean window finishes processing one pixel: the processed neighborhood difference is compared with the preset difference threshold to decide whether that pixel is an edge point, and the third detection area and the mean window are then moved relative to each other to process and judge the next pixel.
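A minimal sketch of the mean-window pass of steps S261 to S263, operating on the neighborhood-difference map rather than on the pixel values; the window size and names are assumptions.

```python
import numpy as np

def mean_processed_difference_map(difference_map, n=2):
    """Slide a (2n+1) x (2n+1) mean window over the third detection area
    and assign each center pixel the average neighborhood difference of
    all pixels the window covers (steps S261-S263)."""
    h, w = difference_map.shape
    out = difference_map.copy()
    for y in range(n, h - n):
        for x in range(n, w - n):
            out[y, x] = difference_map[y - n:y + n + 1, x - n:x + n + 1].mean()
    return out
```

Third edge points can then be screened from the returned map with either of the two modes sketched earlier for steps S245 and S246.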
As for the step S26 of extracting the third edge point, it can be understood in conjunction with fig. 9, and fig. 9 is a schematic diagram of extracting the third edge point on the image in the embodiment shown in fig. 2.
Based on the third detection areas determined in step S25 and in connection with the example of FIG. 7, 3 third detection areas are determined: the region between pixels a1 and e1, the region between pixels e2 and b2, and the region between pixels e4 and d1.
In step S26, third edge points are extracted from these 3 areas: f1 from the region between a1 and e1, f2 from the region between e2 and b2, and f3 and f4 from the region between e4 and d1.
S27: and determining the edge of the image according to the first edge point, the second edge point and the third edge point.
After the first, second, and third edge points have been extracted through the above steps, the edge of the image can be determined from the edge points of the three extractions, for example by performing a fitting operation on all edge points to determine the edge line in the image.
In this embodiment, edge points are first extracted according to the neighborhood difference; the neighborhood difference is then amplified and edge points are extracted a second time from the amplified difference, finding edge points whose difference is weak; the neighborhood difference is then mean-processed and edge points are extracted a third time from the processed difference, finding further weak edge points. The edge of the image is determined from the edge points of the three extractions, so that weak edges in the image are detected.
The above image edge detection method can be applied to the inspection of workpiece surfaces, so the present application further provides a method for inspecting the surface of a workpiece. Referring to FIG. 10, a schematic flow chart of an embodiment of the method, this embodiment can assess the quality of the workpiece surface, for example detecting whether the surface is scratched. The inspection method includes the following steps.
S31: an inspection image of the surface of the workpiece is acquired.
In this step, the workpiece surface is photographed to obtain an image of it. The surface may be photographed several times to obtain several images, which are then superposed and averaged to produce the detection image, reducing the noise in it.
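As a sketch of the superposition step, assuming all shots are registered grayscale images of the same size; the function name is an assumption for the example.

```python
import numpy as np

def averaged_inspection_image(images):
    """Average several registered shots of the workpiece surface pixel-wise
    to suppress noise before edge detection (step S31)."""
    stack = np.stack([img.astype(np.float64) for img in images])
    return stack.mean(axis=0)   # float accumulation avoids uint8 overflow
```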
S32: and carrying out edge detection on the detected image.
S33: detection of workpiece surface based on edge detection of detection image
The edge detection method is adopted to carry out edge detection on the detection image so as to determine the edge in the detection image, and the detection of the surface of the workpiece can be correspondingly realized according to the edge detection result.
In this embodiment, the image edge detection algorithm is applied to workpiece surface inspection, so that weak scratches on the workpiece surface can be detected.
The above methods can be implemented by a detection device: their logical processes are expressed as a computer program and executed, specifically, by an image processing device.
Referring to fig. 11, fig. 11 is a schematic structural diagram of an embodiment of an image processing apparatus according to the present application. The image processing apparatus 100 of the present embodiment includes a processor 11 and a memory 12. The memory 12 has stored therein a computer program, and the processor is adapted to execute the computer program to implement the above-described method.
Referring to FIG. 12, FIG. 12 is a schematic structural diagram of an embodiment of a computer storage medium according to the present application. The computer storage medium 200 of this embodiment stores a computer program that can be executed to implement the method of the above embodiments; the computer storage medium 200 may be a USB disk, an optical disk, a server, or the like.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (12)

  1. A method for edge detection of an image, the method comprising:
    acquiring neighborhood difference of each pixel point in a first detection area of the image, and extracting a first edge point based on the neighborhood difference;
    determining a second detection area according to the first edge point, wherein the second detection area is smaller than the first detection area;
    amplifying the neighborhood difference of each pixel in the second detection area, and extracting a second edge point based on the amplified neighborhood difference;
    and determining the edge of the image according to the first edge point and the second edge point.
  2. The method of claim 1, wherein the step of determining a second detection region from the first edge point comprises:
    classifying the first edge points according to the neighborhood difference and/or the position relation of the first edge points to obtain at least two first edge point blocks, wherein each first edge point block comprises at least one first edge point;
    and determining the second detection area according to the first edge point blocks.
  3. The method according to claim 2, wherein the step of classifying the first edge point according to the neighborhood disparity and/or the positional relationship of the first edge point comprises:
    comparing the neighborhood difference of the first edge points with a plurality of preset difference threshold segments, and taking the first edge points whose neighborhood difference falls within the same difference threshold segment as one class to form a primary classification block;
    determining the first edge point block based on the primary classification block.
  4. The method of claim 3, wherein the step of determining the first edge point block based on the primary classification block comprises:
    taking the primary classification block as the first edge point block; or
    further classifying into a secondary classification block those primary classification blocks whose inter-segment difference between corresponding difference threshold segments is smaller than a preset inter-segment difference threshold and whose shortest distance to an adjacent primary classification block is smaller than a preset distance threshold, and taking the secondary classification block as the first edge point block.
  5. The method of claim 2, wherein the step of determining the second detection area according to the first edge point block comprises:
    determining the second detection area between adjacent first edge points within each first edge point block;
    and/or determining the second detection area between adjacent first edge points of adjacent first edge point blocks.
  6. The method according to claim 1, wherein the step of amplifying the neighborhood disparity of each pixel point in the second detection area and extracting a second edge point based on the amplified neighborhood disparity comprises:
    setting a weighting window in the second detection area;
    weighting the pixel points covered by the weighting window at the current position, wherein the weighted value corresponding to the central pixel point in the pixel points covered by the weighting window is larger than the weighted values corresponding to other pixel points;
    calculating neighborhood difference between the weighted central pixel point and the other pixel points to serve as the neighborhood difference of the central pixel point after amplification processing;
    and relatively moving the second detection area and the weighting window, and returning to the step of weighting the pixel points covered by the weighting window at the current position until the weighting window traverses the second detection area.
  7. The method of claim 6, wherein the weighting values corresponding to the other pixels farther from the center pixel are smaller.
  8. The method according to claim 1, wherein the step of amplifying the neighborhood disparity of each pixel point in the second detection area and extracting a second edge point based on the amplified neighborhood disparity comprises:
    comparing the amplified neighborhood difference with a preset difference threshold, and screening out pixel points corresponding to the neighborhood difference larger than the difference threshold as second edge points;
    or sorting the amplified neighborhood differences from large to small, and selecting the pixels corresponding to a preset number of the largest neighborhood differences as the second edge points.
  9. The method of claim 1, wherein the step of determining the edge of the image from the first edge point and the second edge point comprises:
    determining a third detection area according to the second edge point, wherein the third detection area is smaller than the second detection area;
    carrying out mean value processing on the neighborhood disparity of each pixel point in the third detection area, and extracting a third edge point based on the neighborhood disparity after mean value processing;
    determining an edge of the image according to the first edge point, the second edge point, and the third edge point.
  10. The method according to claim 9, wherein the step of averaging the neighborhood disparity of each pixel point in the third detection region and extracting a third edge point based on the averaged neighborhood disparity includes:
    setting a mean value window in the third detection area;
    carrying out mean processing on the neighborhood difference of the pixel points covered by the mean window at the current position, and taking the neighborhood difference as the new neighborhood difference of the central pixel point in the pixel points covered by the mean window;
    and relatively moving the third detection area and the mean value window, and returning to the step of performing mean value processing on the neighborhood difference of the pixel points covered by the mean value window at the current position until the mean value window traverses the third detection area.
  11. An image processing apparatus, characterized in that the apparatus comprises a processor and a memory, in which a computer program is stored, the processor being adapted to execute the computer program to implement the method of any of claims 1-10.
  12. A computer storage medium for storing a computer program executable to implement the method of any one of claims 1-10.
CN201880087301.9A 2018-09-10 2018-09-10 Edge detection method of image, image processing apparatus, and computer storage medium Active CN111630563B (en)

Applications Claiming Priority (1)

Application: PCT/CN2018/104892 (published as WO2020051746A1) | Priority date: 2018-09-10 | Filing date: 2018-09-10 | Title: Image edge detection method, image processing device, and computer storage medium

Publications (2)

Publication Number Publication Date
CN111630563A (published 2020-09-04)
CN111630563B (granted 2022-02-18)

Family

ID=69776940

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201880087301.9A Active CN111630563B (en) 2018-09-10 2018-09-10 Edge detection method of image, image processing apparatus, and computer storage medium

Country Status (2)

Country Link
CN (1) CN111630563B (en)
WO (1) WO2020051746A1 (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115294141B (en) * 2022-10-10 2023-03-10 惠智赋能(滨州)信息科技服务有限公司 Deep sea fishing net detection method based on sonar image
CN115564767B (en) * 2022-11-10 2023-04-07 深圳市岑科实业有限公司 Inductance winding quality monitoring method based on machine vision
CN115908429B (en) * 2023-03-08 2023-05-19 山东歆悦药业有限公司 Method and system for detecting grinding precision of foot soaking powder
CN115984271B (en) * 2023-03-20 2023-06-30 山东鑫科来信息技术有限公司 Metal burr identification method based on corner detection
CN116168025B (en) * 2023-04-24 2023-07-07 日照金果粮油有限公司 Oil curtain type fried peanut production system
CN116188462B (en) * 2023-04-24 2023-08-11 深圳市翠绿贵金属材料科技有限公司 Noble metal quality detection method and system based on visual identification
CN116228772B (en) * 2023-05-09 2023-07-21 聊城市检验检测中心 Quick detection method and system for fresh food spoilage area
CN116523901B (en) * 2023-06-20 2023-09-19 东莞市京品精密模具有限公司 Punching die detection method based on computer vision
CN116630312B (en) * 2023-07-21 2023-09-26 山东鑫科来信息技术有限公司 Visual detection method for polishing quality of constant-force floating polishing head
CN116703892B (en) * 2023-08-01 2023-11-14 东莞市京品精密模具有限公司 Image data-based lithium battery cutter abrasion evaluation and early warning method
CN116682107B (en) * 2023-08-03 2023-10-10 山东国宏生物科技有限公司 Soybean visual detection method based on image processing
CN116824516B (en) * 2023-08-30 2023-11-21 中冶路桥建设有限公司 Road construction safety monitoring and management system
CN116883401B (en) * 2023-09-07 2023-11-10 天津市生华厚德科技有限公司 Industrial product production quality detection system
CN116993628B (en) * 2023-09-27 2023-12-08 四川大学华西医院 CT image enhancement system for tumor radio frequency ablation guidance
CN116993731B (en) * 2023-09-27 2023-12-19 山东济矿鲁能煤电股份有限公司阳城煤矿 Shield tunneling machine tool bit defect detection method based on image
CN117474977B (en) * 2023-12-27 2024-03-22 山东旭美尚诺装饰材料有限公司 Quick detection method and system for European pine plate pits based on machine vision

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9014265B1 (en) * 2011-12-29 2015-04-21 Google Inc. Video coding using edge detection and block partitioning for intra prediction
CN104700421A (en) * 2015-03-27 2015-06-10 中国科学院光电技术研究所 Adaptive threshold edge detection algorithm based on canny
CN104809800A (en) * 2015-04-14 2015-07-29 深圳怡化电脑股份有限公司 Preprocessing method for extracting banknote splicing mark, spliced banknote recognition method and device
CN107292897A (en) * 2016-03-31 2017-10-24 展讯通信(天津)有限公司 Image edge extraction method, device and terminal for YUV domains

Also Published As

Publication number Publication date
WO2020051746A1 (en) 2020-03-19
CN111630563B (en) 2022-02-18

Similar Documents

Publication Publication Date Title
CN111630563B (en) Edge detection method of image, image processing apparatus, and computer storage medium
US10068343B2 (en) Method and apparatus for recognizing moving target
CN113109368B (en) Glass crack detection method, device, equipment and medium
US9846823B2 (en) Traffic lane boundary line extraction apparatus and method of extracting traffic lane boundary line
KR101609303B1 (en) Method to calibrate camera and apparatus therefor
JP6654849B2 (en) Method for detecting surface cracks in concrete
US8712184B1 (en) Method and system for filtering noises in an image scanned by charged particles
Heydari et al. An industrial image processing-based approach for estimation of iron ore green pellet size distribution
JP6572411B2 (en) Rail detector
JP2018506046A (en) Method for detecting defects on the tire surface
US10074551B2 (en) Position detection apparatus, position detection method, information processing program, and storage medium
CN111630565B (en) Image processing method, edge extraction method, processing apparatus, and storage medium
JP2016058085A (en) Method and device for detecting shielding of object
JP2009259036A (en) Image processing device, image processing method, image processing program, recording medium, and image processing system
CN106529551B (en) Intelligent recognition counting detection method for round-like objects in packaging industry
CN109102507A (en) Screw thread detection method and device
JP6199799B2 (en) Self-luminous material image processing apparatus and self-luminous material image processing method
US20220126345A1 (en) Stamping line defect quality monitoring systems and methods of monitoring stamping line defects
KR101630264B1 (en) Range-Doppler Clustering Method
KR102133330B1 (en) Apparatus and method for recognizing crack in sturcutre
WO2020175666A1 (en) Color filter inspection device, inspection device, color filter inspection method, and inspection method
US11796481B2 (en) Inspection device and inspection method
EP2302582B1 (en) Correcting defects in an image
JP6114559B2 (en) Automatic unevenness detector for flat panel display
JP5346304B2 (en) Appearance inspection apparatus, appearance inspection system, and appearance inspection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant