CN116152248A - Appearance defect detection method and device, storage medium and computer equipment - Google Patents


Info

Publication number
CN116152248A
Authority
CN
China
Prior art keywords
target
array
processed
area
defect detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310424855.6A
Other languages
Chinese (zh)
Other versions
CN116152248B (en)
Inventor
王艺陵
张武杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Casi Vision Technology Luoyang Co Ltd
Casi Vision Technology Beijing Co Ltd
Original Assignee
Casi Vision Technology Luoyang Co Ltd
Casi Vision Technology Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Casi Vision Technology Luoyang Co Ltd, Casi Vision Technology Beijing Co Ltd filed Critical Casi Vision Technology Luoyang Co Ltd
Priority to CN202310424855.6A
Publication of CN116152248A
Application granted
Publication of CN116152248B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10101 Optical tomography; Optical coherence tomography [OCT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20092 Interactive image processing based on input by user
    • G06T 2207/20104 Interactive definition of region of interest [ROI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30168 Image quality inspection
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Abstract

The application relates to the technical field of image data processing, and in particular discloses an appearance defect detection method and device, a storage medium, and computer equipment. The method comprises the following steps: obtaining a target image corresponding to the appearance to be detected, wherein the target image comprises a plurality of areas to be processed; identifying each area to be processed from the target image, and determining a target defect detection area based on a target operation type and the plurality of areas to be processed; marking the target defect detection area on the target image, and performing appearance defect detection on the marked target defect detection area. The method and device can detect the target defect detection area in a single pass, which greatly shortens the detection time and improves detection efficiency. In addition, the marked target defect detection area is displayed in the target image, which is convenient for the user to observe and makes it straightforward to adjust the size and position of the target defect detection area.

Description

Appearance defect detection method and device, storage medium and computer equipment
Technical Field
The present invention relates to the field of image data processing technologies, and in particular, to a method and apparatus for detecting an appearance defect, a storage medium, and a computer device.
Background
Optical inspection is a technology that performs detection using optical principles. It has the advantages of being non-contact, high-precision, and highly sensitive, and is widely applied in the field of industrial appearance defect detection, reducing the number of defective products that reach the market and protecting the interests of consumers.
In the prior art, when optical inspection is applied to industrial appearance defect detection, each detection area is generally determined from an image of the industrial product's appearance, and defect detection is performed on each detection area separately. However, on the one hand, this method is relatively inefficient and prone to repeated detection; on the other hand, it is not conducive to subsequently adjusting the size and position of the detection areas in the appearance image, which affects the defect detection effect.
Disclosure of Invention
In view of the above, the present application provides an appearance defect detection method and apparatus, a storage medium, and a computer device. Each area to be processed is operated on according to a target operation type to obtain a target defect detection area, the target defect detection area is marked in a target image, and subsequent appearance defect detection is performed only on the marked target defect detection area. On one hand, the target defect detection area can be detected in a single pass, avoiding separate detection of multiple detection areas, which greatly shortens the detection time and improves detection efficiency; on the other hand, the marked target defect detection area is displayed in the target image, which is convenient for the user to observe and makes it straightforward to adjust its size and position.
According to one aspect of the present application, there is provided a method for detecting an appearance defect, including:
obtaining a target image corresponding to the appearance to be detected, wherein the target image comprises a plurality of areas to be processed;
each to-be-processed area is respectively identified from the target image, and a target defect detection area is determined based on a target operation type and the to-be-processed areas, wherein the target operation type is one of area merging operation, area intersection operation and area difference operation;
marking the target defect detection area on the target image, and detecting the appearance defect of the marked target defect detection area.
According to another aspect of the present application, there is provided a detection apparatus for an appearance defect, including:
the image acquisition module is used for acquiring a target image corresponding to the appearance to be detected, wherein the target image comprises a plurality of areas to be processed;
the region identification module is used for respectively identifying each region to be processed from the target image, and determining a target defect detection region based on a target operation type and the plurality of regions to be processed, wherein the target operation type is one of region merging operation, region intersection operation and region difference operation;
And the defect detection module is used for marking the target defect detection area on the target image and detecting the appearance defect of the marked target defect detection area.
According to still another aspect of the present application, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described method of detecting an appearance defect.
According to still another aspect of the present application, there is provided a computer device including a storage medium, a processor, and a computer program stored on the storage medium and executable on the processor, the processor implementing the method for detecting an appearance defect described above when executing the program.
By means of the above technical solution, the appearance defect detection method and apparatus, storage medium, and computer device can acquire a target image corresponding to the appearance to be detected, where the target image may include a plurality of areas to be processed. One area to be processed can be identified from the target image at a time. Once all areas have been identified, each area to be processed can be entered into the target image independently, so that the target image includes the position information of every area to be processed. The target operation type can then be determined according to the actual defect detection requirements, and the target defect detection area is finally determined in the target image based on the target operation type and the plurality of areas to be processed. The target defect detection area can then be marked on the target image, and appearance defect detection can be restricted to the marked target defect detection area only. In the embodiments of the application, the target defect detection area is obtained by operating on each area to be processed according to the target operation type, the target defect detection area is marked in the target image, and subsequent appearance defect detection is performed only on the marked area. On one hand, the target defect detection area can be detected in a single pass, avoiding separate detection of multiple detection areas, which greatly shortens the detection time and improves detection efficiency; on the other hand, the marked target defect detection area is displayed in the target image, which is convenient for the user to observe and makes it straightforward to adjust its size and position.
The foregoing description is only an overview of the technical solutions of the present application. To make the technical means of the present application clearer and implementable according to the contents of the specification, and to make the above and other objects, features, and advantages of the present application more comprehensible, a detailed description of the application is given below.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
fig. 1 is a schematic flow chart of a method for detecting an appearance defect according to an embodiment of the present application;
fig. 2 is a schematic flow chart of another method for detecting an appearance defect according to an embodiment of the present application;
fig. 3 is a schematic diagram of a target image of an appearance to be detected according to an embodiment of the present application;
fig. 4 is a schematic flow chart of another method for detecting an appearance defect according to an embodiment of the present application;
fig. 5 shows a schematic structural diagram of an appearance defect detection device according to an embodiment of the present application.
Detailed Description
The present application will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other.
In this embodiment, a method for detecting an appearance defect is provided, as shown in fig. 1, and the method includes:
step 101, obtaining a target image corresponding to the appearance to be detected, wherein the target image comprises a plurality of areas to be processed.
The appearance defect detection method can be applied to fields such as industrial appearance defect detection, and in particular to machine vision. When performing appearance defect detection, a target image corresponding to the appearance to be detected is first acquired. Here, the appearance to be detected refers to the appearance of an object whose surface is to be inspected for defects; capturing the appearance with an image capture device yields the corresponding target image. The target image may include a plurality of areas to be processed, where an area to be processed may be an area subsequently used for defect detection or an area subsequently culled.
Step 102, identifying each to-be-processed region from the target image, and determining a target defect detection region based on a target operation type and the to-be-processed regions, wherein the target operation type is one of region merging operation, region intersection operation and region difference operation.
In this embodiment, the area to be processed may be plural, and one area to be processed may be identified from the target image at a time. When all the areas to be processed are identified, each area to be processed can be independently input into the target image, so that the target image can comprise the position information of each area to be processed. Then, the target operation type can be determined according to the actual defect detection requirement, and the target defect detection area is finally determined in the target image based on the target operation type and a plurality of areas to be processed in the target image. The target operation type may be a region merging operation, a region intersection operation, a region difference operation, or the like.
Step 103, marking the target defect detection area on the target image, and detecting the appearance defect of the marked target defect detection area.
In this embodiment, after the target defect detection area is determined in the target image, the target defect detection area may be marked on the target image, that is, the target defect detection area is displayed in fusion with the target image. Specifically, the target defect detection area may be circled in the target image using a colored line or the like, so that the user can intuitively and clearly see the target defect detection area from the target image. It should be noted that the target defect detection area may be an entire area or may be composed of a plurality of scattered areas, and when the target defect detection area is a plurality of scattered areas, each of the scattered areas may be marked with a colored line or the like. Then, when appearance defect detection is performed, control is performed to detect appearance defects only in the marked target defect detection area.
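The patent does not specify how the coloured outline is computed. As one illustrative sketch (not part of the disclosure), when a region is stored as a set of pixel coordinates, the pixels an outline would pass through can be found by checking 4-neighbours; the function name and toy region below are assumptions:

```python
def boundary_pixels(region):
    # Pixels of the region that have at least one 4-neighbour outside it;
    # a coloured outline would be drawn through exactly these pixels.
    region = set(region)
    edge = set()
    for (x, y) in region:
        for neighbour in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if neighbour not in region:
                edge.add((x, y))
                break
    return edge

# A 4x4 square region: 12 boundary pixels surround a 2x2 interior.
square = {(x, y) for x in range(4) for y in range(4)}
outline = boundary_pixels(square)
```

The same routine applies unchanged when the target defect detection area consists of several scattered sub-areas, since each sub-area's pixels simply contribute their own boundary.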
By applying the technical scheme of the embodiment, firstly, a target image corresponding to the appearance to be detected can be acquired. The target image may include a plurality of areas to be processed. One region to be processed can be identified from the target image at a time. When all the areas to be processed are identified, each area to be processed can be independently input into the target image, so that the target image can comprise the position information of each area to be processed. Then, the target operation type can be determined according to the actual defect detection requirement, and the target defect detection area is finally determined in the target image based on the target operation type and a plurality of areas to be processed in the target image. Then, the target defect detection area can be marked on the target image, and appearance defect detection can be controlled to be carried out on the marked target defect detection area only. According to the embodiment of the application, the target defect detection areas are obtained by carrying out operation on each area to be processed according to the target operation type, the target defect detection areas are marked in the target image, and when appearance defect detection is carried out subsequently, only the marked target defect detection areas are subjected to appearance defect detection, on one hand, the target defect detection areas can be detected once, the separate detection of a plurality of detection areas is avoided, the detection time is greatly shortened, and the detection efficiency is improved; on the other hand, the marked target defect detection area is displayed in the target image, so that the user can observe conveniently, and the size and the position of the target defect detection area can be adjusted directly and conveniently.
Further, as a refinement and extension of the specific implementation manner of the foregoing embodiment, in order to fully describe the specific implementation process of the embodiment, another method for detecting an appearance defect is provided, as shown in fig. 2, where the method includes:
step 201, obtaining a target image corresponding to the appearance to be detected, wherein the target image comprises a plurality of areas to be processed.
Step 202, identifying a region to be processed from the target image each time based on the gray value of each pixel point in the target image.
In this embodiment, pixel points in different regions of the target image may have different gray values, so one area to be processed can be identified from the target image at a time according to pixel gray values. The target image shown in fig. 3 includes two sound holes, an upper hole and a lower hole. Because the gray value of a sound-hole area is smaller than that of the background, the upper hole can be identified from the target image in the first pass and the lower hole in the second pass; both the upper hole and the lower hole are areas to be processed.
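As a minimal pure-Python sketch of this step (the patent leaves the exact extraction method open; a flood fill stands in here for region growing, and the image values, threshold, and function name are illustrative assumptions), dark regions can be identified one at a time by gray value:

```python
def identify_regions(image, threshold):
    # Identify connected dark regions (gray value below threshold),
    # one region per flood fill, scanning the image row by row.
    rows, cols = len(image), len(image[0])
    seen = set()
    regions = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] < threshold and (r, c) not in seen:
                stack, region = [(r, c)], set()
                while stack:
                    y, x = stack.pop()
                    if (y, x) in seen or not (0 <= y < rows and 0 <= x < cols):
                        continue
                    if image[y][x] >= threshold:
                        continue
                    seen.add((y, x))
                    region.add((y, x))
                    stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
                regions.append(region)
    return regions

# 5x5 toy image: two dark "holes" (gray 10) on a bright background (gray 200).
img = [[200] * 5 for _ in range(5)]
img[0][1] = img[0][2] = 10   # upper hole
img[3][2] = img[4][2] = 10   # lower hole
regions = identify_regions(img, threshold=100)
```

Run on the toy image, the first pass yields the upper hole and the second pass the lower hole, mirroring the two sound holes of fig. 3.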
The process of extracting an area to be processed by using pixel gray values is the same as in the prior art; for example, a region growing method may be used. This application does not limit the specific extraction method.
Step 203, representing each region to be processed by an array, wherein the array comprises pixel information of each pixel point in the region to be processed, and the pixel information comprises an abscissa and an ordinate of the pixel point.
In this embodiment, after identifying the respective to-be-processed regions in the target image, each to-be-processed region may be represented by an array, and the size and position of each to-be-processed region may be represented by a set of points. The array may include abscissa information and ordinate information of each pixel point in the region to be processed. From the pixel information of each pixel point in the array, a region to be processed corresponding to the array can be determined in the target image. According to the method and the device, each region is represented through the array, the representation of the region can be more accurate and convenient, and the region can be conveniently and correspondingly operated according to the target operation type.
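As an illustrative sketch of this representation (the concrete coordinate values are assumptions), an area to be processed is simply an array of (x, y) pixel coordinates, from which its size and position can be recovered:

```python
# An area to be processed represented as an "array" of pixel
# coordinates (abscissa x, ordinate y); values are illustrative.
region_array = [(3, 1), (4, 1), (3, 2), (4, 2)]   # a 2x2 square of pixels

# Size and position are fully recoverable from the array alone.
xs = [x for x, _ in region_array]
ys = [y for _, y in region_array]
position = (min(xs), min(ys))                           # top-left corner
size = (max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)   # width, height
```

Because the array carries every pixel of the area, set operations on areas (merging, intersection, difference) reduce to set operations on these coordinate arrays.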
Step 204, inputting an array corresponding to each region to be processed into the target image.
In this embodiment, an array corresponding to each region to be processed may be input into the target image, and each region to be processed may be determined in the target image by the arrays.
Step 205, determining a target processing operation based on the target operation type, and processing the arrays corresponding to the multiple to-be-processed areas according to the target processing operation to obtain a target processed array; and determining the target defect detection area in the target image according to the target processed array.
In this embodiment, each target operation type may correspond to a target processing operation, and after the target operation type is determined, the corresponding target processing operation is also determined. Then, the array corresponding to each to-be-processed area can be processed according to the target processing operation, so that a target processed array can be obtained, and the target processed array can comprise pixel point information of the residual pixel points after the target processing operation. And determining the target defect detection area by the array after target processing.
In this embodiment, optionally, the determining the target defect detection area in the target image according to the target processed array in step 205 includes: when the area to be processed is an area to be detected, determining the target defect detection area based on pixel points in the array after target processing; when the area to be processed is the area to be removed, an original array corresponding to the target image is obtained, the array after the target processing is removed from the original array, a residual array is obtained, and the target defect detection area is determined based on pixel points in the residual array.
In this embodiment, the target processed array may indicate two regions, one being the region to be detected (i.e., when the region to be processed is the region to be detected) and one being the region to be culled (i.e., when the region to be processed is the region to be culled). If the target processed array indicates the region to be detected, the target defect detection region can be determined directly according to the pixel point information of each pixel point in the target processed array; if the array indicates the area to be removed after the target processing, the area to be removed is removed from the target image, and then the target defect detection area can be determined. Specifically, when the area to be removed is removed from the target image, an original array of the target image may be obtained first, the original array may include pixel information of each pixel point in the target image, then, pixel points included in the array after the target processing are removed from each pixel point in the original array, and a remaining array is formed by remaining pixel points and corresponding pixel information. And finally, determining a target defect detection area in the target image according to the residual array. In the application process, whether the area to be processed is the area to be detected or the area to be removed can be determined according to the calculation convenience of the appearance defect detection area and the calculation convenience of the background area of the target image. For example, if the appearance defect detection area is more convenient to calculate than the background area, the area to be processed may be regarded as the area to be detected at this time; if the background area is more convenient to calculate than the appearance defect detection area, the area to be processed can be used as the area to be removed at the moment. 
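The culling branch above can be sketched as follows (a minimal illustration assuming a toy image size and culled area; the function name is not from the patent): the original array holds every pixel of the target image, and the remaining array is the original minus the to-be-culled pixels.

```python
def remaining_array(width, height, culled):
    # Original array: every pixel of the target image.  The remaining
    # array, which determines the target defect detection area, is the
    # original array minus the to-be-culled target processed array.
    original = {(x, y) for x in range(width) for y in range(height)}
    return original - set(culled)

# Toy 3x2 image with a 2-pixel area to be culled (values assumed).
culled = {(0, 0), (1, 0)}
detection_area = remaining_array(3, 2, culled)
```

When the processed array instead indicates an area to be detected, no subtraction is needed and the array's pixels are used directly.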
In the embodiment of the application, the area to be processed can be the area to be detected or the area to be removed, and can be specifically determined according to actual conditions, so that the determination of the target defect detection area is more flexible, and the determination efficiency of the target defect detection area is improved.
Step 206, marking the target defect detection area on the target image, and detecting the appearance defect of the marked target defect detection area.
In this embodiment, the target defect detection area may be marked on the target image, and the appearance defect detection may be controlled to be performed only on the marked target defect detection area.
In the embodiment of the present application, optionally, the "determining a target processing operation based on the target operation type" in step 205 includes: when the target operation type is the region merging operation, determining that the target processing operation is an array merging operation; when the target operation type is the region intersection operation, determining that the target processing operation is an array intersection operation; and when the target operation type is the region difference operation, determining that the target processing operation is an array difference operation.
In this embodiment, the target operation type may include a region merging operation, a region intersection operation, a region difference operation, and the like. The corresponding target processing operations are also different for different target operation types. For example, the target processing operation corresponding to the region merging operation is an array merging operation; the target processing operation corresponding to the region intersection operation is an array intersection operation; the target processing operation corresponding to the region difference operation is an array difference operation. According to the method and the device for processing the region difference, the target operation type is set, operation including region merging, region intersection and region difference is achieved, each region to be processed can be input independently, and operation flexibility is improved.
In this embodiment of the present application, optionally, when the target processing operation is the array merging operation, the "processing the arrays corresponding to the multiple areas to be processed according to the target processing operation to obtain a target processed array" in step 205 includes: judging whether the arrays corresponding to any two areas to be processed contain a repeated first pixel point; when they do, merging the arrays corresponding to the plurality of areas to be processed and eliminating the duplicate first pixel point from the merged array to obtain the target processed array; and when they do not, directly merging the arrays corresponding to the plurality of areas to be processed to obtain the target processed array.
In this embodiment, if the target processing operation is an array union operation, the array corresponding to the region to be processed may be union-operated. When determining the target processed array, firstly, arrays corresponding to any two to-be-processed areas in the plurality of to-be-processed areas can be compared, whether repeated pixel points exist between the two arrays is determined, and the repeated pixel points are called as first pixel points. Then, the arrays corresponding to the areas to be processed can be combined to obtain the combined arrays. If the first pixel point is found to exist through judgment, the first pixel point can be removed from the combined array, so that a target processed array is obtained; if the first pixel point is found to be absent through judgment, the combined array can be directly used as the target processed array.
Assume the areas to be processed correspond to two arrays, array A and array B. The array merging operation on the two arrays can be expressed by the following formula: S = A ∪ B = {x | x ∈ A or x ∈ B}, where S is the merged set of array A and array B, and x represents a pixel point in the merged set. The formula means that all pixel points belonging to array A or array B are put into a new array and duplicate pixel points are removed; the resulting new array is the union of array A and array B. For the array merging operation there are three cases. First, the area to be processed corresponding to array A completely contains the area to be processed corresponding to array B; the merging result can be written as A ∪ B = A. Second, the area to be processed corresponding to array A intersects the area to be processed corresponding to array B; the merging result is the sum of array A and array B minus their intersection, written as A ∪ B = A + B - (A ∩ B). Third, the area to be processed corresponding to array A is disjoint from the area to be processed corresponding to array B; the merging result is the sum of the two arrays, written as A ∪ B = A + B.
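Since each area is an array of pixel coordinates, the merging operation maps directly onto a set union; the sketch below (toy coordinate values assumed) exercises the overlapping case, where duplicate first pixel points are kept only once:

```python
def array_union(a, b):
    # Merge the two pixel arrays; a duplicate "first pixel point"
    # appears only once, matching S = {x | x in A or x in B}.
    return set(a) | set(b)

A = {(0, 0), (0, 1), (1, 0)}
B = {(0, 1), (2, 2)}            # intersects A at (0, 1)
S = array_union(A, B)
```

The containment and disjoint cases follow from the same operator: when B is inside A the result equals A, and when A and B are disjoint the result is simply all pixels of both.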
In this embodiment, optionally, when the target processing operation is the array intersection operation, the "processing the arrays corresponding to the multiple areas to be processed according to the target processing operation to obtain a target processed array" in step 205 includes: judging whether the arrays corresponding to the plurality of areas to be processed contain a commonly repeated second pixel point; when they do, combining the second pixel points to obtain the target processed array; and when they do not, determining that the target processed array is empty.
In this embodiment, if the target processing operation is an array intersection operation, an intersection operation may be performed on the arrays corresponding to the areas to be processed. When determining the target processed array, the arrays corresponding to the plurality of areas to be processed are first compared to determine whether commonly repeated pixel points exist; such commonly repeated pixel points are called second pixel points. Here, a commonly repeated pixel point is a pixel point that exists in every array. If second pixel points are found to exist, they can be combined to obtain the target processed array, i.e., the target processed array includes all the second pixel points; if no second pixel point exists, the target processed array can be directly determined to be empty, i.e., it does not include any pixel point.
Assuming there are two areas to be processed, whose corresponding arrays are array A and array B, respectively, the array intersection operation on the two arrays can be expressed by the following formula: S = A ∩ B = { x | x ∈ A and x ∈ B }, where S is the intersection of array A and array B, and x represents a pixel point in the intersection. The formula indicates that all pixel points belonging to both array A and array B are placed into a new array; the resulting new array is the intersection of array A and array B, and every pixel point in it belongs to both array A and array B. For the array intersection operation, there are three cases. First, the area to be processed corresponding to array A completely contains the area to be processed corresponding to array B; the intersection of array A and array B is array B, written mathematically as A ∩ B = B. Second, the area to be processed corresponding to array A intersects the area to be processed corresponding to array B; the intersection of array A and array B is the overlapping part of the two areas, written mathematically as A ∩ B = A + B − (A ∪ B). Third, the area to be processed corresponding to array A is separate from the area to be processed corresponding to array B; the intersection of array A and array B is empty, written mathematically as A ∩ B = ∅.
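The three intersection cases can likewise be sketched with pixel-coordinate sets (an illustration under assumed sample data, not the patent's implementation):

```python
def array_intersection(a, b):
    """Intersection: pixels belonging to both array A and array B."""
    return set(a) & set(b)

A = {(0, 0), (0, 1), (1, 0), (1, 1)}
# Case 1: A completely contains B  ->  A ∩ B = B
B = {(0, 0), (0, 1)}
assert array_intersection(A, B) == B
# Case 2: partial overlap  ->  the intersection is the overlapping part
C = {(1, 1), (2, 2)}
assert array_intersection(A, C) == {(1, 1)}
# Case 3: separate regions  ->  empty intersection
D = {(9, 9)}
assert array_intersection(A, D) == set()
```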
In this embodiment, optionally, when the target processing operation is the array difference operation, the processing of the arrays corresponding to the multiple areas to be processed according to the target processing operation in step 205 to obtain a target processed array includes: determining a reference area from the plurality of areas to be processed; judging whether the array corresponding to the reference area and the array corresponding to any remaining area contain a repeated third pixel point, wherein the remaining areas are the areas other than the reference area among the plurality of areas to be processed; if yes, removing the third pixel point from the array corresponding to the reference area to obtain the target processed array; and if no, taking the array corresponding to the reference area as the target processed array.
In this embodiment, if the target processing operation is an array difference operation, a difference operation may be performed on the arrays corresponding to the areas to be processed. When determining the target processed array, a reference area may first be determined from the plurality of areas to be processed. After the reference area is determined, the arrays corresponding to the remaining areas may be subtracted from the array corresponding to the reference area. Here, a remaining area is any area other than the reference area among the plurality of areas to be processed. Then, it may be judged whether the array corresponding to the reference area and the array corresponding to each remaining area contain a repeated third pixel point. For example, suppose the array corresponding to the reference area is array L, and the arrays corresponding to the remaining areas are array M and array N. If array L and array M contain the same pixel point 100, pixel point 100 may be called a third pixel point even if array N does not contain it; similarly, if array L and array N contain the same pixel point 200, pixel point 200 may be called a third pixel point even if array M does not contain it. That is, a third pixel point is one shared by the array corresponding to the reference area and the array corresponding to any one remaining area. If the judgment finds that third pixel points exist, they may be removed from the array corresponding to the reference area to obtain the target processed array; if the judgment finds that no third pixel point exists, the array corresponding to the reference area may be directly determined as the target processed array.
Assuming there are two areas to be processed, whose corresponding arrays are array A and array B, respectively, the array difference operation on the two arrays can be expressed by the following formula: S = A − B = { x | x ∈ A and x ∉ B }, where S is the difference of array A and array B, and x represents a pixel point in the difference. The formula indicates that all pixel points that are in array A but also belong to array B are removed; the resulting new array is the difference of array A and array B, and every pixel point in it belongs to array A but not to array B. For the array difference operation, assume the first input area to be processed serves as the reference area (that is, the area to be processed corresponding to array A) and the difference operation is performed against the other areas to be processed; there are three cases. First, the area to be processed corresponding to array A completely contains the area to be processed corresponding to array B; the difference of array A and array B is the part of array A outside array B, written mathematically as A − B. Second, the area to be processed corresponding to array A intersects the area to be processed corresponding to array B; the difference of array A and array B is array A minus the overlapping part of the two, written mathematically as A − B = A − (A ∩ B). Third, the area to be processed corresponding to array A is separate from the area to be processed corresponding to array B; the difference of array A and array B is array A itself, written mathematically as A − B = A.
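The reference-area difference against multiple remaining areas can be sketched as follows (the function name and the sample arrays L, M, N echo the example above but are otherwise assumptions):

```python
def array_difference(reference, *others):
    """Remove from the reference array every pixel shared with any other array."""
    result = set(reference)
    for arr in others:
        result -= set(arr)   # drop the "third pixel points" shared with this array
    return result

L = {(0, 0), (1, 1), (2, 2), (3, 3)}  # reference area's array
M = {(1, 1)}                          # overlaps L at (1, 1)
N = {(3, 3), (9, 9)}                  # overlaps L at (3, 3)
# Third pixel points (1, 1) and (3, 3) are removed from the reference array
assert array_difference(L, M, N) == {(0, 0), (2, 2)}
# A separate region leaves the reference array unchanged: A - B = A
assert array_difference(L, {(7, 7)}) == L
```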
In an embodiment of the present application, optionally, the target defect detection area includes at least one closed area; the "marking the target defect detection area on the target image" described in step 206 includes: determining edge pixel points from each closed area, and sequentially connecting the edge pixel points in each closed area by using a preset line to obtain a marked target defect detection area.
In this embodiment, the target defect detection area may include one closed area or a plurality of closed areas. For example, as shown in fig. 3, when the merging operation is performed on the upper hole and the lower hole, the target defect detection area is the union of the upper hole area and the lower hole area, that is, it includes both the closed area of the upper hole and the closed area of the lower hole. When marking the target defect detection area on the target image, edge pixel points may first be determined from each closed area. Then, the edge pixel points in each closed area are connected in sequence with a preset line, and after connection a closed line corresponding to each closed area is generated. The area enclosed by each closed line is the target defect detection area. Here, the preset line may be a line with a set line type (solid or dashed), thickness, color, and so on.
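One way to determine the edge pixel points of a closed area before connecting them is a 4-neighbourhood boundary test; the sketch below is an assumption about how such a step might look, not the patent's own implementation:

```python
def edge_pixels(region):
    """Pixels of a closed region with at least one 4-neighbour outside the region."""
    region = set(region)
    edges = set()
    for (x, y) in region:
        neighbours = {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}
        if not neighbours <= region:   # some neighbour falls outside the region
            edges.add((x, y))
    return edges

# 3x3 solid square: only the centre pixel is interior, the 8 border pixels are edges
square = {(x, y) for x in range(3) for y in range(3)}
assert edge_pixels(square) == square - {(1, 1)}
```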
In an embodiment of the present application, optionally, after the "marking the target defect detection area on the target image" in step 206, the method further includes: responding to a region adjustment instruction, acquiring adjusted marking data, and updating the target defect detection region based on the adjusted marking data; and detecting the appearance defects of the updated target defect detection area.
In this embodiment, after the target defect detection area is marked on the target image, the user may judge whether the target defect detection area is accurate and meets the requirement, and make a corresponding adjustment if it does not. In that case, in response to a region adjustment instruction, the adjusted marking data may be obtained from the instruction. For example, if the target defect detection area was marked with a red curved line, the adjusted marking data may also be an adjusted red curved line. Then, the previously determined target defect detection area can be updated according to the adjusted marking data to obtain an updated target defect detection area, and when appearance defect detection is performed, defect detection is carried out in the updated area. In this way, the user can adjust the target defect detection area on the target image according to the actual situation and then perform appearance defect detection on the adjusted area, which improves the flexibility of determining the target defect detection area.
In an embodiment of the present application, optionally, after the "performing appearance defect detection on the marked target defect detection area" in step 206, the method further includes: generating label information for each detected appearance defect, and inserting the label information into a region corresponding to the appearance defect in the target image, wherein the label information comprises a defect number and/or a defect type.
In this embodiment, after the appearance defect detection is performed on the target defect detection area, when appearance defects are detected in it, corresponding label information may be generated for each appearance defect; the label information may include a defect number, a defect type, and the like. Within the same target image, defect numbers may be unique, with different appearance defects having different defect numbers. Further, the defect type may be determined according to the defect size, and may specifically be classified into large, medium, and small defects, each type corresponding to a set size range. The label information can then be inserted into the target image in the area corresponding to the appearance defect, so that a user can directly determine the relevant information of each appearance defect from the label information, which is simple and convenient.
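The numbering-plus-size-classification described above might be sketched as follows; the class name, the size thresholds, and the sample defect sizes are all illustrative assumptions, since the source does not fix the size ranges:

```python
from dataclasses import dataclass

@dataclass
class DefectLabel:
    number: int          # unique within one target image
    defect_type: str     # "small" / "medium" / "large"

def classify(area, small_max=10, medium_max=100):
    """Map a defect size to a type; the thresholds here are illustrative only."""
    if area <= small_max:
        return "small"
    return "medium" if area <= medium_max else "large"

def label_defects(defect_areas):
    """Assign each detected defect a unique number and a size-based type."""
    return [DefectLabel(i + 1, classify(a)) for i, a in enumerate(defect_areas)]

labels = label_defects([5, 50, 500])
assert [l.number for l in labels] == [1, 2, 3]
assert [l.defect_type for l in labels] == ["small", "medium", "large"]
```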
Further, as a refinement and extension of the specific implementation manner of the foregoing embodiment, in order to fully describe the specific implementation process of the embodiment, another method for detecting an appearance defect is provided, as shown in fig. 4, where the method includes:
First, the regions involved in the operation, that is, the respective regions to be processed, are input in the target image. Then, the operation type can be determined according to the actual defect detection requirement; the operation type can be a region merging operation, a region intersection operation, or a region difference operation. After the operation type is determined, the region set operation result can be obtained from the input regions participating in the operation and the operation type, and displayed in the target image. Subsequently, when appearance defect detection is performed, defect detection is applied only to the region set operation result displayed in the target image, so the defects in the target image can be detected in one pass, which can greatly improve defect detection efficiency. In addition, different regions to be processed can be input independently, and the corresponding merging, intersection, or difference operation can be performed according to the positional relationship of the regions to be processed in the target image and the defect detection requirement, so the target defect detection region can be determined simply and displayed in the target image, which is convenient for the user's observation and subsequent debugging work.
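The flow above, choosing one of the three operation types and folding it across the input regions, can be sketched as a small dispatcher over pixel-coordinate sets (a minimal illustration; the operation-type strings are assumptions, not from the source):

```python
def region_set_operation(op_type, regions):
    """Apply the chosen set operation across the input regions' pixel arrays."""
    sets = [set(r) for r in regions]
    result = sets[0]
    for s in sets[1:]:
        if op_type == "merge":
            result = result | s
        elif op_type == "intersect":
            result = result & s
        elif op_type == "difference":   # the first region acts as the reference
            result = result - s
        else:
            raise ValueError(f"unknown operation type: {op_type}")
    return result

A, B = {(0, 0), (1, 1)}, {(1, 1), (2, 2)}
assert region_set_operation("merge", [A, B]) == {(0, 0), (1, 1), (2, 2)}
assert region_set_operation("intersect", [A, B]) == {(1, 1)}
assert region_set_operation("difference", [A, B]) == {(0, 0)}
```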
Further, as a specific implementation of the method of fig. 1, an embodiment of the present application provides an appearance defect detection device, as shown in fig. 5, where the device includes:
the image acquisition module is used for acquiring a target image corresponding to the appearance to be detected, wherein the target image comprises a plurality of areas to be processed;
the region identification module is used for respectively identifying each region to be processed from the target image, and determining a target defect detection region based on a target operation type and the plurality of regions to be processed, wherein the target operation type is one of region merging operation, region intersection operation and region difference operation;
and the defect detection module is used for marking the target defect detection area on the target image and detecting the appearance defect of the marked target defect detection area.
Optionally, the area identifying module includes:
the identification unit is used for identifying an area to be processed from the target image each time based on the gray value of each pixel point in the target image;
the array representation unit is used for representing each region to be processed through an array, wherein the array comprises pixel information of each pixel point in the region to be processed, and the pixel information comprises an abscissa and an ordinate of the pixel point.
Optionally, the area identifying module further includes:
the input unit is used for inputting an array corresponding to each region to be processed into the target image;
the array processing unit is used for determining target processing operation based on the target operation type, processing arrays corresponding to the multiple areas to be processed according to the target processing operation to obtain a target processed array, and determining the target defect detection area in the target image according to the target processed array.
Optionally, the array processing unit is configured to:
when the area to be processed is an area to be detected, determining the target defect detection area based on pixel points in the array after target processing; when the area to be processed is the area to be removed, an original array corresponding to the target image is obtained, the array after the target processing is removed from the original array, a residual array is obtained, and the target defect detection area is determined based on pixel points in the residual array.
Optionally, the array processing unit is further configured to:
when the target operation type is the region merging operation, determining the target processing operation as an array union operation; when the target operation type is the region intersection operation, determining the target processing operation as an array intersection operation; and when the target operation type is the region difference operation, determining the target processing operation as an array difference operation.
Optionally, when the target processing operation is the array union operation, the array processing unit is further configured to:
judging whether the arrays corresponding to any two areas to be processed contain a repeated first pixel point; if yes, combining the arrays corresponding to the plurality of areas to be processed, and removing the repeated first pixel point from the combined array to obtain the target processed array; and if no, combining the arrays corresponding to the plurality of areas to be processed to obtain the target processed array.
Optionally, when the target processing operation is the array intersection operation, the array processing unit is further configured to:
judging whether the arrays corresponding to the plurality of areas to be processed contain a commonly repeated second pixel point; if yes, combining the second pixel points to obtain the target processed array; and if no, determining that the target processed array is empty.
Optionally, when the target processing operation is the array difference operation, the array processing unit is further configured to:
determining a reference area from the plurality of areas to be processed; judging whether the array corresponding to the reference area and the array corresponding to any remaining area contain a repeated third pixel point, wherein the remaining areas are the areas other than the reference area among the plurality of areas to be processed; if yes, removing the third pixel point from the array corresponding to the reference area to obtain the target processed array; and if no, taking the array corresponding to the reference area as the target processed array.
Optionally, the target defect detection area comprises at least one closed area; the defect detection module is used for:
determining edge pixel points from each closed area, and sequentially connecting the edge pixel points in each closed area by using a preset line to obtain a marked target defect detection area.
Optionally, the apparatus further comprises:
the area updating module is used for responding to an area adjustment instruction after the target defect detection area is marked on the target image, acquiring adjusted marking data and updating the target defect detection area based on the adjusted marking data;
the defect detection module is also used for detecting appearance defects of the updated target defect detection area.
Optionally, the apparatus further comprises:
the label generation module is used for generating label information for each detected appearance defect after the appearance defect detection is carried out on the marked target defect detection area, and inserting the label information into the area corresponding to the appearance defect in the target image, wherein the label information comprises a defect number and/or a defect type.
It should be noted that, for other corresponding descriptions of each functional unit related to the detection device for appearance defects provided in the embodiments of the present application, reference may be made to corresponding descriptions in the methods of fig. 1 to 4, and no further description is given here.
Based on the above-mentioned method shown in fig. 1 to 4, correspondingly, the embodiment of the present application further provides a storage medium, on which a computer program is stored, which when executed by a processor, implements the above-mentioned method for detecting an appearance defect shown in fig. 1 to 4.
Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk), and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the methods described in the various implementation scenarios of the present application.
Based on the method shown in fig. 1 to fig. 4 and the virtual device embodiment shown in fig. 5, in order to achieve the above object, the embodiment of the present application further provides a computer device, which may specifically be a personal computer, a server, a network device, or the like, where the computer device includes a storage medium and a processor; a storage medium storing a computer program; a processor for executing a computer program to implement the method for detecting an appearance defect as shown in fig. 1 to 4.
Optionally, the computer device may also include a user interface, a network interface, a camera, Radio Frequency (RF) circuitry, sensors, audio circuitry, a WI-FI module, and the like. The user interface may include a Display screen (Display), an input unit such as a Keyboard (Keyboard), etc., and the optional user interface may also include a USB interface, a card reader interface, etc. The network interface may optionally include a standard wired interface, a wireless interface (e.g., a Bluetooth interface or a WI-FI interface), etc.
It will be appreciated by those skilled in the art that the structure of the computer device provided in this embodiment does not constitute a limitation on the computer device, which may include more or fewer components, combine certain components, or use a different arrangement of components.
The storage medium may also include an operating system and a network communication module. The operating system is a program that manages and maintains the hardware and software resources of the computer device, and supports the running of the information processing program and other software and/or programs. The network communication module is used to implement communication among the components within the storage medium, as well as communication with other hardware and software in the physical device.
From the above description of the embodiments, it will be apparent to those skilled in the art that the present application may be implemented by means of software plus necessary general hardware platforms, or may be implemented by hardware. First, a target image corresponding to the appearance to be detected may be acquired. The target image may include a plurality of areas to be processed. One region to be processed can be identified from the target image at a time. When all the areas to be processed are identified, each area to be processed can be independently input into the target image, so that the target image can comprise the position information of each area to be processed. Then, the target operation type can be determined according to the actual defect detection requirement, and the target defect detection area is finally determined in the target image based on the target operation type and a plurality of areas to be processed in the target image. Then, the target defect detection area can be marked on the target image, and appearance defect detection can be controlled to be carried out on the marked target defect detection area only. 
According to the embodiment of the application, the target defect detection areas are obtained by carrying out operation on each area to be processed according to the target operation type, the target defect detection areas are marked in the target image, and when appearance defect detection is carried out subsequently, only the marked target defect detection areas are subjected to appearance defect detection, on one hand, the target defect detection areas can be detected once, the separate detection of a plurality of detection areas is avoided, the detection time is greatly shortened, and the detection efficiency is improved; on the other hand, the marked target defect detection area is displayed in the target image, so that the user can observe conveniently, and the size and the position of the target defect detection area can be adjusted directly and conveniently.
Those skilled in the art will appreciate that the drawings are merely schematic illustrations of one preferred implementation scenario, and that the modules or flows in the drawings are not necessarily required to practice the present application. Those skilled in the art will appreciate that modules in an apparatus in an implementation scenario may be distributed in an apparatus in an implementation scenario according to an implementation scenario description, or that corresponding changes may be located in one or more apparatuses different from the implementation scenario. The modules of the implementation scenario may be combined into one module, or may be further split into a plurality of sub-modules.
The foregoing application serial numbers are merely for description, and do not represent advantages or disadvantages of the implementation scenario. The foregoing disclosure is merely a few specific implementations of the present application, but the present application is not limited thereto and any variations that can be considered by a person skilled in the art shall fall within the protection scope of the present application.

Claims (24)

1. A method for detecting an appearance defect, comprising:
obtaining a target image corresponding to the appearance to be detected, wherein the target image comprises a plurality of areas to be processed;
each to-be-processed area is respectively identified from the target image, and a target defect detection area is determined based on a target operation type and the to-be-processed areas, wherein the target operation type is one of area merging operation, area intersection operation and area difference operation;
marking the target defect detection area on the target image, and detecting the appearance defect of the marked target defect detection area.
2. The method of claim 1, wherein the identifying each of the regions to be processed from the target image comprises:
based on the gray value of each pixel point in the target image, identifying an area to be processed from the target image each time;
and representing each region to be processed by an array, wherein the array comprises pixel information of each pixel point in the region to be processed, and the pixel information comprises an abscissa and an ordinate of the pixel point.
3. The method of claim 2, wherein the determining a target defect detection area based on the target operation type and the plurality of areas to be processed comprises:
inputting an array corresponding to each region to be processed into the target image;
determining a target processing operation based on the target operation type, and processing arrays corresponding to the multiple areas to be processed according to the target processing operation to obtain a target processed array;
and determining the target defect detection area in the target image according to the target processed array.
4. A method according to claim 3, wherein said determining said target defect detection area in said target image from said target processed array comprises:
when the area to be processed is an area to be detected, determining the target defect detection area based on pixel points in the array after target processing;
when the area to be processed is the area to be removed, an original array corresponding to the target image is obtained, the array after the target processing is removed from the original array, a residual array is obtained, and the target defect detection area is determined based on pixel points in the residual array.
5. The method of claim 3 or 4, wherein the determining a target processing operation based on the target operation type comprises:
when the target operation type is the region merging operation, determining the target processing operation as an array merging operation;
when the target operation type is the region intersection operation, determining the target processing operation as an array intersection operation;
and when the target operation type is the region difference operation, determining the target processing operation as an array difference operation.
6. The method of claim 5, wherein when the target processing operation is the array merging operation, the processing the arrays corresponding to the plurality of areas to be processed according to the target processing operation to obtain a target processed array includes:
judging whether the arrays corresponding to any two areas to be processed contain repeated first pixel points or not;
if yes, combining the arrays corresponding to the plurality of areas to be processed, and removing the repeated first pixel point from the combined array to obtain the target processed array;
and if no, combining the arrays corresponding to the plurality of areas to be processed to obtain the target processed array.
7. The method of claim 5, wherein when the target processing operation is the array intersection operation, the processing the arrays corresponding to the plurality of to-be-processed areas according to the target processing operation to obtain a target processed array includes:
judging whether the arrays corresponding to the plurality of areas to be processed contain a commonly repeated second pixel point or not;
if yes, combining the second pixel points to obtain the target processed array;
and if no, determining that the target processed array is empty.
8. The method of claim 5, wherein when the target processing operation is the array difference operation, the processing the arrays corresponding to the plurality of to-be-processed areas according to the target processing operation to obtain a target processed array includes:
determining a reference area from the plurality of areas to be processed;
judging whether the array corresponding to the reference area and the array corresponding to any remaining area contain a repeated third pixel point or not, wherein the remaining areas are the areas other than the reference area among the plurality of areas to be processed;
if yes, removing the third pixel point from the array corresponding to the reference area to obtain the target processed array;
and if no, taking the array corresponding to the reference area as the target processed array.
9. The method of claim 1, wherein the target defect detection area comprises at least one closed area; the marking the target defect detection area on the target image includes:
determining edge pixel points from each closed area, and sequentially connecting the edge pixel points in each closed area by using a preset line to obtain a marked target defect detection area.
10. The method of claim 1, wherein after marking the target defect detection area on the target image, the method further comprises:
responding to a region adjustment instruction, acquiring adjusted marking data, and updating the target defect detection region based on the adjusted marking data;
and detecting the appearance defects of the updated target defect detection area.
11. The method according to claim 1, wherein after the appearance defect detection of the marked target defect detection area, the method further comprises:
generating label information for each detected appearance defect, and inserting the label information into a region corresponding to the appearance defect in the target image, wherein the label information comprises a defect number and/or a defect type.
12. An appearance defect detection device, comprising:
the image acquisition module is used for acquiring a target image corresponding to the appearance to be detected, wherein the target image comprises a plurality of areas to be processed;
the region identification module is used for respectively identifying each region to be processed from the target image, and determining a target defect detection region based on a target operation type and the plurality of regions to be processed, wherein the target operation type is one of region merging operation, region intersection operation and region difference operation;
and the defect detection module is used for marking the target defect detection area on the target image and detecting the appearance defect of the marked target defect detection area.
13. The apparatus of claim 12, wherein the region identification module comprises:
the identification unit is used for identifying one area to be processed at a time from the target image based on the gray value of each pixel point in the target image;
the array representation unit is used for representing each region to be processed through an array, wherein the array comprises pixel information of each pixel point in the region to be processed, and the pixel information comprises an abscissa and an ordinate of the pixel point.
14. The apparatus of claim 13, wherein the region identification module further comprises:
the input unit is used for inputting an array corresponding to each region to be processed into the target image;
the array processing unit is used for determining target processing operation based on the target operation type, processing arrays corresponding to the multiple areas to be processed according to the target processing operation to obtain a target processed array, and determining the target defect detection area in the target image according to the target processed array.
15. The apparatus of claim 14, wherein the array processing unit is configured to:
when the area to be processed is an area to be detected, determining the target defect detection area based on pixel points in the target processed array; and when the area to be processed is an area to be removed, acquiring an original array corresponding to the target image, removing the target processed array from the original array to obtain a residual array, and determining the target defect detection area based on pixel points in the residual array.
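The "area to be removed" branch above is a complement against the full image array. A minimal sketch, with an illustrative `remove_mode` flag standing in for the claimed distinction between areas to detect and areas to remove:

```python
def detection_area(target_processed, original, remove_mode):
    """Sketch: if the regions are areas to detect, the target processed
    array itself is the detection area; if they are areas to remove,
    subtract it from the original image array to get the residual array."""
    if not remove_mode:
        return list(target_processed)
    removed = set(target_processed)
    return [p for p in original if p not in removed]   # "residual array"
```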
16. The apparatus according to claim 14 or 15, wherein the array processing unit is further configured to:
when the target operation type is the region merging operation, determining the target processing operation to be an array merging operation; when the target operation type is the region intersection operation, determining the target processing operation to be an array intersection operation; and when the target operation type is the region difference operation, determining the target processing operation to be an array difference operation.
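The mapping above from region-level operation type to array-level operation amounts to dispatching to the corresponding set operation on pixel coordinates. A sketch, with illustrative operation-type strings not taken from the patent:

```python
def target_processed_array(op_type, arrays):
    """Sketch of the claimed dispatch: each region's array is a list of
    (x, y) pixel tuples; the target processed array is the union,
    intersection, or difference of those arrays."""
    sets = [set(a) for a in arrays]
    if op_type == "merge":         # region merging -> array merging (union)
        out = set().union(*sets)
    elif op_type == "intersect":   # region intersection -> common pixels only
        out = set.intersection(*sets)   # empty when no common second pixel point
    elif op_type == "difference":  # region difference -> reference minus the rest
        out = sets[0].difference(*sets[1:])
    else:
        raise ValueError(f"unknown operation type: {op_type}")
    return sorted(out)
```

Sorting the result is only for a deterministic output order; the claims say nothing about ordering.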
17. The apparatus of claim 16, wherein when the target processing operation is the array merge operation, the array processing unit is further configured to:
judging whether the arrays corresponding to any two areas to be processed contain a repeated first pixel point; when they do, merging the arrays corresponding to the plurality of areas to be processed and eliminating the repeated first pixel point from the merged array to obtain the target processed array; and when they do not, merging the arrays corresponding to the plurality of areas to be processed to obtain the target processed array.
18. The apparatus of claim 16, wherein when the target processing operation is the array intersection operation, the array processing unit is further configured to:
judging whether the arrays corresponding to the plurality of areas to be processed contain a second pixel point that is repeated in all of them; when they do, combining the second pixel points to obtain the target processed array; and when they do not, determining that the target processed array is empty.
19. The apparatus of claim 16, wherein when the target processing operation is the array difference operation, the array processing unit is further configured to:
determining a reference area from the plurality of areas to be processed; judging whether the array corresponding to the reference area and the array corresponding to any remaining area contain a repeated third pixel point, wherein the remaining areas are the areas other than the reference area among the plurality of areas to be processed; when they do, eliminating the third pixel point from the array corresponding to the reference area to obtain the target processed array; and when they do not, taking the array corresponding to the reference area as the target processed array.
20. The apparatus of claim 12, wherein the target defect detection area comprises at least one enclosed area; the defect detection module is used for:
determining edge pixel points from each closed area, and sequentially connecting the edge pixel points in each closed area by using a preset line to obtain a marked target defect detection area.
21. The apparatus of claim 12, wherein the apparatus further comprises:
the area updating module is used for responding to an area adjustment instruction after the target defect detection area is marked on the target image, acquiring adjusted marking data and updating the target defect detection area based on the adjusted marking data;
the defect detection module is also used for detecting appearance defects of the updated target defect detection area.
22. The apparatus of claim 12, wherein the apparatus further comprises:
the label generation module is used for generating label information for each detected appearance defect after the appearance defect detection is carried out on the marked target defect detection area, and inserting the label information into the area corresponding to the appearance defect in the target image, wherein the label information comprises a defect number and/or a defect type.
23. A storage medium having stored thereon a computer program, which when executed by a processor, implements the method of any of claims 1 to 11.
24. A computer device comprising a storage medium, a processor and a computer program stored on the storage medium and executable on the processor, characterized in that the processor implements the method of any one of claims 1 to 11 when executing the computer program.
CN202310424855.6A 2023-04-20 2023-04-20 Appearance defect detection method and device, storage medium and computer equipment Active CN116152248B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310424855.6A CN116152248B (en) 2023-04-20 2023-04-20 Appearance defect detection method and device, storage medium and computer equipment

Publications (2)

Publication Number Publication Date
CN116152248A true CN116152248A (en) 2023-05-23
CN116152248B CN116152248B (en) 2023-06-30

Family

ID=86341054

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310424855.6A Active CN116152248B (en) 2023-04-20 2023-04-20 Appearance defect detection method and device, storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN116152248B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101063662A (en) * 2007-05-15 2007-10-31 广州市万世德包装机械有限公司 Method for detecting empty bottle bottom defect and device for detecting empty bottle bottom defect based on DSP
CN110728659A (en) * 2019-09-17 2020-01-24 深圳新视智科技术有限公司 Defect merging method and device, computer equipment and storage medium
CN113935979A (en) * 2021-10-26 2022-01-14 昆山万洲特种焊接有限公司 Defect identification method, defect identification device, storage medium and electronic equipment
CN114152627A (en) * 2022-02-09 2022-03-08 季华实验室 Chip circuit defect detection method and device, electronic equipment and storage medium
CN114937039A (en) * 2022-07-21 2022-08-23 阿法龙(山东)科技有限公司 Intelligent detection method for steel pipe defects
US20230023585A1 (en) * 2020-11-02 2023-01-26 Tencent Technology (Shenzhen) Company Limited Artificial intelligence-based image processing method and apparatus, computer device and storage medium

Similar Documents

Publication Publication Date Title
CN110349145B (en) Defect detection method, defect detection device, electronic equipment and storage medium
CN110726724A (en) Defect detection method, system and device
US20150248592A1 (en) Method and device for identifying target object in image
US20140050387A1 (en) System and Method for Machine Vision Inspection
CN109255767B (en) Image processing method and device
US11354889B2 (en) Image analysis and processing pipeline with real-time feedback and autocapture capabilities, and visualization and configuration system
CN107229560A (en) A kind of interface display effect testing method, image specimen page acquisition methods and device
CN112581546A (en) Camera calibration method and device, computer equipment and storage medium
CN111179340A (en) Object positioning method and device and computer system
US11544839B2 (en) System, apparatus and method for facilitating inspection of a target object
US9342883B2 (en) Omnibus resolution assessment target for sensors
CN116152248B (en) Appearance defect detection method and device, storage medium and computer equipment
KR20230042706A (en) Neural network analysis of LFA test strips
CN112651315A (en) Information extraction method and device of line graph, computer equipment and storage medium
CN101408521A (en) Method for increasing defect
CN108564571B (en) Image area selection method and terminal equipment
CN117392042A (en) Defect detection method, defect detection apparatus, and storage medium
CN111935480B (en) Detection method for image acquisition device and related device
Prabha et al. Defect detection of industrial products using image segmentation and saliency
CN116758040B (en) Copper-plated plate surface fold defect detection method, device, equipment and storage medium
CN115619783B (en) Method and device for detecting product processing defects, storage medium and terminal
CN110852770A (en) Data processing method and device, computing equipment and display equipment
CN115861428B (en) Pose measurement method and device, terminal equipment and storage medium
TWI770561B (en) Product defect detection method, computer device and storage medium
WO2022201415A1 (en) Testing support device, testing support system, testing support method, and recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant