CN114299056A - Image defect point recognition method and defect image recognition model training method - Google Patents

Image defect point recognition method and defect image recognition model training method

Info

Publication number
CN114299056A
CN114299056A
Authority
CN
China
Prior art keywords
image, corrected, standard, defective, point
Legal status
Pending
Application number
CN202111676509.4A
Other languages
Chinese (zh)
Inventor
唐尚华
林义闽
廉士国
Current Assignee
China United Network Communications Group Co Ltd
Unicom Big Data Co Ltd
Original Assignee
China United Network Communications Group Co Ltd
Unicom Big Data Co Ltd
Application filed by China United Network Communications Group Co Ltd and Unicom Big Data Co Ltd
Priority to CN202111676509.4A
Publication of CN114299056A

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application provides a method for identifying defect points in an image and a method for training a defective image recognition model, relating to the technical field of industrial quality inspection. First feature information of an image to be recognized and second feature information of a standard image are acquired, where the standard image is an image without defect point features; a corrected image to be recognized and a corrected standard image are determined according to the first feature information and the second feature information, and a difference image of the corrected image to be recognized and the corrected standard image is calculated; the corrected image to be recognized, the corrected standard image and the difference image are input into a defective image recognition model, which outputs the positions of the defect points of the image to be recognized. With this technical scheme, the deviation between the defective image and the standard image caused by the shooting angle can be reduced, a more accurate defective image recognition model can be obtained, and the positions of the defect points can be determined accurately.

Description

Image defect point recognition method and defect image recognition model training method
Technical Field
The application relates to the technical field of industrial quality inspection, in particular to a method for identifying a flaw point of an image and a method for training a flaw image identification model.
Background
Currently, in industrial quality inspection, for example when defective cloth or defective tiles are inspected, images of the cloth or tiles to be inspected are captured and compared with an image of a standard (defect-free) piece of cloth or a standard tile, so as to locate the defect points of the defective cloth or tile.
However, because of differences in how the defective cloth or tile is placed, in the camera shooting angle, and so on, there is often a large shooting-angle deviation between the captured image of the defective cloth or tile and the standard image, so the result output by an image recognition model is inaccurate.
Therefore, how to reduce the deviation between the defective image and the standard image caused by the shooting angle, and how to obtain a more accurate defective image recognition model, are problems to be solved.
Disclosure of Invention
The application provides a method for identifying a flaw point of an image and a method for training a flaw image identification model, which are used for reducing deviation between a flaw image and a standard image caused by a shooting angle, further obtaining a more accurate flaw image identification model and accurately determining the position of the flaw point.
In a first aspect, the present application provides a method for identifying a flaw in an image, comprising:
acquiring first characteristic information of an image to be identified and second characteristic information of a standard image; wherein the standard image is an image without a defective point feature;
determining a corrected image to be recognized and a corrected standard image according to the first characteristic information and the second characteristic information, and calculating a difference image of the corrected image to be recognized and the corrected standard image;
and inputting the corrected image to be recognized, the corrected standard image and the difference image into a defective image recognition model, and outputting the position of a defective point of the image to be recognized.
In one example, determining a corrected image to be recognized and a corrected standard image according to the first feature information and the second feature information, and calculating a difference image of the corrected image to be recognized and the corrected standard image includes:
determining homography matrixes of the image to be identified and the standard image according to the first characteristic information and the second characteristic information;
according to the homography matrix, converting the coordinate information of each pixel point in the standard image into the corresponding coordinate information of that pixel point in the image to be recognized;
and taking an image formed by the converted pixel points in the standard image as a corrected standard image, taking an image formed by the pixel points of the image to be recognized corresponding to the pixel points in the standard image as a corrected image to be recognized, and calculating a difference image of the corrected image to be recognized and the corrected standard image.
In one example, calculating a difference image of the corrected image to be recognized and the corrected standard image includes:
subtracting the pixel value of each pixel point in the corrected standard image from the pixel value of the corresponding pixel point in the corrected image to be recognized, and taking the absolute value of the difference;
and after normalization processing is carried out on the absolute value, taking an image formed by the processed pixel points as the difference image.
In one example, inputting the corrected image to be recognized, the corrected standard image and the difference image into a flaw image recognition model, comprising:
respectively processing the corrected to-be-identified image, the corrected standard image and the difference image to obtain a gray level image of the corrected to-be-identified image, a gray level image of the corrected standard image and a gray level image of the difference image;
and inputting the corrected gray level image of the image to be recognized, the corrected gray level image of the standard image and the gray level image of the difference image into the flaw image recognition model as three-channel images so as to determine the flaw point characteristics of the image to be recognized.
In one example, acquiring first characteristic information of an image to be recognized and second characteristic information of a standard image comprises:
acquiring a first feature point of an image to be identified and a descriptor corresponding to the first feature point, wherein the first feature point and the descriptor corresponding to the first feature point form first feature information;
and acquiring a second feature point of the standard image and a descriptor corresponding to the second feature point, wherein the second feature point and the descriptor corresponding to the second feature point form second feature information.
In a second aspect, the present application provides a method for training a defective image recognition model, where the method includes:
acquiring first characteristic information of a plurality of defective images and second characteristic information of a standard image; wherein the standard image is an image without a defective point feature;
determining each corrected flaw image and each corrected standard image according to the first characteristic information and the second characteristic information, and calculating a difference image of each corrected flaw image and each corrected standard image;
training the defective image recognition model according to each corrected defective image, the corrected standard image, each difference image and the defect point features of each defective image;
the trained defective image recognition model is used for obtaining defective point characteristics of an image to be recognized, and the defective point characteristics are used for determining defective points of the image to be recognized.
In one example, determining each corrected defect image and corrected standard image according to the first feature information and the second feature information, and calculating a difference image of each corrected defect image and corrected standard image comprises:
determining homography matrixes of the defective image and the standard image according to the first characteristic information and the second characteristic information;
according to the homography matrix, converting the coordinate information of each pixel point in the standard image into the corresponding coordinate information of that pixel point in the defective image;
and taking an image formed by the converted pixel points in the standard image as a corrected standard image, taking an image formed by the pixel points of the defective image corresponding to the pixel points in the standard image as a corrected defective image, and calculating a difference image of the corrected defective image and the corrected standard image.
In one example, calculating a difference image of the corrected defect image and the corrected standard image includes:
subtracting the pixel value of each pixel point in the corrected standard image from the pixel value of the corresponding pixel point in the corrected defective image, and taking the absolute value of the difference;
and after normalization processing is carried out on the absolute value, taking an image formed by the processed pixel points as the difference image.
In one example, training the defective image recognition model according to each corrected defective image, the corrected standard image, the difference image and the defect point features of each defective image comprises:
respectively processing the corrected flaw image, the corrected standard image and the difference image to obtain a gray scale image of the corrected flaw image, a gray scale image of the corrected standard image and a gray scale image of the difference image;
and taking the grayscale image of the corrected defective image, the grayscale image of the corrected standard image and the grayscale image of the difference image, combined as a three-channel image, as the input of the defective image recognition model, with the defect point features of the defective image as its output, so as to train the defective image recognition model.
In one example, obtaining first characteristic information of a plurality of defect images and second characteristic information of a standard image comprises:
acquiring a first feature point of a flaw image and a descriptor corresponding to the first feature point, wherein the first feature point and the descriptor corresponding to the first feature point form first feature information;
and acquiring a second feature point of the standard image and a descriptor corresponding to the second feature point, wherein the second feature point and the descriptor corresponding to the second feature point form second feature information.
In a third aspect, the present application provides an apparatus for identifying a flaw in an image, the apparatus comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring first characteristic information of an image to be identified and second characteristic information of a standard image; wherein the standard image is an image without a defective point feature;
the determining unit is used for determining the corrected image to be recognized and the corrected standard image according to the first characteristic information and the second characteristic information, and calculating a difference image of the corrected image to be recognized and the corrected standard image;
and the output unit is used for inputting the corrected image to be recognized, the corrected standard image and the difference image into a defective image recognition model and outputting the position of a defective point of the image to be recognized.
In one example, a determination unit includes:
the homography matrix determining module is used for determining homography matrixes of the image to be identified and the standard image according to the first characteristic information and the second characteristic information;
the conversion module is used for converting, according to the homography matrix, the coordinate information of each pixel point in the standard image into the corresponding coordinate information of that pixel point in the image to be recognized;
and the calculation module is used for taking an image formed by the converted pixel points in the standard image as a corrected standard image, taking an image formed by the pixel points of the image to be recognized corresponding to the pixel points in the standard image as a corrected image to be recognized, and calculating a difference image of the corrected image to be recognized and the corrected standard image.
In one example, a computing module, comprising:
the difference module is used for making a difference between the pixel value of the pixel point in the corrected image to be identified and the pixel value of the pixel point in the corrected standard image to obtain a difference value and then taking an absolute value;
and the normalization processing submodule is used for performing normalization processing on the absolute value and then taking an image formed by the processed pixel points as the difference image.
In one example, an output unit includes:
the processing module is used for respectively processing the corrected to-be-identified image, the corrected standard image and the difference image to obtain a gray level image of the corrected to-be-identified image, a gray level image of the corrected standard image and a gray level image of the difference image;
and the input module is used for inputting the corrected gray level image of the image to be recognized, the corrected gray level image of the standard image and the gray level image of the difference image into the defect image recognition model as three-channel images so as to determine the defect point characteristics of the image to be recognized.
In one example, an acquisition unit includes:
the first acquisition module is used for acquiring a first feature point of an image to be identified and a descriptor corresponding to the first feature point, wherein the first feature point and the descriptor corresponding to the first feature point form first feature information;
and the second acquisition module is used for acquiring a second feature point of the standard image and a descriptor corresponding to the second feature point, wherein the second feature point and the descriptor corresponding to the second feature point form the second feature information.
In a fourth aspect, the present application provides a training apparatus for a defective image recognition model, the apparatus comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring first characteristic information of a plurality of defective images and second characteristic information of a standard image; wherein the standard image is an image without a defective point feature;
a determining unit, configured to determine each corrected defect image and each corrected standard image according to the first feature information and the second feature information, and calculate a difference image between each corrected defect image and each corrected standard image;
the training unit is used for training the defective image recognition model according to each corrected defective image, the corrected standard image, each difference image and the defect point features of each defective image;
the trained defective image recognition model is used for obtaining defective point characteristics of an image to be recognized, and the defective point characteristics are used for determining defective points of the image to be recognized.
In one example, a determination unit includes:
the homography matrix determining module is used for determining homography matrixes of the defective image and the standard image according to the first characteristic information and the second characteristic information;
the conversion module is used for converting, according to the homography matrix, the coordinate information of each pixel point in the standard image into the corresponding coordinate information of that pixel point in the defective image;
and the calculation module is used for taking an image formed by the converted pixel points in the standard image as a corrected standard image, taking an image formed by the pixel points of the defect image corresponding to the pixel points in the standard image as a corrected defect image, and calculating a difference image of the corrected defect image and the corrected standard image.
In one example, a computing module, comprising:
the difference module is used for making a difference between the pixel value of the pixel point in the corrected flaw image and the pixel value of the pixel point in the corrected standard image to obtain a difference value and then taking an absolute value;
and the normalization processing submodule is used for performing normalization processing on the absolute value and then taking an image formed by the processed pixel points as the difference image.
In one example, a training unit, comprising:
the processing module is used for respectively processing the corrected flaw image, the corrected standard image and the difference image to obtain a gray level image of the corrected flaw image, a gray level image of the corrected standard image and a gray level image of the difference image;
and the training module is used for taking the corrected gray image of the defective image, the corrected gray image of the standard image and the gray image of the difference image as three-channel images as input ends of the defective image recognition model, and taking the position of the defective point of the defective image as an output end of the defective image recognition model so as to train the defective image recognition model.
In one example, an acquisition unit includes:
the first acquisition module is used for acquiring a first feature point of a flaw image and a descriptor corresponding to the first feature point, wherein the first feature point and the descriptor corresponding to the first feature point form the first feature information;
and the second acquisition module is used for acquiring a second feature point of the standard image and a descriptor corresponding to the second feature point, wherein the second feature point and the descriptor corresponding to the second feature point form the second feature information.
In a fifth aspect, the present application provides an electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored by the memory to implement the method of the first aspect.
In a sixth aspect, the present application provides an electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored by the memory to implement the method of the second aspect.
In a seventh aspect, the present application provides a computer-readable storage medium having stored thereon computer-executable instructions for implementing the method according to the first aspect when executed by a processor.
In an eighth aspect, the present application provides a computer-readable storage medium having stored thereon computer-executable instructions for implementing the method of the second aspect when executed by a processor.
In a ninth aspect, the present application provides a computer program product comprising a computer program which, when executed by a processor, implements the method according to the first aspect.
In a tenth aspect, the present application provides a computer program product comprising a computer program which, when executed by a processor, implements the method according to the second aspect.
According to the image defect point recognition method and the defective image recognition model training method provided by the application, the first feature information of the image to be recognized and the second feature information of the standard image are acquired, where the standard image is an image without defect point features; the corrected image to be recognized and the corrected standard image are determined according to the first feature information and the second feature information, and a difference image of the corrected image to be recognized and the corrected standard image is calculated; the corrected image to be recognized, the corrected standard image and the difference image are input into the defective image recognition model, which outputs the positions of the defect points of the image to be recognized. With this technical scheme, the deviation between the defective image and the standard image caused by the shooting angle can be reduced, a more accurate defective image recognition model can be obtained, and the positions of the defect points can be determined accurately.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a schematic flowchart of a method for identifying a defective spot of an image according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for identifying a defective spot of an image according to a second embodiment of the present application;
FIG. 3 is a flowchart illustrating a method for training a defective image recognition model according to a third embodiment of the present application;
FIG. 4 is a schematic diagram of a defect image according to the third embodiment of the present application;
FIG. 5 is a flowchart illustrating a method for training a defective image recognition model according to a fourth embodiment of the present application;
fig. 6 is a schematic diagram of a device for identifying a defective spot of an image according to an embodiment of the present application;
fig. 7 is a schematic diagram of a device for identifying a defective spot of an image according to a sixth embodiment of the present application;
FIG. 8 is a schematic diagram of a training apparatus for a defect image recognition model according to a seventh embodiment of the present disclosure;
FIG. 9 is a schematic diagram of a training apparatus for a defect image recognition model according to an eighth embodiment of the present application;
FIG. 10 is a block diagram illustrating an electronic device in accordance with an example embodiment.
With the above figures, there are shown specific embodiments of the present application, which will be described in more detail below. These drawings and written description are not intended to limit the scope of the inventive concepts in any manner, but rather to illustrate the inventive concepts to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 1 is a flowchart illustrating a method for identifying a defective spot of an image according to an embodiment of the present application. The first embodiment comprises the following steps:
s101, acquiring first characteristic information of an image to be identified and second characteristic information of a standard image; wherein the standard image is an image without a defect point feature.
For example, the shooting angles of the image to be recognized and the standard image may differ; when they do, the positions of corresponding pixel points in the two images also differ. The image to be recognized can be an image of cloth to be inspected or of a tile to be inspected; correspondingly, if the image to be recognized shows cloth, the corresponding standard image is a standard image of the cloth, and if it shows a tile, the corresponding standard image is a standard image of the tile. The first feature information is data that characterizes the image to be recognized, for example its color information, texture information or size information; the second feature information is data that characterizes the standard image in the same way.
S102, determining the corrected image to be recognized and the corrected standard image according to the first characteristic information and the second characteristic information, and calculating a difference image of the corrected image to be recognized and the corrected standard image.
In this embodiment, after comparing the pixel points represented by the first characteristic information and the second characteristic information, the sizes of the corrected image to be recognized and the corrected standard image are the same, and each pixel point in the corrected image to be recognized can find a pixel point corresponding to the pixel point in the corrected standard image. After the corrected image to be recognized and the corrected standard image are obtained, difference value calculation is carried out on each pixel point of the corrected image to be recognized and the corrected standard image to obtain a difference value image of the corrected image to be recognized and the corrected standard image.
S103, inputting the corrected image to be recognized, the corrected standard image and the difference image into a defective image recognition model, and outputting the position of a defective point of the image to be recognized.
In this embodiment, the corrected image to be recognized, the corrected standard image, and the difference image are processed as three channels, where the corrected image to be recognized is used as a B channel, the corrected standard image is used as a G channel, and the difference image is used as an R channel, the images of the three channels are input into the defective image recognition model, and the defective point position of the image to be recognized is output.
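The following is a minimal sketch of this channel assignment and of feeding the result to the model. It assumes that corrected_to_recognize, corrected_standard and difference_image are single-channel images of equal size (as produced by the steps described in embodiment two below), that OpenCV is available, and that defect_model with its predict interface is a hypothetical placeholder, since the patent does not name a concrete network:

```python
import numpy as np
import cv2

# B = corrected image to be recognized, G = corrected standard image, R = difference image.
three_channel = cv2.merge([corrected_to_recognize, corrected_standard, difference_image])

# Scale to [0, 1] and add a batch dimension before running the (hypothetical) trained model.
x = three_channel.astype(np.float32) / 255.0
defect_point_positions = defect_model.predict(x[np.newaxis, ...])
```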
According to the image defect point recognition method of this embodiment, the first feature information of the image to be recognized and the second feature information of the standard image are acquired, where the standard image is an image without defect point features; the corrected image to be recognized and the corrected standard image are determined according to the first feature information and the second feature information, and their difference image is calculated; and the corrected image to be recognized, the corrected standard image and the difference image are input into the defective image recognition model, which outputs the positions of the defect points of the image to be recognized. With this technical scheme, the deviation between the defective image and the standard image caused by the shooting angle can be reduced, a more accurate defective image recognition model can be obtained, and the positions of the defect points can be determined accurately.
Fig. 2 is a flowchart illustrating a method for identifying a defective spot of an image according to a second embodiment of the present application. The second embodiment comprises the following steps:
s201, acquiring first characteristic information of an image to be identified and second characteristic information of a standard image; wherein the standard image is an image without a defect point feature.
In one example, acquiring first characteristic information of an image to be recognized and second characteristic information of a standard image comprises:
acquiring a first feature point of an image to be identified and a descriptor corresponding to the first feature point, wherein the first feature point and the descriptor corresponding to the first feature point form first feature information; and acquiring a second feature point of the standard image and a descriptor corresponding to the second feature point, wherein the second feature point and the descriptor corresponding to the second feature point form second feature information.
In this embodiment, the first feature points of the image to be recognized are obtained by extracting ORB feature points and their corresponding descriptors; similarly, the second feature points of the standard image are obtained by extracting ORB feature points and their corresponding descriptors.
The ORB feature points are detected with the FAST algorithm: FAST compares a point with the points surrounding it, and if the point differs from most of its neighbours it is treated as an ORB feature point. A descriptor is a numerical representation of the attributes of a feature point.
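As an illustrative sketch of this step (not part of the original disclosure), the ORB feature points and descriptors could be extracted with OpenCV roughly as follows; the file names and the feature count are placeholder assumptions:

```python
import cv2

# Placeholder file names; in practice, an image of the item to be inspected and a
# defect-free standard image of the same item would be used.
image_to_recognize = cv2.imread("to_recognize.png", cv2.IMREAD_GRAYSCALE)
standard_image = cv2.imread("standard.png", cv2.IMREAD_GRAYSCALE)

# ORB detects FAST keypoints and computes a binary descriptor for each of them.
orb = cv2.ORB_create(nfeatures=2000)
kp_first, des_first = orb.detectAndCompute(image_to_recognize, None)    # first feature information
kp_second, des_second = orb.detectAndCompute(standard_image, None)      # second feature information
```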
S202, determining homography matrixes of the image to be recognized and the standard image according to the first characteristic information and the second characteristic information.
In this embodiment, let P2 denote a feature point in the first feature information and P1 the corresponding feature point in the second feature information, where P2 and P1 are projections of the same physical point. A relationship between the feature points of the two images can therefore be established through these matched points, specifically through the formula P2 = H·P1, from which the homography matrix H between the image to be recognized and the standard image is determined.
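Continuing from the previous sketch, the homography H in P2 = H·P1 could be estimated from matched ORB descriptors with OpenCV's RANSAC-based estimator; the matcher settings and the reprojection threshold are illustrative assumptions, not values given in the patent:

```python
import numpy as np
import cv2

# Match the binary ORB descriptors of the two images with Hamming distance.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_first, des_second), key=lambda m: m.distance)

# P1: matched points in the standard image; P2: corresponding points in the image to be recognized.
p1 = np.float32([kp_second[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
p2 = np.float32([kp_first[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

# Estimate H such that p2 ≈ H · p1, discarding outlier matches with RANSAC.
H, inlier_mask = cv2.findHomography(p1, p2, cv2.RANSAC, ransacReprojThreshold=5.0)
```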
S203, converting the coordinate information of the pixel point in the standard image into the corresponding coordinate information of the pixel point in the image to be identified in the standard image according to the homography matrix.
In this embodiment, after the homography matrix H is determined, the coordinate information of the other pixel points in the standard image is converted, according to H, into the corresponding coordinate information of those pixel points in the image to be recognized.
And S204, taking an image formed by the converted pixel points in the standard image as a corrected standard image, taking an image formed by the pixel points of the image to be recognized corresponding to the pixel points in the standard image as a corrected image to be recognized, and calculating a difference image of the corrected image to be recognized and the corrected standard image.
In this embodiment, after the coordinate information of the pixel points in the standard image has been converted, according to the homography matrix, into coordinate information in the image to be recognized, the converted pixel points of the standard image that have no corresponding pixel point in the image to be recognized are deleted, yielding the corrected standard image; the corresponding pixel points of the image to be recognized form the corrected image to be recognized, and the difference image of the corrected image to be recognized and the corrected standard image is then calculated.
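One possible realization of this coordinate conversion, continuing from the sketches above, is to warp the standard image into the coordinate frame of the image to be recognized and to discard unmapped pixels with a validity mask. This is a sketch under the assumption that OpenCV is used, not the authors' exact procedure:

```python
import numpy as np
import cv2

h, w = image_to_recognize.shape[:2]

# Project every pixel of the standard image into the coordinate frame of the image to be recognized.
corrected_standard = cv2.warpPerspective(standard_image, H, (w, h))

# Pixels of the standard image that have no counterpart in the image to be recognized fall
# outside this mask and are ignored (the "deleted" pixels referred to in the text above).
valid_mask = cv2.warpPerspective(np.full(standard_image.shape[:2], 255, np.uint8), H, (w, h))

corrected_to_recognize = cv2.bitwise_and(image_to_recognize, image_to_recognize, mask=valid_mask)
corrected_standard = cv2.bitwise_and(corrected_standard, corrected_standard, mask=valid_mask)
```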
In one example, calculating a difference image of the corrected image to be recognized and the corrected standard image includes:
subtracting the pixel value of each pixel point in the corrected standard image from the pixel value of the corresponding pixel point in the corrected image to be recognized and taking the absolute value of the difference; after the absolute values are normalized, the image formed by the processed pixel points is taken as the difference image.
In this embodiment, if a pixel point has the value 100 in the corrected image to be recognized and 150 in the corrected standard image, the difference is -50 and its absolute value is 50; the value 50 is then normalized to the range [0, 255] to obtain the processed pixel value, and the difference image is obtained from these processed values.
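For instance, the absolute difference and the normalization to [0, 255] described above could be computed as follows; this is only a sketch assuming the corrected images from the previous step are single-channel uint8 arrays of equal size:

```python
import cv2

# Per-pixel absolute difference between the two corrected images.
diff = cv2.absdiff(corrected_to_recognize, corrected_standard)

# Stretch the absolute differences into the range [0, 255] to obtain the difference image.
difference_image = cv2.normalize(diff, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
```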
And S205, respectively processing the corrected to-be-recognized image, the corrected standard image and the difference image to obtain a gray scale image of the corrected to-be-recognized image, a gray scale image of the corrected standard image and a gray scale image of the difference image.
In one example, a grayscale image is an image in which each pixel has a single sample value. Such images are usually displayed as shades of grey ranging from the darkest black to the brightest white, although in principle the samples could represent shades of any colour, or even different colours at different brightnesses. A grayscale image differs from a black-and-white image: in computer imaging, a black-and-white image has only the two colours black and white, whereas a grayscale image has many levels of intensity between black and white. The corrected image to be recognized is processed to obtain its grayscale image, the corrected standard image is processed to obtain its grayscale image, and the difference image is processed to obtain its grayscale image.
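If the corrected images and the difference image are colour (BGR) images rather than already single-channel as in the earlier sketches, their grayscale versions could be obtained as follows; this is only an illustrative sketch of the conversion step:

```python
import cv2

# Convert each of the three images to a single-channel grayscale image.
gray_to_recognize = cv2.cvtColor(corrected_to_recognize, cv2.COLOR_BGR2GRAY)
gray_standard = cv2.cvtColor(corrected_standard, cv2.COLOR_BGR2GRAY)
gray_difference = cv2.cvtColor(difference_image, cv2.COLOR_BGR2GRAY)
```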
S206, inputting the corrected gray level image of the image to be recognized, the corrected gray level image of the standard image and the gray level image of the difference image into the defect image recognition model as three-channel images so as to determine the position of a defect point of the image to be recognized.
For example, this step may refer to step S103 described above, and is not described again.
According to the image defect point recognition method of this embodiment, the first feature information of the image to be recognized and the second feature information of the standard image are acquired; the coordinate information of the pixel points in the standard image is converted, according to the homography matrix, into the corresponding coordinate information in the image to be recognized; the image formed by the converted pixel points of the standard image is taken as the corrected standard image, and the image formed by the corresponding pixel points of the image to be recognized is taken as the corrected image to be recognized; the difference image of the corrected image to be recognized and the corrected standard image is calculated; the corrected image to be recognized, the corrected standard image and the difference image are each converted to grayscale images; and these grayscale images are input into the defective image recognition model as a three-channel image to determine the positions of the defect points of the image to be recognized. With this technical scheme, the deviation between the defective image and the standard image caused by the shooting angle can be reduced and the positions of the defect points can be determined accurately.
Fig. 3 is a flowchart illustrating a method for training a defective image recognition model according to a third embodiment of the present application. The third embodiment comprises the following steps:
s301, acquiring first characteristic information of a plurality of defective images and second characteristic information of a standard image; wherein the standard image is an image without a defect point feature.
In this embodiment, a defective image is an image whose defect point features are known; for example, the defective image may be the one shown in Fig. 4, in which the defect point features are the regions marked by boxes.
S302, determining each corrected flaw image and each corrected standard image according to the first characteristic information and the second characteristic information, and calculating a difference image of each corrected flaw image and each corrected standard image.
For example, this step may refer to step S102 described above, and is not described again.
S303, training a defective image recognition model according to each corrected defective image, the corrected standard image, the difference image and the defect point features of each defective image; the trained defective image recognition model is used for obtaining the defect point positions of an image to be recognized.
For example, this step may refer to step S103 described above, and is not described again.
According to the defect image recognition model training method, first characteristic information of a plurality of defect images and second characteristic information of a standard image are obtained; the standard image is an image without flaw characteristics, each corrected flaw image and each corrected standard image are determined according to the first characteristic information and the second characteristic information, a difference image of each corrected flaw image and each corrected standard image is calculated, and a flaw image recognition model is trained on each corrected flaw image, each corrected standard image, each difference image and the flaw characteristics of each flaw image. By adopting the technical scheme, a more accurate flaw image recognition model can be obtained and the position of the flaw point can be accurately determined.
Fig. 5 is a flowchart illustrating a method for training a defective image recognition model according to a fourth embodiment of the present application. The fourth example includes the following steps:
s501, acquiring first characteristic information of a plurality of defective images and second characteristic information of a standard image; wherein the standard image is an image without a defect point feature.
In one example, obtaining first characteristic information of a plurality of defect images and second characteristic information of a standard image comprises:
acquiring a first feature point of a flaw image and a descriptor corresponding to the first feature point, wherein the first feature point and the descriptor corresponding to the first feature point form first feature information;
and acquiring a second feature point of the standard image and a descriptor corresponding to the second feature point, wherein the second feature point and the descriptor corresponding to the second feature point form second feature information.
For example, this step may refer to step S201 described above, and is not described again.
S502, determining homography matrixes of the defective image and the standard image according to the first characteristic information and the second characteristic information.
For example, this step may refer to step S202, which is not described again.
S503, according to the homography matrix, converting the coordinate information of the pixel point in the standard image into the corresponding coordinate information of the pixel point in the flaw image in the standard image.
For example, this step may refer to step S203, which is not described again.
S504, taking an image formed by the converted pixel points in the standard image as a corrected standard image, taking an image formed by the pixel points of the defect image corresponding to the pixel points in the standard image as a corrected defect image, and calculating a difference image of the corrected defect image and the corrected standard image.
In one example, calculating a difference image of the corrected defective image and the corrected standard image includes: subtracting the pixel value of each pixel point in the corrected standard image from the pixel value of the corresponding pixel point in the corrected defective image and taking the absolute value of the difference; after the absolute values are normalized, the image formed by the processed pixel points is taken as the difference image.
For example, this step may refer to step S204 described above, and is not described again.
And S505, training a defective image recognition model according to each corrected defective image, the corrected standard image, the difference image and the defect point features of each defective image. The trained defective image recognition model is used for obtaining the defect point positions of an image to be recognized.
In one example, training the defective image recognition model according to each corrected defective image, the corrected standard image, the difference image and the defect point features of each defective image includes: processing the corrected defective image, the corrected standard image and the difference image respectively to obtain the grayscale image of the corrected defective image, the grayscale image of the corrected standard image and the grayscale image of the difference image; and taking these three grayscale images, combined as a three-channel image, as the input of the defective image recognition model, with the defect point features of the defective image as its output, so as to train the defective image recognition model.
For example, this step may refer to step S205 described above, and is not described again.
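As a hedged illustration only, one training sample could be assembled from the grayscale images and the known defect point features (represented here as a binary mask) roughly as follows; training_data, DefectRecognitionNet and its fit method are hypothetical placeholders and are not part of the original disclosure:

```python
import numpy as np
import cv2

def build_sample(gray_defect, gray_standard, gray_diff, defect_mask):
    """Stack the three grayscale images as one 3-channel input and build its label."""
    x = cv2.merge([gray_defect, gray_standard, gray_diff]).astype(np.float32) / 255.0
    y = (defect_mask > 0).astype(np.float32)   # 1 where a defect point is marked, 0 elsewhere
    return x, y

# training_data is assumed to be a list of (gray_defect, gray_standard, gray_diff, mask)
# tuples, one per corrected defective image.
pairs = [build_sample(gd, gs, gdf, m) for gd, gs, gdf, m in training_data]
inputs = np.stack([p[0] for p in pairs])
labels = np.stack([p[1] for p in pairs])

model = DefectRecognitionNet()     # hypothetical defective image recognition model
model.fit(inputs, labels)          # train it to map the 3-channel input to defect point positions
```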
According to the defective image recognition model training method of this embodiment, the first feature information of a plurality of defective images and the second feature information of the standard image are acquired; the homography matrix of each defective image and the standard image is determined according to the first feature information and the second feature information; the coordinate information of the pixel points in the standard image is converted, according to the homography matrix, into the corresponding coordinate information in the defective image; the image formed by the converted pixel points of the standard image is taken as the corrected standard image, and the image formed by the corresponding pixel points of the defective image is taken as the corrected defective image; the difference image of the corrected defective image and the corrected standard image is calculated; and the defective image recognition model is trained according to each corrected defective image, the corrected standard image, the difference image and the defect point features of each defective image. The trained defective image recognition model is used for obtaining the defect point positions of an image to be recognized. With this technical scheme, a more accurate defective image recognition model can be obtained and the positions of the defect points can be determined accurately.
Fig. 6 is a schematic diagram of a device for identifying a defective spot of an image according to an embodiment of the present application. The apparatus 60 of the fifth embodiment, comprising:
an acquiring unit 601, configured to acquire first feature information of an image to be recognized and second feature information of a standard image; wherein the standard image is an image without a defect point feature.
A determining unit 602, configured to determine the corrected image to be recognized and the corrected standard image according to the first feature information and the second feature information, and calculate a difference image between the corrected image to be recognized and the corrected standard image.
The output unit 603 is configured to input the corrected image to be recognized, the corrected standard image, and the difference image into the defect image recognition model, and output a position of a defect point of the image to be recognized.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the above-described apparatus may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
Fig. 7 is a schematic diagram of a device for identifying a defective spot of an image according to a sixth embodiment of the present application. The apparatus 70 of the sixth embodiment, comprising:
an obtaining unit 701, configured to obtain first feature information of an image to be identified and second feature information of a standard image; wherein the standard image is an image without a defect point feature.
A determining unit 702, configured to determine the corrected image to be recognized and the corrected standard image according to the first feature information and the second feature information, and calculate a difference image between the corrected image to be recognized and the corrected standard image.
The output unit 703 is configured to input the corrected image to be recognized, the corrected standard image, and the difference image into the defective image recognition model, and output a defective point position of the image to be recognized.
In one example, the determining unit 702 includes:
a homography matrix determining module 7021, configured to determine homography matrices of the image to be identified and the standard image according to the first feature information and the second feature information;
the conversion module 7022 is configured to convert, according to the homography matrix, the coordinate information of each pixel point in the standard image into the corresponding coordinate information of that pixel point in the image to be recognized;
the calculating module 7023 is configured to use an image formed by the converted pixel points in the standard image as a corrected standard image, use an image formed by the pixel points of the image to be recognized corresponding to the pixel points in the standard image as a corrected image to be recognized, and calculate a difference image between the corrected image to be recognized and the corrected standard image.
In one example, computing module 7023 includes:
a difference module 70231, configured to perform a difference between a pixel value of a pixel point in the corrected image to be identified and a pixel value of the pixel point in the corrected standard image, so as to obtain a difference value, and then take an absolute value;
and the normalization processing submodule 70232 is configured to normalize the absolute value, and then use an image formed by the processed pixel points as a difference image.
In one example, the output unit 703 includes:
a processing module 7031, configured to process the corrected to-be-identified image, the corrected standard image, and the difference image respectively to obtain a grayscale image of the corrected to-be-identified image, a grayscale image of the corrected standard image, and a grayscale image of the difference image;
the input module 7032 is configured to input the corrected grayscale image of the image to be recognized, the corrected grayscale image of the standard image, and the grayscale image of the difference image as three-channel images into the defective image recognition model, so as to determine a defective point position of the image to be recognized.
In one example, the obtaining unit 701 includes:
the first obtaining module 7011 is configured to obtain a first feature point of the image to be identified and a descriptor corresponding to the first feature point, where the first feature point and the descriptor corresponding to the first feature point form first feature information.
The second obtaining module 7012 is configured to obtain a second feature point of the standard image and a descriptor corresponding to the second feature point, where the second feature point and the descriptor corresponding to the second feature point form second feature information.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the above-described apparatus may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
Fig. 8 is a schematic diagram of a training apparatus for a defective image recognition model according to a seventh embodiment of the present application. The apparatus 80 according to the seventh embodiment, comprising:
an obtaining unit 801, configured to obtain first feature information of a plurality of defect images and second feature information of a standard image; wherein the standard image is an image without a defect point feature.
A determining unit 802, configured to determine each corrected defect image and each corrected standard image according to the first feature information and the second feature information, and calculate a difference image between each corrected defect image and each corrected standard image.
A training unit 803, configured to train the defective image recognition model according to each corrected defective image, the corrected standard image, the difference image and the defect point features of each defective image; the trained defective image recognition model is used for obtaining the defect point positions of an image to be recognized.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the above-described apparatus may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
Fig. 9 is a schematic diagram of a training apparatus for a defective image recognition model according to an eighth embodiment of the present application. The apparatus 90 according to the eighth embodiment, comprising:
an obtaining unit 901, configured to obtain first feature information of a plurality of defective images and second feature information of a standard image; wherein the standard image is an image without a defect point feature.
A determining unit 902, configured to determine each corrected defect image and each corrected standard image according to the first feature information and the second feature information, and calculate a difference image between each corrected defect image and each corrected standard image.
A training unit 903, configured to train the defective image recognition model according to each corrected defective image, the corrected standard image, each difference image and the defect point features of each defective image; the trained defective image recognition model is used for obtaining the defect point positions of an image to be recognized.
In one example, determining unit 902 includes:
and the homography matrix determining module 9021 is configured to determine homography matrices of the defective image and the standard image according to the first characteristic information and the second characteristic information.
The converting module 9022 is configured to convert, according to the homography matrix, the coordinate information of each pixel point in the standard image into the corresponding coordinate information of that pixel point in the defective image.
The calculating module 9023 is configured to use an image formed by the converted pixel points in the standard image as a corrected standard image, use an image formed by the pixel points of the defect image corresponding to the pixel points in the standard image as a corrected defect image, and calculate a difference image between the corrected defect image and the corrected standard image.
In one example, the calculation module 9023 includes:
and the difference module 90231 is configured to perform a difference between the pixel value of the pixel point in the corrected defective image and the pixel value of the pixel point in the corrected standard image, so as to obtain a difference value, and then take an absolute value.
And the normalization processing submodule 90232 is configured to perform normalization processing on the absolute value, and then use an image formed by the processed pixel points as a difference image.
In one example, training unit 903, comprises:
the processing module 9031 is configured to process the corrected defective image, the corrected standard image, and the difference image respectively to obtain a grayscale image of the corrected defective image, a grayscale image of the corrected standard image, and a grayscale image of the difference image.
The training module 9032 is configured to use the corrected grayscale image of the defective image, the corrected grayscale image of the standard image, and the grayscale image of the difference image as three-channel images to serve as input ends of a defective image recognition model, and use a defective point position of the defective image as an output end of the defective image recognition model, so as to train the defective image recognition model.
In one example, the obtaining unit 901 includes:
the first obtaining module 9011 is configured to obtain a first feature point of the defective image and a descriptor corresponding to the first feature point, where the first feature point and the descriptor corresponding to the first feature point form first feature information.
The second obtaining module 9012 is configured to obtain a second feature point of the standard image and a descriptor corresponding to the second feature point, where the second feature point and the descriptor corresponding to the second feature point form second feature information.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the above-described apparatus may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
FIG. 10 is a block diagram illustrating an electronic device according to an exemplary embodiment. The electronic device may be, for example, a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, or a personal digital assistant.
The apparatus 1000 may include one or more of the following components: processing component 1002, memory 1004, power component 1006, multimedia component 1008, audio component 1010, input/output (I/O) interface 1012, sensor component 1014, and communications component 1016.
The processing component 1002 generally controls the overall operation of the device 1000, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 1002 may include one or more processors 1020 to execute instructions to perform all or a portion of the steps of the methods described above. Further, processing component 1002 may include one or more modules that facilitate interaction between processing component 1002 and other components. For example, the processing component 1002 may include a multimedia module to facilitate interaction between the multimedia component 1008 and the processing component 1002.
The memory 1004 is configured to store various types of data to support operations at the apparatus 1000. Examples of such data include instructions for any application or method operating on device 1000, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1004 may be implemented by any type or combination of volatile or non-volatile storage devices, such as static random access memory, electrically erasable programmable read only memory, magnetic storage, flash memory, magnetic or optical disks.
The power supply component 1006 provides power to the various components of the device 1000. The power components 1006 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 1000.
The multimedia component 1008 includes a screen that provides an output interface between the device 1000 and a user. In some embodiments, the screen may include a liquid crystal display and a touch panel. If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1008 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 1000 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 1010 is configured to output and/or input audio signals. For example, audio component 1010 includes a microphone configured to receive external audio signals when device 1000 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 1004 or transmitted via the communication component 1016. In some embodiments, audio component 1010 also includes a speaker for outputting audio signals.
I/O interface 1012 provides an interface between processing component 1002 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 1014 includes one or more sensors for providing various aspects of status assessment for the device 1000. For example, sensor assembly 1014 may detect an open/closed state of device 1000, the relative positioning of components, such as a display and keypad of device 1000, the change in position of device 1000 or a component of device 1000, the presence or absence of user contact with device 1000, the orientation or acceleration/deceleration of device 1000, and the change in temperature of device 1000. The sensor assembly 1014 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 1014 may also include a light sensor, such as an image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1014 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1016 is configured to facilitate communications between the apparatus 1000 and other devices in a wired or wireless manner. The device 1000 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1016 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1016 further includes a near field communication module to facilitate short range communication. For example, the modules may be implemented based on radio frequency identification technology, infrared data association technology, ultra wideband technology, bluetooth technology, and other technologies.
In an exemplary embodiment, the apparatus 1000 may be implemented by one or more application specific integrated circuits, digital signal processors, digital signal processing devices, programmable logic devices, field programmable gate arrays, controllers, microcontrollers, microprocessors or other electronic components for performing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 1004 comprising instructions, executable by the processor 1020 of the device 1000 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a random access memory, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer-readable storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the above method of identifying a defective point of an image or the above method of training a defective image recognition model.
The application also discloses a computer program product comprising a computer program which, when executed by a processor, implements the method as described in the embodiments.
Various implementations of the systems and techniques described here above may be realized in digital electronic circuitry, integrated circuitry, field programmable gate arrays, application specific integrated circuits, application specific standard products, systems on a chip, load programmable logic devices, computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present application may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or electronic device.
In the context of this application, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a random access memory, a read-only memory, an erasable programmable read-only memory, an optical fiber, a portable compact disc read-only memory, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a cathode ray tube or liquid crystal display monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data electronic device), or that includes a middleware component (e.g., an application electronic device), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks, wide area networks, and the internet.
The computer system may include a client and an electronic device. The client and the electronic device are generally remote from each other and typically interact through a communication network. The relationship of client and electronic device arises by virtue of computer programs running on the respective computers and having a client-electronic device relationship to each other.

The electronic device may be a cloud electronic device, also called a cloud computing electronic device or a cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service extensibility of a traditional physical host and a VPS (Virtual Private Server) service. The electronic device may also be an electronic device of a distributed system, or an electronic device combined with a blockchain.

It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present application can be achieved; this is not limited herein.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (18)

1. A method of identifying blemishes in an image, said method comprising:
acquiring first characteristic information of an image to be identified and second characteristic information of a standard image; wherein the standard image is an image without a defective point feature;
determining a corrected image to be recognized and a corrected standard image according to the first characteristic information and the second characteristic information, and calculating a difference image of the corrected image to be recognized and the corrected standard image;
and inputting the corrected image to be recognized, the corrected standard image and the difference image into a defective image recognition model, and outputting the position of a defective point of the image to be recognized.
2. The method according to claim 1, wherein determining the corrected image to be recognized and the corrected standard image according to the first feature information and the second feature information, and calculating a difference image of the corrected image to be recognized and the corrected standard image comprises:
determining a homography matrix between the image to be recognized and the standard image according to the first characteristic information and the second characteristic information;
converting, according to the homography matrix, coordinate information of a pixel point in the standard image into corresponding coordinate information of the pixel point in the image to be recognized;
and taking an image formed by the converted pixel points in the standard image as a corrected standard image, taking an image formed by the pixel points of the image to be recognized corresponding to the pixel points in the standard image as a corrected image to be recognized, and calculating a difference image of the corrected image to be recognized and the corrected standard image.
3. The method of claim 2, wherein computing a difference image of the corrected image to be recognized and the corrected standard image comprises:
subtracting the pixel value of a pixel point in the corrected image to be recognized from the pixel value of the pixel point in the corrected standard image to obtain a difference value, and taking an absolute value of the difference value;
and after normalization processing is carried out on the absolute value, taking an image formed by the processed pixel points as the difference image.
4. The method of claim 1, wherein inputting the corrected image to be recognized, the corrected standard image, and the difference image to a defect image recognition model comprises:
respectively processing the corrected to-be-identified image, the corrected standard image and the difference image to obtain a gray level image of the corrected to-be-identified image, a gray level image of the corrected standard image and a gray level image of the difference image;
and inputting the gray level image of the corrected image to be recognized, the gray level image of the corrected standard image and the gray level image of the difference image into the flaw image recognition model as a three-channel image, so as to determine the flaw point characteristics of the image to be recognized.
5. The method according to claim 1, wherein acquiring first feature information of the image to be recognized and second feature information of the standard image comprises:
acquiring a first feature point of an image to be identified and a descriptor corresponding to the first feature point, wherein the first feature point and the descriptor corresponding to the first feature point form first feature information;
and acquiring a second feature point of the standard image and a descriptor corresponding to the second feature point, wherein the second feature point and the descriptor corresponding to the second feature point form second feature information.
6. A method for training a flaw image recognition model, the method comprising:
acquiring first characteristic information of a plurality of defective images and second characteristic information of a standard image; wherein the standard image is an image without a defective point feature;
determining each corrected flaw image and each corrected standard image according to the first characteristic information and the second characteristic information, and calculating a difference image of each corrected flaw image and each corrected standard image;
training the flaw image recognition model according to each corrected flaw image, each corrected standard image, each difference image and the flaw point characteristics of each flaw image;
the trained defective image recognition model is used for obtaining defective point characteristics of an image to be recognized, and the defective point characteristics are used for determining defective points of the image to be recognized.
7. The method of claim 6, wherein determining each corrected defect image and corrected standard image based on the first feature information and the second feature information, and calculating a difference image for each corrected defect image and corrected standard image comprises:
determining a homography matrix between the defective image and the standard image according to the first characteristic information and the second characteristic information;
converting, according to the homography matrix, coordinate information of a pixel point in the standard image into corresponding coordinate information of the pixel point in the defective image;
and taking an image formed by the converted pixel points in the standard image as a corrected standard image, taking an image formed by the pixel points of the defective image corresponding to the pixel points in the standard image as a corrected defective image, and calculating a difference image of the corrected defective image and the corrected standard image.
8. The method of claim 6, wherein computing a difference image of the corrected defect image and the corrected standard image comprises:
subtracting the pixel value of a pixel point in the corrected defective image from the pixel value of the pixel point in the corrected standard image to obtain a difference value, and taking an absolute value of the difference value;
and after normalization processing is carried out on the absolute value, taking an image formed by the processed pixel points as the difference image.
9. The method of claim 6, wherein training the defect image recognition model for each corrected defect image, corrected standard image, difference image and defect point feature of each defect image comprises:
respectively processing the corrected flaw image, the corrected standard image and the difference image to obtain a gray scale image of the corrected flaw image, a gray scale image of the corrected standard image and a gray scale image of the difference image;
and taking the gray image of the corrected defective image, the gray image of the corrected standard image and the gray image of the difference image as a three-channel image serving as the input of the defective image recognition model, and taking the defective point characteristics of the defective image as the output of the defective image recognition model, so as to train the defective image recognition model.
10. The method of claim 6, wherein obtaining first characteristic information of a plurality of defect images and second characteristic information of a standard image comprises:
acquiring a first feature point of a flaw image and a descriptor corresponding to the first feature point, wherein the first feature point and the descriptor corresponding to the first feature point form first feature information;
and acquiring a second feature point of the standard image and a descriptor corresponding to the second feature point, wherein the second feature point and the descriptor corresponding to the second feature point form second feature information.
11. An apparatus for identifying a defective point of an image, the apparatus comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring first characteristic information of an image to be identified and second characteristic information of a standard image; wherein the standard image is an image without a defective point feature;
the determining unit is used for determining the corrected image to be recognized and the corrected standard image according to the first characteristic information and the second characteristic information, and calculating a difference image of the corrected image to be recognized and the corrected standard image;
and the output unit is used for inputting the corrected image to be recognized, the corrected standard image and the difference image into a defective image recognition model and outputting the position of a defective point of the image to be recognized.
12. A flaw image recognition model training apparatus, characterized in that the apparatus comprises:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring first characteristic information of a plurality of defective images and second characteristic information of a standard image; wherein the standard image is an image without a defective point feature;
a determining unit, configured to determine each corrected defect image and each corrected standard image according to the first feature information and the second feature information, and calculate a difference image between each corrected defect image and each corrected standard image;
the training unit is used for training the flaw image recognition model according to each corrected flaw image, each corrected standard image, each difference image and the flaw point characteristics of each flaw image;
the trained defective image recognition model is used for obtaining defective point characteristics of an image to be recognized, and the defective point characteristics are used for determining defective points of the image to be recognized.
13. An electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored by the memory to implement the method of any of claims 1-5.
14. An electronic device, comprising: a processor, and a memory communicatively coupled to the processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored by the memory to implement the method of any of claims 6-10.
15. A computer-readable storage medium having computer-executable instructions stored therein, which when executed by a processor, are configured to implement the method of any one of claims 1-5.
16. A computer-readable storage medium having computer-executable instructions stored therein, which when executed by a processor, are configured to implement the method of any one of claims 6-10.
17. A computer program product, comprising a computer program which, when executed by a processor, implements the method of any one of claims 1-5.
18. A computer program product, comprising a computer program which, when executed by a processor, implements the method of any one of claims 6-10.
CN202111676509.4A 2021-12-31 2021-12-31 Defect point recognition method of image and defect image recognition model training method Pending CN114299056A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111676509.4A CN114299056A (en) 2021-12-31 2021-12-31 Defect point recognition method of image and defect image recognition model training method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111676509.4A CN114299056A (en) 2021-12-31 2021-12-31 Defect point recognition method of image and defect image recognition model training method

Publications (1)

Publication Number Publication Date
CN114299056A true CN114299056A (en) 2022-04-08

Family

ID=80975639

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111676509.4A Pending CN114299056A (en) 2021-12-31 2021-12-31 Defect point recognition method of image and defect image recognition model training method

Country Status (1)

Country Link
CN (1) CN114299056A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114859022A (en) * 2022-07-05 2022-08-05 泉州市颖秀科技发展有限公司 Fabric quality evaluation method, system, electronic device and storage medium
CN114859022B (en) * 2022-07-05 2022-09-02 泉州市颖秀科技发展有限公司 Fabric quality evaluation method, system, electronic device and storage medium
CN117635603A (en) * 2024-01-02 2024-03-01 汉狮光动科技(广东)有限公司 System and method for detecting on-line quality of hollow sunshade product based on target detection

Similar Documents

Publication Publication Date Title
US10452890B2 (en) Fingerprint template input method, device and medium
US10216976B2 (en) Method, device and medium for fingerprint identification
US20160027187A1 (en) Techniques for image segmentation
JP2018500705A (en) Region recognition method and apparatus
CN114299056A (en) Defect point recognition method of image and defect image recognition model training method
CN105427233A (en) Method and device for removing watermark
EP2975574A2 (en) Method, apparatus and terminal for image retargeting
CN112367559B (en) Video display method and device, electronic equipment, server and storage medium
US20180220066A1 (en) Electronic apparatus, operating method of electronic apparatus, and non-transitory computer-readable recording medium
CN110874809A (en) Image processing method and device, electronic equipment and storage medium
CN110876014B (en) Image processing method and device, electronic device and storage medium
US20190166299A1 (en) Image processing apparatus, control method thereof, and non-transitory computer-readable storage medium
US9665925B2 (en) Method and terminal device for retargeting images
CN107239758B (en) Method and device for positioning key points of human face
CN104899611A (en) Method and device for determining card position in image
CN111784772B (en) Attitude estimation model training method and device based on domain randomization
CN113920083A (en) Image-based size measurement method and device, electronic equipment and storage medium
CN109753217B (en) Dynamic keyboard operation method and device, storage medium and electronic equipment
CN113869295A (en) Object detection method and device, electronic equipment and storage medium
CN115641269A (en) Image repairing method and device and readable storage medium
CN108459770B (en) Coordinate correction method and device
CN108174101B (en) Shooting method and device
CN115145415A (en) Touch control method and device, electronic equipment and storage medium
CN110473138B (en) Graphic code conversion method, graphic code conversion device, electronic equipment and storage medium
CN110876015B (en) Method and device for determining image resolution, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination