CN115345895A - Image segmentation method and device for visual detection, computer equipment and medium


Info

Publication number: CN115345895A (application CN202211276217.6A; granted as CN115345895B)
Authority: CN (China)
Prior art keywords: pixel, pixel point, value, image, segmentation
Legal status: Granted; Active (the legal status is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 刘冰 (Liu Bing), 高锦龙 (Gao Jinlong)
Assignee (the listed assignee may be inaccurate): Shenzhen Yibi Technology Co., Ltd.
Application filed by Shenzhen Yibi Technology Co., Ltd.; priority to CN202211276217.6A

Classifications

    All classifications fall under G06 (Physics: Computing; Calculating or Counting), subclasses G06T (image data processing or generation) and G06V (image or video recognition or understanding):

    • G06T7/11: Region-based segmentation
    • G06T7/0004: Industrial image inspection
    • G06T7/12: Edge-based segmentation
    • G06T7/136: Segmentation or edge detection involving thresholding
    • G06T7/181: Segmentation involving edge growing or edge linking
    • G06T7/194: Segmentation involving foreground-background segmentation
    • G06V10/26: Segmentation of patterns in the image field; clustering-based techniques; detection of occlusion
    • G06V10/44: Local feature extraction (edges, contours, corners, strokes; connectivity analysis)
    • G06V10/761: Proximity, similarity or dissimilarity measures
    • G06T2207/30108: Industrial image inspection (indexing scheme)
    • G06T2207/30148: Semiconductor; IC; wafer (indexing scheme)

Abstract

The invention relates to the technical field of artificial intelligence, and in particular to an image segmentation method and apparatus for visual inspection, a computer device, and a medium. The method pre-divides the pixel points of an image to be processed into a foreground category and a background category; constructs a first normal distribution from the mean and variance of the foreground-category pixel values and a second normal distribution from the mean and variance of the background-category pixel values; determines, for each pixel point, a foreground characteristic value from its pixel value and the first normal distribution and a background characteristic value from its pixel value and the second normal distribution; calculates the similarity between the foreground characteristic value and the background characteristic value; and determines the segmentation category according to that similarity. Because the foreground and background characteristic values are determined from the normal distributions, the difference between them is more pronounced and the contrast is higher, which improves the accuracy of image segmentation.

Description

Image segmentation method and device for visual detection, computer equipment and medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an image segmentation method and device for visual inspection, computer equipment and a medium.
Background
At present, with the development of artificial intelligence technology, image segmentation technology has been widely applied to scenes such as intelligent manufacturing and intelligent detection, and visual edge detection is an important application task of image segmentation technology.
Existing image segmentation techniques generally fall into machine learning methods and machine vision methods. Machine learning methods can offer better generalization capability, but they require a large amount of training data, the cost of preparing that data is high, and their segmentation accuracy at image edge positions is low. Machine vision methods need a large number of hand-crafted operators to guarantee segmentation accuracy, but those operators impose strict constraints on how the image to be processed is acquired, leaving the machine vision approach with insufficient generalization capability. How to improve the generalization capability of image segmentation while ensuring its accuracy has therefore become an urgent problem to be solved.
Disclosure of Invention
In view of this, embodiments of the present invention provide an image segmentation method, an image segmentation apparatus, a computer device, and a medium for visual inspection, so as to improve the generalization capability of image segmentation while ensuring its accuracy.
In a first aspect, an embodiment of the present invention provides an image segmentation method for visual inspection, where the image segmentation method includes:
pre-dividing an image to be processed by adopting a preset division threshold value to obtain a pre-division category of each pixel point in the image to be processed, wherein the pre-division category comprises a foreground category and a background category;
constructing a first normal distribution according to the mean and the variance of the pixel values of the pixel points of all the foreground categories, and constructing a second normal distribution according to the mean and the variance of the pixel values of the pixel points of all the background categories;
determining a foreground characteristic value of each pixel point according to the pixel value of each pixel point and the first normal distribution, and determining a background characteristic value of each pixel point according to the pixel value of each pixel point and the second normal distribution;
and for any pixel point, calculating the similarity between the foreground characteristic value and the background characteristic value of the pixel point, and determining the segmentation category of the pixel point according to the comparison result between the similarity and a preset similarity threshold, so as to obtain the segmentation categories of all pixel points in the image to be processed.
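Read end-to-end, the four claimed steps admit the following minimal sketch. It is an illustrative reading, not the patented implementation: the threshold values, the use of the fitted density as the characteristic value, and the ratio-based similarity measure are all assumptions made for this example.

```python
import math

def normal_pdf(x, mean, var):
    # Guard against zero variance when a category is nearly uniform.
    var = max(var, 1e-6)
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def segment(pixels, split_threshold=128, sim_threshold=0.5):
    """Sketch of the four claimed steps on a flat list of gray values."""
    # Step 1: pre-divide by the preset segmentation threshold.
    fg = [p for p in pixels if p <= split_threshold]
    bg = [p for p in pixels if p > split_threshold]

    # Step 2: fit one normal distribution per category from mean and variance.
    def fit(vals):
        m = sum(vals) / len(vals)
        return m, sum((x - m) ** 2 for x in vals) / len(vals)

    fg_mean, fg_var = fit(fg)
    bg_mean, bg_var = fit(bg)

    labels = []
    for p in pixels:
        # Step 3: per-pixel foreground and background characteristic values
        # (here simply the density of each fitted distribution at the pixel).
        f = normal_pdf(p, fg_mean, fg_var)
        b = normal_pdf(p, bg_mean, bg_var)
        # Step 4: similarity of the two characteristic values; the ratio of
        # the smaller to the larger is an assumed measure. A dissimilar pixel
        # is assigned to whichever category scores higher.
        sim = min(f, b) / max(max(f, b), 1e-300)
        labels.append("ambiguous" if sim > sim_threshold
                      else ("foreground" if f > b else "background"))
    return labels
```

With two well-separated clusters of gray values, the fitted densities differ by many orders of magnitude at each pixel, so the similarity is near zero and every pixel is assigned confidently.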
In a second aspect, an embodiment of the present invention provides an image segmentation apparatus for visual inspection, including:
the pre-segmentation module is used for pre-segmenting an image to be processed by adopting a preset segmentation threshold value to obtain a pre-segmentation category of each pixel point in the image to be processed, wherein the pre-segmentation category comprises a foreground category and a background category;
the distribution construction module is used for constructing first normal distribution according to the mean value and the variance of the pixel values of the pixel points of all the foreground categories, and constructing second normal distribution according to the mean value and the variance of the pixel values of the pixel points of all the background categories;
the characteristic value determining module is used for determining a foreground characteristic value of each pixel point according to the pixel value of each pixel point and the first normal distribution, and determining a background characteristic value of each pixel point according to the pixel value of each pixel point and the second normal distribution;
and the image segmentation module is used for, for any pixel point, calculating the similarity between the foreground characteristic value and the background characteristic value of the pixel point, determining the segmentation category of the pixel point according to the comparison result between the similarity and a preset similarity threshold, and obtaining the segmentation categories of all pixel points in the image to be processed.
In a third aspect, an embodiment of the present invention provides a computer device, where the computer device includes a processor, a memory, and a computer program stored in the memory and executable on the processor, and the processor implements the image segmentation method according to the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the image segmentation method according to the first aspect.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
the method comprises the steps of pre-segmenting an image to be processed by adopting a preset segmentation threshold value to obtain a pre-segmentation class of each pixel point in the image to be processed, wherein the pre-segmentation class comprises a foreground class and a background class, a first normal distribution is constructed according to the mean value and the variance of pixel values of pixel points of all the foreground classes, a second normal distribution is constructed according to the mean value and the variance of pixel values of pixel points of all the background classes, the foreground characteristic value of each pixel point is determined according to the pixel value and the first normal distribution of each pixel point, the background characteristic value of each pixel point is determined according to the pixel value and the second normal distribution of each pixel point, the similarity of the foreground characteristic value of each pixel point and the similarity of the background characteristic value of each pixel point is calculated for any pixel point, the segmentation class of each pixel point is determined according to the comparison result of the similarity and the preset similarity threshold value, the segmentation class of all the pixel points in the image to be processed is obtained, the foreground class pixel points and the background class pixel points obtained by pre-segmentation through segmentation, the first normal distribution and the second normal distribution are respectively constructed, the foreground characteristic values and the normal distribution of the pixel points and the pixel points obtained by pre-segmentation of the foreground class and background class pixels are determined according to the comparison result of the foreground characteristic values and the background distribution, so that the foreground characteristic values of the image to be processed are different from the image to be processed, the general 
segmentation of the image to be processed, and the general segmentation of the image to be processed is more obviously improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are obviously only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic diagram of an application environment of an image segmentation method for visual inspection according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating an image segmentation method for visual inspection according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating an image segmentation method for visual inspection according to a second embodiment of the present invention;
fig. 4 is a schematic structural diagram of an image segmentation apparatus for visual inspection according to a third embodiment of the present invention;
fig. 5 is a schematic structural diagram of a computer device according to a fourth embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present invention and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present invention. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Embodiments of the invention can acquire and process related data based on artificial intelligence technology. Artificial intelligence (AI) is the theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, sense the environment, acquire knowledge, and use that knowledge to obtain the best result.
The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like.
It should be understood that, the sequence numbers of the steps in the following embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
An image segmentation method for visual inspection according to an embodiment of the present invention can be applied to the application environment shown in fig. 1, in which a client communicates with a server. The client includes, but is not limited to, a palm top computer, a desktop computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cloud terminal device, a Personal Digital Assistant (PDA), and other computer devices. The server may be an independent server, or may be a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), a big data and artificial intelligence platform, and the like.
Referring to fig. 2, which is a schematic flow chart of an image segmentation method for visual inspection according to an embodiment of the present invention, the image segmentation method may be applied to the client in fig. 1, and a computer device corresponding to the client is connected to a server to obtain an image to be processed from the server. As shown in fig. 2, the image segmentation method may include the steps of:
step S201, pre-dividing the image to be processed by using a preset division threshold to obtain a pre-division category of each pixel point in the image to be processed.
The segmentation threshold is compared with the pixel points of the image to be processed, and the pre-segmentation category of each pixel point is determined from the comparison result. The pre-segmentation categories comprise a foreground category and a background category: the foreground category refers to the category of pixel points corresponding to the object to be segmented in the image to be processed, and the background category refers to the category of pixel points corresponding to everything that is not the object to be segmented.
Specifically, the segmentation threshold takes a value in the range [0, 255], with the specific value determined by the implementer according to actual conditions. According to the comparison between each pixel point and the segmentation threshold, the pixel points of the image to be processed are divided into two categories: one comprising the pixel points whose pixel value is less than or equal to the segmentation threshold, and the other comprising the pixel points whose pixel value is greater than the segmentation threshold.
For example, in this embodiment the image segmentation may be applied to wafer visual inspection. If the background regions other than the wafer in the image acquisition scene are all white, then, since white corresponds to the pixel value 255, the segmentation threshold may be set to 200: pixel points whose pixel value is greater than 200 are assigned to the background category, and pixel points whose pixel value is less than or equal to 200 to the foreground category, thereby pre-segmenting the foreground and background pixel points of the image to be processed.
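The wafer example amounts to one comparison per pixel. A minimal sketch, where the threshold of 200 and the bright-background polarity come from the example above:

```python
def presegment_wafer(pixels, threshold=200):
    """Pixels brighter than the threshold are treated as the (white)
    background; everything else belongs to the wafer foreground."""
    return ["background" if p > threshold else "foreground" for p in pixels]
```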
Optionally, before the pre-segmenting the image to be processed by using the preset segmentation threshold, the method further includes:
processing each pixel point in the image to be processed by adopting a preset sharpness algorithm to obtain an evaluation value of each pixel point;
summing the evaluation values of all the pixel points, and mapping a summation result into an image weight according to a preset mapping table;
and calculating the product of the pixel value mean value of all pixel points in the image to be processed and the image weight, and determining the calculation result as a preset segmentation threshold value.
The sharpness may represent the definition of an image to be processed without a reference image, the sharpness algorithm may be a method for calculating sharpness according to pixel values in the image to be processed, the evaluation value may be a sharpness evaluation value calculated by the sharpness algorithm for each pixel point in the image to be processed, the preset mapping table may include a mapping relationship between a sum of the image evaluation values and image weights, and the image weights may be numerical representations of the image definition.
Specifically, the size of the image to be processed is set to be M × N, that is, the image to be processed is a two-dimensional matrix of M rows and N columns, and for any pixel point in the image to be processed, neighborhood pixel points of the pixel point are obtained.
For any neighborhood pixel point, the difference between its pixel value and the pixel value of the pixel point under consideration is calculated, and the ratio of the absolute value of this difference to the preset distance value is taken as one evaluation sub-value of the pixel point. All neighborhood pixel points are traversed to obtain all evaluation sub-values, and their sum is taken as the evaluation value of the pixel point.
The preset distance value may be determined according to the distance between pixel points; in this embodiment that distance is measured between pixel centers. The preset distance value between adjacent pixel points in the same row or the same column is set to 1; for example, the distance between the pixel point at coordinates (m, n) and the pixel point at (m+1, n) is 1. Correspondingly, by the Pythagorean theorem, the preset distance value between diagonally adjacent pixel points, i.e. those in neither the same row nor the same column, is √2; for example, the distance between the pixel point at (m, n) and the pixel point at (m+1, n+1) is √2.
It should be noted that the neighborhood of a pixel point usually comprises eight pixel points: the four adjacent to it in the same row or the same column, and the four diagonally adjacent to it. When calculating the evaluation sub-value for a neighborhood pixel point, the preset distance value is chosen according to the adjacency type, that is, same-row/same-column adjacency or diagonal adjacency.
When mapping the summation result to an image weight via the preset mapping table, a plurality of reference intervals may be preset, each corresponding to one image weight; after the summation result is obtained, the reference interval it falls into determines its image weight. In this embodiment the image weight takes values in [0, 1], which ensures that the segmentation threshold does not exceed the value range of pixel values.
In this embodiment, the sharpness algorithm is used to calculate the sum of the evaluation values of the image to be processed, and the product of the image weight mapped from that sum and the mean pixel value of the image is used as the segmentation threshold. Compared with setting the segmentation threshold manually, the threshold is set dynamically according to the gray-level information of the image, which gives the pre-segmentation process stronger generalization capability. Moreover, the sharpness algorithm effectively computes an evaluation value that characterizes image definition, which overcomes the influence of working conditions such as image blur on the pre-segmentation process and further improves its accuracy.
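The dynamic-threshold computation above can be sketched as follows. The eight-neighbor evaluation value and the weight-times-mean threshold follow the text; the reference intervals of the mapping table are invented for illustration, since the patent leaves the table to the implementer.

```python
import math

def evaluation_value(img, r, c):
    """Sharpness evaluation of one pixel: sum of |neighbor - pixel| / distance
    over up to eight neighbors (distance 1 in-row/in-column, sqrt(2) diagonal)."""
    rows, cols = len(img), len(img[0])
    total = 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                dist = 1.0 if dr == 0 or dc == 0 else math.sqrt(2.0)
                total += abs(img[nr][nc] - img[r][c]) / dist
    return total

def dynamic_threshold(img):
    """Segmentation threshold = image weight (mapped from the summed
    evaluation values) times the mean pixel value. The reference intervals
    below are assumed, not taken from the patent."""
    rows, cols = len(img), len(img[0])
    total_eval = sum(evaluation_value(img, r, c)
                     for r in range(rows) for c in range(cols))
    # Assumed mapping table: sharper images receive a larger weight in [0, 1].
    if total_eval < 100:
        weight = 0.5
    elif total_eval < 1000:
        weight = 0.8
    else:
        weight = 1.0
    mean = sum(sum(row) for row in img) / (rows * cols)
    return weight * mean
```

A perfectly flat image yields an evaluation value of zero everywhere, while a sharp black/white edge drives the sum into the highest-weight interval, so the threshold tracks both the brightness and the definition of the image.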
Optionally, the pre-dividing the image to be processed by using a preset division threshold to obtain the pre-division category of each pixel point in the image to be processed includes:
multiplying the pixel value of the pixel point by the evaluation value aiming at any pixel point in the image to be processed to obtain a multiplication result;
if the multiplication result is larger than the segmentation threshold, determining the pre-segmentation class of the pixel point as the foreground class;
and if the multiplication result is less than or equal to the segmentation threshold, determining the pre-segmentation class of the pixel point as the background class.
The evaluation values are normalized, namely the value range of the evaluation value is [0,1], so that the multiplication result of the pixel value of the pixel point and the evaluation value cannot exceed the value range of the pixel value.
In this embodiment, the product of the pixel value and the evaluation value of each pixel point is compared with the segmentation threshold to pre-segment the image to be processed. This allows the pixel value to be adjusted according to the local image quality around the pixel point, avoiding mis-segmentation caused by small differences between foreground-category and background-category pixel values under working conditions such as image blur, and thereby improving the accuracy of the pre-segmentation.
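The evaluation-weighted comparison of this optional embodiment can be sketched as below. The text states the evaluation values are normalized to [0, 1] but not how; normalizing by the maximum is an assumed choice here, and the polarity (large product means foreground) follows the text of this embodiment.

```python
def presegment(pixels, eval_values, threshold):
    """Pre-segmentation per the optional embodiment: each pixel value is
    scaled by its normalized evaluation value before the comparison."""
    peak = max(eval_values) or 1.0   # avoid division by zero on a flat image
    labels = []
    for p, e in zip(pixels, eval_values):
        product = p * (e / peak)     # normalized evaluation value in [0, 1]
        labels.append("foreground" if product > threshold else "background")
    return labels
```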
The above step of pre-segmenting the image to be processed with a preset segmentation threshold divides its pixel points into the foreground and background categories. This provides a working-condition isolation effect: the influence of background-category pixel points is isolated when the foreground characteristic value is subsequently determined, and the influence of foreground-category pixel points is isolated when the background characteristic value is subsequently determined. This improves the accuracy of determining the subsequent foreground and background characteristic values and, in turn, the accuracy of the image segmentation.
Step S202, a first normal distribution is constructed according to the mean and the variance of the pixel values of the pixel points of all the foreground categories, and a second normal distribution is constructed according to the mean and the variance of the pixel values of the pixel points of all the background categories.
The first normal distribution can be used for representing the value probability distribution of the pixel values of the foreground category pixel points, and the second normal distribution can be used for representing the value probability distribution of the pixel values of the background category pixel points.
Specifically, the normal distribution can represent the probability that a pixel takes a certain pixel value under the condition that the pre-segmentation category of the pixel is known.
Constructing the first normal distribution from the mean and variance of the pixel values of all foreground-category pixel points, and the second normal distribution from the mean and variance of the pixel values of all background-category pixel points, effectively characterizes the value probability of a pixel value given a known pre-segmentation category. This makes it convenient to determine the characteristic values from these probabilities later, improving the accuracy with which they are determined.
Step S203, determining the foreground characteristic value of each pixel point according to the pixel value and the first normal distribution of each pixel point, and determining the background characteristic value of each pixel point according to the pixel value and the second normal distribution of each pixel point.
The foreground characteristic value can be used for representing foreground information of each pixel point in the image to be processed, and the background characteristic value can be used for representing background information of each pixel point in the image to be processed.
Optionally, determining the foreground characteristic value of each pixel point according to the pixel value of each pixel point and the first normal distribution, and determining the background characteristic value of each pixel point according to the pixel value of each pixel point and the second normal distribution includes:
determining the minimum value of the pixel values of all the pixel points, and determining a target interval according to the minimum value and the pixel value of the pixel point aiming at any pixel point;
determining a first probability value corresponding to the target interval in the first normal distribution, multiplying the first probability value by a preset value, and taking a multiplication result as a foreground characteristic value of the pixel point;
and determining a second probability value corresponding to the target interval in the second normal distribution, multiplying the second probability value by a preset value, taking the multiplication result as a background characteristic value, traversing all the pixel points, and obtaining a foreground characteristic value and a background characteristic value of each pixel point.
The minimum value of the pixel values of all the pixel points can be used as the left boundary of the target interval, the pixel values of the pixel points can be used as the right boundary of the target interval, and the target interval can be determined according to the left boundary and the right boundary.
Specifically, in a normal distribution, the horizontal axis represents the possible pixel values, and the value a pixel value maps to is a probability density. By taking a value interval, integral calculation can be performed over the distribution area corresponding to that interval, and the resulting area is the probability value of the interval.
In this embodiment, the minimum value serves as the left boundary of the target interval, so the probability value corresponding to the target interval is actually a cumulative probability, that is, the total probability of all pixel values less than or equal to the pixel value of the pixel point. Calculating in this cumulative manner allows the pixel value of the pixel point to be adjusted directly according to the cumulative probability, yielding the foreground characteristic value and the background characteristic value.
For example, if the pixel value of a pixel point is 120 and the minimum value among all pixel values is 5, the target interval is [5, 120]. If, in the first normal distribution, the integral area over the interval [5, 120] accounts for 0.5 of the total area of the distribution, the first probability value is 0.5. In this embodiment the preset value is 255, and the product of the first probability value and the preset value needs to be rounded: the multiplication result is 127.5, which rounds to 128, and 128 is then determined as the foreground characteristic value of the pixel point.
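Under the stated assumptions (the minimum pixel value as the left boundary of the target interval, 255 as the preset value), the cumulative-probability computation can be sketched as follows. The closed-form CDF via `math.erf` stands in for the numeric integration described above:

```python
import math

def normal_cdf(x, mean, variance):
    """Cumulative probability P(X <= x) under N(mean, variance)."""
    return 0.5 * (1.0 + math.erf((x - mean) / math.sqrt(2.0 * variance)))

def characteristic_value(pixel, min_pixel, mean, variance, preset=255):
    """Probability of the target interval [min_pixel, pixel] under the given
    normal distribution, scaled by the preset value and rounded."""
    prob = normal_cdf(pixel, mean, variance) - normal_cdf(min_pixel, mean, variance)
    return round(prob * preset)

# Worked example from the text: pixel value 120, minimum 5, probability 0.5.
# A distribution centred at 120 puts half its mass over [5, 120].
print(characteristic_value(120, 5, mean=120.0, variance=100.0))  # 128
```

The variance of 100.0 is a hypothetical parameter; the same function serves for both the first and second normal distributions, yielding the foreground and background characteristic values respectively.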
In an embodiment, when determining the probability value corresponding to the pixel value of the pixel point in the normal distribution, a target interval with a smaller interval range may be further set to obtain the probability value corresponding to the pixel value of the pixel point, for example, the interval length may be set to be 1, and for the pixel value 120, the probability value corresponding to the target interval [120, 121] is used as the feature value.
In this embodiment, determining the foreground characteristic value and the background characteristic value of each pixel point according to the first normal distribution and the second normal distribution achieves a scaling effect: when the variance of a normal distribution is small, that is, when the pixel value distribution of the image to be processed is relatively compact, the differences between different pixel values are amplified, which improves contrast and further improves the accuracy of subsequent image segmentation.
The steps of determining the foreground characteristic value of each pixel point according to its pixel value and the first normal distribution, and the background characteristic value according to its pixel value and the second normal distribution, yield both characteristic values for every pixel point, so that the segmentation category of each pixel point can then be determined from the similarity calculation result of the two, improving the accuracy of image segmentation.
Step S204, calculating the similarity of the foreground characteristic value of the pixel point and the background characteristic value of the pixel point aiming at any pixel point, and determining the segmentation class of the pixel point according to the comparison result of the similarity and a preset similarity threshold value to obtain the segmentation classes of all the pixel points in the image to be processed.
The similarity can be represented by a difference between a foreground characteristic value of the pixel and a background characteristic value of the pixel, and the similarity threshold is a preset value, and in this embodiment, the similarity threshold is set to be 0.7.
Optionally, the segmentation class includes an edge class and a non-edge class;
determining the segmentation class of the corresponding pixel point according to the comparison result of the similarity and the preset similarity threshold comprises:
comparing the similarity with a preset similarity threshold, and determining the corresponding pixel point as an edge category when the similarity is greater than the preset similarity threshold;
and when the similarity is smaller than or equal to a preset similarity threshold, determining that the corresponding pixel point is in a non-edge category.
In the wafer visual inspection scene of this embodiment, any edge other than the edge of the wafer itself may be determined to be a defect, such as a scratch or abrasion, so the image segmentation method of this embodiment can be used for wafer quality inspection.
In this embodiment, based on the prior information of the image attributes, pixel points at edge positions yield a high calculated similarity while pixel points at non-edge positions yield a low one, so the image can be accurately segmented according to the similarity threshold, improving the accuracy of image segmentation.
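The comparison step can be sketched as follows. Treating the similarity as the normalized absolute difference of the two characteristic values is an assumption, made consistent with the difference-and-normalization description given later for the second embodiment:

```python
def classify_pixel(fg_value, bg_value, threshold=0.7, preset=255):
    """Label a pixel point 'edge' or 'non-edge' by comparing the normalized
    difference of its foreground and background characteristic values
    against the preset similarity threshold (0.7 in this embodiment)."""
    similarity = abs(fg_value - bg_value) / preset
    return "edge" if similarity > threshold else "non-edge"

print(classify_pixel(250, 10))   # characteristic values far apart -> edge
print(classify_pixel(128, 120))  # characteristic values close -> non-edge
```

Iterating this function over all pixel points produces the segmentation categories of the whole image to be processed.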
Optionally, after obtaining the segmentation categories of all pixel points in the image to be processed, the method further includes:
and according to the segmentation categories of all the pixel points, carrying out segmentation again on the image to be processed to obtain an image segmentation result.
The segmentation class can include an edge class and a non-edge class, after the segmentation class is obtained, the image to be processed is segmented again according to the segmentation class to obtain a segmentation image, and the segmentation image is a binary image, wherein the pixel value of the pixel point of the edge class is 1, and the pixel value of the pixel point of the non-edge class is 0.
By segmenting the image to be processed again according to the segmentation categories of all pixel points, a visual segmentation image is obtained. The segmentation image has the same size as the image to be processed, so the coordinates of edge-category pixel points in the segmentation image correspond directly to the image to be processed, which facilitates edge positioning and improves positioning efficiency in the visual inspection process.
In the steps of calculating, for each pixel point, the similarity between its foreground characteristic value and its background characteristic value, and determining its segmentation category according to the comparison result between the similarity and a preset similarity threshold, the segmentation categories of all pixel points in the image to be processed are obtained. Because the segmentation category is determined from the similarity between the foreground and background characteristic values, the image attributes of edge pixel points are used to segment the pixel points effectively, improving the accuracy of image segmentation.
In this embodiment, the first normal distribution and the second normal distribution are constructed respectively from the foreground-category and background-category pixel points obtained by pre-segmentation, and the foreground and background characteristic values are determined from these distributions, so that the difference between the two characteristic values of an edge pixel point becomes more pronounced, ensuring the accuracy of image segmentation. Meanwhile, since the normal distributions are determined from the image to be processed itself, the characteristic values are adaptively adjusted as the image to be processed changes, allowing them to suit different images to be processed and improving the generalization capability of image segmentation.
Referring to fig. 3, which is a schematic flow chart of an image segmentation method for visual inspection according to a second embodiment of the present invention, after obtaining a foreground characteristic value and a background characteristic value of each pixel point in the image segmentation method, similarity calculation may be performed directly using the foreground characteristic value and the background characteristic value, or the foreground characteristic value and the background characteristic value may be updated according to neighborhood information of the pixel points, and similarity calculation may be performed using the updated foreground characteristic value and the updated background characteristic value.
The process of calculating the similarity by directly using the foreground characteristic value and the background characteristic value is described in the first embodiment, and is not described herein again.
The process of updating the foreground characteristic value and the background characteristic value according to the neighborhood information of the pixel points, and performing similarity calculation with the updated foreground characteristic value and the updated background characteristic value, includes the following steps:
step S301, aiming at any pixel point, extracting at least two target pixel points including the pixel point from the image to be processed by using a preset template, constructing a first target normal distribution according to the mean value and the variance of foreground characteristic values corresponding to all the target pixel points, and constructing a second target normal distribution according to the mean value and the variance of background characteristic values corresponding to all the target pixel points;
step S302, inputting the foreground characteristic value of each target pixel point into the first target normal distribution, and taking the mapping value output for the corresponding target pixel point as the first weight of that target pixel point;
step S303, inputting the background characteristic value of each target pixel point into a second target normal distribution, and taking the mapping value of the corresponding target pixel point as a second weight of the corresponding target pixel point according to the output;
step S304, multiplying the foreground characteristic value of each target pixel point by the corresponding first weight respectively, and then adding the values, updating the foreground characteristic value according to the addition result, and obtaining the updated foreground characteristic value of the pixel point;
step S305, multiplying the background characteristic value of each target pixel point by the corresponding second weight respectively, and then adding the multiplied background characteristic values, updating the background characteristic value according to the addition result, and obtaining the updated background characteristic value of the pixel point;
correspondingly, for any pixel point, calculating the similarity between the foreground characteristic value of the pixel point and the background characteristic value of the pixel point comprises:
step S306, aiming at any pixel point, calculating the similarity between the foreground characteristic value after the pixel point is updated and the background characteristic value after the pixel point is updated.
The preset template may refer to a template with a preset size; in this embodiment the template size is set to 3×3, that is, the number of target pixel points is 9, and the values of the elements in the template are all 1. A target pixel point may refer to a pixel point used to construct a target normal distribution; the first target normal distribution may refer to a normal distribution constructed based on the foreground characteristic values of the target pixel points, and the second target normal distribution may refer to a normal distribution constructed based on their background characteristic values.
The first weight may refer to a weight of the target pixel point when the foreground characteristic value is updated, and the second weight may refer to a weight of the target pixel point when the background characteristic value is updated.
Specifically, the mean and variance of the foreground characteristic values of all target pixel points are calculated, and a one-dimensional normal distribution is constructed from them; on its horizontal axis the foreground characteristic value is represented, with a value range of [0, 255]. The foreground characteristic value of each target pixel point is then input into the first target normal distribution, and its corresponding value in the first target normal distribution is determined as the first weight; similarly, the corresponding value of each background characteristic value in the second target normal distribution is obtained as the second weight.
The foreground characteristic values of all target pixel points are weighted and summed according to their first weights; the weighted sum is then divided by the number of target pixel points, and the result is used as the updated foreground characteristic value.
The background characteristic values of all target pixel points are weighted and summed according to their second weights; the weighted sum is then divided by the number of target pixel points, and the result is used as the updated background characteristic value.
The difference between the updated foreground characteristic value and the updated background characteristic value is taken and normalized, and the normalized result can be used to represent the similarity.
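Steps S301 to S305 can be sketched for a single pixel point's neighborhood as follows: fit a normal distribution to the characteristic values in the template window, use each value's density under that distribution as its weight, and divide the weighted sum by the number of target pixel points. The zero-variance guard is an added assumption for a uniform neighborhood:

```python
import math

def normal_pdf(x, mean, variance):
    """Probability density of N(mean, variance) at x."""
    return (math.exp(-((x - mean) ** 2) / (2.0 * variance))
            / math.sqrt(2.0 * math.pi * variance))

def update_characteristic(values):
    """Update the centre pixel's characteristic value from the characteristic
    values of all target pixel points in the template window (e.g. a
    flattened 3x3 neighborhood). Each value's density under the fitted
    target normal distribution serves as its weight; the weighted sum is
    then divided by the number of target pixel points."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    if variance == 0.0:  # uniform neighborhood: nothing to reweight
        return float(values[0])
    weights = [normal_pdf(v, mean, variance) for v in values]
    return sum(w * v for w, v in zip(weights, values)) / n
```

The same function applies to both the foreground characteristic values (first target normal distribution) and the background characteristic values (second target normal distribution).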
In this embodiment, the characteristic value of each pixel point is updated according to local features, and the concept of the normal distribution is introduced for the weighted calculation of these local features. Compared with computing over the neighborhood directly, this highlights the degree of correlation within the neighborhood, making the result more reliable, improving the representational capability of the characteristic values, and thus benefiting the accuracy of image segmentation.
Fig. 4 shows a block diagram of an image segmentation apparatus for visual inspection according to a third embodiment of the present invention, where the image segmentation apparatus is applied to a client, and a computer device corresponding to the client is connected to a server to obtain an image to be processed from the server. For convenience of explanation, only portions related to the embodiments of the present invention are shown.
Referring to fig. 4, the image segmentation apparatus includes:
the pre-segmentation module 41 is configured to perform pre-segmentation on the image to be processed by using a preset segmentation threshold to obtain a pre-segmentation class of each pixel point in the image to be processed, where the pre-segmentation class includes a foreground class and a background class;
the distribution building module 42 is configured to build a first normal distribution according to the mean and the variance of the pixel values of the pixel points of all the foreground categories, and build a second normal distribution according to the mean and the variance of the pixel values of the pixel points of all the background categories;
a feature value determining module 43, configured to determine a foreground feature value of each pixel according to the pixel value of each pixel and the first normal distribution, and determine a background feature value of each pixel according to the pixel value of each pixel and the second normal distribution;
the image segmentation module 44 is configured to calculate, for any pixel point, similarity between a foreground characteristic value of the pixel point and a background characteristic value of the pixel point, and determine a segmentation class of the pixel point according to a comparison result between the similarity and a preset similarity threshold, to obtain segmentation classes of all pixel points in the image to be processed.
Optionally, the image segmentation apparatus further includes:
the target distribution construction module is used for extracting at least two target pixel points including the pixel points from the image to be processed by using a preset template aiming at any pixel point, constructing a first target normal distribution according to the mean value and the variance of the foreground characteristic values corresponding to all the target pixel points, and constructing a second target normal distribution according to the mean value and the variance of the background characteristic values corresponding to all the target pixel points;
the first weight determining module is used for inputting the foreground characteristic value of each target pixel point into first target normal distribution and outputting a mapping value of the corresponding target pixel point as a first weight of the corresponding target pixel point;
the second weight determining module is used for inputting the background characteristic value of each target pixel point into second target normal distribution and outputting a mapping value corresponding to the target pixel point as a second weight corresponding to the target pixel point;
the foreground updating module is used for multiplying the foreground characteristic value of each target pixel point by the corresponding first weight respectively and then adding the values together, updating the foreground characteristic value according to the adding result, and obtaining the updated foreground characteristic value of the pixel point;
the background updating module is used for multiplying the background characteristic value of each target pixel point by the corresponding second weight and then adding the multiplied background characteristic values, updating the background characteristic value according to the addition result and obtaining the updated background characteristic value of the pixel point;
accordingly, the image segmentation module 44 includes:
and the update similarity calculation unit is used for calculating the similarity between the updated foreground characteristic value of the pixel point and the updated background characteristic value of the pixel point aiming at any pixel point.
Optionally, the characteristic value determining module 43 includes:
the interval determining unit is used for determining the minimum value in the pixel values of all the pixel points, and determining a target interval according to the minimum value and the pixel values of the pixel points aiming at any pixel point;
the foreground calculation unit is used for determining a first probability value corresponding to the target interval in the first normal distribution, multiplying the first probability value by a preset value, and taking a multiplication result as a foreground characteristic value of the pixel point;
and the background calculation unit is used for determining a second probability value corresponding to the target interval in the second normal distribution, multiplying the second probability value by a preset value, taking the multiplication result as a background characteristic value, traversing all the pixel points and obtaining a foreground characteristic value and a background characteristic value of each pixel point.
Optionally, the segmentation class includes an edge class and a non-edge class;
the image segmentation module 44 includes:
the edge determining unit is used for comparing the similarity with a preset similarity threshold, and when the similarity is greater than the preset similarity threshold, determining the corresponding pixel point as an edge category;
and the non-edge determining unit is used for determining the corresponding pixel point as a non-edge category when the similarity is less than or equal to a preset similarity threshold.
Optionally, the image segmentation apparatus further includes:
the pixel evaluation module is used for processing each pixel point in the image to be processed by adopting a preset sharpness algorithm to obtain an evaluation value of each pixel point;
the weight mapping module is used for summing the evaluation values of all the pixel points and mapping the summation result into image weight according to a preset mapping table;
and the threshold value determining module is used for calculating the product of the pixel value mean value of all pixel points in the image to be processed and the image weight and determining the calculation result as a preset segmentation threshold value.
Optionally, the pre-segmentation module 41 includes:
the pixel calculation unit is used for multiplying the pixel value of the pixel point by the evaluation value aiming at any pixel point in the image to be processed to obtain a multiplication result;
the foreground pre-segmentation unit is used for determining the pre-segmentation class of the pixel point as the foreground class if the multiplication result is greater than the segmentation threshold;
and the background pre-segmentation unit is used for determining the pre-segmentation class of the pixel point as the background class if the multiplication result is less than or equal to the segmentation threshold.
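The threshold-determination and pre-segmentation modules above can be sketched together. The sharpness algorithm and the mapping table are not specified in the text, so the per-pixel evaluation values and the image weight are taken here as precomputed inputs:

```python
def presegment(pixels, evaluations, image_weight):
    """Assign each pixel point a pre-segmentation class. The segmentation
    threshold is the mean pixel value of the image scaled by the image
    weight; a pixel point whose pixel value multiplied by its evaluation
    value exceeds the threshold is assigned to the foreground class,
    otherwise to the background class."""
    threshold = sum(pixels) / len(pixels) * image_weight
    return ["foreground" if p * e > threshold else "background"
            for p, e in zip(pixels, evaluations)]

print(presegment([100, 10], [1.0, 1.0], image_weight=1.0))  # ['foreground', 'background']
```

With unit evaluations and weight this reduces to thresholding at the mean pixel value; in the embodiment the evaluation values and image weight modulate that baseline.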
Optionally, the image segmentation apparatus further includes:
and the segmentation execution module is used for segmenting the image to be processed again according to the segmentation categories of all the pixel points to obtain an image segmentation result.
It should be noted that, because the above-mentioned modules and units are based on the same concept, and their specific functions and technical effects are brought about by the method embodiment of the present invention, reference may be made to the method embodiment part specifically, and details are not described here again.
Fig. 5 is a schematic structural diagram of a computer device according to a fourth embodiment of the present invention. As shown in fig. 5, the computer apparatus of this embodiment includes: at least one processor (only one shown in fig. 5), a memory, and a computer program stored in the memory and executable on the at least one processor, the processor when executing the computer program implementing the steps in any of the various embodiments of the image segmentation method for visual inspection described above.
The computer device may include, but is not limited to, a processor, a memory. It will be appreciated by those skilled in the art that fig. 5 is merely an example of a computer device and is not intended to be limiting, and that a computer device may include more or fewer components than those shown, or some components may be combined, or different components may be included, such as a network interface, a display screen, and input devices, etc.
The Processor may be a CPU, or other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory includes readable storage media, internal memory, etc., wherein the internal memory may be the internal memory of the computer device, and the internal memory provides an environment for the operating system and the execution of the computer-readable instructions in the readable storage media. The readable storage medium may be a hard disk of the computer device, and in other embodiments may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the computer device. Further, the memory may also include both internal and external storage units of the computer device. The memory is used for storing an operating system, application programs, a BootLoader (BootLoader), data, and other programs, such as program codes of a computer program, and the like. The memory may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated. In practical applications, the above-mentioned functions may be distributed among different functional units and modules as needed, that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above.
Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for convenience of distinguishing them from each other and are not used to limit the protection scope of the present invention. For the specific working processes of the units and modules in the above-mentioned apparatus, reference may be made to the corresponding processes in the foregoing method embodiments, which are not described herein again.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the methods of the above embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form.
The computer-readable medium may include at least: any entity or device capable of carrying computer program code, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash disk, a removable hard disk, or a magnetic or optical disk. In some jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals or telecommunications signals.
The present invention can also be implemented by a computer program product, which when executed on a computer device causes the computer device to implement all or part of the processes in the method of the above embodiments.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus/computer device and method may be implemented in other ways. For example, the above-described apparatus/computer device embodiments are merely illustrative, and for example, a module or a unit may be divided into only one logical function, and may be implemented in other ways, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. An image segmentation method for visual inspection, the image segmentation method comprising:
pre-dividing an image to be processed by adopting a preset division threshold value to obtain a pre-division category of each pixel point in the image to be processed, wherein the pre-division category comprises a foreground category and a background category;
constructing a first normal distribution according to the mean and variance of the pixel values of the pixel points of all the foreground categories, and constructing a second normal distribution according to the mean and variance of the pixel values of the pixel points of all the background categories;
determining a foreground characteristic value of each pixel point according to the pixel value of each pixel point and the first normal distribution, and determining a background characteristic value of each pixel point according to the pixel value of each pixel point and the second normal distribution;
and, for any pixel point, calculating the similarity between the foreground characteristic value of the pixel point and the background characteristic value of the pixel point, and determining the segmentation class of the pixel point according to a comparison result between the similarity and a preset similarity threshold, so as to obtain the segmentation classes of all the pixel points in the image to be processed.
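The overall pipeline of claim 1 can be sketched in Python as follows. This is an illustrative reading only, not the patent's implementation: the claim leaves the feature-value mapping and similarity measure open (claim 3 gives one concrete mapping), so the density-based features and the ratio-style similarity below are assumptions.

```python
import numpy as np

def _gauss_pdf(x, mu, sigma):
    sigma = max(sigma, 1e-9)  # guard against a degenerate (constant) class
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def segment(image, seg_threshold, sim_threshold):
    img = image.astype(np.float64)

    # Step 1: pre-segmentation into foreground/background classes.
    fg_mask = img > seg_threshold
    fg_vals, bg_vals = img[fg_mask], img[~fg_mask]

    # Step 2: first/second normal distributions from each class's mean
    # and standard deviation, then per-pixel characteristic values.
    fg_feat = _gauss_pdf(img, fg_vals.mean(), fg_vals.std())
    bg_feat = _gauss_pdf(img, bg_vals.mean(), bg_vals.std())

    # Step 3: similarity between the two characteristic values (assumed
    # ratio form); similar values suggest an ambiguous, edge-like pixel
    # in the sense of claim 4.
    sim = np.minimum(fg_feat, bg_feat) / (np.maximum(fg_feat, bg_feat) + 1e-12)
    return sim > sim_threshold  # True = edge class
```

With a clearly bimodal image, every pixel sits firmly in one class, so no pixel is flagged as edge-like.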
2. The image segmentation method according to claim 1, further comprising, after the determining the background feature value of each pixel point:
for any pixel point, extracting at least two target pixel points including the pixel point from the image to be processed by using a preset template, constructing a first target normal distribution according to the mean and variance of the foreground characteristic values of all the target pixel points, and constructing a second target normal distribution according to the mean and variance of the background characteristic values of all the target pixel points;
inputting the foreground characteristic value of each target pixel point into the first target normal distribution, and taking the resulting mapping value as a first weight of the corresponding target pixel point;
inputting the background characteristic value of each target pixel point into the second target normal distribution, and taking the resulting mapping value as a second weight of the corresponding target pixel point;
multiplying the foreground characteristic value of each target pixel point by its corresponding first weight, summing the products, and updating the foreground characteristic value according to the summation result to obtain an updated foreground characteristic value of the pixel point;
multiplying the background characteristic value of each target pixel point by its corresponding second weight, summing the products, and updating the background characteristic value according to the summation result to obtain an updated background characteristic value of the pixel point;
correspondingly, for any pixel point, the calculating the similarity between the foreground characteristic value of the pixel point and the background characteristic value of the pixel point includes:
for any pixel point, calculating the similarity between the updated foreground characteristic value of the pixel point and the updated background characteristic value of the pixel point.
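The neighbourhood update of claim 2 can be sketched as follows, under stated assumptions: a square 3×3 template is assumed, and "inputting the characteristic value into the target normal distribution" is read as evaluating its density under the locally fitted distribution. The same function would be applied once to the foreground feature map (first weights) and once to the background feature map (second weights).

```python
import numpy as np

def _gauss_pdf(x, mu, sigma):
    sigma = max(sigma, 1e-9)  # guard against a degenerate window
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def update_features(feat, radius=1):
    """For each pixel, fit a normal distribution to the feature values
    inside the template window, weight each value by its density under
    that distribution, and replace the centre value by the normalised
    weighted sum."""
    h, w = feat.shape
    out = feat.copy()
    for y in range(h):
        for x in range(w):
            win = feat[max(0, y - radius):y + radius + 1,
                       max(0, x - radius):x + radius + 1]
            weights = _gauss_pdf(win, win.mean(), win.std())
            out[y, x] = (win * weights).sum() / (weights.sum() + 1e-12)
    return out
```

Because the weights are normalised, a constant feature map passes through unchanged, which is a quick sanity check on the weighting.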
3. The image segmentation method of claim 1, wherein the determining the foreground characteristic value of each pixel according to the pixel value of each pixel and the first normal distribution, and the determining the background characteristic value of each pixel according to the pixel value of each pixel and the second normal distribution comprises:
determining the minimum value of the pixel values of all the pixel points, and, for any pixel point, determining a target interval according to the minimum value and the pixel value of the pixel point;
determining a first probability value corresponding to the target interval in the first normal distribution, multiplying the first probability value by a preset value, and taking the multiplication result as the foreground characteristic value of the pixel point; and
determining a second probability value corresponding to the target interval in the second normal distribution, multiplying the second probability value by the preset value, taking the multiplication result as the background characteristic value of the pixel point, and traversing all the pixel points to obtain the foreground characteristic value and the background characteristic value of each pixel point.
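The mapping of claim 3 could be sketched as below, reading the "target interval" as [minimum pixel value, pixel value] and the "probability value corresponding to the target interval" as the probability mass of that interval under the class distribution. The scale of 100.0 stands in for the unspecified preset value and is a placeholder, not from the patent.

```python
import math

def _norm_cdf(x, mu, sigma):
    # Cumulative distribution of N(mu, sigma^2) via the error function.
    return 0.5 * (1.0 + math.erf((x - mu) / (max(sigma, 1e-9) * math.sqrt(2.0))))

def interval_feature(pixel, img_min, mu, sigma, preset=100.0):
    """Probability mass of the target interval [img_min, pixel]
    under N(mu, sigma^2), scaled by a preset value."""
    p = _norm_cdf(pixel, mu, sigma) - _norm_cdf(img_min, mu, sigma)
    return p * preset
```

For a standard normal class distribution, an interval covering the whole support maps to the full preset value, and one ending at the mean maps to half of it.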
4. The image segmentation method according to claim 1, wherein the segmentation classes include an edge class and a non-edge class;
the determining the segmentation class of the corresponding pixel point according to the comparison result of the similarity and the preset similarity threshold comprises:
comparing the similarity with the preset similarity threshold, and when the similarity is greater than the preset similarity threshold, determining the corresponding pixel point as the edge category;
and when the similarity is smaller than or equal to the preset similarity threshold, determining the corresponding pixel point as the non-edge category.
5. The image segmentation method according to claim 1, wherein before the pre-segmenting the image to be processed by using the preset segmentation threshold, the method further comprises:
processing each pixel point in the image to be processed by adopting a preset sharpness algorithm to obtain an evaluation value of each pixel point;
summing the evaluation values of all the pixel points, and mapping the summation result to an image weight according to a preset mapping table; and
calculating the product of the mean pixel value of all the pixel points in the image to be processed and the image weight, and taking the calculation result as the preset segmentation threshold.
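The threshold derivation of claim 5 can be sketched as follows. The claim names neither the sharpness algorithm nor the mapping table, so the absolute 4-neighbour Laplacian and the three-entry lookup table below are placeholders chosen only to make the sketch runnable.

```python
import numpy as np

def preset_threshold(img, mapping=((1e3, 0.8), (1e5, 1.0), (float("inf"), 1.2))):
    """Sketch of claim 5: a per-pixel sharpness evaluation value
    (assumed: absolute 4-neighbour Laplacian), summed over the image,
    mapped to an image weight via a lookup table, then multiplied by
    the mean pixel value to give the segmentation threshold."""
    f = img.astype(np.float64)
    # np.roll wraps at the borders; circular boundary handling keeps
    # the sketch short.
    lap = np.abs(4.0 * f
                 - np.roll(f, 1, 0) - np.roll(f, -1, 0)
                 - np.roll(f, 1, 1) - np.roll(f, -1, 1))
    total = lap.sum()
    weight = next(w for bound, w in mapping if total < bound)
    return f.mean() * weight
```

A perfectly flat image has zero total sharpness, so it falls into the lowest weight bucket and the threshold is simply the mean pixel value scaled by that weight.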
6. The image segmentation method according to claim 5, wherein the pre-segmenting the image to be processed by using the preset segmentation threshold to obtain the pre-segmentation class of each pixel point in the image to be processed comprises:
for any pixel point in the image to be processed, multiplying the pixel value of the pixel point by its evaluation value to obtain a multiplication result;
if the multiplication result is greater than the segmentation threshold, determining the pre-segmentation class of the pixel point as the foreground class; and
if the multiplication result is less than or equal to the segmentation threshold, determining the pre-segmentation class of the pixel point as the background class.
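The pre-segmentation rule of claim 6 reduces to a single elementwise comparison once the evaluation map from claim 5's sharpness step is available; a minimal sketch:

```python
import numpy as np

def pre_segment(img, eval_map, seg_threshold):
    """Claim 6 sketch: a pixel is pre-classified as foreground when its
    pixel value times its sharpness evaluation value exceeds the
    segmentation threshold, and as background otherwise."""
    product = img.astype(np.float64) * eval_map
    return product > seg_threshold  # True = foreground class
```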
7. The image segmentation method according to any one of claims 1 to 6, wherein after obtaining the segmentation classes of all the pixel points in the image to be processed, the method further comprises:
and segmenting the image to be processed again according to the segmentation categories of all the pixel points to obtain an image segmentation result.
8. An image segmentation apparatus for visual inspection, characterized in that the image segmentation apparatus comprises:
the pre-segmentation module is used for pre-segmenting an image to be processed by adopting a preset segmentation threshold value to obtain a pre-segmentation class of each pixel point in the image to be processed, wherein the pre-segmentation class comprises a foreground class and a background class;
the distribution construction module is used for constructing first normal distribution according to the mean value and the variance of the pixel values of the pixel points of all the foreground categories, and constructing second normal distribution according to the mean value and the variance of the pixel values of the pixel points of all the background categories;
the characteristic value determining module is used for determining a foreground characteristic value of each pixel point according to the pixel value of each pixel point and the first normal distribution, and determining a background characteristic value of each pixel point according to the pixel value of each pixel point and the second normal distribution;
and the image segmentation module is used for calculating, for any pixel point, the similarity between the foreground characteristic value of the pixel point and the background characteristic value of the pixel point, and determining the segmentation class of the pixel point according to the comparison result of the similarity and a preset similarity threshold, so as to obtain the segmentation classes of all the pixel points in the image to be processed.
9. A computer device comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, the processor implementing the image segmentation method as claimed in any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the image segmentation method according to any one of claims 1 to 7.
CN202211276217.6A 2022-10-19 2022-10-19 Image segmentation method and device for visual detection, computer equipment and medium Active CN115345895B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211276217.6A CN115345895B (en) 2022-10-19 2022-10-19 Image segmentation method and device for visual detection, computer equipment and medium


Publications (2)

Publication Number Publication Date
CN115345895A true CN115345895A (en) 2022-11-15
CN115345895B CN115345895B (en) 2023-01-06

Family

ID=83957527



Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104268874A (en) * 2014-09-26 2015-01-07 中国民航科学技术研究院 Non-coherent radar image background modeling method based on normal distribution function
EP2977956A1 (en) * 2014-07-23 2016-01-27 Xiaomi Inc. Method, apparatus and device for segmenting an image
CN105528784A (en) * 2015-12-02 2016-04-27 沈阳东软医疗系统有限公司 Method and device for segmenting foregrounds and backgrounds
CN109978890A (en) * 2019-02-25 2019-07-05 平安科技(深圳)有限公司 Target extraction method, device and terminal device based on image procossing
CN111178211A (en) * 2019-12-20 2020-05-19 北京迈格威科技有限公司 Image segmentation method and device, electronic equipment and readable storage medium
CN113656627A (en) * 2021-08-20 2021-11-16 北京达佳互联信息技术有限公司 Skin color segmentation method and device, electronic equipment and storage medium
CN113888543A (en) * 2021-08-20 2022-01-04 北京达佳互联信息技术有限公司 Skin color segmentation method and device, electronic equipment and storage medium
CN114549913A (en) * 2022-04-25 2022-05-27 深圳思谋信息科技有限公司 Semantic segmentation method and device, computer equipment and storage medium


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116363362A (en) * 2023-03-08 2023-06-30 阿里巴巴(中国)有限公司 Image semantic segmentation method, object recognition method and computing device
CN116363362B (en) * 2023-03-08 2024-01-09 阿里巴巴(中国)有限公司 Image semantic segmentation method, object recognition method and computing device



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: Room 425, Block C, Bao'an New Generation Information Technology Industrial Park, No. 3, North Second Lane, Chuangye Second Road, 28 Dalang Community, Xin'an Street, Bao'an District, Shenzhen, Guangdong 518000

Patentee after: Shenzhen Yibi Technology Co.,Ltd.

Address before: 518000 406, block C, Bao'an new generation information technology industrial park, No. 3, North 2nd Lane, Chuangye 2nd Road, Dalang community, Xin'an street, Bao'an District, Shenzhen, Guangdong Province

Patentee before: Shenzhen Yibi Technology Co.,Ltd.

CP02 Change in the address of a patent holder

Address after: 518000, 1st Floor, Building B5, Taohuayuan Science and Technology Innovation Ecological Park, Tiegang Community, Xixiang Street, Bao'an District, Shenzhen City, Guangdong Province

Patentee after: Shenzhen Yibi Technology Co.,Ltd.

Address before: Room 425, Block C, Bao'an New Generation Information Technology Industrial Park, No. 3, North Second Lane, Chuangye Second Road, 28 Dalang Community, Xin'an Street, Bao'an District, Shenzhen, Guangdong 518000

Patentee before: Shenzhen Yibi Technology Co.,Ltd.