CN114842008A - Injection molding part color difference detection method based on computer vision - Google Patents


Info

Publication number
CN114842008A
Authority
CN
China
Prior art keywords
distribution
pixel point
color difference
vector
category
Prior art date
Legal status
Granted
Application number
CN202210776547.5A
Other languages
Chinese (zh)
Other versions
CN114842008B
Inventor
朱玉凤
张伟丽
Current Assignee
NANTONG SANXIN PLASTICS EQUIPMENT TECHNOLOGY CO LTD
Original Assignee
NANTONG SANXIN PLASTICS EQUIPMENT TECHNOLOGY CO LTD
Priority date
Filing date
Publication date
Application filed by NANTONG SANXIN PLASTICS EQUIPMENT TECHNOLOGY CO LTD
Priority application: CN202210776547.5A
Publication of CN114842008A
Application granted; publication of CN114842008B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of computer vision, and in particular to a computer-vision-based method for detecting color differences on injection-molded parts, comprising the following steps: collecting a surface image under an initial light source and obtaining the color feature of each pixel point; clustering all color features into a plurality of first categories; obtaining the attribution probability of each pixel point; clustering the first distribution vectors of the first categories into a plurality of second categories, whose pixel points form a plurality of primary color difference regions; obtaining the attribution probability and the primary distribution vector of each pixel point in each light-source direction; for each pixel point, clustering all of its primary distribution vectors into a plurality of third categories and calculating its comprehensive attribution probability, thereby obtaining its comprehensive distribution vector; clustering the comprehensive distribution vectors of the determined pixel points into a preset number of fourth categories, which form intermediate-level color difference regions; and distributing the remaining discrete pixel points among the regions to obtain the color difference detection result for the surface image. The invention improves the accuracy of color difference detection.

Description

Injection molding part color difference detection method based on computer vision
Technical Field
The invention relates to the technical field of computer vision, in particular to a computer vision-based injection molding part color difference detection method.
Background
Injection products produced by an injection molding machine are generally called injection-molded parts. They include various packages, parts and the like, and are widely used in industries such as mobile phones, portable computers, plastic housings, communications, toys, clocks, lighting and locomotives.
During injection molding, color differences can arise from material contamination; from unreasonable control of production parameters such as temperature and pressure; or from dust, oil stains and other pollutants in the equipment. Defects in the injection molding equipment, such as poor venting caused by mold blockage or dead corners in the mold structure, can also cause color differences. A color difference changes the color and glossiness of the injection-molded part and affects its appearance and quality, so color difference detection is needed to improve product quality.
At present, color difference detection is mostly performed manually. Manual inspection is easily affected by objective conditions, is inefficient and error-prone, and in particular has difficulty detecting regions with small color differences accurately.
Disclosure of Invention
In order to solve the technical problems, the invention aims to provide a computer vision-based injection molding part color difference detection method, which adopts the following technical scheme:
the invention provides a computer-vision-based injection-molded part color difference detection method, comprising the following steps:
collecting a surface image of an injection molding part with an overlooking visual angle under an initial light source, and acquiring a three-channel pixel value of each pixel point in the surface image as a color characteristic of the pixel point; clustering all the color features to obtain a plurality of first classes;
constructing a three-dimensional Gaussian distribution for each first category from the average vector and differences of its color features, obtaining a Gaussian mixture model of all the three-dimensional Gaussian distributions, and thereby obtaining the attribution probability of each pixel point;
clustering first distribution vectors of all first categories to obtain a plurality of second categories, and forming a primary color difference area by all pixel points belonging to each second category; obtaining a primary distribution vector of each pixel point according to the three-dimensional Gaussian distribution included in each primary color difference area;
adjusting the direction of the initial light source to obtain the attribution probability and the primary distribution vector of each pixel point in each light source direction;
for each pixel point, clustering all the primary distribution vectors to obtain a plurality of third categories, and calculating the comprehensive attribution probability of each pixel point according to the attribution probability of each pixel point in each third category to further obtain a corresponding comprehensive distribution vector;
taking the pixel points whose comprehensive attribution probability is greater than a probability threshold as determined pixel points, and clustering the comprehensive distribution vectors of the determined pixel points into a preset number of fourth categories, the pixel points belonging to each fourth category forming an intermediate-level color difference region;
distributing the discrete pixel points other than the determined pixel points to the intermediate-level color difference regions according to their comprehensive distribution vectors, to obtain the final color difference regions of the surface image as the color difference detection result;
the method for acquiring the primary distribution vector comprises the following steps:
acquiring first distribution vectors of the first category according to the three-dimensional Gaussian distribution, taking the average vector of all the first distribution vectors in each primary color difference area as the primary distribution vector of the primary color difference area, wherein the primary distribution vector of each pixel point is the primary distribution vector corresponding to the primary color difference area to which the pixel point belongs;
the method for acquiring the comprehensive attribution probability comprises the following steps:
calculating the average attribution probability of each third category, and taking the maximum average attribution probability as the comprehensive attribution probability of the pixel point;
the method for acquiring the comprehensive distribution vector comprises the following steps:
taking the average vector of the primary distribution vectors of the third category corresponding to the comprehensive attribution probability as the comprehensive distribution vector;
the three-dimensional Gaussian distribution construction step comprises the following steps:
for each first category, obtaining average color characteristics of all pixel points in the first category, and making the color characteristics of each pixel point in the first category and the average color characteristics be different to obtain a central offset vector of each pixel point;
acquiring the projection variance of the central offset vector in the principal component direction, and constructing a covariance matrix according to the projection variance;
constructing the three-dimensional Gaussian distribution by taking the average color characteristic and the covariance matrix as parameters;
the first distribution vector is composed of the average color feature and the mean of the projection variances.
Preferably, the method for obtaining the attribution probability comprises:
averaging the function values of the three-dimensional Gaussian distributions of all first categories to obtain a three-dimensional Gaussian mixture model, and taking the function value output by the Gaussian mixture model as the attribution probability.
Preferably, the method for obtaining the preset number is as follows:
and acquiring the number of the second types in the surface images acquired in each light source direction, and taking the maximum value of the number of the second types in all the surface images as the preset number.
Preferably, the color difference detection result further includes:
taking the final color difference region with the largest number of pixel points as the non-color-difference region, and the other regions as color difference regions.
The embodiment of the invention at least has the following beneficial effects:
the method comprises the steps of obtaining a preliminary distribution vector and an attribution probability of each pixel by carrying out cluster analysis on color features of each pixel to estimate a color difference region to which each pixel belongs, then accurately obtaining a middle-level color difference region by synthesizing each preliminary distribution vector and the attribution probability in the multi-light-source direction, and finally carrying out region distribution according to membership degrees of other pixels and the middle-level color difference region to further obtain an accurate and credible color difference region to finish color difference detection. The embodiment of the invention can improve the accuracy of color difference detection, can detect smaller color difference and improve the precision of color difference detection.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings used in their description are briefly introduced below. The drawings described below show only some embodiments of the present invention; other drawings can be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a flowchart illustrating steps of a method for detecting color differences of an injection-molded part based on computer vision according to an embodiment of the present invention.
Detailed Description
To further illustrate the technical means adopted by the present invention to achieve its objects and their effects, the computer-vision-based injection-molded part color difference detection method is described in detail below with reference to the accompanying drawings and preferred embodiments, including its implementation, structure, features and effects. In the following description, different references to "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes a specific scheme of the injection molding color difference detection method based on computer vision in detail with reference to the accompanying drawings.
Referring to fig. 1, a flow chart of steps of a method for detecting chromatic aberration of an injection-molded part based on computer vision according to an embodiment of the present invention is shown, the method includes the following steps:
s001, collecting a surface image of the injection molding part at an overlooking visual angle under an initial light source, and acquiring a three-channel pixel value of each pixel point in the surface image as a color characteristic of the pixel point; and clustering all the color features to obtain a plurality of first categories.
The method comprises the following specific steps:
1. surface images of the injection-molded part from a top view are acquired under an initial light source.
The detection device consists of a camera and an adjustable light source and is used for color difference detection of the injection-molded part. When a surface image is collected, the camera lens is fixed vertically downward and a circular horizontal rail is installed centered on the camera. The light source sits on this rail and shines obliquely downward toward the area below the camera at a fixed angle, so that as it moves along the rail it always illuminates the region directly under the camera.
One or more injection molded parts are placed below the detection device, and surface images of the injection molded parts at the overlooking visual angle are acquired, wherein the surface images are RGB images.
2. And acquiring a three-channel pixel value of each pixel point in the surface image as the color characteristic of the pixel point.
The three-channel pixel values of each pixel point are obtained and regarded as a three-dimensional vector; the pixel value of each channel is mapped to [0,1], and the resulting three-dimensional vector is taken as the color feature of the pixel point.
3. A plurality of first categories is obtained.
And clustering the color characteristics of all the pixel points by using a mean shift clustering algorithm to obtain a plurality of first categories.
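The feature extraction and mean-shift step above can be sketched as follows, using scikit-learn's MeanShift; the function name and the bandwidth value are assumptions, since the patent does not specify clustering parameters:

```python
import numpy as np
from sklearn.cluster import MeanShift

def first_categories(image_rgb, bandwidth=0.1):
    """Step S001 sketch: map three-channel pixel values to [0,1] color
    features and cluster them with mean shift into first categories.
    `bandwidth` is an assumed tuning value, not taken from the patent."""
    # flatten H x W x 3 image to N x 3 color features in [0,1]
    features = image_rgb.reshape(-1, 3).astype(np.float64) / 255.0
    ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)
    labels = ms.fit_predict(features)  # first-category label per pixel
    return features, labels
```

Pixels sharing a label form one first category; discrete pixels far from every mode may still receive a label here, which is why the later attribution probability is needed.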
Pixel points belonging to the same first category have similar pixel values, i.e. similar color features. A difference in pixel values between different first categories may stem from a genuine color difference on the injection-molded part, or merely from a difference in illumination where no color difference exists.
Step S002, building three-dimensional Gaussian distribution of each first category according to the average vector and difference of the color features, obtaining Gaussian mixture models of all the three-dimensional Gaussian distributions, and further obtaining the attribution probability of each pixel point.
The method comprises the following specific steps:
1. a three-dimensional gaussian distribution for each first class is constructed.
For each first category, obtaining average color characteristics of all pixel points in the first category, and making the color characteristics of each pixel point in the first category and the average color characteristics be different to obtain a center offset vector of each pixel point; acquiring the projection variance of the central offset vector in the principal component direction, and constructing a covariance matrix according to the projection variance; and constructing three-dimensional Gaussian distribution by taking the average color characteristic and the covariance matrix as parameters.
Specifically, the average vector $\mu$ of the color features of all pixel points in a certain first category is obtained, expressing the average color feature of the category. The difference between the color feature of each pixel point in the category and $\mu$ is taken as the center offset vector of that pixel point. Principal component analysis is then performed on the center offset vectors of the category; since the three-channel pixel value of each pixel point is regarded as a three-dimensional vector, the center offset vectors are also three-dimensional, and the analysis yields three principal component directions. The projection variance of the center offset vectors of all pixel points in the category is obtained in each principal component direction, and the mean $b$ of the projection variances over the three directions is computed. A covariance matrix $B$ is constructed with $b$ as its diagonal elements, i.e. $B = bE_3$, where $E_3$ is the third-order identity matrix. The three-dimensional Gaussian distribution is constructed with the average color feature $\mu$ and the covariance matrix $B$ as parameters:

$$N(x)=\frac{1}{(2\pi b)^{3/2}}\exp\!\left(-\frac{\lVert x-\mu\rVert^{2}}{2b}\right)$$

The first distribution vector $V=(\mu,b)$, formed by the average color feature $\mu$ and the projection variance mean $b$, expresses this three-dimensional Gaussian distribution and characterizes the distribution of the color features of the pixel points in the first category.
Similarly, each first category corresponds to one three-dimensional Gaussian distribution.
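The per-category construction above can be sketched as below; the function name is hypothetical, and using the eigenvalues of the offset covariance as the principal-component projection variances is an implementation choice consistent with, but not spelled out in, the text:

```python
import numpy as np

def class_gaussian(features):
    """Step S002 sketch for one first category: average color feature mu,
    mean projection variance b over the principal component directions,
    covariance matrix B = b*I, and first distribution vector (mu, b)."""
    mu = features.mean(axis=0)                  # average color feature
    offsets = features - mu                     # center offset vectors
    cov = np.cov(offsets, rowvar=False, bias=True)
    # eigenvalues of the offset covariance are the projection variances
    # along the three principal component directions
    eigvals, _ = np.linalg.eigh(cov)
    b = eigvals.mean()                          # mean projection variance
    B = b * np.eye(3)                           # covariance with b on the diagonal
    first_vec = np.append(mu, b)                # four-dimensional (mu, b)
    return mu, B, first_vec
```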
2. And obtaining a Gaussian mixture model, and further obtaining the attribution probability of each pixel point.
The mean of the function values of the three-dimensional Gaussian distributions of all first categories is calculated to obtain a three-dimensional Gaussian mixture model, and the function value output by the mixture model is taken as the attribution probability. The expression of the Gaussian mixture model is:

$$P(r,g,b)=\frac{1}{Q}\sum_{q=1}^{Q}N_{q}(r,g,b)$$

where $(r,g,b)$ is the independent variable of the Gaussian mixture model, $P(r,g,b)$ is its output, $Q$ is the number of first categories, and $N_{q}$ is the Gaussian distribution of the $q$-th first category.
Let the first distribution vector of the $q$-th first category be $V_{q}=(\mu_{q},b_{q})$, where $\mu_{q}=(\mu_{q,r},\mu_{q,g},\mu_{q,b})$. The function expression of the Gaussian distribution corresponding to this first category is then

$$N_{q}(r,g,b)=\frac{1}{(2\pi b_{q})^{3/2}}\exp\!\left(-\frac{\lVert(r,g,b)-\mu_{q}\rVert^{2}}{2b_{q}}\right)$$

where $(r,g,b)$ is the independent variable of the Gaussian distribution and $N_{q}(r,g,b)$ is the function value corresponding to it.
The color feature of each pixel point is input into the three-dimensional Gaussian mixture model, and the output result is taken as the attribution probability of the pixel point.
The larger the attribution probability, the more accurate and credible the classification of the pixel point; the smaller it is, the more uncertain which first category the pixel point belongs to. The attribution probability of the discrete pixel points that were not classified is very small.
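A minimal sketch of evaluating the mixture as an attribution probability, assuming each component is the isotropic Gaussian $N(\mu_q, b_q E_3)$ described above (the function name is hypothetical):

```python
import numpy as np

def attribution_probability(color, first_vectors):
    """Step S002 sketch: evaluate the Gaussian mixture, i.e. the mean of the
    Q isotropic three-dimensional Gaussian densities, at one color feature.
    `first_vectors` holds rows (mu_r, mu_g, mu_b, b)."""
    color = np.asarray(color, dtype=np.float64)
    vals = []
    for mu_r, mu_g, mu_b, b in first_vectors:
        mu = np.array([mu_r, mu_g, mu_b])
        norm = (2.0 * np.pi * b) ** -1.5   # normalizer of an isotropic 3-D Gaussian
        vals.append(norm * np.exp(-np.sum((color - mu) ** 2) / (2.0 * b)))
    return float(np.mean(vals))            # mixture value used as attribution probability
```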
Step S003, clustering the first distribution vector of the first category to obtain a plurality of second categories, and forming a primary chromatic aberration area by all pixel points belonging to each second category; and obtaining a primary distribution vector of each pixel point according to the three-dimensional Gaussian distribution included in each primary color difference area.
The method comprises the following specific steps:
1. a plurality of primary color difference regions are acquired.
Ideally, the color features of different pixels in the same first category would coincide exactly, i.e. the projection variance mean would satisfy $b=0$. Owing to illumination, however, the color features of the pixels differ: they are distributed together in a concentrated manner without coinciding, and pixels that originally share the same color feature may even be divided into several first categories under different illumination intensities. Although the average color features $\mu$ of these first categories differ somewhat, the portions without color difference have the same gloss and therefore follow the same illumination change law, i.e. their projection variance means $b$ should be the same or similar. The first distribution vectors of the first categories are therefore clustered to obtain a plurality of second categories.
The covariance matrix $G$ of all first distribution vectors is obtained:

$$G=\frac{1}{|S|}\sum_{V\in S}(V-\bar{V})(V-\bar{V})^{\mathsf{T}}$$

where $S$ is the set of first distribution vectors of all first categories and $\bar{V}$ is their mean. The distance between the first distribution vectors of every two first categories is then calculated:

$$d_{mn}=\sqrt{(V_{m}-V_{n})^{\mathsf{T}}\,C\,(V_{m}-V_{n})}$$

where $d_{mn}$ is the distance between the first distribution vectors of the $m$-th and the $n$-th first categories, $V_{m}$ and $V_{n}$ are those first distribution vectors, and $C$ is a constant weight matrix used to adjust the relative weights of the average color feature and the projection variance mean. As an example, in the embodiment of the invention the color-feature block of $C$ is the third-order identity matrix $E_{3}$. When $C$ is the fourth-order identity matrix, $d_{mn}$ is the Euclidean distance between $V_{m}$ and $V_{n}$.
DBSCAN clustering is performed on the first distribution vectors of all first categories using this distance to obtain a plurality of second categories; all pixel points belonging to each second category form a primary color difference region, giving a plurality of primary color difference regions.
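The weighted distance and the DBSCAN step can be sketched as below, with scikit-learn's DBSCAN on a precomputed distance matrix; the default weight matrix C, eps and min_samples are assumed values:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def second_categories(first_vectors, C=None, eps=0.2, min_samples=1):
    """Step S003 sketch: group first distribution vectors into second
    categories with DBSCAN over d_mn = sqrt((Vm-Vn)^T C (Vm-Vn)).
    C defaults to the fourth-order identity, i.e. plain Euclidean distance;
    eps and min_samples are assumed tuning values, not from the patent."""
    V = np.asarray(first_vectors, dtype=np.float64)
    if C is None:
        C = np.eye(4)
    diff = V[:, None, :] - V[None, :, :]   # pairwise difference vectors
    # weighted pairwise distance matrix d_mn
    D = np.sqrt(np.einsum('mni,ij,mnj->mn', diff, C, diff))
    labels = DBSCAN(eps=eps, min_samples=min_samples,
                    metric='precomputed').fit_predict(D)
    return labels
```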
2. A distribution vector for each pixel point is obtained.
Each second category comprises several first categories and thus corresponds to several first distribution vectors, all of which have similar average color features and projection variance means. The average of all first distribution vectors in the primary color difference region corresponding to each second category is taken as the primary distribution vector of that region, and the primary distribution vector of each pixel point is that of the primary color difference region to which it belongs.
And step S004, adjusting the direction of the initial light source to obtain the attribution probability and the initial distribution vector of each pixel point in each light source direction.
When mean shift clustering is performed, not all pixels are classified: some discrete points are not clustered into any first category, and the clustering itself may contain errors, possibly caused by the illumination angle of the light source. Changing the light-source direction therefore reduces the error in the division of the primary color difference regions.
Specifically, the light source is moved along the rail to change its direction, surface images are collected in a plurality of light-source directions, and the above steps are applied to each surface image to obtain the attribution probability and primary distribution vector of each pixel point in each light-source direction.
It should be noted that the surface of the injection-molded part is flat and without complex texture, and both diffuse and specular reflection occur when the light source illuminates it. Because the light source strongly influences the color difference detection, the illumination distribution differs between light-source directions, the resulting primary color difference regions differ, and the primary distribution vector and attribution probability of the same pixel point may differ between illumination directions.
For example, some pixel points belong to different primary color difference regions under different light-source directions; some are classified into the same primary color difference region under two light-source directions but with different attribution probabilities; and for some, the attribution probability under one light source approaches 0, so that their primary color difference region cannot be judged at all, while it can be judged under another light-source angle. Therefore, to ensure accuracy, the light-source direction is varied, surface images are collected in the different directions, the attribution probabilities and primary distribution vectors under the different angles are calculated, and a subsequent comprehensive judgment yields an accurate color difference detection result.
Step S005, for each pixel point, clustering all of its primary distribution vectors to obtain a plurality of third categories, and calculating the comprehensive attribution probability of the pixel point from its attribution probabilities in each third category, thereby obtaining the corresponding comprehensive distribution vector.
The method comprises the following specific steps:
1. and acquiring a plurality of third categories of each pixel point.
Mean shift clustering is performed on all primary distribution vectors of each pixel point to obtain a plurality of third categories. For a given pixel point, the primary distribution vectors belonging to the same third category are similar, indicating that across the different light-source directions its primary distribution vectors concentrate in these third categories.
2. And acquiring comprehensive attribution probability and comprehensive distribution vectors.
For each pixel point, the mean of its attribution probabilities within each third category is taken as the average attribution probability of that category; the maximum average attribution probability is selected as the comprehensive attribution probability of the pixel point, and the average of the primary distribution vectors of the corresponding third category is taken as its comprehensive distribution vector.
That the selected third category has the maximum average attribution probability indicates that the pixel point has similar primary distribution vectors in several illumination directions and that its attribution probability under those vectors is largest.
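Step S005 for a single pixel might look like the following sketch; the function name and the mean-shift bandwidth are assumptions:

```python
import numpy as np
from sklearn.cluster import MeanShift

def comprehensive_attribution(primary_vectors, probabilities, bandwidth=0.3):
    """Step S005 sketch for one pixel: cluster its primary distribution
    vectors over all light-source directions into third categories, take the
    mean attribution probability per category, and keep the category whose
    mean is largest. `bandwidth` is an assumed tuning value."""
    V = np.asarray(primary_vectors, dtype=np.float64)
    p = np.asarray(probabilities, dtype=np.float64)
    labels = MeanShift(bandwidth=bandwidth).fit_predict(V)
    best_prob, best_vec = -1.0, None
    for c in set(labels):
        mask = labels == c
        mean_p = p[mask].mean()            # average attribution probability
        if mean_p > best_prob:
            best_prob = mean_p             # comprehensive attribution probability
            best_vec = V[mask].mean(axis=0)  # comprehensive distribution vector
    return best_prob, best_vec
```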
Step S006, taking the pixel points whose comprehensive attribution probability is greater than the probability threshold as determined pixel points, clustering the comprehensive distribution vectors of the determined pixel points into a preset number of fourth categories, and forming an intermediate-level color difference region from the pixel points belonging to each fourth category.
The method comprises the following specific steps:
1. and acquiring a determined pixel point.
The pixel points whose comprehensive attribution probability is greater than the probability threshold are taken as determined pixel points; the primary color difference regions to which these pixel points belong can be determined reliably.
As an example, the value of the probability threshold in the embodiment of the present invention is 0.5.
2. And acquiring the preset number of clusters.
The number of second categories in the surface image collected in each light-source direction is obtained, and the maximum of these numbers over all surface images is taken as the preset number K, indicating that there are at most K intermediate-level color difference regions.
3. And acquiring a plurality of medium-level color difference areas.
Acquire the comprehensive distribution vectors of the determined pixel points and cluster them with K-Means into K fourth categories.
The pixel points belonging to each fourth category form an intermediate-level color difference area; because every pixel point in such an area has a high attribution probability, the obtained intermediate-level color difference areas are accurate.
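As a sketch of this step, the following uses a minimal K-Means (standing in for any K-Means implementation; all names are illustrative) to cluster the comprehensive distribution vectors into K fourth categories, with K taken as the maximum number of second categories over all light-source directions:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal K-Means: returns a cluster label for each row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each vector to its nearest center
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):               # skip empty clusters
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def intermediate_regions(comp_vectors, second_category_counts):
    """comp_vectors: (N, 3) comprehensive distribution vectors of determined pixels.
    second_category_counts: number of second categories found per light direction;
    the preset number K is their maximum (at most K intermediate-level areas)."""
    K = max(second_category_counts)
    return K, kmeans(comp_vectors, K)
```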
Step S007, assigning the discrete pixel points other than the determined pixel points to the intermediate-level color difference areas according to the comprehensive distribution vectors, and obtaining the final color difference areas of the surface image as the color difference detection result.
The method comprises the following specific steps:
1. and distributing the discrete pixel points to a middle-level chromatic aberration area.
And taking pixel points corresponding to the comprehensive attribution probability less than or equal to the probability threshold as discrete pixel points, wherein the attribution probability of the pixel points is low, the middle-level chromatic aberration area to which the pixel points belong cannot be accurately determined, and the pixel points need to be distributed by utilizing the comprehensive distribution vector.
Let a denote the comprehensive distribution vector of a discrete pixel point, and let b denote the average vector of the comprehensive distribution vectors of all pixel points in an intermediate-level color difference area. The membership degree of the discrete pixel point with respect to that area is calculated as:

P = exp(−‖a − b‖₂ / σ²)

wherein a and b are three-dimensional vectors: a represents the color characteristics of the discrete pixel point, and b represents the average color characteristic of the intermediate-level color difference area; ‖a − b‖₂ is the L2 norm of a − b, representing the difference between the color feature of the discrete pixel point and the average color feature of the intermediate-level color difference area; σ² is the variance of the intermediate-level color difference area, representing the distribution range of the area centered on the average color characteristic.

The larger σ²/‖a − b‖₂ is, the closer the color feature of the discrete pixel point is to the distribution range of the color features of the intermediate-level color difference area (it may even lie within that range); in this case the membership degree P between the discrete pixel point and the area is larger, and otherwise it is smaller.
Acquire the membership degree of each discrete pixel point with respect to each intermediate-level color difference area, and take the intermediate-level color difference area with the maximum membership degree as the assignment area of that discrete pixel point.
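A sketch of this assignment rule follows, assuming the exponential membership form P = exp(−‖a − b‖₂ / σ²), which is one reading consistent with the qualitative description above (membership grows as the distance shrinks relative to the area's variance); function and array names are illustrative:

```python
import numpy as np

def assign_discrete_pixel(v, region_means, region_vars):
    """v: (3,) comprehensive distribution vector of one discrete pixel point.
    region_means: (K, 3) average comprehensive distribution vector of each
    intermediate-level color difference area.
    region_vars: (K,) variance of each area.
    Returns the index of the area with maximum membership degree."""
    d = np.linalg.norm(region_means - v, axis=1)  # L2 distance to each area's average
    P = np.exp(-d / region_vars)                  # membership degrees
    return int(np.argmax(P))
```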
2. And obtaining a color difference detection result.
After all the discrete pixel points have been assigned, the final color difference areas of the surface image are obtained; the final area with the largest number of pixel points is taken as the non-color-difference area, and the other areas are taken as color difference areas.
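The final labeling rule — the area containing the most pixel points is treated as the non-color-difference (background) area, all others as color difference areas — can be sketched as follows (names are illustrative):

```python
import numpy as np

def label_color_difference(final_labels):
    """final_labels: per-pixel area index after all discrete pixels are assigned.
    Returns a boolean mask that is True where a color difference is detected."""
    counts = np.bincount(final_labels.ravel())    # pixel count per final area
    background = int(np.argmax(counts))           # largest area = non-color-difference
    return final_labels != background
```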
In summary, in the embodiment of the present invention, a surface image of an injection-molded part is collected at an overhead viewing angle under an initial light source, and the three-channel pixel value of each pixel point in the surface image is obtained as the color feature of that pixel point; all color features are clustered to obtain a plurality of first categories; a three-dimensional Gaussian distribution of each first category is constructed using the average vectors and variances of the color features, a Gaussian mixture model of all the three-dimensional Gaussian distributions is obtained, and the attribution probability of each pixel point is further obtained; the first distribution vectors of all first categories are clustered to obtain a plurality of second categories, and all pixel points belonging to each second category form a primary color difference area; the primary distribution vector of each pixel point is obtained according to the three-dimensional Gaussian distributions included in each primary color difference area; the direction of the initial light source is adjusted to obtain the attribution probability and the primary distribution vector of each pixel point in each light-source direction; for each pixel point, all primary distribution vectors are clustered to obtain a plurality of third categories, the comprehensive attribution probability of the pixel point is calculated from its attribution probability in each third category, and the corresponding comprehensive distribution vector is then obtained; pixel points whose comprehensive attribution probability is greater than the probability threshold are taken as determined pixel points, the distribution vectors of the determined pixel points are clustered to obtain a preset number of fourth categories, and the pixel points belonging to each fourth category form an intermediate-level color difference area; the discrete pixel points other than the determined pixel points are assigned to intermediate-level color difference areas according to the comprehensive distribution vectors, and the final color difference areas of the surface image are obtained as the color difference detection result. The embodiment of the present invention can obtain accurate color difference areas and improve the accuracy of color difference detection.
It should be noted that the order of the above embodiments of the present invention is for description only and does not reflect the relative merits of the embodiments; specific embodiments have been described above. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or a sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments.
The present invention is not limited to the above preferred embodiments, and any modifications, equivalents, improvements, etc. made within the principle of the present invention should be included in the scope of the present invention.

Claims (4)

1. The injection molding color difference detection method based on computer vision is characterized by comprising the following steps:
collecting a surface image of an injection molding part with an overlooking visual angle under an initial light source, and acquiring a three-channel pixel value of each pixel point in the surface image as a color characteristic of the pixel point; clustering all the color features to obtain a plurality of first classes;
constructing a three-dimensional Gaussian distribution of each first category by using the average vectors and variances of the color features, acquiring a Gaussian mixture model of all the three-dimensional Gaussian distributions, and further acquiring the attribution probability of each pixel point;
clustering first distribution vectors of all first categories to obtain a plurality of second categories, and forming a primary color difference area by all pixel points belonging to each second category; obtaining a primary distribution vector of each pixel point according to the three-dimensional Gaussian distribution included in each primary color difference area;
adjusting the direction of the initial light source to obtain the attribution probability and the primary distribution vector of each pixel point in each light source direction;
for each pixel point, clustering all the primary distribution vectors to obtain a plurality of third categories, and calculating the comprehensive attribution probability of each pixel point according to the attribution probability of each pixel point in each third category to further obtain a corresponding comprehensive distribution vector;
taking pixel points corresponding to the comprehensive attribution probability larger than the probability threshold value as determined pixel points, clustering the distribution vectors of the determined pixel points to obtain a preset number of fourth classes, and enabling the pixel points belonging to each fourth class to form a middle-level chromatic aberration area;
distributing discrete pixel points except the determined pixel points to the intermediate-level chromatic aberration area according to the comprehensive distribution vector to obtain a final chromatic aberration area of the surface image as a chromatic aberration detection result;
the method for acquiring the primary distribution vector comprises the following steps:
acquiring first distribution vectors of the first category according to the three-dimensional Gaussian distribution, taking the average vector of all the first distribution vectors in each primary color difference area as the primary distribution vector of the primary color difference area, wherein the primary distribution vector of each pixel point is the primary distribution vector corresponding to the primary color difference area to which the pixel point belongs;
the method for acquiring the comprehensive attribution probability comprises the following steps:
calculating the average attribution probability of each third category, and taking the maximum average attribution probability as the comprehensive attribution probability of the pixel point;
the method for acquiring the comprehensive distribution vector comprises the following steps:
taking the average vector of the primary distribution vectors of the third category corresponding to the comprehensive attribution probability as the comprehensive distribution vector;
the three-dimensional Gaussian distribution construction step comprises the following steps:
for each first category, acquiring the average color characteristic of all pixel points in the first category, and subtracting the average color characteristic from the color characteristic of each pixel point in the first category to obtain a central offset vector of each pixel point;
acquiring the projection variance of the central offset vector in the principal component direction, and constructing a covariance matrix according to the projection variance;
constructing the three-dimensional Gaussian distribution by taking the average color characteristic and the covariance matrix as parameters;
the first distribution vector is composed of the average color feature and the projected mean of variance.
2. The computer-vision-based color difference detection method for injection-molded parts according to claim 1, wherein the method for obtaining the attribution probability is:
averaging the function values of the three-dimensional Gaussian distributions of all first categories to obtain a three-dimensional Gaussian mixture model, and taking the function value output by the Gaussian mixture model as the attribution probability.
3. The computer-vision-based color difference detection method for injection-molded parts according to claim 1, wherein the method for acquiring the preset number is:
and acquiring the number of the second types in the surface images acquired in each light source direction, and taking the maximum value of the number of the second types in all the surface images as the preset number.
4. The computer-vision-based color difference detection method for injection-molded parts according to claim 1, wherein obtaining the color difference detection result further comprises:
taking the final color difference area with the largest number of pixel points as a non-color-difference area, and taking the other areas as color difference areas.
CN202210776547.5A 2022-07-04 2022-07-04 Injection molding part color difference detection method based on computer vision Active CN114842008B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210776547.5A CN114842008B (en) 2022-07-04 2022-07-04 Injection molding part color difference detection method based on computer vision

Publications (2)

Publication Number Publication Date
CN114842008A true CN114842008A (en) 2022-08-02
CN114842008B CN114842008B (en) 2022-10-21

Family

ID=82574080

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210776547.5A Active CN114842008B (en) 2022-07-04 2022-07-04 Injection molding part color difference detection method based on computer vision

Country Status (1)

Country Link
CN (1) CN114842008B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115222732A (en) * 2022-09-15 2022-10-21 惠民县黄河先进技术研究院 Injection molding process anomaly detection method based on big data analysis and color difference detection
CN115311299A (en) * 2022-10-12 2022-11-08 如皋市金轶纺织有限公司 Textile dyeing abnormity detection method based on multiple light source angles
CN117314916A (en) * 2023-11-29 2023-12-29 宝鸡市钛程金属复合材料有限公司 Explosion welding detection method for metal composite plate based on artificial intelligence

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018174493A1 * 2017-03-24 2018-09-27 Samsung Electronics Co., Ltd. Method for correcting image processing region corresponding to skin and electronic device
CN114494260A (en) * 2022-04-18 2022-05-13 深圳思谋信息科技有限公司 Object defect detection method and device, computer equipment and storage medium
CN114627125A (en) * 2022-05-17 2022-06-14 南通剑烽机械有限公司 Stainless steel tablet press surface quality evaluation method based on optical means


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Yi: "Research on an automatic pattern recognition method for color blocks based on fuzzy clustering", Journal of Zhejiang University of Water Resources and Electric Power *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115222732A (en) * 2022-09-15 2022-10-21 惠民县黄河先进技术研究院 Injection molding process anomaly detection method based on big data analysis and color difference detection
CN115311299A (en) * 2022-10-12 2022-11-08 如皋市金轶纺织有限公司 Textile dyeing abnormity detection method based on multiple light source angles
CN117314916A (en) * 2023-11-29 2023-12-29 宝鸡市钛程金属复合材料有限公司 Explosion welding detection method for metal composite plate based on artificial intelligence
CN117314916B (en) * 2023-11-29 2024-01-30 宝鸡市钛程金属复合材料有限公司 Explosion welding detection method for metal composite plate based on artificial intelligence

Also Published As

Publication number Publication date
CN114842008B (en) 2022-10-21

Similar Documents

Publication Publication Date Title
CN114842008B (en) Injection molding part color difference detection method based on computer vision
US10929649B2 (en) Multi-pose face feature point detection method based on cascade regression
CN115082683B (en) Injection molding defect detection method based on image processing
US11417148B2 (en) Human face image classification method and apparatus, and server
CN116205919B (en) Hardware part production quality detection method and system based on artificial intelligence
CN114170228B (en) Computer image edge detection method
CN107194371B (en) User concentration degree identification method and system based on hierarchical convolutional neural network
CN115311270A (en) Plastic product surface defect detection method
CN111368683A (en) Face image feature extraction method and face recognition method based on modular constraint CentreFace
CN116912261B (en) Plastic mold injection molding surface defect detection method
CN103886597A (en) Circle detection method based on edge detection and fitted curve clustering
Zhang et al. Fast covariance matching with fuzzy genetic algorithm
CN113935999B (en) Injection molding defect detection method based on image processing
CN107169117A (en) A kind of manual draw human motion search method based on autocoder and DTW
CN112802054A (en) Mixed Gaussian model foreground detection method fusing image segmentation
CN115359042B (en) Defect detection method of wood-plastic new material door based on optical vision
CN102799872A (en) Image processing method based on face image characteristics
CN111652836A (en) Multi-scale target detection method based on clustering algorithm and neural network
CN114022483A (en) Injection molding flash area identification method based on edge characteristics
Huang et al. High-efficiency face detection and tracking method for numerous pedestrians through face candidate generation
CN108520539B (en) Image target detection method based on sparse learning variable model
CN116945521B (en) Injection molding defect detection method
CN117475170A (en) FPP-based high-precision point cloud registration method guided by local-global structure
CN103577826A (en) Target characteristic extraction method, identification method, extraction device and identification system for synthetic aperture sonar image
CN115170545A (en) Dynamic molten pool size detection and forming direction discrimination method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant