CN115311288A - Method for detecting damage of automobile film - Google Patents

Method for detecting damage of automobile film

Info

Publication number
CN115311288A
CN115311288A (application CN202211244738.3A)
Authority
CN
China
Prior art keywords
pixel point
value
channel
feature vector
rgb image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211244738.3A
Other languages
Chinese (zh)
Other versions
CN115311288B (en)
Inventor
郭九周
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Moshi Intelligent Technology Co ltd
Original Assignee
Jiangsu Moshi Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Moshi Intelligent Technology Co ltd filed Critical Jiangsu Moshi Intelligent Technology Co ltd
Priority to CN202211244738.3A priority Critical patent/CN115311288B/en
Publication of CN115311288A publication Critical patent/CN115311288A/en
Application granted granted Critical
Publication of CN115311288B publication Critical patent/CN115311288B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0004 - Industrial image inspection
    • G06T7/0008 - Industrial image inspection checking presence/absence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G06T7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/90 - Determination of colour characteristics
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/56 - Extraction of image or video features relating to colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/60 - Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 - Proximity, similarity or dissimilarity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10024 - Color image
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the field of automobile film damage detection and provides a method for detecting damage of an automobile film, comprising: obtaining a preprocessed RGB image; extracting a dark channel image; obtaining the RGB image of the minimum sub-block at which iterative subdivision terminates; obtaining an atmospheric light value; obtaining the transmittance of each pixel point in the preprocessed RGB image; obtaining the brightness value of each pixel point; obtaining a characterization value of each pixel point; obtaining a suspected damaged area; determining whether each pixel point is a damaged pixel point; and obtaining the damaged area from the determined damaged pixel points. Because the damage detection is performed on image data, the method achieves high detection speed and high accuracy.

Description

Method for detecting damage of automobile film
Technical Field
The invention relates to the field of automobile film damage detection, in particular to an automobile film damage detection method.
Background
An automobile film mainly serves to block ultraviolet rays, block part of the incident heat, and decorate the appearance of the vehicle. If a film that was damaged during production or sale is nevertheless applied to the vehicle because the damage was overlooked during manual inspection, the appearance suffers and the film can no longer protect the vehicle surface effectively. Detecting damage to the automobile film before it is applied is therefore a critical step.
Manual inspection of automobile film has low precision, can hardly cover the whole film, and involves a heavy workload. Aiming at the low efficiency and accuracy of manual inspection, the invention provides a method for detecting damage of an automobile film.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a method for detecting the damage of an automobile film.
In order to achieve this purpose, the invention adopts the following technical scheme. The method for detecting damage of an automobile film comprises the following steps:
preprocessing an acquired image of the film laid over a dark background to obtain a preprocessed RGB image;
extracting a dark channel image of the preprocessed RGB image;
taking the dark channel image as the initial sub-block, equally dividing it into a plurality of sub-blocks, calculating the weight of each sub-block, and selecting the sub-block with the largest weight as the new initial sub-block to be equally divided; iterating in this way, terminating the iteration when the area of an equally divided sub-block is smaller than an area threshold, and obtaining the RGB image of the minimum sub-block at termination;
obtaining an atmospheric light value from the channel values of each pixel point in the R, G and B channels of the minimum-sub-block RGB image;
obtaining the transmittance of each pixel point in the preprocessed RGB image from the atmospheric light value and the channel values of each pixel point in the R, G and B channels of the preprocessed RGB image;
processing the preprocessed RGB image to obtain the brightness value of each pixel point;
obtaining a characterization value of each pixel point from the transmittance and the brightness value of each pixel point in the preprocessed RGB image;
clustering all pixel points using the characterization value of each pixel point to obtain a suspected damaged area;
forming a feature vector from the transmittance and the brightness value of each pixel point in the suspected damaged area; forming a basic feature vector from the average transmittance and average brightness of all pixel points in the preprocessed RGB image; obtaining the confidence of each pixel point in the suspected damaged area using the obtained feature vectors and the basic feature vector, and determining whether a pixel point is a damaged pixel point according to its confidence;
and obtaining the damaged area from the determined damaged pixel points.
Further, in the method for detecting damage of the automobile film, clustering all pixel points using the characterization value of each pixel point to obtain the suspected damaged area comprises:
clustering all pixel points in the preprocessed RGB image according to the magnitude of the characterization value of each pixel point to obtain two clusters;
calculating the mean of the characterization values of all pixel points in each cluster from the characterization value of each pixel point, selecting the cluster with the larger mean, and taking all pixel points in that cluster as the suspected damaged area of the film.
Further, in the method for detecting damage of the automobile film, the weight $W_j$ of the $j$-th sub-block is computed from $\sigma_j^c$, the standard deviation of the channel-$c$ values of all pixel points in the $j$-th sub-block, and $\mu_j^c$, the mean of those channel values, where $c$ denotes the R, G or B channel.
Further, in the method for detecting damage of the automobile film, the transmittance of a pixel point is expressed as:
$t_i = 1 - \min_{j \in \Omega(i)} \left( \min_{c \in \{R,G,B\}} \frac{I^c(j)}{A} \right)$
where $t_i$ denotes the transmittance of the $i$-th pixel point, $I^c(j)$ denotes the channel value of the $j$-th pixel point of the preprocessed RGB image in channel $c$, $A$ denotes the atmospheric light value, $c$ denotes the R, G or B channel, the min-min term corresponds to the dark channel image of the preprocessed RGB image normalized by $A$, and $\Omega(i)$ denotes the filter window centered on pixel point $i$.
Further, in the method for detecting damage of the automobile film, the atmospheric light value is obtained from $I_{roi}^c(k)$, the channel value of the $k$-th pixel point of the minimum-sub-block RGB image in the R, G and B channels, where $N$ denotes the total number of pixel points in the minimum-sub-block RGB image and $k$ indexes its pixel points.
Further, in the method for detecting damage of the automobile film, the characterization value $F_i$ of the $i$-th pixel point in the preprocessed RGB image is computed from its transmittance $t_i$ and its brightness value $L_i$ together with a first model parameter and a second model parameter.
Further, in the method for detecting damage of the automobile film, obtaining the confidence of each pixel point in the suspected damaged area using the obtained feature vectors and the basic feature vector comprises:
calculating, from the obtained feature vectors and the basic feature vector, the similarity between the feature vector of each pixel point in the suspected damaged area and the basic feature vector;
and obtaining the confidence of each pixel point in the suspected damaged area from the similarity between its feature vector and the basic feature vector.
Further, in the method for detecting damage of the automobile film, the similarity $S_k$ between the feature vector $V_k$ of the $k$-th pixel point in the suspected damaged area and the basic feature vector $B$ is computed from the two vectors.
Further, in the method for detecting damage of the automobile film, the confidence $C_k$ of the $k$-th pixel point in the suspected damaged area is computed from its similarity $S_k$ and a confidence model parameter.
The beneficial effects of the invention are as follows. The method detects damage of the automobile film from image data: it extracts the damaged area from the transmittance and brightness information of each pixel point, and then builds a damage-confidence model to re-examine the pixel points in that area, so the damage on the film surface can be detected accurately. Because the detection is image-based, secondary damage to the film surface caused by human contact is avoided, and the method achieves high detection speed and high accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flow chart of the present embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
Example 1
An embodiment of the method for detecting damage of an automobile film of the present invention, as shown in fig. 1, comprises the following steps.
Applicable scenario: this embodiment performs damage detection on a colorless frosted automobile film, i.e. a comprehensive inspection of the film for damage before it is applied to the vehicle.
101. Obtain a preprocessed RGB image.
The acquired image of the film laid over a dark background is preprocessed to obtain a preprocessed RGB image.
This embodiment detects damage of the colorless frosted film from the characteristic information of the pixels of the acquired image. The film to be detected is therefore laid over a dark background, an image acquisition device is arranged above it, and a camera captures the surface of the film so that damaged areas can later be identified accurately. The camera position and field of view are set by the implementer according to the actual situation; in this embodiment the camera is placed directly above the film surface and captures a front (orthographic) view of it. The dark background may likewise be chosen by the implementer; in this embodiment it is a pure black background.
Because the environment is noisy, a large number of noise points appear in the captured front-view image and degrade the quality of the film-surface image. The preprocessing therefore comprises filtering/denoising and equalization, and the implementer may choose any existing preprocessing method. In this embodiment a Gaussian filter removes the noise and histogram equalization compensates for uneven illumination of the film surface. The specific preprocessing steps are well-known techniques and are not described in detail here.
A high-quality image of the film to be detected is thus obtained and serves as the image analyzed in the following steps to identify damaged areas.
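By way of illustration, a minimal preprocessing sketch in Python with OpenCV is given below. The 5x5 Gaussian kernel and the choice of equalizing only the luminance channel are assumptions; the embodiment leaves these settings to the implementer.

    import cv2

    def preprocess(bgr):
        """Denoise and equalize a captured film image (sketch; parameter choices are assumptions)."""
        # Gaussian filtering to suppress acquisition noise
        denoised = cv2.GaussianBlur(bgr, (5, 5), 0)
        # Equalize only the luminance (Y) channel so that colours are not distorted
        ycrcb = cv2.cvtColor(denoised, cv2.COLOR_BGR2YCrCb)
        ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
        return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)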
The main purpose of this embodiment is to detect damage on the film surface from the image data. For the image obtained in the steps above, a film damage detection model is therefore built: a feature vector is constructed for every pixel point from its transmittance and brightness value, the pixel points are cluster-analyzed, and the damaged area is extracted. The model is as follows.
102. Extract the dark channel image.
For the preprocessed RGB image, the transmittance of each pixel point is extracted and analyzed first and is used as a characteristic parameter for detecting damage of the automobile glass film. Specifically:
In this embodiment the film is treated as a layer of haze, so the preprocessed image follows the standard haze imaging model:
$I(x) = J(x)\,t(x) + A\,(1 - t(x))$
where $I(x)$ is the preprocessed RGB image (the hazy image), $J(x)$ is the black background image when no film is present (the haze-free image), obtained by capturing and processing the dark background under the same conditions without the film, $A$ is the atmospheric light value, and $t(x)$ is the transmittance at pixel point $x$.
Dividing both sides by $A$ and writing the model per channel gives the deformed model:
$\frac{I^c(x)}{A} = t(x)\,\frac{J^c(x)}{A} + 1 - t(x)$
Taking the minimum twice (over a local window and over the three colour channels) on both sides yields the final processing model:
$\min_{j \in \Omega(i)} \left( \min_{c} \frac{I^c(j)}{A} \right) = t_i \, \min_{j \in \Omega(i)} \left( \min_{c} \frac{J^c(j)}{A} \right) + 1 - t_i$
where $\Omega(i)$ is the filter window centered on pixel point $i$, $I^c(i)$ is the pixel value (channel value) of the $i$-th pixel point of image $I$ in channel $c$, $c$ denotes the R, G or B channel, $J^c(i)$ is the channel value of the $i$-th pixel point of image $J$ in channel $c$, and the transmittance is assumed constant within the window.
Next, dark channel processing is applied to the dark background image captured without the film, giving the corresponding dark channel image:
$J^{dark}(x) = \min_{y \in \Omega(x)} \left( \min_{c \in \{R,G,B\}} J^c(y) \right)$
where $J^{dark}$ is the dark channel image corresponding to image $J$, $x$ is a pixel point in the dark channel image, and $J^c(y)$ is the channel value of pixel point $y$ of image $J$ in channel $c$.
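A dark channel image can be computed as the per-pixel minimum over the three colour channels followed by a windowed minimum filter; a short sketch is shown below (the 15x15 window size is an assumption).

    import cv2
    import numpy as np

    def dark_channel(img, window=15):
        """Dark channel: minimum over the colour channels, then a local minimum over the window."""
        per_pixel_min = np.min(img.astype(np.float64), axis=2)
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (window, window))
        # Grayscale erosion implements the windowed minimum filter
        return cv2.erode(per_pixel_min, kernel)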
103. Obtain the RGB image of the minimum sub-block.
According to the dark channel prior, the dark channel of the haze-free image tends to zero:
$J^{dark}(x) \to 0$
and therefore
$\min_{j \in \Omega(i)} \left( \min_{c} \frac{J^c(j)}{A} \right) \to 0$
Combining the dark channel prior with the final processing model gives the transmittance of each pixel point of the image to be detected (the preprocessed RGB image):
$t_i = 1 - \min_{j \in \Omega(i)} \left( \min_{c \in \{R,G,B\}} \frac{I^c(j)}{A} \right) = 1 - \frac{I^{dark}(i)}{A}$
where $I^{dark}$ is the dark channel image corresponding to the preprocessed image to be detected and $A$ is the atmospheric light value.
The atmospheric light value is then calculated as follows:
extracting the dark channel image of the preprocessed RGB image;
taking the dark channel image as the initial sub-block, equally dividing it into a plurality of sub-blocks, calculating the weight of each sub-block, and selecting the sub-block with the largest weight as the new initial sub-block to be equally divided; iterating in this way, terminating the iteration when the area of an equally divided sub-block is smaller than an area threshold, and obtaining the RGB image of the minimum sub-block at termination.
first, the dark channel image is
Figure 853987DEST_PATH_IMAGE018
Are divided into a plurality of sub-blocks, set by the implementer, and the embodiment is set to divide the dark channel image into
Figure DEST_PATH_IMAGE063
Sub-block, setting sub-block weight calculation model:
Figure 755647DEST_PATH_IMAGE064
in the formula:
Figure 89546DEST_PATH_IMAGE003
represents the division into
Figure 320807DEST_PATH_IMAGE004
The weight of the individual sub-block is,
Figure 659646DEST_PATH_IMAGE004
represents the division into
Figure 973953DEST_PATH_IMAGE004
The number of the individual blocks is one,
Figure 293683DEST_PATH_IMAGE005
Figure 266318DEST_PATH_IMAGE006
is a first
Figure 161462DEST_PATH_IMAGE004
All pixel point channels in individual block
Figure 413714DEST_PATH_IMAGE007
Corresponding to the standard deviation of the channel values,
Figure 191046DEST_PATH_IMAGE008
is a first
Figure 986613DEST_PATH_IMAGE004
All pixel point channels in each block
Figure 798580DEST_PATH_IMAGE007
The mean value of the corresponding channel values,
Figure 736580DEST_PATH_IMAGE009
represent
Figure 502673DEST_PATH_IMAGE007
Is an R channel, a G channel or a B channel.
The higher the weight of the sub-block is, the higher the brightness value of the corresponding sub-block is, and the smaller the gradient change of the pixel value of the pixel point in the sub-block is.
The sub-block with the largest weight is then divided again into the same number of sub-blocks, the weight of each new sub-block is calculated with the weight model, and a termination condition is set: the division stops when the area of a divided sub-block (the number of pixel points it contains) is smaller than an area threshold, which may be set by the implementer. The minimum sub-block obtained when the division terminates is recorded. In the dark channel image $I^{dark}$, the pixel points belonging to this minimum sub-block are set to 1 and all other pixel points are set to 0, giving a binary image corresponding to $I^{dark}$. Multiplying this binary image with the image to be detected $I$ yields the minimum sub-block of $I$ (the RGB image corresponding to the minimum sub-block), which is used as the region of interest (ROI) for calculating the atmospheric light value.
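A sketch of the iterative sub-block selection follows, applied here directly to the preprocessed RGB image. The patent gives the weight expression only as an embedded image, so the score used below (per-channel mean minus standard deviation, summed over R, G and B) is an assumption chosen to match the stated behaviour (brighter, more uniform blocks get larger weights); the 2x2 split and the default area threshold are likewise implementation choices.

    import numpy as np

    def find_min_subblock(img, area_threshold=200):
        """Iteratively split the current block 2x2, keep the highest-weight quadrant,
        and stop once the selected block's area drops below the threshold.
        Returns the block as (y0, y1, x0, x1)."""
        def weight(block):
            y0, y1, x0, x1 = block
            patch = img[y0:y1, x0:x1].astype(np.float64)
            # Assumed score: per-channel mean minus standard deviation, summed over R, G, B
            return float(np.sum(patch.mean(axis=(0, 1)) - patch.std(axis=(0, 1))))

        h, w = img.shape[:2]
        y0, y1, x0, x1 = 0, h, 0, w
        while (y1 - y0) * (x1 - x0) >= area_threshold and (y1 - y0) > 1 and (x1 - x0) > 1:
            ym, xm = (y0 + y1) // 2, (x0 + x1) // 2
            quadrants = [(y0, ym, x0, xm), (y0, ym, xm, x1),
                         (ym, y1, x0, xm), (ym, y1, xm, x1)]
            y0, y1, x0, x1 = max(quadrants, key=weight)
        return y0, y1, x0, x1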
104. Obtain the atmospheric light value.
The atmospheric light value is obtained from the channel values of each pixel point in the R, G and B channels of the minimum-sub-block RGB image. On this basis the atmospheric light value $A$ is calculated from $I_{roi}^c(k)$, the channel value of the $k$-th pixel point of the minimum-sub-block RGB image (the ROI) in channel $c$, where $c$ denotes the R, G or B channel and $N$ is the total number of pixel points in the minimum-sub-block RGB image.
The atmospheric light value at acquisition time is thus obtained and is used to calculate and analyze the transmittance of the image pixel points.
105. Obtain the transmittance of each pixel point.
The transmittance of each pixel point in the preprocessed RGB image is obtained from the atmospheric light value and the channel values of each pixel point in the R, G and B channels of the preprocessed RGB image: the obtained atmospheric light value is substituted into the transmittance formula above to compute the transmittance $t_i$ of every pixel point of the image to be detected, which serves as a characteristic parameter of the pixel point for identifying film damage.
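Continuing the sketch, the atmospheric light can be estimated from the ROI returned by find_min_subblock, after which the transmittance follows the dark-channel relation $t_i = 1 - I^{dark}(i)/A$. Averaging all channel values of the ROI to obtain $A$ is an assumption; the patent states only that $A$ is computed from the channel values of the ROI pixels. dark_channel is the helper from the earlier sketch.

    import numpy as np

    def atmospheric_light(img, roi):
        """Assumed estimator: mean of all R, G, B channel values inside the selected ROI."""
        y0, y1, x0, x1 = roi
        return float(img[y0:y1, x0:x1].astype(np.float64).mean())

    def transmittance(img, A, window=15):
        """Per-pixel transmittance t = 1 - dark_channel(I)/A (dark channel prior)."""
        return 1.0 - dark_channel(img, window) / max(A, 1e-6)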
106. Obtain the brightness value of each pixel point.
The preprocessed RGB image is processed to obtain the brightness value of each pixel point. When the automobile film to be detected covers the black background, the frosted film raises the overall brightness of the acquired image. The preprocessed image of the film to be detected is therefore converted to HSV, and the V component gives the brightness value $L_i$ of each pixel point, which is used as a further characteristic parameter for detecting film damage. HSV conversion is a well-known technique and is not described here.
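Extracting the per-pixel brightness via HSV conversion might look like the snippet below (OpenCV's V channel, scaled to [0, 1]; BGR channel order as produced by cv2.imread is assumed).

    import cv2

    def brightness(bgr):
        """Per-pixel brightness: V channel of the HSV representation, scaled to [0, 1]."""
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        return hsv[:, :, 2].astype("float64") / 255.0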
107. Obtain the characterization value of each pixel point.
The characterization value of each pixel point is obtained from its transmittance and brightness value in the preprocessed RGB image. Finally, a pixel characterization value is built from these characteristic parameters to describe each pixel point: the characterization value $F_i$ of the $i$-th pixel point in the preprocessed RGB image is computed from its transmittance $t_i$ and its brightness value $L_i$ together with a first model parameter and a second model parameter, both set by the implementer. The higher the characterization value of a pixel point, the more likely it is to be classified as a damaged pixel point.
On this basis the features of every pixel point of the image to be detected can be extracted and used to identify the damaged area.
108. Obtain a suspected damaged area.
All pixel points are clustered using the characterization value of each pixel point to obtain a suspected damaged area. After the characterization value of every pixel point has been obtained as above, the pixel points are cluster-analyzed on the basis of these values, and the resulting cluster categories are used to identify the damaged area. Many clustering algorithms exist and the clustering process itself is a known technique, so the implementer may choose the algorithm; this embodiment uses K-means, with K set by the implementer to K = 2. For the two cluster categories, the mean characterization value of each category is calculated as
$\bar{F}_z = \frac{1}{Z} \sum_{k=1}^{Z} F_{z,k}$
where $Z$ is the number of pixel points in category $z$ and $F_{z,k}$ is the characterization value of the $k$-th pixel point in category $z$. The category with the larger mean characterization value is taken as the category corresponding to film damage, and every connected domain formed by the pixel points of this category is taken as a suspected damaged area; the other category corresponds to normal film. The suspected damaged areas on the film surface are thus extracted.
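A sketch of the clustering step is given below. It assumes the per-pixel characterization values have already been computed (the exact combination of transmittance and brightness appears in the patent only as an embedded formula) and simply runs K-means with K = 2, keeping the cluster with the larger mean value as the suspected-damage mask.

    import numpy as np
    from sklearn.cluster import KMeans

    def suspected_damage_mask(char_values):
        """Cluster per-pixel characterization values into two groups and return a boolean
        mask of the cluster with the larger mean characterization value."""
        flat = char_values.reshape(-1, 1)
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(flat)
        means = [flat[labels == k].mean() for k in (0, 1)]
        damaged_label = int(np.argmax(means))
        return (labels == damaged_label).reshape(char_values.shape)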
109. Determine whether each pixel point is a damaged pixel point.
A feature vector is formed from the transmittance and the brightness value of each pixel point in the suspected damaged area; a basic feature vector is formed from the average transmittance and average brightness of all pixel points in the preprocessed RGB image; the confidence of each pixel point in the suspected damaged area is obtained from its feature vector and the basic feature vector, and whether the pixel point is a damaged pixel point is determined from this confidence.
The clustering step yields a preliminary suspected damaged area. All pixel points in this area are then subjected to a second, confidence-based damage analysis; restricting this analysis to the extracted suspected area reduces the amount of computation, avoids processing irrelevant pixel points, and preserves the accuracy of the damage identification. For the image to be detected, the mean transmittance $\bar{t}$ and the mean brightness $\bar{L}$ of all its pixel points are first computed and form the basic feature vector $B = (\bar{t}, \bar{L})$, which is used to calculate the damage confidence of each pixel point in the suspected damaged area. The similarity $S_k$ between the feature vector $V_k = (t_k, L_k)$ of the $k$-th pixel point in the suspected damaged area and the basic feature vector $B$ is then calculated.
The larger this similarity, the more similar the pixel point is to the image as a whole and the less likely it is to be damaged. A damage confidence model is therefore built on the similarity: the damage confidence $C_k$ of the $k$-th pixel point in the suspected damaged area is computed from its similarity $S_k$ and a confidence model parameter, which may be set by the implementer and is set to 0.5 in this embodiment. The model is normalized so that its value lies in [0, 1]; the larger the value, the greater the damage confidence of the corresponding pixel point. The damage confidence of every pixel point in the suspected damaged area is calculated and a confidence threshold is set by the implementer. When the damage confidence of a pixel point exceeds the threshold, the pixel point is considered a damaged pixel point with high confidence and is marked as damaged; otherwise it is a normal pixel point.
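The similarity and confidence expressions appear in the patent only as embedded images, so the sketch below substitutes assumed forms purely to illustrate the filtering step: similarity as an exponential of the distance between the pixel feature vector (t_k, L_k) and the basic feature vector, and confidence decaying exponentially with similarity. The threshold value is also an assumption.

    import numpy as np

    def damage_pixels(t, L, suspect_mask, delta=0.5, conf_threshold=0.8):
        """Re-check suspected pixels. Both the similarity and the confidence forms below are
        assumptions standing in for the patent's (image-only) expressions."""
        base = np.array([t.mean(), L.mean()])                       # basic feature vector (t_bar, L_bar)
        feats = np.stack([t[suspect_mask], L[suspect_mask]], axis=1)
        similarity = np.exp(-np.linalg.norm(feats - base, axis=1))  # assumed similarity, in (0, 1]
        confidence = np.exp(-similarity / delta)                    # assumed confidence model
        damaged = np.zeros_like(suspect_mask, dtype=bool)
        damaged[suspect_mask] = confidence > conf_threshold
        return damaged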
110. Obtain the damaged area.
The damaged area is obtained from the determined damaged pixel points. In this way every pixel point of the image to be detected is classified, and the damage condition of the film surface is analyzed. Because the damage decision for each pixel point is based on its extracted features, the damaged area can be extracted accurately, and the position and area of the damage on the film surface can be reported, providing a reference for the operators and facilitating repair of the film.
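The individual damaged regions, together with their positions and areas for reporting, can be recovered from the damaged-pixel mask with a connected-component pass, for example:

    import cv2
    import numpy as np

    def damaged_regions(damaged_mask, min_area=5):
        """Group damaged pixels into connected regions and report a bounding box and area for each."""
        n, labels, stats, _ = cv2.connectedComponentsWithStats(
            damaged_mask.astype(np.uint8), connectivity=8)
        regions = []
        for i in range(1, n):  # label 0 is the background
            x, y, w, h, area = stats[i]
            if area >= min_area:
                regions.append({"bbox": (int(x), int(y), int(w), int(h)), "area": int(area)})
        return regions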
The method detects damage of the automobile film from image data: it extracts the damaged area from the transmittance and brightness information of each pixel point, and then builds a damage-confidence model to re-examine the pixel points in that area, so the damage on the film surface can be detected accurately. Because the detection is image-based, secondary damage to the film surface caused by human contact is avoided, and the method achieves high detection speed and high accuracy.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (9)

1. A method for detecting damage of an automobile film, characterized by comprising:
preprocessing an acquired image of the film laid over a dark background to obtain a preprocessed RGB image;
extracting a dark channel image of the preprocessed RGB image;
taking the dark channel image as the initial sub-block, equally dividing it into a plurality of sub-blocks, calculating the weight of each sub-block, and selecting the sub-block with the largest weight as the new initial sub-block to be equally divided; iterating in this way, terminating the iteration when the area of an equally divided sub-block is smaller than an area threshold, and obtaining the RGB image of the minimum sub-block at termination;
obtaining an atmospheric light value from the channel values of each pixel point in the R, G and B channels of the minimum-sub-block RGB image;
obtaining the transmittance of each pixel point in the preprocessed RGB image from the atmospheric light value and the channel values of each pixel point in the R, G and B channels of the preprocessed RGB image;
processing the preprocessed RGB image to obtain the brightness value of each pixel point;
obtaining a characterization value of each pixel point from the transmittance and the brightness value of each pixel point in the preprocessed RGB image;
clustering all pixel points using the characterization value of each pixel point to obtain a suspected damaged area;
forming a feature vector from the transmittance and the brightness value of each pixel point in the suspected damaged area; forming a basic feature vector from the average transmittance and average brightness of all pixel points in the preprocessed RGB image; obtaining the confidence of each pixel point in the suspected damaged area using the obtained feature vectors and the basic feature vector, and determining whether a pixel point is a damaged pixel point according to its confidence;
and obtaining the damaged area from the determined damaged pixel points.
2. The method for detecting damage of the automobile film according to claim 1, wherein clustering all pixel points using the characterization value of each pixel point to obtain the suspected damaged area comprises:
clustering all pixel points in the preprocessed RGB image according to the magnitude of the characterization value of each pixel point to obtain two clusters;
calculating the mean of the characterization values of all pixel points in each cluster from the characterization values of the pixel points, selecting the cluster with the larger mean, and taking all pixel points in that cluster as the suspected damaged area of the film.
3. The method for detecting damage of the automobile film according to claim 1, wherein the weight $W_j$ of the $j$-th sub-block is computed from $\sigma_j^c$, the standard deviation of the channel-$c$ values of all pixel points in the $j$-th sub-block, and $\mu_j^c$, the mean of those channel values, where $c$ denotes the R, G or B channel.
4. The method for detecting damage of the automobile film according to claim 1, wherein the transmittance of a pixel point is expressed as:
$t_i = 1 - \min_{j \in \Omega(i)} \left( \min_{c \in \{R,G,B\}} \frac{I^c(j)}{A} \right)$
where $t_i$ denotes the transmittance of the $i$-th pixel point, $I^c(j)$ denotes the channel value of the $j$-th pixel point of the preprocessed RGB image in channel $c$, $A$ denotes the atmospheric light value, $c$ denotes the R, G or B channel, the min-min term corresponds to the dark channel image of the preprocessed RGB image normalized by $A$, and $\Omega(i)$ denotes the filter window centered on pixel point $i$.
5. The method for detecting damage of the automobile film according to claim 4, wherein the atmospheric light value is obtained from $I_{roi}^c(k)$, the channel value of the $k$-th pixel point of the minimum-sub-block RGB image in the R, G and B channels, $N$ denoting the total number of pixel points in the minimum-sub-block RGB image and $k$ indexing its pixel points.
6. The method for detecting damage of the automobile film according to claim 1, wherein the characterization value $F_i$ of the $i$-th pixel point in the preprocessed RGB image is computed from its transmittance and its brightness value $L_i$ together with a first model parameter and a second model parameter.
7. The method for detecting the damage of the automobile film according to claim 1, wherein the method for obtaining the confidence of each pixel point in the suspected damaged area by using the obtained feature vector and the basic feature vector comprises the following steps:
calculating the similarity of the feature vector and the basic feature vector of each pixel point in the suspected damage area according to the obtained feature vector and the basic feature vector;
and obtaining the confidence of each pixel point in the suspected damaged area according to the similarity of the feature vector of each pixel point in the suspected damaged area and the basic feature vector.
8. The method for detecting damage of the automobile film according to claim 7, wherein the similarity $S_k$ between the feature vector $V_k$ of the $k$-th pixel point in the suspected damaged area and the basic feature vector $B$ is computed from the two vectors.
9. The method for detecting damage of the automobile film according to claim 8, wherein the confidence $C_k$ of the $k$-th pixel point in the suspected damaged area is computed from its similarity $S_k$ and a confidence model parameter.
CN202211244738.3A 2022-10-12 2022-10-12 Method for detecting damage of automobile film Active CN115311288B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211244738.3A CN115311288B (en) 2022-10-12 2022-10-12 Method for detecting damage of automobile film

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211244738.3A CN115311288B (en) 2022-10-12 2022-10-12 Method for detecting damage of automobile film

Publications (2)

Publication Number Publication Date
CN115311288A true CN115311288A (en) 2022-11-08
CN115311288B CN115311288B (en) 2023-03-24

Family

ID=83867792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211244738.3A Active CN115311288B (en) 2022-10-12 2022-10-12 Method for detecting damage of automobile film

Country Status (1)

Country Link
CN (1) CN115311288B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110070122A (en) * 2019-04-15 2019-07-30 沈阳理工大学 A kind of convolutional neural networks blurred picture classification method based on image enhancement
CN113933828A (en) * 2021-10-19 2022-01-14 上海大学 Unmanned ship environment self-adaptive multi-scale target detection method and system
CN114170208A (en) * 2021-12-14 2022-03-11 武汉福旺家包装有限公司 Paper product defect detection method based on artificial intelligence
CN113989279A (en) * 2021-12-24 2022-01-28 武汉华康龙兴工贸有限公司 Plastic film quality detection method based on artificial intelligence and image processing
CN115082361A (en) * 2022-08-23 2022-09-20 山东国晟环境科技有限公司 Turbid water body image enhancement method based on image processing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
邱东芳 (Qiu Dongfang) et al.: "Dark channel dehazing with adaptive estimation of transmittance and atmospheric light", Journal of Computer Applications *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116630311B (en) * 2023-07-21 2023-09-19 聊城市瀚格智能科技有限公司 Pavement damage identification alarm method for highway administration

Also Published As

Publication number Publication date
CN115311288B (en) 2023-03-24

Similar Documents

Publication Publication Date Title
CN113989279B (en) Plastic film quality detection method based on artificial intelligence and image processing
CN110097034B (en) Intelligent face health degree identification and evaluation method
CN110349126B (en) Convolutional neural network-based marked steel plate surface defect detection method
CN115294113B (en) Quality detection method for wood veneer
CN113592861B (en) Bridge crack detection method based on dynamic threshold
CN115908411B (en) Concrete curing quality analysis method based on visual detection
CN114494210B (en) Plastic film production defect detection method and system based on image processing
CN110210448B (en) Intelligent face skin aging degree identification and evaluation method
CN114549497B (en) Method for detecting surface defects of walking board based on image recognition and artificial intelligence system
CN113935666B (en) Building decoration wall tile abnormity evaluation method based on image processing
CN107490582B (en) Assembly line workpiece detection system
CN108563979B (en) Method for judging rice blast disease conditions based on aerial farmland images
CN116645367B (en) Steel plate cutting quality detection method for high-end manufacturing
CN114757900A (en) Artificial intelligence-based textile defect type identification method
CN112907519A (en) Metal curved surface defect analysis system and method based on deep learning
CN114118144A (en) Anti-interference accurate aerial remote sensing image shadow detection method
CN114494179A (en) Mobile phone back damage point detection method and system based on image recognition
CN117372432B (en) Electronic cigarette surface defect detection method and system based on image segmentation
CN116758045B (en) Surface defect detection method and system for semiconductor light-emitting diode
CN115311288B (en) Method for detecting damage of automobile film
CN114037691A (en) Carbon fiber plate crack detection method based on image processing
CN116703894A (en) Lithium battery diaphragm quality detection system
CN116883408A (en) Integrating instrument shell defect detection method based on artificial intelligence
CN116402822B (en) Concrete structure image detection method and device, electronic equipment and storage medium
CN116152234B (en) Template end face defect identification method based on image processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant