CN115311288B - Method for detecting damage of automobile film - Google Patents

Method for detecting damage of automobile film Download PDF

Info

Publication number
CN115311288B
CN115311288B CN202211244738.3A CN202211244738A CN115311288B CN 115311288 B CN115311288 B CN 115311288B CN 202211244738 A CN202211244738 A CN 202211244738A CN 115311288 B CN115311288 B CN 115311288B
Authority
CN
China
Prior art keywords
pixel point
value
channel
rgb image
feature vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211244738.3A
Other languages
Chinese (zh)
Other versions
CN115311288A (en
Inventor
郭九周
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Moshi Intelligent Technology Co ltd
Original Assignee
Jiangsu Moshi Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Moshi Intelligent Technology Co ltd filed Critical Jiangsu Moshi Intelligent Technology Co ltd
Priority to CN202211244738.3A priority Critical patent/CN115311288B/en
Publication of CN115311288A publication Critical patent/CN115311288A/en
Application granted granted Critical
Publication of CN115311288B publication Critical patent/CN115311288B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/0008Industrial image inspection checking presence/absence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/60Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the field of automobile film damage detection, and provides an automobile film damage detection method, which comprises the following steps: obtaining a preprocessed RGB image; extracting a dark channel image; obtaining an RGB image of the corresponding minimum sub-block when the image is terminated; obtaining an atmospheric light value; obtaining the transmissivity of each pixel point in the preprocessed RGB image; obtaining the brightness value of each pixel point; obtaining a representation value of each pixel point; obtaining a suspected damaged area; determining whether the pixel point is a damaged pixel point; and obtaining a damaged area through the determined damaged pixel points. The invention adopts the image-based method to carry out damage detection, and has the effects of high detection speed and high accuracy.

Description

Method for detecting damage of automobile film
Technical Field
The invention relates to the field of automobile film damage detection, in particular to an automobile film damage detection method.
Background
The automobile film mainly has the functions of blocking ultraviolet rays, blocking partial heat, decorating the appearance of a vehicle and the like. However, when the automobile film is damaged or broken during the production and sale processes, the damaged film may be stuck to the surface of the automobile due to human detection negligence, which may affect the appearance of the automobile film and may not effectively protect the surface of the automobile. Therefore, the detection of the damage of the automobile film before the film pasting is a very critical step.
When the automobile film is detected manually, the detection precision is low, comprehensive detection of the automobile film is difficult to achieve, and the method for detecting the damage of the automobile film is provided by the invention aiming at the problems of low efficiency and accuracy, large workload and the like of manual detection.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a method for detecting the damage of an automobile film.
In order to achieve the purpose, the invention adopts the following technical scheme that the method for detecting the damage of the automobile film comprises the following steps:
preprocessing the image covered on the dark background by the acquired film to obtain a preprocessed RGB image;
extracting a dark channel image of the preprocessed RGB image;
equally dividing the dark channel image serving as an initial sub-block into a plurality of sub-blocks, calculating the weight of each sub-block, selecting the sub-block with the maximum weight from all the sub-blocks as a new initial sub-block for equally dividing, sequentially iterating, terminating iteration when the area of the equally divided sub-block is smaller than an area threshold value, and obtaining the RGB image of the corresponding minimum sub-block when termination is performed;
obtaining an atmospheric light value through channel values of each pixel point in R, G and B channels in the RGB image of the minimum subblock;
obtaining the transmissivity of each pixel point in the pre-processed RGB image through the atmospheric light value and the channel value of each pixel point in the pre-processed RGB image in R, G and B channels;
processing the preprocessed RGB image to obtain the brightness value of each pixel point;
obtaining a characteristic value of each pixel point by preprocessing the transmissivity and the brightness value of each pixel point in the RGB image;
clustering all the pixel points by using the characteristic value of each pixel point to obtain a suspected damaged area;
forming a feature vector by the transmittance and the brightness value of each pixel point in the suspected damage area; forming a basic feature vector by preprocessing the transmittance average value and the brightness average value of all pixel points in the RGB image; obtaining the confidence coefficient of each pixel point in the suspected damaged area by using the obtained feature vector and the basic feature vector, and determining whether the pixel point is a damaged pixel point according to the confidence coefficient;
and obtaining a damaged area through the determined damaged pixel points.
Further, the method for detecting the damage of the automobile film utilizes the characteristic value of each pixel point to cluster all the pixel points, and the method for obtaining the suspected damaged area comprises the following steps:
clustering all pixel points in the preprocessed RGB image according to the magnitude of the characteristic value of each pixel point to obtain two clustering clusters;
calculating the mean value of the characteristic values of all the pixel points in each cluster through the characteristic values of all the pixel points, selecting the cluster with the large mean value of the characteristic values, and enabling all the pixel points in the cluster to form a suspected damaged area of the film.
Further, in the method for detecting the damage of the automobile film, the expression of the weight of each sub-block is as follows:
Figure 860198DEST_PATH_IMAGE002
in the formula:
Figure 100002_DEST_PATH_IMAGE003
indicates the divided up ^ th->
Figure 687209DEST_PATH_IMAGE004
The weight of each sub-block>
Figure 943528DEST_PATH_IMAGE004
Indicates the divided up ^ th->
Figure 576503DEST_PATH_IMAGE004
The number of the individual blocks is one,
Figure 100002_DEST_PATH_IMAGE005
,/>
Figure 536631DEST_PATH_IMAGE006
is a first->
Figure 613171DEST_PATH_IMAGE004
All pixel point channels in each sub block>
Figure 100002_DEST_PATH_IMAGE007
Corresponding to the standard deviation of the channel value>
Figure 586813DEST_PATH_IMAGE008
Is a first->
Figure 522014DEST_PATH_IMAGE004
All pixel point channels in each sub block>
Figure 179392DEST_PATH_IMAGE007
The mean value of the corresponding channel value->
Figure 100002_DEST_PATH_IMAGE009
Represents->
Figure 613784DEST_PATH_IMAGE007
Is an R channel, a G channel or a B channel.
Further, in the method for detecting damage of the automobile film, the transmittance expression of the pixel point is as follows:
Figure 100002_DEST_PATH_IMAGE011
in the formula:
Figure 576186DEST_PATH_IMAGE012
indicates the fifth->
Figure 100002_DEST_PATH_IMAGE013
The transmittance of each pixel point is->
Figure 754226DEST_PATH_IMAGE014
Indicating a pre-processed RGB image ^ h>
Figure 476895DEST_PATH_IMAGE013
The channel value of each pixel point on the channel c->
Figure DEST_PATH_IMAGE015
Represents the value of atmospheric light, is greater than or equal to>
Figure 82189DEST_PATH_IMAGE016
Represents->
Figure 100002_DEST_PATH_IMAGE017
Is R channel, G channel orB channel, <' > based on>
Figure 266308DEST_PATH_IMAGE018
Represents the corresponding dark channel image of the pre-processed RGB image, based on the RGB image data stored in the memory, based on the pre-processed RGB image data stored in the memory>
Figure 100002_DEST_PATH_IMAGE019
Represents that the pixel is based on>
Figure 451302DEST_PATH_IMAGE013
A central filtering window.
Further, the method for detecting the damage of the automobile film comprises the following steps of obtaining an atmospheric light value expression through channel values of each pixel point in R, G and B channels in the RGB image of the minimum sub-block:
Figure 100002_DEST_PATH_IMAGE021
in the formula:
Figure 34336DEST_PATH_IMAGE022
the fifth/or fifth in the RGB image representing the smallest subblock>
Figure 100002_DEST_PATH_IMAGE023
The channel value of each pixel point on the channel c, N represents the total number of the pixel points in the RGB image of the minimum subblock, and the value is greater than or equal to the value>
Figure 30105DEST_PATH_IMAGE023
The fifth/or fifth in the RGB image representing the smallest subblock>
Figure 13105DEST_PATH_IMAGE023
And (5) each pixel point.
Further, in the method for detecting the damage of the automobile film, the expression of the characterization value of the pixel point is as follows:
Figure 100002_DEST_PATH_IMAGE025
in the formula:
Figure 736210DEST_PATH_IMAGE026
indicating the ^ th or ^ th in the preprocessed RGB image>
Figure 241928DEST_PATH_IMAGE013
Characteristic value of each pixel point, and>
Figure 100002_DEST_PATH_IMAGE027
indicating the ^ th or ^ th in the preprocessed RGB image>
Figure 923445DEST_PATH_IMAGE013
The brightness value of each pixel point is greater or less>
Figure 82156DEST_PATH_IMAGE028
Represents a first parameter of the model, is greater than or equal to>
Figure DEST_PATH_IMAGE029
Representing a second parameter of the model.
Further, the method for detecting the damage of the automobile film by using the obtained characteristic vector and the basic characteristic vector to obtain the confidence of each pixel point in the suspected damaged area comprises the following steps:
calculating the similarity of the feature vector and the basic feature vector of each pixel point in the suspected damage area according to the obtained feature vector and the basic feature vector;
and obtaining the confidence of each pixel point in the suspected damaged area according to the similarity of the feature vector of each pixel point in the suspected damaged area and the basic feature vector.
Further, in the method for detecting damage of the automobile film, the expression of similarity between the feature vector of each pixel point in the suspected damaged area and the basic feature vector is as follows:
Figure 100002_DEST_PATH_IMAGE031
in the formula:
Figure 763279DEST_PATH_IMAGE032
indicates a suspected damaged area is ^ h->
Figure 100002_DEST_PATH_IMAGE033
Similarity of the feature vector of an individual pixel point and the base feature vector->
Figure 556792DEST_PATH_IMAGE034
Represents a base feature vector, is>
Figure 100002_DEST_PATH_IMAGE035
Indicates a suspected damaged area is ^ h->
Figure 707413DEST_PATH_IMAGE033
Characteristic vector of each pixel point, and->
Figure 133846DEST_PATH_IMAGE036
A feature vector is represented.
Further, in the method for detecting damage of the automobile film, the expression of the confidence of each pixel point in the suspected damaged area is as follows:
Figure 526650DEST_PATH_IMAGE038
in the formula:
Figure DEST_PATH_IMAGE039
indicates a suspected damaged area->
Figure 159624DEST_PATH_IMAGE033
Confidence of each pixel point, based on the confidence level of the pixel point>
Figure 776419DEST_PATH_IMAGE040
Representing confidence model parameters.
The invention has the beneficial effects that: the method carries out damage detection on the automobile film based on the image data, extracts the damaged area based on the transmissivity and the brightness information of each pixel point, establishes a damage confidence coefficient analysis model to carry out damage judgment on the pixel points in the damaged area again, can realize accurate detection on the damage condition of the surface of the automobile film, simultaneously carries out damage detection on the automobile film by adopting an image-based method, can avoid the secondary damage to the surface of the film caused by artificial contact, and has the effects of high detection speed, high accuracy and the like.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flow chart of the present embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1
An embodiment of a method for detecting a damaged film of an automobile of the present invention, as shown in fig. 1, includes:
the applicable scenarios of the embodiment are as follows: the colorless frosted automobile film is subjected to damage detection or comprehensive damage detection on the film before the automobile film is adhered. This embodiment is mainly applicable to and carries out the damage detection to colourless dull polish car pad pasting.
101. And obtaining a pre-processing RGB image.
Preprocessing an image of the acquired film covered on a dark background to obtain a preprocessed RGB image;
the embodiment mainly carries out damage detection to colorless dull polish pad pasting based on the characteristic information of the pixel of the image of gathering, consequently, this embodiment will wait to detect the pad pasting and cover on a dark background, then wait to detect the pad pasting top and deploy image acquisition equipment, carry out image acquisition to car glass pad pasting surface through the camera to follow-up damaged area to the pad pasting carries out accurate discernment. The setting of the camera position and the camera shooting range is set by an implementer according to the actual situation. In this embodiment, the camera is located right above the surface of the to-be-detected film, collects an orthographic image of the surface of the to-be-detected film, and the dark background implementer can select the dark background automatically in the actual application process, and the dark background is set as a pure black background in this embodiment.
After the front-view image of the surface of the film to be detected is collected, because the noise existing in the environment is more, a large amount of image noise points can be generated in the image collection process, and the quality of the image on the surface of the film is influenced. The preprocessing comprises image filtering denoising and image equalization processing, an implementer can select a corresponding existing image preprocessing method, a Gaussian filter is adopted to carry out noise removal on the image, and histogram equalization is adopted to process the image so as to eliminate the problems of uneven illumination of the surface of the collected film and the like. The specific pretreatment process is a known technology, and the embodiment is not described in detail.
Therefore, the high-quality image to be detected of the film can be obtained according to the method and used as a subsequent image to be detected for analyzing the surface image of the film so as to identify the damaged area.
The main purpose of this embodiment is to detect the damaged condition of the surface of the film through the image data, and therefore, for the image data obtained in the above steps, this embodiment will establish a damaged detection model of the film, construct a pixel feature vector based on the transmittance and luminance value information of each pixel, and perform cluster analysis on each pixel, so as to extract the damaged area, where the damaged detection model of the film specifically is:
102. dark channel images are extracted.
For the preprocessed RGB image data, the transmittance of each pixel point is extracted and analyzed first in this embodiment, and the extracted and analyzed transmittance is used as a characteristic parameter for detecting the damage of the automobile glass film, specifically:
in this embodiment, the film is analyzed as fog, and a model is formed according to the existing fog pattern, so that the image model after the preprocessing collected in this embodiment can be expressed as:
Figure 955728DEST_PATH_IMAGE042
in the formula:
Figure DEST_PATH_IMAGE043
representing a pre-processed RGB image, i.e. a foggy image, is taken>
Figure 840638DEST_PATH_IMAGE044
Represents a black background image without a covering film, i.e., a fog-free image, which is an RGB image obtained by image-capturing and processing a dark background in the same environment as the film is not yet placed, A represents an atmospheric light value, and/or>
Figure DEST_PATH_IMAGE045
Representing the transmittance at the pixel point.
Further, the preprocessed image model is subjected to deformation processing:
Figure DEST_PATH_IMAGE047
then, carrying out minimum value operation twice on the images in the deformed model to obtain a final processing model:
Figure DEST_PATH_IMAGE049
in the formula:
Figure 169595DEST_PATH_IMAGE050
based on the pixel point->
Figure 973603DEST_PATH_IMAGE013
A central filter window->
Figure DEST_PATH_IMAGE051
Is an image->
Figure 63044DEST_PATH_IMAGE043
A fifth or fifth letter>
Figure 594388DEST_PATH_IMAGE013
The pixel value (i.e. channel value) of each pixel point on the c channel, wherein c represents the R channel, the G channel or the B channel, and/or the corresponding channel>
Figure 30049DEST_PATH_IMAGE052
Is an image->
Figure 430724DEST_PATH_IMAGE044
Is/are>
Figure 381362DEST_PATH_IMAGE013
The channel value of each pixel point in the c channel.
Then, the embodiment performs dark channel processing on the dark background image when the film is not covered to obtain a corresponding dark channel image:
Figure 654081DEST_PATH_IMAGE054
in the formula:
Figure DEST_PATH_IMAGE055
is an image->
Figure 101505DEST_PATH_IMAGE044
The corresponding dark channel image, x is the pixel point in the dark channel image, and is greater or less than>
Figure 965425DEST_PATH_IMAGE056
Is an image->
Figure 341042DEST_PATH_IMAGE044
Is/are>
Figure 978304DEST_PATH_IMAGE013
The channel value of each pixel point on the c channel.
103. The RGB image of the smallest subblock is obtained.
According to the dark channel prior algorithm, the method comprises the following steps:
Figure 857398DEST_PATH_IMAGE058
namely, the method comprises the following steps:
Figure 423377DEST_PATH_IMAGE060
according to the dark channel prior and the final processing model, a transmittance formula of each pixel point in the image to be detected (the preprocessed RGB image) of the film can be obtained:
Figure DEST_PATH_IMAGE061
in the formula:
Figure 974707DEST_PATH_IMAGE062
represents a dark channel image corresponding to the pre-processed image to be detected of the film, and then is used for judging whether the image is dark or bright>
Figure 589228DEST_PATH_IMAGE015
Indicating the value of atmospheric light.
Then, calculating the atmospheric light value, wherein the calculation of the atmospheric light value is set as follows:
extracting a dark channel image of the preprocessed RGB image;
equally dividing the dark channel image serving as an initial sub-block into a plurality of sub-blocks, calculating the weight of each sub-block, selecting the sub-block with the maximum weight from all the sub-blocks as a new initial sub-block for equally dividing, sequentially iterating, terminating iteration when the area of the equally divided sub-block is smaller than an area threshold value, and obtaining the RGB image of the corresponding minimum sub-block when termination is performed;
first, the dark channel image is
Figure 853987DEST_PATH_IMAGE018
Equally divided into a plurality of sub-blocks, set by the implementer, the embodiment is set to divide the dark channel image into ^ and/or-based>
Figure DEST_PATH_IMAGE063
Sub-block, setting sub-block weight calculation model:
Figure 755647DEST_PATH_IMAGE064
in the formula:
Figure 89546DEST_PATH_IMAGE003
indicates the divided up ^ th->
Figure 320807DEST_PATH_IMAGE004
The weight of each sub-block>
Figure 659646DEST_PATH_IMAGE004
Indicates the divided up ^ th->
Figure 973953DEST_PATH_IMAGE004
The number of the individual blocks is one,
Figure 293683DEST_PATH_IMAGE005
,/>
Figure 266318DEST_PATH_IMAGE006
is the first->
Figure 161462DEST_PATH_IMAGE004
All pixel point channels in each sub-block>
Figure 413714DEST_PATH_IMAGE007
Corresponding to the standard deviation of the channel value>
Figure 191046DEST_PATH_IMAGE008
Is the first->
Figure 986613DEST_PATH_IMAGE004
All pixel point channels in each sub-block>
Figure 798580DEST_PATH_IMAGE007
The mean value of the corresponding channel value->
Figure 736580DEST_PATH_IMAGE009
Represents->
Figure 502673DEST_PATH_IMAGE007
Is an R channel, a G channel or a B channel.
The higher the weight of the sub-block is, the higher the brightness value of the corresponding sub-block is, and the smaller the gradient change of the pixel value of the pixel point in the sub-block is.
Further, the sub-block with the largest weight is processed again
Figure 331958DEST_PATH_IMAGE063
And sub-block division, namely calculating the weight of each sub-block according to the sub-block weight calculation model, setting a sub-block division termination condition, stopping sub-block division when the area of the divided sub-blocks is smaller than an area threshold, wherein the area of the sub-blocks is the sum of the number of pixels in the sub-blocks, and the area threshold can be set by an implementer. So far, the minimum sub-block corresponding to the sub-block division termination can be obtained and recorded as->
Figure DEST_PATH_IMAGE065
. The dark channel image is then->
Figure 965808DEST_PATH_IMAGE018
Is greater than or equal to>
Figure 871447DEST_PATH_IMAGE065
Setting the pixel value of the corresponding pixel point to be 1, setting the pixel values of the pixel points at other positions to be zero, and obtaining the dark channel image
Figure 357792DEST_PATH_IMAGE018
The corresponding binary image is combined with the image to be detected>
Figure 757812DEST_PATH_IMAGE043
Multiplying to obtain the image to be detected>
Figure 91841DEST_PATH_IMAGE043
And the corresponding minimum sub-block (the RGB image corresponding to the minimum sub-block) is used as an ROI area for calculating the atmospheric light value.
104. And obtaining the atmospheric light value.
Obtaining an atmospheric light value through channel values of each pixel point in R, G and B channels in the RGB image of the minimum subblock;
based on this, the atmospheric light value is calculated:
Figure 417649DEST_PATH_IMAGE066
in the formula:
Figure 876443DEST_PATH_IMAGE015
represents an atmospheric light value, <' > based on>
Figure DEST_PATH_IMAGE067
Fifth ÷ in an RGB image (i.e. the ROI area) representing the smallest sub-block>
Figure 750595DEST_PATH_IMAGE023
The pixel value of each pixel point on the channel c, wherein the channel c represents the channel R, the channel G or the channel B, the N represents the total number of the pixel points in the RGB image of the minimum subblock, and the value is greater or less than the threshold value>
Figure 673551DEST_PATH_IMAGE023
The fifth/or fifth in the RGB image representing the smallest subblock>
Figure 671725DEST_PATH_IMAGE023
And (5) each pixel point.
Therefore, the atmospheric light value during image acquisition can be calculated and used for calculating and analyzing the transmissivity of the image pixel points.
105. And obtaining the transmissivity of each pixel point.
Obtaining the transmissivity of each pixel point in the pre-processed RGB image through the atmospheric light value and the channel value of each pixel point in the pre-processed RGB image in R, G and B channels;
substituting the obtained atmospheric light value into the transmittance formula of the pixel points to calculate the transmittance of each pixel point in the image to be detected of the film
Figure 414553DEST_PATH_IMAGE012
The characteristic parameters are used as the characteristic parameters of the pixel points and are used for identifying and detecting the damage of the adhesive film;
106. and obtaining the brightness value of each pixel point.
Processing the preprocessed RGB images to obtain the brightness value of each pixel point;
further, in the embodiment, when the automobile film to be detected is covered on the black background, the brightness value of the collected image is wholly improved due to the covering of the frosted film, and for the preprocessed film image to be detected, the embodiment performs HSV conversion on the preprocessed film image to obtain the brightness value of each pixel point
Figure 920490DEST_PATH_IMAGE027
The HSV is used as a characteristic parameter for detecting the breakage of the film, the known technology is converted from HSV, and relevant explanation is not provided in the embodiment;
107. and obtaining the characteristic value of each pixel point.
Obtaining a characteristic value of each pixel point by preprocessing the transmissivity and the brightness value of each pixel point in the RGB image;
and finally, establishing a pixel point characteristic value based on the characteristic parameters, wherein the pixel point characteristic value is used for carrying out characteristic description on the pixel point and is as follows:
Figure 901215DEST_PATH_IMAGE068
in the formula:
Figure 67361DEST_PATH_IMAGE026
indicating the ^ th or ^ th in the preprocessed RGB image>
Figure 828644DEST_PATH_IMAGE013
The characteristic value of each pixel point is judged and judged>
Figure 607113DEST_PATH_IMAGE012
Representing the second in a pre-processed RGB image
Figure 504662DEST_PATH_IMAGE013
The transmittance of each pixel point is->
Figure 844638DEST_PATH_IMAGE027
Indicating the ^ th or ^ th in the preprocessed RGB image>
Figure 562059DEST_PATH_IMAGE013
The brightness value of each pixel point is greater or less>
Figure 144219DEST_PATH_IMAGE028
Represents a first parameter of the model, is greater than or equal to>
Figure 896274DEST_PATH_IMAGE029
Represents a second parameter of the model, which is set by the implementer, which is set to ≥ by the present embodiment>
Figure DEST_PATH_IMAGE069
. The higher the characterization value of the pixel is, the higher the probability of being classified as a damaged pixel is.
Therefore, the method can be used for extracting the feature vectors of all the pixel points of the image to be detected and identifying the damaged area.
108. Obtaining a suspected damaged area.
Clustering all the pixel points by using the characteristic value of each pixel point to obtain a suspected damaged area;
after the characteristic value of each pixel point is obtained based on the method, the embodiment performs cluster analysis on each pixel point based on the characteristic value, and obtains different cluster categories to realize identification of the damaged area. The existing clustering algorithm is many, the clustering process is the existing known technology, the clustering process is not in the protection range of the embodiment and is not elaborated, an implementer can select the clustering algorithm by himself, the embodiment adopts K-means to perform clustering analysis, the implementer of the K value sets the K value to be K =2 by himself, and the embodiment sets the K value to be K =2. For two cluster categories, the present embodiment will calculate the mean value of the characterization values of all the pixels in the two categories
Figure 800295DEST_PATH_IMAGE070
Wherein Z is the number of pixels in the corresponding category, based on the value of the pixel in the corresponding category>
Figure DEST_PATH_IMAGE071
Represents a category, <' > is>
Figure 162268DEST_PATH_IMAGE072
Represents->
Figure 298851DEST_PATH_IMAGE071
Is at the very beginning of a category>
Figure 420260DEST_PATH_IMAGE072
After obtaining the mean values of the characterization values corresponding to the two categories, the present embodiment uses the category with the larger mean value as the category corresponding to the damage of the film, uses each connected domain composed of the pixels included in the category as a suspected damaged area, and uses the other category as the category corresponding to the normal film, so that the extraction of the suspected damaged area on the surface of the film can be realized.
109. It is determined whether the pixel is a broken pixel.
Forming a feature vector by the transmittance and the brightness value of each pixel point in the suspected damage area; forming a basic feature vector by preprocessing the transmittance average value and the brightness average value of all pixel points in the RGB image; obtaining the confidence coefficient of each pixel point in the suspected damaged area by using the obtained feature vector and the basic feature vector, and determining whether the pixel point is a damaged pixel point according to the confidence coefficient;
through the clustering process, a suspected damaged area can be preliminarily obtained, further, in the embodiment, damage confidence coefficient analysis is carried out on all pixel points in the suspected damaged area again, and only the extracted pixel points in the suspected damaged area are subjected to damage confidence coefficient analysis, so that the calculation amount can be effectively reduced, the calculation of irrelevant pixel points is avoided, and the precision of damage identification is ensured. For an image to be detected, firstly, obtaining a characteristic parameter mean value of all pixel points in the image to be detected
Figure DEST_PATH_IMAGE073
And
Figure 240055DEST_PATH_IMAGE074
constructing a base feature vector->
Figure DEST_PATH_IMAGE075
And the method is used for calculating the damage confidence of each pixel point in the suspected damage area. And then calculating the similarity between the feature vector of each pixel point in the suspected damaged area and the basic feature vector:
Figure DEST_PATH_IMAGE077
in the formula:
Figure 276275DEST_PATH_IMAGE032
indicates a suspected damaged area is ^ h->
Figure 216550DEST_PATH_IMAGE033
Similarity of the feature vector of an individual pixel point and the base feature vector->
Figure 926886DEST_PATH_IMAGE034
Represents a base feature vector, is>
Figure 28834DEST_PATH_IMAGE035
Indicates a suspected damaged area is ^ h->
Figure 215665DEST_PATH_IMAGE033
Characteristic vector of each pixel point, and->
Figure 694051DEST_PATH_IMAGE078
,/>
Figure 727735DEST_PATH_IMAGE036
The feature vector is represented.
The larger the function value is, the more similar the function value is, and the smaller the possibility that the corresponding pixel point is damaged is. Establishing a damage confidence coefficient model based on the similarity index, and calculating the damage confidence coefficient of the pixel points in the suspected damage area, wherein the damage confidence coefficient model is as follows:
Figure 220158DEST_PATH_IMAGE080
in the formula:
Figure 886763DEST_PATH_IMAGE039
indicates a suspected damaged area is ^ h->
Figure 418107DEST_PATH_IMAGE033
The damage confidence of each pixel point is judged>
Figure 539254DEST_PATH_IMAGE040
The confidence model parameter is represented, and the embodiment sets the confidence model parameter to 0.5, which can be set by an implementer. Normalizing the model to ensure that the function value is [0,1], wherein the higher the function value of the model is, the greater the damage confidence of the corresponding pixel point is, and the damage confidence of each pixel point in the suspected damage area is further processed by the embodimentLine count and set confidence threshold->
Figure DEST_PATH_IMAGE081
And when the damage confidence coefficient is higher than the threshold value, the confidence coefficient of the pixel point which is the damage pixel point is considered to be higher, the pixel point is taken as the damage pixel point, and otherwise, the pixel point is a normal pixel point. The confidence threshold is set at the discretion of the practitioner, which the present embodiment sets to @>
Figure 701114DEST_PATH_IMAGE082
110. A damaged area is obtained.
And obtaining a damaged area through the determined damaged pixel points.
Therefore, all the pixel points of the image to be detected can be classified and judged, and the damage condition of the surface of the film can be analyzed. The damage condition of each pixel point is judged based on the extracted feature vectors of the pixel points, the damaged area can be accurately extracted, the damaged position and the damaged area on the surface of the film can be prompted, reference opinions can be provided for related operators, and the repair and processing of the film by the operators are facilitated.
The method carries out damage detection on the automobile film based on the image data, extracts the damaged area based on the transmissivity and the brightness information of each pixel point, establishes a damage confidence coefficient analysis model to carry out damage judgment on the pixel points in the damaged area again, can realize accurate detection on the damage condition of the surface of the automobile film, simultaneously carries out damage detection on the automobile film by adopting an image-based method, can avoid the secondary damage to the surface of the film caused by artificial contact, and has the effects of high detection speed, high accuracy and the like.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (8)

1. A method for detecting breakage of an automobile film is characterized by comprising the following steps:
preprocessing the image covered on the dark background by the acquired film to obtain a preprocessed RGB image;
extracting a dark channel image of the preprocessed RGB image;
equally dividing the dark channel image serving as an initial subblock into a plurality of subblocks, calculating the weight of each subblock, selecting the subblock with the maximum weight from all the subblocks as a new initial subblock for equally dividing, sequentially iterating, and terminating iteration when the area of the equally divided subblock is smaller than an area threshold value to obtain an RGB image of the corresponding minimum subblock at the termination time;
obtaining an atmospheric light value through channel values of each pixel point in R, G and B channels in the RGB image of the minimum subblock;
obtaining the transmissivity of each pixel point in the pre-processed RGB image through the atmospheric light value and the channel value of each pixel point in the pre-processed RGB image in R, G and B channels;
processing the preprocessed RGB image to obtain the brightness value of each pixel point;
obtaining a characteristic value of each pixel point by preprocessing the transmissivity and the brightness value of each pixel point in the RGB image;
clustering all the pixel points by using the characteristic value of each pixel point to obtain a suspected damaged area;
forming a feature vector by the transmittance and the brightness value of each pixel point in the suspected damage area; forming a basic feature vector by preprocessing the transmittance average value and the brightness average value of all pixel points in the RGB image; obtaining the confidence coefficient of each pixel point in the suspected damaged area by using the obtained feature vector and the basic feature vector, and determining whether the pixel point is a damaged pixel point according to the confidence coefficient;
obtaining a damaged area through the determined damaged pixel points;
the expression of the weight of each sub-block is:
Figure DEST_PATH_IMAGE001
in the formula:
Figure 279450DEST_PATH_IMAGE002
represents the division into
Figure DEST_PATH_IMAGE003
The weight of the individual sub-block is,
Figure 164361DEST_PATH_IMAGE003
represents the division into
Figure 745515DEST_PATH_IMAGE003
The number of the individual blocks is one,
Figure 782479DEST_PATH_IMAGE004
Figure DEST_PATH_IMAGE005
is as follows
Figure 386767DEST_PATH_IMAGE003
All pixel point channels in individual block
Figure 668844DEST_PATH_IMAGE006
Corresponding to the standard deviation of the channel values,
Figure DEST_PATH_IMAGE007
is as follows
Figure 9564DEST_PATH_IMAGE003
All pixel point channels in individual block
Figure 984473DEST_PATH_IMAGE006
The mean value of the corresponding channel values,
Figure 872795DEST_PATH_IMAGE008
to represent
Figure 191518DEST_PATH_IMAGE006
Is an R channel, a G channel, or a B channel;
and the clustering adopts a Kmeans clustering algorithm.
2. The method for detecting the damage of the automobile film according to claim 1, wherein the method for clustering all the pixels by using the characterization value of each pixel to obtain the suspected damaged area comprises the following steps:
clustering all pixel points in the preprocessed RGB image according to the magnitude of the characteristic value of each pixel point to obtain two clustering clusters;
calculating the mean value of the characteristic values of all the pixel points in each cluster through the characteristic values of all the pixel points, selecting the cluster with the large mean value of the characteristic values, and enabling all the pixel points in the cluster to form a suspected damaged area of the film.
3. The method for detecting the damage of the automobile film according to claim 1, wherein the transmittance expression of the pixel points is as follows:
Figure DEST_PATH_IMAGE009
in the formula:
Figure 684948DEST_PATH_IMAGE010
is shown as
Figure DEST_PATH_IMAGE011
The transmittance of each of the pixels is measured by the transmittance sensor,
Figure 729959DEST_PATH_IMAGE012
representing pre-processed RGB image
Figure 105576DEST_PATH_IMAGE011
The channel value of each pixel point on the c channel,
Figure DEST_PATH_IMAGE013
which is indicative of the value of the atmospheric light,
Figure 775461DEST_PATH_IMAGE014
to represent
Figure 185714DEST_PATH_IMAGE016
Is an R channel, a G channel or a B channel,
Figure DEST_PATH_IMAGE017
representing the dark channel image corresponding to the pre-processed RGB image,
Figure 407485DEST_PATH_IMAGE018
representing by pixel points
Figure 270399DEST_PATH_IMAGE011
A central filtering window.
4. The automobile film sticking damage detection method according to claim 3, wherein an expression of the atmospheric light value obtained by the channel value of each pixel point in the R, G and B channels in the minimum sub-block RGB image is as follows:
Figure DEST_PATH_IMAGE019
in the formula:
Figure 370073DEST_PATH_IMAGE020
in RGB image representing minimum subblock
Figure DEST_PATH_IMAGE021
The channel value of each pixel point on the c channel, N represents the total number of pixel points in the RGB image of the minimum sub-block,
Figure 805472DEST_PATH_IMAGE021
in RGB image representing minimum subblock
Figure 27506DEST_PATH_IMAGE021
And (5) each pixel point.
5. The method for detecting the damage of the automobile film according to claim 1, wherein the expression of the characterization value of the pixel point is as follows:
Figure 377715DEST_PATH_IMAGE022
in the formula:
Figure DEST_PATH_IMAGE023
representing the second in a pre-processed RGB image
Figure 785475DEST_PATH_IMAGE011
The characteristic value of each pixel point is represented,
Figure 170320DEST_PATH_IMAGE024
representing the second in a pre-processed RGB image
Figure 32097DEST_PATH_IMAGE011
The brightness value of each pixel point is calculated,
Figure DEST_PATH_IMAGE025
a first parameter of the model is represented,
Figure 305821DEST_PATH_IMAGE026
representing a second parameter of the model.
6. The method for detecting the damage of the automobile film according to claim 1, wherein the method for obtaining the confidence of each pixel point in the suspected damaged area by using the obtained feature vector and the basic feature vector comprises the following steps:
calculating the similarity of the feature vector and the basic feature vector of each pixel point in the suspected damaged area according to the obtained feature vector and the basic feature vector;
and obtaining the confidence coefficient of each pixel point in the suspected damaged area through the similarity of the feature vector of each pixel point in the suspected damaged area and the basic feature vector.
7. The method for detecting the damage of the automobile film according to claim 6, wherein the expression of the similarity between the feature vector of each pixel point in the suspected damaged area and the basic feature vector is as follows:
Figure DEST_PATH_IMAGE027
in the formula:
Figure 747298DEST_PATH_IMAGE028
indicating a suspected damaged area
Figure 485184DEST_PATH_IMAGE030
The similarity between the feature vector of each pixel and the basic feature vector,
Figure DEST_PATH_IMAGE031
the base feature vector is represented by a vector of features,
Figure 986704DEST_PATH_IMAGE032
indicating a suspected damaged area
Figure 311506DEST_PATH_IMAGE030
The feature vectors of the individual pixels are then,
Figure DEST_PATH_IMAGE033
a feature vector is represented.
8. The method for detecting the damage of the automobile film according to claim 7, wherein the confidence of each pixel point in the suspected damage area is expressed as:
Figure 320788DEST_PATH_IMAGE034
in the formula:
Figure DEST_PATH_IMAGE035
indicating a suspected damaged area
Figure 821171DEST_PATH_IMAGE030
The confidence of each of the pixel points is calculated,
Figure 337601DEST_PATH_IMAGE036
representing confidence model parameters.
CN202211244738.3A 2022-10-12 2022-10-12 Method for detecting damage of automobile film Active CN115311288B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211244738.3A CN115311288B (en) 2022-10-12 2022-10-12 Method for detecting damage of automobile film

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211244738.3A CN115311288B (en) 2022-10-12 2022-10-12 Method for detecting damage of automobile film

Publications (2)

Publication Number Publication Date
CN115311288A CN115311288A (en) 2022-11-08
CN115311288B true CN115311288B (en) 2023-03-24

Family

ID=83867792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211244738.3A Active CN115311288B (en) 2022-10-12 2022-10-12 Method for detecting damage of automobile film

Country Status (1)

Country Link
CN (1) CN115311288B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116630311B (en) * 2023-07-21 2023-09-19 聊城市瀚格智能科技有限公司 Pavement damage identification alarm method for highway administration

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113989279A (en) * 2021-12-24 2022-01-28 武汉华康龙兴工贸有限公司 Plastic film quality detection method based on artificial intelligence and image processing
CN114170208A (en) * 2021-12-14 2022-03-11 武汉福旺家包装有限公司 Paper product defect detection method based on artificial intelligence

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110070122B (en) * 2019-04-15 2022-05-06 沈阳理工大学 Convolutional neural network fuzzy image classification method based on image enhancement
CN113933828A (en) * 2021-10-19 2022-01-14 上海大学 Unmanned ship environment self-adaptive multi-scale target detection method and system
CN115082361B (en) * 2022-08-23 2022-10-28 山东国晟环境科技有限公司 Turbid water body image enhancement method based on image processing

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114170208A (en) * 2021-12-14 2022-03-11 武汉福旺家包装有限公司 Paper product defect detection method based on artificial intelligence
CN113989279A (en) * 2021-12-24 2022-01-28 武汉华康龙兴工贸有限公司 Plastic film quality detection method based on artificial intelligence and image processing

Also Published As

Publication number Publication date
CN115311288A (en) 2022-11-08

Similar Documents

Publication Publication Date Title
CN113989279B (en) Plastic film quality detection method based on artificial intelligence and image processing
CN110097034B (en) Intelligent face health degree identification and evaluation method
CN110349126B (en) Convolutional neural network-based marked steel plate surface defect detection method
CN115294113B (en) Quality detection method for wood veneer
CN113592861B (en) Bridge crack detection method based on dynamic threshold
CN114419025A (en) Fiberboard quality evaluation method based on image processing
CN114757900B (en) Artificial intelligence-based textile defect type identification method
CN110210448B (en) Intelligent face skin aging degree identification and evaluation method
CN113935666B (en) Building decoration wall tile abnormity evaluation method based on image processing
CN107610114A (en) Optical satellite remote sensing image cloud snow mist detection method based on SVMs
CN116645367B (en) Steel plate cutting quality detection method for high-end manufacturing
CN112907519A (en) Metal curved surface defect analysis system and method based on deep learning
CN114820625B (en) Automobile top block defect detection method
CN115311288B (en) Method for detecting damage of automobile film
CN114494179A (en) Mobile phone back damage point detection method and system based on image recognition
CN116309599B (en) Water quality visual monitoring method based on sewage pretreatment
CN115311283B (en) Glass tube drawing defect detection method and system
CN114972356A (en) Plastic product surface defect detection and identification method and system
CN116883408B (en) Integrating instrument shell defect detection method based on artificial intelligence
CN114037691A (en) Carbon fiber plate crack detection method based on image processing
CN116559111A (en) Sorghum variety identification method based on hyperspectral imaging technology
CN115880280A (en) Detection method for quality of steel structure weld joint
CN117152129B (en) Visual detection method and system for surface defects of battery cover plate
CN116402822B (en) Concrete structure image detection method and device, electronic equipment and storage medium
CN116758045B (en) Surface defect detection method and system for semiconductor light-emitting diode

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant