CN117333492B - Optical film quality detection method and related device based on image processing - Google Patents

Optical film quality detection method and related device based on image processing

Info

Publication number
CN117333492B
CN117333492B
Authority
CN
China
Prior art keywords
target
optical film
image
quality
coding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311635399.6A
Other languages
Chinese (zh)
Other versions
CN117333492A (en)
Inventor
付志峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Filtai Photoelectric Co ltd
Original Assignee
Shenzhen Filtai Photoelectric Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Filtai Photoelectric Co ltd filed Critical Shenzhen Filtai Photoelectric Co ltd
Priority to CN202311635399.6A priority Critical patent/CN117333492B/en
Publication of CN117333492A publication Critical patent/CN117333492A/en
Application granted granted Critical
Publication of CN117333492B publication Critical patent/CN117333492B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Abstract

The invention relates to the technical field of data processing and discloses an optical film quality detection method and a related device based on image processing. The optical film quality detection method based on image processing comprises the following steps: acquiring a target mashup image; inputting the target mashup image into a trained first optical film quality evaluation model for quality abnormality detection to form a target optical film quality defect parameter; and adjusting model hyperparameters of a preset second optical film quality evaluation model according to the target optical film quality defect parameter, and respectively inputting the plurality of target optical film images to be detected into the second optical film quality evaluation model for film image effect recognition and quality abnormality classification to obtain a quality evaluation index of each target optical film image. The invention automates optical film quality assessment and inspection and makes them more accurate, saving time and cost and improving detection accuracy and efficiency.

Description

Optical film quality detection method and related device based on image processing
Technical Field
The invention relates to the technical field of data processing, in particular to an optical film quality detection method and a related device based on image processing.
Background
Optical films are widely used in fields such as optical lenses and display screens, and their quality directly affects the performance of the final product. Because the manufacturing and testing of optical films are precise and complex, quality evaluation and quality control are quite difficult. A common method is to acquire an image of the target optical film and then analyze and evaluate the image with a suitable algorithm. However, the prior art often cannot simultaneously meet the accuracy and efficiency requirements of quality assessment.
Existing optical film quality evaluation models are usually built on fixed model parameters, lack flexibility, and cannot adapt to the variations present in mashup images of different quality. In addition, because the quality assessment model typically does not perform effective quality anomaly detection before quality-abnormal film images are removed, the features of abnormal film images may be ignored, which affects the accuracy of the final quality assessment result.
In addition, existing evaluation models cannot effectively compare the quality assessment index with a preset quality standard value or generate detailed quality inspection results and quality inspection reports from that comparison, which makes accurate and efficient quality evaluation and control of the optical film more difficult and affects the quality and performance of the optical product.
Disclosure of Invention
The invention provides an optical film quality detection method and a related device based on image processing, which are used for solving the technical problem of how to improve the accuracy and efficiency of optical film quality evaluation.
The first aspect of the present invention provides an optical film quality detection method based on image processing, the optical film quality detection method based on image processing comprising:
acquiring a target mashup image; the target mashup image is generated based on a plurality of target optical film images to be detected;
inputting the target mashup image into a trained first optical film quality evaluation model for quality abnormality detection, so as to form a target optical film quality defect parameter;
adjusting model hyperparameters of a preset second optical film quality evaluation model according to the target optical film quality defect parameter, and respectively inputting the plurality of target optical film images to be detected into the second optical film quality evaluation model for film image effect recognition and quality abnormality classification to obtain a quality evaluation index of each target optical film image;
and comparing the quality evaluation index of each target optical film image with a preset quality standard value to obtain a quality inspection result of each target optical film image, and generating a quality inspection report of the optical film according to the quality inspection result of each target optical film image.
Optionally, in a first implementation manner of the first aspect of the present invention, the step of acquiring the target mashup image includes:
acquiring the types of the optical films, configuring corresponding target film analysis strategies according to the types of the optical films, and identifying a plurality of film partition characteristic distribution points according to the target film analysis strategies;
collecting a plurality of partition images of the target optical film according to the partition characteristic distribution points of the plurality of films, and performing coding processing on the plurality of partition images to generate coding data of the plurality of partition images;
performing feature clustering on the coded data of the plurality of partition images based on the target film analysis strategy, and constructing at least three groups of coded image data sets;
performing three-dimensional space digital processing on the at least three groups of coded image data sets to generate at least three corresponding groups of film point cloud digital sets, and performing three-dimensional data intersection on the at least three groups of point cloud digital sets to obtain a target mashup image set;
and normalizing the target mashup image set according to a preset normalization algorithm to obtain a standard target mashup image set, and generating a corresponding target mashup image based on the standard target mashup image set.
Optionally, in a second implementation manner of the first aspect of the present invention, the training process of the first optical film quality assessment model includes:
analyzing the optical film data of each position through a preset spectral characteristic analysis algorithm to obtain an optical film quality index;
deploying a plurality of high-frequency ultra-wideband wireless sensor nodes, collecting actual quality parameters of the optical films at all positions through the plurality of high-frequency ultra-wideband wireless sensor nodes, and storing the actual quality parameters and the optical film quality indexes into a preset database;
reading an optical film quality index and the corresponding actual quality parameter in a preset database, preprocessing the optical film quality index and the corresponding actual quality parameter through a preset first processing algorithm, and dividing the preprocessed optical film quality index and the corresponding actual quality parameter into a training set and a testing set according to a preset proportion after mixing;
acquiring an initial first deep learning model, and inputting the training set into the initial first deep learning model for training to obtain a first prediction result; wherein the initial first deep learning model is based on a deep convolutional neural network model;
And calculating an error value between the first predicted result and a second predicted result corresponding to the test set, transmitting the error value back to the initial first deep learning model through a preset second processing algorithm, and adjusting and optimizing model parameters through a preset third processing algorithm to finally obtain a first optical film quality evaluation model.
Optionally, in a third implementation manner of the first aspect of the present invention, the collecting a plurality of partition images of the target optical film according to the plurality of partition feature distribution points of the film, and performing encoding processing on the plurality of partition images to generate encoded data of the plurality of partition images includes:
collecting a plurality of partition images of the target optical film based on the plurality of film partition characteristic distribution points; the linkage relation between each film partition characteristic distribution point and each partition image of the target optical film is stored in the database in advance;
dividing the acquired multiple partition images based on a preset plane division model to obtain characteristic distribution attributes of the multiple partition images; the plane segmentation model is obtained based on convolutional neural network model training;
acquiring the characteristic distribution attribute, constructing a mapping relation between the characteristic distribution attribute and preset coding data, and generating a target coding table; the target coding table is recoded by a standard coding table based on the mapping relation between the characteristic distribution attribute and preset coding data;
Inquiring the coding values corresponding to the plurality of partition images from the target coding table to obtain target coding values corresponding to each partition image;
and generating characteristic identifiers of the partition images according to the target coding values, and carrying out characteristic identifier fusion on the partition images and the characteristic identifiers to obtain coding data of a plurality of partition images.
Optionally, in a fourth implementation manner of the first aspect of the present invention, the obtaining the feature distribution attribute, and constructing a mapping relationship between the feature distribution attribute and preset encoding data, to generate a target encoding table, includes:
collecting and identifying characteristic distribution attributes of a plurality of partition images through detection equipment, and establishing a one-to-one correspondence between the characteristic distribution attributes and preset coding data;
acquiring an initial coding table from a database; wherein the initial coding table is composed of a series of digital sequences and corresponding coding characters;
processing the characteristic value of the characteristic distribution attribute based on a preset divisor algorithm, acquiring all divisors of the characteristic value, and verifying whether all divisors of the acquired characteristic value can divide the corresponding characteristic value;
selecting numbers from the initial coding table that are identical to the divisors as target numbers, selecting the coding characters matched with the target numbers as target coding characters, and setting the unselected coding characters as remaining coding characters;
reassigning new number sequences to the remaining coding characters, the order of the reassigned number sequences being consistent with the order in the initial coding table;
after the new number sequences have been assigned to the remaining coding characters, assigning new number sequences to the target coding characters according to the original order;
taking the remaining coding characters with their reassigned number sequences, the target coding characters, and the corresponding sequence numbers as a new target coding table; wherein the target coding table is used for generating feature identifiers of the plurality of partition images.
A second aspect of the present invention provides an image processing-based optical film quality detection apparatus comprising:
the acquisition module is used for acquiring target mashup images; the target mashup image is generated based on a plurality of target optical film images to be detected;
the detection module is used for inputting the target mashup image into the trained first optical film quality evaluation model to detect quality abnormality and form a target optical film quality defect parameter;
the adjusting module is used for adjusting the model hyperparameters of a preset second optical film quality evaluation model according to the target optical film quality defect parameter, and inputting the plurality of target optical film images to be detected into the second optical film quality evaluation model for film image effect identification and quality abnormality classification to obtain the quality evaluation index of each target optical film image;
and the comparison module is used for comparing the quality evaluation index of each target optical film image with a preset quality standard value to obtain a quality inspection result of each target optical film image, and generating a quality inspection report of the optical film according to the quality inspection result of each target optical film image.
A third aspect of the present invention provides an optical film quality inspection apparatus based on image processing, comprising: a memory and at least one processor, the memory having instructions stored therein; the at least one processor invokes the instructions in the memory to cause the image processing-based optical film quality detection apparatus to perform the image processing-based optical film quality detection method described above.
A fourth aspect of the present invention provides a computer-readable storage medium having instructions stored therein that, when executed on a computer, cause the computer to perform the above-described image processing-based optical film quality detection method.
In the technical scheme provided by the invention, the beneficial effects are as follows: the invention provides an optical film quality detection method and a related device based on image processing, which acquire a target mashup image; input the target mashup image into a trained first optical film quality evaluation model for quality abnormality detection to form a target optical film quality defect parameter; adjust model hyperparameters of a preset second optical film quality evaluation model according to the target optical film quality defect parameter, and respectively input the plurality of target optical film images to be detected into the second optical film quality evaluation model for film image effect recognition and quality abnormality classification to obtain a quality evaluation index of each target optical film image; and compare the quality evaluation index of each target optical film image with a preset quality standard value to obtain a quality inspection result of each target optical film image, and generate a quality inspection report of the optical film according to the quality inspection results. The invention synthesizes the plurality of target optical film images to be detected into one mashup image, so they do not have to be detected one by one, which reduces the time and labor cost of detection. Using the trained first optical film quality evaluation model to detect quality anomalies in the target mashup image improves detection accuracy. The model hyperparameters of the preset second optical film quality evaluation model are adjusted according to the target optical film quality defect parameter; automatically adjusting the hyperparameters improves the adaptability and accuracy of the quality evaluation model, and therefore the accuracy of film image effect recognition and quality abnormality classification for the target optical film images to be detected. The plurality of target optical film images to be detected are then input into the second optical film quality evaluation model to obtain the quality evaluation index of each target optical film image, so the quality of each image is evaluated more accurately. Finally, the quality evaluation index of each target optical film image is compared with the preset quality standard value to obtain the quality inspection result of each image, and a quality inspection report of the optical film is generated.
Drawings
FIG. 1 is a schematic diagram of an embodiment of an optical film quality detection method based on image processing according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an optical film quality inspection apparatus according to an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides an optical film quality detection method based on image processing and a related device. The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
For ease of understanding, a specific flow of an embodiment of the present invention is described below with reference to fig. 1, and an embodiment of an optical film quality detection method based on image processing in an embodiment of the present invention includes:
Step 101, acquiring a target mashup image; the target mashup image is generated based on a plurality of target optical film images to be detected;
It is to be understood that the execution subject of the present invention may be an optical film quality detection device based on image processing, or may be a terminal or a server, which is not limited herein. The embodiment of the invention is described by taking a server as the execution subject as an example.
Specifically, in order to obtain the target mashup image, the following steps may be implemented:
collecting a plurality of target optical film images to be detected:
on an optical film production line, an image acquisition device or camera is used to acquire a plurality of target optical film images to be detected.
For example, images of optical film samples coated with different film materials or different process parameters are acquired.
Preprocessing each target optical film image:
firstly, image denoising processing is carried out to remove noise and irrelevant information in an image.
Then, an image enhancement operation such as adjusting contrast, brightness, etc. of the image is performed to improve image quality and visualization effect.
The preprocessing operations may be performed using image processing software such as OpenCV.
Generating a target mashup image:
All the preprocessed target optical film images are mixed according to a certain rule or algorithm to generate a target mashup image.
For example, the pixel values of the images may be averaged, taking the mean of the pixel values at each corresponding location.
Mashup of images may be implemented using image processing software or an image processing library of a programming language (e.g., Python).
Further optimizing target mashup images:
and further image processing and optimizing are carried out on the generated target mashup image so as to improve the image quality.
For example, an image filter (e.g., a Gaussian filter) is applied to smooth the image and remove noise.
Different image processing and optimization can be performed according to the specific application scene.
In one embodiment, assume that there are 3 target optical film images to be detected, corresponding to film A, film B, and film C. First, each image is preprocessed to remove noise and enhance image contrast. Then, the three preprocessed film images are mashed up to generate a target mashup image; assume the mashup rule is to average the pixel values at each pixel location. Finally, the target mashup image is smoothed with an image filter to remove noise and image discontinuities. In this way, a target mashup image combining films A, B, and C is generated.
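As a rough illustration of this embodiment, the following Python sketch averages three pre-aligned, equally sized grayscale film images pixel by pixel and then smooths the result with a Gaussian filter. The placeholder arrays, output file name, and 5×5 kernel size are assumptions chosen for illustration only.

```python
import cv2
import numpy as np

def build_mashup_image(images):
    """Average equally sized film images pixel by pixel, then smooth the result."""
    stack = np.stack([img.astype(np.float32) for img in images], axis=0)
    mashup = stack.mean(axis=0)                    # average pixel values at each location
    mashup = cv2.GaussianBlur(mashup, (5, 5), 0)   # smooth to suppress noise and seams
    return mashup.astype(np.uint8)

# Placeholder arrays stand in for the preprocessed images of films A, B and C;
# in practice they would be loaded with cv2.imread(path, cv2.IMREAD_GRAYSCALE).
film_a, film_b, film_c = (np.random.randint(0, 256, (480, 640), dtype=np.uint8) for _ in range(3))
target_mashup = build_mashup_image([film_a, film_b, film_c])
cv2.imwrite("target_mashup.png", target_mashup)
```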
Step 102, inputting the target mashup image into the trained first optical film quality evaluation model for quality abnormality detection to form a target optical film quality defect parameter;
Specifically, in order to input the target mashup image into the trained first optical film quality evaluation model for quality abnormality detection and form a target optical film quality defect parameter, the method may be implemented according to the following steps:
preparing a trained first optical film quality assessment model:
In previous work, a quality assessment model was trained on a large number of optical film images using machine learning or deep learning algorithms.
The model should have the ability to distinguish between anomalies in the quality of the optical film.
Extracting characteristics of target mashup images:
for a target mashup image, its features need to be extracted for model input.
Image processing and machine learning techniques, such as convolutional neural networks, may be used to extract features of the image.
The target mashup image is subjected to feature extraction and is converted into an input format acceptable by a model so as to detect quality abnormality.
Transferring the features to a first optical film quality assessment model:
the extracted features are input into a previously trained first optical film quality assessment model.
The first optical film quality evaluation model evaluates the image to determine whether the image has quality anomalies. In this way, target optical film quality defect parameters such as defect type, defect location, defect severity, etc. can be obtained.
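A minimal sketch of this step is given below, assuming PyTorch is used. The small convolutional backbone, the three output heads for defect type, location, and severity, and the commented-out checkpoint name are illustrative assumptions; the patent does not prescribe a specific network structure for the first optical film quality evaluation model.

```python
import torch
import torch.nn as nn

class FirstQualityModel(nn.Module):
    """Toy stand-in for the trained first optical film quality evaluation model."""
    def __init__(self, num_defect_types=4):
        super().__init__()
        self.features = nn.Sequential(                              # feature extraction from the mashup image
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.defect_type = nn.Linear(32, num_defect_types)          # defect type logits
        self.defect_location = nn.Linear(32, 2)                     # normalised (x, y) defect position
        self.defect_severity = nn.Linear(32, 1)                     # severity score

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.defect_type(f), self.defect_location(f), self.defect_severity(f)

model = FirstQualityModel()
# In practice the trained weights would be loaded here, e.g.:
# model.load_state_dict(torch.load("first_quality_model.pt"))
model.eval()

mashup = torch.rand(1, 1, 256, 256)                                 # placeholder for the preprocessed mashup image
with torch.no_grad():
    type_logits, location, severity = model(mashup)
defect_params = {
    "defect_type": int(type_logits.argmax(dim=1)),
    "defect_location": location.squeeze(0).tolist(),
    "defect_severity": float(severity),
}
```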
Step 103, adjusting model hyperparameters of a preset second optical film quality evaluation model according to the target optical film quality defect parameter, and respectively inputting the plurality of target optical film images to be detected into the second optical film quality evaluation model for film image effect recognition and quality abnormality classification to obtain a quality evaluation index of each target optical film image;
Specifically, adjusting the model hyperparameters of the second optical film quality assessment model according to the quality defect parameters of the target optical film, inputting the target optical film images to be detected into the second optical film quality assessment model for image effect identification and quality abnormality classification, and obtaining the quality assessment indexes can be realized according to the following steps:
preparing a preset second optical film quality evaluation model:
the second optical film quality assessment model, which has been trained, should have the ability to identify film image effects and classify quality anomalies.
The model should be provided with adjustable hyper-parameters for further optimization of performance.
Adjusting the model hyperparameters according to the quality defect parameters of the target optical film:
The hyperparameters of the second optical film quality evaluation model are adjusted according to the obtained target optical film quality defect parameters.
The hyperparameters can be adjusted according to information such as defect type, location, and severity to improve the model's ability to evaluate the target images.
Inputting a target optical film image to be detected for image effect identification and quality abnormality classification:
and inputting the target optical film image to be detected into the adjusted second optical film quality evaluation model.
The model will evaluate the image, identify the optical film effects of the image, and categorize the quality anomalies.
And obtaining a quality evaluation index of each target optical film image according to the output of the model, wherein the quality evaluation index is used for measuring the quality of the image.
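One possible mapping from the defect parameters to the hyperparameters of the second model is sketched below. The two hyperparameters chosen (input resolution and anomaly classification threshold), the severity thresholds, and the example defect parameters are assumptions for illustration; the description only states that hyperparameters are adjusted according to defect type, location, and severity.

```python
def adjust_hyperparameters(defect_params, base_hparams):
    """Return a copy of the second model's hyperparameters tuned to the detected defects."""
    hparams = dict(base_hparams)
    severity = defect_params["defect_severity"]
    if severity > 0.7:                       # severe defects: inspect at higher resolution
        hparams["input_size"] = 512
        hparams["anomaly_threshold"] = 0.3   # flag anomalies more aggressively
    elif severity > 0.3:
        hparams["input_size"] = 384
        hparams["anomaly_threshold"] = 0.4
    else:
        hparams["anomaly_threshold"] = 0.5
    return hparams

# Example defect parameters in the same form as the output of the first model sketch above
defect_params = {"defect_type": 2, "defect_location": [0.4, 0.6], "defect_severity": 0.82}
base = {"input_size": 256, "anomaly_threshold": 0.5}
tuned = adjust_hyperparameters(defect_params, base)
```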
Step 104, comparing the quality evaluation index of each target optical film image with a preset quality standard value to obtain a quality inspection result of each target optical film image, and generating a quality inspection report of the optical film according to the quality inspection result of each target optical film image.
Specifically, comparing the quality evaluation index of each target optical film image with the preset quality standard value and generating a quality inspection report from the quality inspection results can be implemented according to the following steps:
setting a quality standard value:
and setting a quality standard value of the optical film image according to the quality requirement and the expected target.
The quality standard may be set based on experience, previous studies, or expert advice.
Comparing the quality assessment index with a quality standard value:
and comparing the quality evaluation index of each target optical film image with a preset quality standard value.
The quality of the image can be judged according to the magnitude relation between the quality assessment index and the quality standard value.
Generating a quality inspection report:
and generating a quality inspection report of the optical film according to the quality inspection result of each target optical film image.
The quality inspection report may include a quality assessment index, quality inspection results (pass or fail), and detailed quality inspection information, such as defect type, defect location, etc., for each target image.
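The comparison and report generation can be as simple as the following sketch; the quality standard value of 0.8, the example indices, and the report fields are assumed values chosen for illustration.

```python
import json

QUALITY_STANDARD = 0.8   # assumed standard value; set from experience or expert advice

def build_quality_report(indices):
    """indices: mapping of image name -> quality evaluation index in [0, 1]."""
    items = []
    for name, index in indices.items():
        items.append({
            "image": name,
            "quality_index": index,
            "result": "pass" if index >= QUALITY_STANDARD else "fail",
        })
    return {"standard": QUALITY_STANDARD, "items": items}

report = build_quality_report({"film_a.png": 0.91, "film_b.png": 0.74, "film_c.png": 0.88})
print(json.dumps(report, indent=2))
```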
Another embodiment of the method for detecting quality of an optical film based on image processing in the embodiment of the invention includes:
The step of obtaining the target mashup image comprises the following steps:
acquiring the types of the optical films, configuring corresponding target film analysis strategies according to the types of the optical films, and identifying a plurality of film partition characteristic distribution points according to the target film analysis strategies;
collecting a plurality of partition images of the target optical film according to the partition characteristic distribution points of the plurality of films, and performing coding processing on the plurality of partition images to generate coding data of the plurality of partition images;
performing feature clustering on the coded data of the plurality of partition images based on the target film analysis strategy, and constructing at least three groups of coded image data sets;
performing three-dimensional space digital processing on the at least three groups of coded image data sets to generate at least three corresponding groups of film point cloud digital sets, and performing three-dimensional data intersection on the at least three groups of point cloud digital sets to obtain a target mashup image set;
and normalizing the target mashup image set according to a preset normalization algorithm to obtain a standard target mashup image set, and generating a corresponding target mashup image based on the standard target mashup image set.
Specifically, the following examples are included:
Embodiment one: acquisition of lens mashup images
Obtaining the types of optical films: it is assumed that a lens mashup image needs to be acquired, and the optical film type is a convex lens.
Configuring a target film analysis strategy: and configuring a target film analysis strategy according to the characteristics of the convex lens. Strategies include defining key feature distribution points such as aperture and curvature on the lens surface, and coding processing methods for lens partitions.
Identifying characteristic distribution points of the film partition: at the lens surface, a plurality of feature distribution points, such as lens center points, lens edges, etc., are identified.
Collecting partition images and performing coding processing: and acquiring partition images of the center and the edge of the lens by utilizing the characteristic distribution points, and carrying out coding processing on each partition image to generate coding data.
Feature grouping and point cloud digital processing: based on the strategy, the coded data are subjected to characteristic grouping, and the coded data at the center and the edge are divided into two groups. And then, carrying out three-dimensional space digital processing on each group of coded data to generate a point cloud digital set.
Three-dimensional data intersection and mashup image generation: three-dimensional data intersection is carried out on the point cloud digital sets of the center and the edge to obtain a target mashup image set. The target mashup image set is combined into one lens mashup image by a mashup image generation algorithm.
Embodiment two: acquisition of reflective film mashup images
Obtaining the type of optical film: it is assumed that a reflective film mashup image needs to be acquired, and the optical film type is a metal reflective film.
Configuring a target film analysis strategy: a target film analysis strategy is configured according to the characteristics of the metal reflective film. The strategy includes defining the partition feature distribution points of the metal reflective film, such as reflectivity change points and concave-convex points of the mirror surface, and specifying the encoding processing method.
Identifying film partition feature distribution points: on the metal reflective film, a plurality of feature distribution points, such as points of reflectance change and points on the concave-convex surface, are identified.
Collecting partition images and performing encoding processing: using the feature distribution points, the partition images at the feature points of the metal reflective film are acquired, and each partition image is encoded to generate encoded data.
Feature grouping and point cloud digital processing: based on the strategy, the encoded data are grouped by feature, and the encoded data of different feature points are divided into different groups. Then, three-dimensional space digital processing is carried out on each group of encoded data to generate a point cloud digital set.
Three-dimensional data intersection and mashup image generation: three-dimensional data intersection is carried out on each group of point cloud digital sets to obtain a target mashup image set. The target mashup image set is combined into one reflective film mashup image by a mashup image generation algorithm.
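The "preset normalization algorithm" is not specified in the description; a common choice, sketched below under that assumption, is per-image min-max normalization of the target mashup image set, after which the standard set is combined into the final target mashup image by averaging.

```python
import numpy as np

def normalize_mashup_set(mashup_set):
    """Min-max normalize each image in the target mashup image set to the range [0, 1]."""
    standard_set = []
    for image in mashup_set:
        lo, hi = image.min(), image.max()
        scaled = (image - lo) / (hi - lo) if hi > lo else np.zeros_like(image, dtype=np.float64)
        standard_set.append(scaled)
    return standard_set

def generate_target_mashup(standard_set):
    """Combine the standard mashup image set into a single target mashup image by averaging."""
    return np.mean(np.stack(standard_set, axis=0), axis=0)

mashup_set = [np.random.rand(64, 64) for _ in range(3)]   # placeholder mashup image set
target = generate_target_mashup(normalize_mashup_set(mashup_set))
```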
In the embodiment of the invention, the beneficial effects are as follows: first, in the lens and reflective film embodiments, mashup images of different optical films are acquired by configuring a target film analysis strategy and identifying film partition feature distribution points, a generation mode that differs from traditional image processing methods. Second, the embodiment improves the efficiency of optical film analysis and preparation: by collecting partition images and performing encoding, feature clustering, point cloud digitization, and related steps, the complex optical film analysis and preparation process is effectively simplified and automated. This not only speeds up data processing and analysis but also reduces the cost of manual operation, making research and application of optical films more efficient. In addition, the embodiment improves the quality and accuracy of the optical film image: by identifying and encoding feature distribution points, key features of the optical film can be accurately measured and analyzed, and during point cloud digital processing and three-dimensional data intersection, errors and noise are effectively removed, improving the accuracy and clarity of the optical film mashup image.
Another embodiment of the method for detecting quality of an optical film based on image processing in the embodiment of the invention includes:
the training process of the first optical film quality evaluation model comprises the following steps:
analyzing the optical film data of each position through a preset spectral characteristic analysis algorithm to obtain an optical film quality index;
deploying a plurality of high-frequency ultra-wideband wireless sensor nodes, collecting actual quality parameters of the optical films at all positions through the plurality of high-frequency ultra-wideband wireless sensor nodes, and storing the actual quality parameters and the optical film quality indexes into a preset database;
reading an optical film quality index and the corresponding actual quality parameter in a preset database, preprocessing the optical film quality index and the corresponding actual quality parameter through a preset first processing algorithm, and dividing the preprocessed optical film quality index and the corresponding actual quality parameter into a training set and a testing set according to a preset proportion after mixing;
acquiring an initial first deep learning model, and inputting the training set into the initial first deep learning model for training to obtain a first prediction result; wherein the initial first deep learning model is based on a deep convolutional neural network model;
And calculating an error value between the first predicted result and a second predicted result corresponding to the test set, transmitting the error value back to the initial first deep learning model through a preset second processing algorithm, and adjusting and optimizing model parameters through a preset third processing algorithm to finally obtain a first optical film quality evaluation model.
Specifically, the following examples are included:
example 1: quality control of glass processing using optical film quality assessment model
In a glass processing plant, an optical film for coating a glass surface is used. To ensure that the quality of each glass piece meets the standard, an optical film quality assessment model was developed.
Firstly, the coatings at different positions are analyzed by using a spectral characteristic analysis algorithm to obtain the quality indexes of the optical film, such as reflectivity, transmissivity and the like.
Then, a plurality of high-frequency ultra-wideband wireless sensor nodes are deployed on a production line and used for monitoring actual quality parameters of glass, such as thickness, hardness and the like in real time. These sensor nodes transmit real-time data to a central controller or a preset database.
Then, the optical film quality index and the corresponding actual quality parameters are read from a preset database, and the data are preprocessed through a preset first processing algorithm, such as normalization and feature extraction. And then dividing the preprocessed data into a training set and a testing set according to a preset proportion.
An initial first deep learning model is acquired and is based on a deep convolutional neural network model. And inputting the training set into the initial model for training, and obtaining a first prediction result, namely predicting the quality of the glass coating.
And calculating an error value between the first predicted result and a second predicted result corresponding to the test set, and processing the error value through a preset second processing algorithm, for example, calculating a root mean square error.
And transmitting the error value back to the initial model, and adjusting and optimizing the model parameters through a preset third processing algorithm.
The finally obtained first optical film quality evaluation model can accurately evaluate and predict the quality of the glass coating according to the spectral characteristics and the actual quality parameters of the coating. Thus, the glass processing plant can monitor the coating quality in real time, improve the production efficiency and reduce the defective rate.
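The glass-coating example can be illustrated with a simplified training loop. The random data, the 80/20 split, the tiny fully connected network (standing in for the deep convolutional model named in the description), and the use of mean squared error with the Adam optimizer for the second and third processing algorithms are all assumptions made for illustration.

```python
import numpy as np
import torch
import torch.nn as nn

# Hypothetical training data: rows of [reflectance, transmittance] spectral quality indices
# paired with measured quality parameters [thickness, hardness] from the sensor nodes.
x = torch.tensor(np.random.rand(200, 2), dtype=torch.float32)
y = torch.tensor(np.random.rand(200, 2), dtype=torch.float32)
split = int(0.8 * len(x))                                   # assumed 80/20 train/test split
x_train, y_train, x_test, y_test = x[:split], y[:split], x[split:], y[split:]

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # stands in for the "third processing algorithm"
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x_train), y_train)                 # error between prediction and target
    loss.backward()                                         # error propagated back into the model
    optimizer.step()

with torch.no_grad():
    rmse = torch.sqrt(loss_fn(model(x_test), y_test))       # root mean square error on the test set
print(f"test RMSE: {rmse.item():.4f}")
```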
Example 2: solar panel performance evaluation using optical film quality evaluation model
In the production process of solar panels, the quality of the optical film has an important influence on the performance and efficiency of the panel. To evaluate the performance of the solar panel, an optical film quality evaluation model may be used.
Firstly, analyzing optical film data at different positions on a solar panel through a spectral characteristic analysis algorithm to obtain optical film quality indexes such as transmittance and reflectivity.
Then, a plurality of high frequency ultra wideband wireless sensor nodes are arranged on the solar panel for collecting actual quality parameters such as current, voltage, etc. The sensor nodes transmit real-time data to a preset database.
And then, reading the optical film quality index and the corresponding actual quality parameters from a preset database, and preprocessing the data through a preset first processing algorithm. And then, dividing the preprocessed data into a training set and a testing set according to a preset proportion.
An initial first deep learning model is obtained, based on a deep convolutional neural network model. And inputting the training set into the initial model for training, and obtaining a first prediction result. The model can learn the relation between the optical film quality index and the actual quality parameter of the solar panel, and conduct performance prediction.
And calculating an error value between the first predicted result and a second predicted result corresponding to the test set, and processing the error value through a preset second processing algorithm, such as calculating an average absolute error.
And transmitting the error value back to the initial model, and optimizing and adjusting the model parameters through a preset third processing algorithm.
Through iterative training and optimization, the finally established first optical film quality evaluation model can accurately evaluate the performance of the solar panel. Thus, the solar panel manufacturer can predict the performance of the panel according to the quality index and the actual quality parameter of the optical film, and perform quality control and performance optimization.
Another embodiment of the method for detecting quality of an optical film based on image processing in the embodiment of the invention includes:
the collecting a plurality of partition images of the target optical film according to the plurality of film partition characteristic distribution points, and performing coding processing on the plurality of partition images to generate coding data of the plurality of partition images, including:
collecting a plurality of partition images of the target optical film based on the plurality of film partition characteristic distribution points; the linkage relation between each film partition characteristic distribution point and each partition image of the target optical film is stored in the database in advance;
dividing the acquired multiple partition images based on a preset plane division model to obtain characteristic distribution attributes of the multiple partition images; the plane segmentation model is obtained based on convolutional neural network model training;
Acquiring the characteristic distribution attribute, constructing a mapping relation between the characteristic distribution attribute and preset coding data, and generating a target coding table; the target coding table is recoded by a standard coding table based on the mapping relation between the characteristic distribution attribute and preset coding data;
inquiring the coding values corresponding to the plurality of partition images from the target coding table to obtain target coding values corresponding to each partition image;
and generating characteristic identifiers of the partition images according to the target coding values, and carrying out characteristic identifier fusion on the partition images and the characteristic identifiers to obtain coding data of a plurality of partition images.
Specifically, the implementation steps are as follows:
and collecting a plurality of partition images of the target optical film according to the plurality of film partition characteristic distribution points. Examples: it is assumed that an optical film is divided into 4 regions, each indicated by A, B, C, D. The images of each region are acquired separately by a camera or other image acquisition device.
And carrying out coding processing on the collected multiple partition images to generate coded data. Examples: and carrying out image segmentation on each partition image by using a preset plane segmentation model to obtain the characteristic distribution attribute of each partition. And establishing a mapping relation between the characteristic distribution attributes and preset coding data, and generating a target coding table. And inquiring the target coding table to obtain a coding value corresponding to each partition image.
And fusing the coding value of the partition image with the characteristic identifier to generate final coding data. Examples: and fusing the coding value of each partition image and the identification (such as A, B, C, D) of the partition to form final coding data.
Encoded data of a plurality of partition images is output.
Examples: the final coded data can be used for identifying and identifying the optical films in different areas, is convenient to store and manage, and can also be used for subsequent data analysis and processing.
For example, different coated glass covering products are produced for a household appliance manufacturer. Each product has a different zone image representing a different coating area. The characteristic distribution attribute of different partition images is obtained by collecting the partition image of each product and dividing the image by using a pre-trained convolutional neural network model. And then, establishing a mapping relation between the characteristic distribution attribute and preset coding data to generate a target coding table. Inquiring the coding value corresponding to each partition image from the target coding table, and fusing the coding value with the partition identification to generate final coding data for identifying and managing the coated glass covering products of different areas. In this way, the manufacturer can track the quality and performance of the product by encoding the data, as well as perform subsequent data analysis and processing.
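A simplified view of this encoding flow for the coated-glass example is sketched below. The attribute names, the contents of the target coding table, and the way the feature identifier is fused into a record are assumptions for illustration, and the stub attribute function stands in for the trained convolutional plane segmentation model.

```python
import numpy as np

def segment_attributes(partition_image):
    """Stub for the CNN-based plane segmentation model: returns a feature distribution attribute."""
    mean = float(partition_image.mean())
    return "high_density" if mean > 128 else "low_density"

# Assumed target coding table: feature distribution attribute -> coding value
TARGET_CODING_TABLE = {"high_density": "H1", "low_density": "L0"}

def encode_partitions(partitions):
    """partitions: mapping of zone id (e.g. 'A') -> partition image array."""
    encoded = []
    for zone, image in partitions.items():
        attribute = segment_attributes(image)
        code = TARGET_CODING_TABLE[attribute]
        identifier = f"{zone}-{code}"            # feature identifier fused with the zone label
        encoded.append({"zone": zone, "identifier": identifier, "image": image})
    return encoded

partitions = {z: np.random.randint(0, 256, (64, 64)) for z in "ABCD"}   # placeholder zone images
encoded_data = encode_partitions(partitions)
```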
Another embodiment of the method for detecting quality of an optical film based on image processing in the embodiment of the invention includes:
the obtaining the characteristic distribution attribute, and constructing a mapping relation between the characteristic distribution attribute and preset coding data, and generating a target coding table, including:
collecting and identifying characteristic distribution attributes of a plurality of partition images through detection equipment, and establishing a one-to-one correspondence between the characteristic distribution attributes and preset coding data;
acquiring an initial coding table from a database; wherein the initial coding table is composed of a series of digital sequences and corresponding coding characters;
processing the characteristic value of the characteristic distribution attribute based on a preset divisor algorithm, acquiring all divisors of the characteristic value, and verifying whether all divisors of the acquired characteristic value can divide the corresponding characteristic value;
selecting numbers from the initial coding table that are identical to the divisors as target numbers, selecting the coding characters matched with the target numbers as target coding characters, and setting the unselected coding characters as remaining coding characters;
reassigning new number sequences to the remaining coding characters, the order of the reassigned number sequences being consistent with the order in the initial coding table;
after the new number sequences have been assigned to the remaining coding characters, assigning new number sequences to the target coding characters according to the original order;
taking the remaining coding characters with their reassigned number sequences, the target coding characters, and the corresponding sequence numbers as a new target coding table; wherein the target coding table is used for generating feature identifiers of the plurality of partition images.
Specifically, the specific implementation steps are as follows:
and collecting and identifying the characteristic distribution attributes of the plurality of partition images through the detection equipment, and establishing a one-to-one correspondence with preset coding data. Examples: for each of the partition images, feature distribution attributes such as color distribution, texture features, and the like are extracted by using an image processing algorithm. Then, these feature distribution attributes are matched and mapped with preset encoded data.
An initial encoding table is obtained from a database. Examples: a series of number sequences and corresponding coding characters, such as {1: A, 2: B, 3: C, 4: D, ...}, are stored in the database.
The feature value of the feature distribution attribute is processed based on a preset divisor algorithm, all divisors of the feature value are obtained, and it is verified whether each divisor divides the corresponding feature value. Examples: for a partition image with a feature value of 10, the divisors are 1, 2, 5, and 10. It is verified whether these divisors divide the feature value 10.
Numbers identical to the divisors are selected from the initial coding table as target numbers, the coding characters matched with the target numbers are selected as target coding characters, and the unselected coding characters are set as remaining coding characters. Examples: for a partition image with a feature value of 10, the number 2 is selected as a target number; the corresponding coding character is B, so B is set as a target coding character, and the remaining coding characters are A, C, D, and so on.
New number sequences are reassigned to the remaining coding characters, in the same order as in the initial coding table. Examples: the remaining coding characters A, C, and D are assigned new numbers in the original order, e.g. A is assigned 1, C is assigned 3, and D is assigned 4.
After the new number assignment of the remaining coding characters is completed, a new number sequence is assigned to the target coding character according to the original order. Examples: the target coding character B is assigned a new number in the original order, e.g. B is assigned 2.
The remaining coding characters with their reassigned number sequences, the target coding characters, and the corresponding sequence numbers are taken as the new target coding table. Examples: the reassigned new target coding table is {1: A, 2: B, 3: C, 4: D}.
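The divisor-based recoding can be sketched as follows. The sketch selects every number in the initial coding table that is a divisor of the feature value and keeps each character's original number when the table is rebuilt, which is one reading that reproduces the worked example above; other readings of the reassignment rule are possible.

```python
def divisors(n):
    """All positive divisors of the feature value n."""
    return [d for d in range(1, n + 1) if n % d == 0]

def rebuild_coding_table(initial_table, feature_value):
    """Rebuild the coding table around the divisors of a feature value.

    initial_table: mapping of number -> coding character, e.g. {1: 'A', 2: 'B', 3: 'C', 4: 'D'}.
    The reassignment rule below keeps each character's original number, which is the
    reading that reproduces the worked example in the description.
    """
    feature_divisors = set(divisors(feature_value))
    target_numbers = [n for n in initial_table if n in feature_divisors]
    target_chars = [initial_table[n] for n in target_numbers]
    remaining = {n: c for n, c in initial_table.items() if c not in target_chars}

    new_table = {}
    for n, c in remaining.items():          # remaining characters first, in initial-table order
        new_table[n] = c
    for n in target_numbers:                # then the target characters, in their original order
        new_table[n] = initial_table[n]
    return dict(sorted(new_table.items()))

table = rebuild_coding_table({1: "A", 2: "B", 3: "C", 4: "D"}, feature_value=10)
print(table)   # {1: 'A', 2: 'B', 3: 'C', 4: 'D'} for this particular example
```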
In the embodiment of the invention, the beneficial effects are as follows: the embodiment can generate the target coding table according to the mapping relation between the characteristic distribution attribute and the preset coding data. In this way, each partition image can generate a corresponding feature identifier through the target coding table according to the feature distribution attribute for subsequent coded data generation and identification.
The method for detecting the quality of an optical film based on image processing in the embodiment of the present invention is described above, and the apparatus for detecting the quality of an optical film based on image processing in the embodiment of the present invention is described below, referring to fig. 2, one embodiment of the apparatus for detecting the quality of an optical film based on image processing in the embodiment of the present invention includes:
the acquisition module is used for acquiring target mashup images; the target mashup image is generated based on a plurality of target optical film images to be detected;
the detection module is used for inputting the target mashup image into the trained first optical film quality evaluation model to detect quality abnormality and form a target optical film quality defect parameter;
the adjusting module is used for adjusting the model hyperparameters of a preset second optical film quality evaluation model according to the target optical film quality defect parameter, and inputting the plurality of target optical film images to be detected into the second optical film quality evaluation model for film image effect identification and quality abnormality classification to obtain the quality evaluation index of each target optical film image;
and the comparison module is used for comparing the quality evaluation index of each target optical film image with a preset quality standard value to obtain a quality inspection result of each target optical film image, and generating a quality inspection report of the optical film according to the quality inspection result of each target optical film image.
The present invention also provides an image processing-based optical film quality detection apparatus including a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the steps of the image processing-based optical film quality detection method in the above embodiments.
The present invention also provides a computer readable storage medium, which may be a non-volatile computer readable storage medium, and may also be a volatile computer readable storage medium, where instructions are stored in the computer readable storage medium, when the instructions are executed on a computer, cause the computer to perform the steps of the image processing-based optical film quality detection method.
The beneficial effects are that: the invention provides an optical film quality detection method and a related device based on image processing, which acquire a target mashup image; input the target mashup image into a trained first optical film quality evaluation model for quality abnormality detection to form a target optical film quality defect parameter; adjust model hyperparameters of a preset second optical film quality evaluation model according to the target optical film quality defect parameter, and respectively input the plurality of target optical film images to be detected into the second optical film quality evaluation model for film image effect recognition and quality abnormality classification to obtain a quality evaluation index of each target optical film image; and compare the quality evaluation index of each target optical film image with a preset quality standard value to obtain a quality inspection result of each target optical film image, and generate a quality inspection report of the optical film according to the quality inspection results. The invention synthesizes the plurality of target optical film images to be detected into one mashup image, so they do not have to be detected one by one, which reduces the time and labor cost of detection. Using the trained first optical film quality evaluation model to detect quality anomalies in the target mashup image improves detection accuracy. The model hyperparameters of the preset second optical film quality evaluation model are adjusted according to the target optical film quality defect parameter; automatically adjusting the hyperparameters improves the adaptability and accuracy of the quality evaluation model, and therefore the accuracy of film image effect recognition and quality abnormality classification for the target optical film images to be detected. The plurality of target optical film images to be detected are then input into the second optical film quality evaluation model to obtain the quality evaluation index of each target optical film image, so the quality of each image is evaluated more accurately. Finally, the quality evaluation index of each target optical film image is compared with the preset quality standard value to obtain the quality inspection result of each image, and a quality inspection report of the optical film is generated.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working procedures of the systems, apparatuses and units described above may be found in the corresponding procedures of the foregoing method embodiments, and are not repeated here.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it; although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (5)

1. An optical film quality detection method based on image processing, characterized by comprising the following steps:
acquiring a target mashup image; the target mashup image is generated based on a plurality of target optical film images to be detected;
inputting the target mashup image into a trained first optical film quality evaluation model for quality anomaly detection, to form a target optical film quality defect parameter;
adjusting model hyperparameters of a preset second optical film quality evaluation model according to the target optical film quality defect parameter, and respectively inputting the plurality of target optical film images to be detected into the second optical film quality evaluation model for film image effect recognition and quality anomaly classification, to obtain a quality evaluation index of each target optical film image;
comparing the quality evaluation index of each target optical film image with a preset quality standard value to obtain a quality inspection result of each target optical film image, and generating a quality inspection report of the optical film according to the quality inspection result of each target optical film image;
wherein acquiring the target mashup image comprises the following steps:
acquiring the type of the optical film, configuring a corresponding target film analysis strategy according to the type of the optical film, and identifying a plurality of film partition characteristic distribution points according to the target film analysis strategy;
collecting a plurality of partition images of the target optical film according to the plurality of film partition characteristic distribution points, and performing coding processing on the plurality of partition images to generate coding data of the plurality of partition images;
performing feature clustering on the coded data of the plurality of partition images based on the target film analysis strategy, and constructing at least three groups of coded image data sets;
performing three-dimensional spatial digitization on the at least three groups of coded image data sets to generate at least three corresponding groups of film point cloud data sets, and performing three-dimensional data intersection on the at least three groups of point cloud data sets to obtain a target mashup image set;
normalizing the target mashup image set according to a preset normalization algorithm to obtain a standard target mashup image set, and generating a corresponding target mashup image based on the standard target mashup image set;
wherein the collecting a plurality of partition images of the target optical film according to the plurality of film partition characteristic distribution points and performing coding processing on the plurality of partition images to generate coding data of the plurality of partition images comprises:
collecting a plurality of partition images of the target optical film based on the plurality of film partition characteristic distribution points; wherein the linkage relation between each film partition characteristic distribution point and each partition image of the target optical film is stored in a database in advance;
segmenting the acquired plurality of partition images based on a preset plane segmentation model to obtain characteristic distribution attributes of the plurality of partition images; wherein the plane segmentation model is obtained through convolutional neural network model training;
acquiring the characteristic distribution attributes, constructing a mapping relation between the characteristic distribution attributes and preset coding data, and generating a target coding table; wherein the target coding table is obtained by recoding a standard coding table based on the mapping relation between the characteristic distribution attributes and the preset coding data;
querying the target coding table for the coding values corresponding to the plurality of partition images to obtain a target coding value corresponding to each partition image;
generating a characteristic identifier for each partition image according to its target coding value, and fusing each partition image with its characteristic identifier to obtain the coding data of the plurality of partition images;
wherein the acquiring the characteristic distribution attributes, constructing a mapping relation between the characteristic distribution attributes and preset coding data, and generating a target coding table comprises:
collecting and identifying the characteristic distribution attributes of the plurality of partition images through a detection device, and establishing a one-to-one correspondence between the characteristic distribution attributes and the preset coding data;
acquiring an initial coding table from a database; wherein the initial coding table is composed of a series of number sequences and corresponding coding characters;
processing the characteristic values of the characteristic distribution attributes based on a preset divisor algorithm, acquiring all divisors of each characteristic value, and verifying that each acquired divisor exactly divides the corresponding characteristic value;
selecting, from the initial coding table, the numbers equal to these divisors as target numbers, selecting the coding characters matched with the target numbers as target coding characters, and setting the unselected coding characters as remaining coding characters;
reallocating new number sequences to the remaining coding characters, the order of the allocated new number sequences being consistent with the order in the initial coding table;
after the allocation of new number sequences to the remaining coding characters is completed, allocating new number sequences to the target coding characters according to their original order;
and taking the remaining coding characters with their reallocated number sequences, the target coding characters, and the corresponding sequence numbers as a new target coding table; wherein the target coding table is used for generating the characteristic identifiers of the plurality of partition images.
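Read as an algorithm, the divisor-based recoding of the initial coding table in claim 1 can be pictured with the short sketch below. It is a simplified reading of the claim; representing the coding table as a list of (number, coding character) pairs, and the helper names, are assumptions made only for illustration.

```python
# Simplified sketch of the divisor-based recoding of the initial coding table.
# The (number, coding character) pair representation is an assumption, not
# dictated by the claim.

def divisors(value: int) -> set:
    """All positive divisors of a characteristic value."""
    return {d for d in range(1, value + 1) if value % d == 0}


def build_target_coding_table(initial_table: list, characteristic_value: int) -> list:
    """initial_table: list of (number, coding character) pairs in their original order."""
    divs = divisors(characteristic_value)

    # Coding characters whose numbers equal a divisor become target coding characters;
    # all others become the remaining coding characters.
    target_chars = [ch for num, ch in initial_table if num in divs]
    remaining_chars = [ch for num, ch in initial_table if num not in divs]

    # Reallocate number sequences: remaining characters first, in the order of the
    # initial table, then the target characters in their original order.
    return list(enumerate(remaining_chars + target_chars, start=1))


# Example: an eight-entry initial table and a characteristic value of 6.
initial = list(enumerate("ABCDEFGH", start=1))   # [(1, 'A'), ..., (8, 'H')]
print(build_target_coding_table(initial, 6))     # D, E, G, H are renumbered first
```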
2. The method according to claim 1, wherein the training process of the first optical film quality evaluation model comprises:
analyzing the optical film data of each position through a preset spectral characteristic analysis algorithm to obtain an optical film quality index;
deploying a plurality of high-frequency ultra-wideband wireless sensor nodes, collecting actual quality parameters of the optical film at each position through the plurality of high-frequency ultra-wideband wireless sensor nodes, and storing the actual quality parameters and the optical film quality indexes in a preset database;
reading the optical film quality indexes and the corresponding actual quality parameters from the preset database, preprocessing them through a preset first processing algorithm, and, after mixing, dividing the preprocessed optical film quality indexes and corresponding actual quality parameters into a training set and a test set according to a preset proportion;
acquiring an initial first deep learning model, and inputting the training set into the initial first deep learning model for training to obtain a first prediction result; wherein the initial first deep learning model is based on a deep convolutional neural network model;
and calculating an error value between the first prediction result and a second prediction result corresponding to the test set, transmitting the error value back to the initial first deep learning model through a preset second processing algorithm, and adjusting and optimizing the model parameters through a preset third processing algorithm, to finally obtain the first optical film quality evaluation model.
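As a hedged illustration of the training flow in claim 2, the sketch below uses a small PyTorch convolutional regressor. The architecture, the MSE loss standing in for the error value, and the Adam optimizer standing in for the preset second and third processing algorithms are all assumptions for illustration, as is the assumption that the preprocessed quality indexes arrive as single-channel maps.

```python
# Minimal training-loop sketch for the first optical film quality evaluation model,
# assuming preprocessed optical film quality indexes batched as single-channel maps
# and actual quality parameters as scalar regression targets. The CNN, MSE loss and
# Adam optimizer are illustrative stand-ins, not the claimed implementation.
import torch
from torch import nn


class FirstQualityModel(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # predicted actual quality parameter

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))


def train(model: nn.Module, train_loader, test_loader,
          epochs: int = 10, lr: float = 1e-3) -> nn.Module:
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()
    for epoch in range(epochs):
        model.train()
        for quality_indexes, actual_parameters in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(quality_indexes), actual_parameters)  # error value
            loss.backward()    # error transmitted back through the model
            optimizer.step()   # model parameters adjusted and optimized
        model.eval()
        with torch.no_grad():  # track the prediction error on the test split
            test_error = sum(criterion(model(x), y).item() for x, y in test_loader)
        print(f"epoch {epoch}: test error {test_error:.4f}")
    return model
```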
3. An image processing-based optical film quality detection apparatus, characterized by comprising:
the acquisition module is used for acquiring a target mashup image; wherein the target mashup image is generated based on a plurality of target optical film images to be detected;
the detection module is used for inputting the target mashup image into a trained first optical film quality evaluation model for quality anomaly detection, to form a target optical film quality defect parameter;
the adjusting module is used for adjusting model hyperparameters of a preset second optical film quality evaluation model according to the target optical film quality defect parameter, and respectively inputting the plurality of target optical film images to be detected into the second optical film quality evaluation model for film image effect recognition and quality anomaly classification, to obtain a quality evaluation index of each target optical film image;
the comparison module is used for comparing the quality evaluation index of each target optical film image with a preset quality standard value to obtain a quality inspection result of each target optical film image, and generating a quality inspection report of the optical film according to the quality inspection result of each target optical film image;
the acquisition module is specifically configured to:
acquiring the type of the optical film, configuring a corresponding target film analysis strategy according to the type of the optical film, and identifying a plurality of film partition characteristic distribution points according to the target film analysis strategy;
collecting a plurality of partition images of the target optical film according to the plurality of film partition characteristic distribution points, and performing coding processing on the plurality of partition images to generate coding data of the plurality of partition images;
performing feature clustering on the coded data of the plurality of partition images based on the target film analysis strategy, and constructing at least three groups of coded image data sets;
performing three-dimensional spatial digitization on the at least three groups of coded image data sets to generate at least three corresponding groups of film point cloud data sets, and performing three-dimensional data intersection on the at least three groups of point cloud data sets to obtain a target mashup image set;
normalizing the target mashup image set according to a preset normalization algorithm to obtain a standard target mashup image set, and generating a corresponding target mashup image based on the standard target mashup image set;
wherein the collecting a plurality of partition images of the target optical film according to the plurality of film partition characteristic distribution points and performing coding processing on the plurality of partition images to generate coding data of the plurality of partition images comprises:
collecting a plurality of partition images of the target optical film based on the plurality of film partition characteristic distribution points; wherein the linkage relation between each film partition characteristic distribution point and each partition image of the target optical film is stored in a database in advance;
segmenting the acquired plurality of partition images based on a preset plane segmentation model to obtain characteristic distribution attributes of the plurality of partition images; wherein the plane segmentation model is obtained through convolutional neural network model training;
acquiring the characteristic distribution attributes, constructing a mapping relation between the characteristic distribution attributes and preset coding data, and generating a target coding table; wherein the target coding table is obtained by recoding a standard coding table based on the mapping relation between the characteristic distribution attributes and the preset coding data;
querying the target coding table for the coding values corresponding to the plurality of partition images to obtain a target coding value corresponding to each partition image;
generating a characteristic identifier for each partition image according to its target coding value, and fusing each partition image with its characteristic identifier to obtain the coding data of the plurality of partition images;
wherein the acquiring the characteristic distribution attributes, constructing a mapping relation between the characteristic distribution attributes and preset coding data, and generating a target coding table comprises:
collecting and identifying the characteristic distribution attributes of the plurality of partition images through a detection device, and establishing a one-to-one correspondence between the characteristic distribution attributes and the preset coding data;
acquiring an initial coding table from a database; wherein the initial coding table is composed of a series of number sequences and corresponding coding characters;
processing the characteristic values of the characteristic distribution attributes based on a preset divisor algorithm, acquiring all divisors of each characteristic value, and verifying that each acquired divisor exactly divides the corresponding characteristic value;
selecting, from the initial coding table, the numbers equal to these divisors as target numbers, selecting the coding characters matched with the target numbers as target coding characters, and setting the unselected coding characters as remaining coding characters;
reallocating new number sequences to the remaining coding characters, the order of the allocated new number sequences being consistent with the order in the initial coding table;
after the allocation of new number sequences to the remaining coding characters is completed, allocating new number sequences to the target coding characters according to their original order;
and taking the remaining coding characters with their reallocated number sequences, the target coding characters, and the corresponding sequence numbers as a new target coding table; wherein the target coding table is used for generating the characteristic identifiers of the plurality of partition images.
4. An image processing-based optical film quality detection device, characterized by comprising: a memory and at least one processor, the memory having instructions stored therein;
the at least one processor invokes the instructions in the memory to cause the image processing-based optical film quality detection device to perform the image processing-based optical film quality detection method of any one of claims 1-2.
5. A computer-readable storage medium having instructions stored thereon which, when executed by a processor, implement the image processing-based optical film quality detection method of any one of claims 1-2.
CN202311635399.6A 2023-12-01 2023-12-01 Optical film quality detection method and related device based on image processing Active CN117333492B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311635399.6A CN117333492B (en) 2023-12-01 2023-12-01 Optical film quality detection method and related device based on image processing

Publications (2)

Publication Number Publication Date
CN117333492A CN117333492A (en) 2024-01-02
CN117333492B true CN117333492B (en) 2024-03-15

Family

ID=89277849

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311635399.6A Active CN117333492B (en) 2023-12-01 2023-12-01 Optical film quality detection method and related device based on image processing

Country Status (1)

Country Link
CN (1) CN117333492B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6115421A (en) * 1996-04-25 2000-09-05 Matsushita Electric Industrial Co., Ltd. Moving picture encoding apparatus and method
CN111510717A (en) * 2019-01-31 2020-08-07 杭州海康威视数字技术股份有限公司 Image splicing method and device
CN112163609A (en) * 2020-09-22 2021-01-01 武汉科技大学 Image block similarity calculation method based on deep learning
CN112508018A (en) * 2020-12-14 2021-03-16 北京澎思科技有限公司 License plate recognition method and device and storage medium
CN115684189A (en) * 2021-07-26 2023-02-03 柯尼卡美能达株式会社 Thin film inspection device, thin film inspection method, and recording medium
CN116664585A (en) * 2023-08-02 2023-08-29 瑞茜时尚(深圳)有限公司 Scalp health condition detection method and related device based on deep learning
WO2023186833A1 (en) * 2022-03-28 2023-10-05 Carl Zeiss Smt Gmbh Computer implemented method for the detection of anomalies in an imaging dataset of a wafer, and systems making use of such methods

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10311096B2 (en) * 2012-03-08 2019-06-04 Google Llc Online image analysis

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Multilabel Annotation of Multispectral Remote Sensing Images using Error-Correcting Output Codes and Most Ambiguous Examples; Anamaria Radoi et al.; IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing; Vol. 12, No. 7; 2121-2134 *
Image compressed sensing coding and reconstruction based on local DCT coefficients; Pan Rong et al.; Acta Automatica Sinica; Vol. 37, No. 6; 674-681 *

Also Published As

Publication number Publication date
CN117333492A (en) 2024-01-02

Similar Documents

Publication Publication Date Title
JP6573226B2 (en) DATA GENERATION DEVICE, DATA GENERATION METHOD, AND DATA GENERATION PROGRAM
CN102473660B (en) Automatic fault detection and classification in a plasma processing system and methods thereof
CN116188475B (en) Intelligent control method, system and medium for automatic optical detection of appearance defects
CN105069790A (en) Rapid imaging detection method for gear appearance defect
CN103221807B (en) Fast processing and the uneven factor detected in web shaped material
CN109284779A (en) Object detecting method based on the full convolutional network of depth
Xie et al. Fabric defect detection method combing image pyramid and direction template
CN115601332A (en) Embedded fingerprint module appearance detection method based on semantic segmentation
CN111160451A (en) Flexible material detection method and storage medium thereof
CN115294033A (en) Tire belt layer difference level and misalignment defect detection method based on semantic segmentation network
CN116757713B (en) Work estimation method, device, equipment and storage medium based on image recognition
CN111325734B (en) Bone age prediction method and device based on visual model
CN117333492B (en) Optical film quality detection method and related device based on image processing
CN109190505A (en) The image-recognizing method that view-based access control model understands
CN116071348B (en) Workpiece surface detection method and related device based on visual detection
CN111402236A (en) Hot-rolled strip steel surface defect grading method based on image gray value
CN114077877B (en) Newly-added garbage identification method and device, computer equipment and storage medium
CN115482227A (en) Machine vision self-adaptive imaging environment adjusting method
CN109117818A (en) Material structure characteristic intelligent recognition analysis system and analysis method
CN113313149B (en) Dish identification method based on attention mechanism and metric learning
CN115601747A (en) Method and system for calculating confluency of adherent cells
CN109165587A (en) intelligent image information extraction method
CN117392042A (en) Defect detection method, defect detection apparatus, and storage medium
CN112767365A (en) Flaw detection method
Ding et al. Knowledge-based automatic extraction of multi-structured light stripes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant