CN116485749A - Self-encoder-based method for identifying dirt in lens module - Google Patents
- Publication number
- CN116485749A CN116485749A CN202310440421.5A CN202310440421A CN116485749A CN 116485749 A CN116485749 A CN 116485749A CN 202310440421 A CN202310440421 A CN 202310440421A CN 116485749 A CN116485749 A CN 116485749A
- Authority
- CN
- China
- Prior art keywords
- lens
- encoder
- picture
- circle
- self
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/0002: Image analysis; inspection of images, e.g. flaw detection
- G06N3/0455: Auto-encoder networks; encoder-decoder networks
- G06N3/08: Computing arrangements based on neural networks; learning methods
- G06T7/13: Segmentation; edge detection
- G06T7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
- G06V10/764: Image or video recognition using pattern recognition or machine learning; classification, e.g. of video objects
- G06V10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
- G06T2207/20081: Indexing scheme for image analysis; training, learning
- G06T2207/20084: Indexing scheme for image analysis; artificial neural networks [ANN]
Abstract
The invention relates to the technical field of camera detection, in particular to a self-encoder-based method for identifying dirt in a lens module. It solves the problem that existing approaches require manual intervention when acquiring lens images and are therefore unsuitable for automated production lines. The technical scheme comprises the following steps: S1: collecting sample pictures of a plurality of clean lenses; S2: constructing a self-coding neural network whose structure consists of an encoder and a decoder; S3: preprocessing an input image and cropping out the lens area in the image; S4: training the self-coding network; S5: training a lens smudge classifier. The beneficial effects of the invention are as follows: it can automatically identify whether the lens in a finished module is dirty, without manual intervention.
Description
Technical Field
The invention relates to the technical field of camera detection, in particular to a self-encoder-based method for identifying dirt in a lens module.
Background
Patent CN103245676B, published on 20 May 2015, describes a method for detecting smudge on an optical lens. It states: "The invention discloses a method for detecting whether dirt adheres to an optical lens, comprising the following steps: the optical lens is defined and divided into a surrounding area and an optical area; an image capturing unit is focused on the surrounding area of the optical lens to generate a first original image, and focused on the optical area to generate a second original image; image homogenization is performed on the first and second original images respectively to obtain a first homogenized image and a second homogenized image; image processing yields first image data and second image data; and each pixel gray-scale value of the first and second image data is compared with a first threshold value to judge whether dirt exists in the edge detection area and the central detection area corresponding to the optical lens." Patent CN113141462A, "A method for processing contamination of a camera lens, a mobile terminal, and a computer storage medium", published on 20 July 2021, describes: "The application relates to a camera lens smudge processing method, a mobile terminal and a computer storage medium. The method comprises: shooting a first image with a target camera according to the current shooting mode; judging from the first image whether the lens of the target camera is dirty; and if so, executing a smudge processing operation according to the current shooting mode and/or shooting state. In this way, dirt on the camera lens can be found in time and handled accordingly, improving shooting effect and efficiency."
At present, the lens module may pick up dirt on the lens during production and assembly, which affects the quality of the finished product. Conventional lens smudge detection only tests individual lenses and is difficult to apply to finished modules. The prior art mainly relies on traditional image processing to judge whether a lens is dirty: CN103245676B, for example, analyzes pixel mean values and is therefore easily affected by the environment, while CN113141462A requires manual intervention when acquiring lens images and is unsuitable for automated production lines.
Disclosure of Invention
The invention aims to provide a self-encoder-based method for identifying dirt in a lens module.
In order to achieve the aim of the invention, the invention adopts the following technical scheme. The method comprises the following steps:
s1: collecting sample pictures of a plurality of clean lenses;
s2: constructing a self-coding neural network, wherein the neural network structure consists of an encoder and a decoder;
s3: preprocessing an input image, and intercepting a lens area in the image;
s4: training a self-coding network;
s5: training a lens smudge classifier.
The step S2 specifically comprises the following steps:
the input x passes through an encoder E to obtain f=E (x), and then passes through a decoder D to obtain an output x' =D (f), wherein the encoder and the decoder are in symmetrical structures;
wherein: x is the picture input to the encoder E, x+' is the picture output by the decoder D.
The step S3 specifically comprises the following steps:
S3.1: apply an edge detection algorithm to the input picture;
S3.2: apply the Hough transform to the result of step S3.1 to find the circular regions in the image, and take the circle closest to a predefined circle (x_0, y_0, r_0) as the lens region, obtaining the circle (x_d, y_d, r_d);
wherein (x_0, y_0) are the center coordinates of the predefined circle and r_0 is its radius;
a circle fitting algorithm yields a set S of C circles; the set is traversed, and for an element (x_c, y_c, r_c), wherein (x_c, y_c) are the center coordinates of the circle and r_c is its radius:
(1) initialize Δ = +∞ and D = +∞,
wherein Δ measures the radius difference and D measures the center difference; the fitted circle set is traversed to find the circle with the smallest radius difference and center difference, i.e. the circle closest in position to the predefined circle;
(2) compute Δ_c = |r_0 − r_c| and the center distance D_c = √((x_0 − x_c)² + (y_0 − y_c)²); if Δ_c < Δ and D_c < D, update Δ = Δ_c and D = D_c,
wherein D_c is the distance between the center of the circle in the set and the predefined center; each circle in the set is compared with the known circle by radius difference and center distance, to find the circle in the set that differs least from the known circle;
(3) after traversing the set S, the parameters corresponding to the minimum values computed in step (2) give the circle where the lens is located, obtaining (x_d, y_d, r_d),
wherein (x_d, y_d) are the center coordinates of the circle where the lens is located and r_d is its radius;
S3.3: cut the lens region picture out of the image according to the result of step S3.2, the crop width and height being r_d + d', where d' is a predefined parameter; the pixels outside the circular region are set to 0 and the crop is scaled to W × H, where W = H,
Wherein W is the picture width and H is the picture height.
The step S4 specifically comprises the following steps:
S4.1: read an input picture from the sample set and input the picture x cropped by the processing of step S3 into the network, obtaining an output x';
S4.2: compute the reconstruction error, Loss = ||x − x'||;
wherein ||x − x'|| denotes the L1 loss function, with calculation formula Loss = (1/N) Σ_{i∈P} |x_i − x'_i|,
wherein N is the number of training samples, x_i and x'_i denote the values of the i-th pixel in x and x' respectively, and P denotes the pixel set;
s4.3: updating parameters of an encoder and a decoder by adopting a gradient descent method;
s4.4: repeating steps S4.1-S4.3 until the model converges to obtain the encoder parameter E_θ.
The step S5 specifically comprises the following steps:
S5.1: randomly select M pictures from the sample set and input them into the encoder with the parameters E_θ obtained in step S4, obtaining the feature set {f_1, f_2, …, f_M};
wherein: m is the number of pictures in the sample set, f_i represents the characteristics of the ith sample picture;
S5.2: take the feature set from step S5.1 as the input of a one-class SVM and train the classifier OSVM_φ.
Compared with the prior art, the invention has the following beneficial effects. The invention can automatically identify whether the lens in a finished module is dirty, without manual intervention. In S1, only clean sample pictures need to be collected in the early sample-collection stage; no dirty sample pictures are required, which reduces the difficulty of sample collection. In S3, the local picture of the lens in the module is located automatically, without manual intervention; analyzing only the local lens picture reduces interference from other components in the module. In S4, a self-encoder extracts the picture features, avoiding the poor descriptive power of hand-crafted feature descriptors; an encoder trained by iterating over the data describes picture features more accurately. In S5, the classifier is trained with a one-class SVM using positive samples only, avoiding the difficulty of collecting negative samples in practical applications.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention.
FIG. 1 is a schematic view of a clean lens sample according to the present invention.
Fig. 2 is a schematic structural diagram of a neural network according to the present invention.
FIG. 3 is a schematic representation of the deployment of the present invention.
Fig. 4 is a schematic diagram of an input image read by the present invention.
FIG. 5 is a schematic representation of the Canny edge result of the present invention.
FIG. 6 is a graph showing the calculation results of the lens region according to the present invention.
Fig. 7 is a partial schematic view of a lens according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. Of course, the specific embodiments described herein are for purposes of illustration only and are not intended to limit the invention.
Example 1
A method for identifying dirt in a lens module based on a self-encoder, comprising the following steps:
s1: collecting sample pictures of a plurality of clean lenses; as shown in fig. 1;
s2: constructing a self-coding neural network, wherein the neural network structure consists of an encoder and a decoder; the neural network structure is shown in fig. 2;
s3: preprocessing an input image, and intercepting a lens area in the image;
s4: training a self-coding network;
s5: training a lens smudge classifier.
The step S2 specifically comprises the following steps:
the input x passes through an encoder E to obtain f=E (x), and then passes through a decoder D to obtain an output x' =D (f), wherein the encoder and the decoder are in symmetrical structures;
wherein: x is the picture input to the encoder E, x+' is the picture output by the decoder D.
The step S3 specifically comprises the following steps:
S3.1: apply an edge detection algorithm to the input picture; specifically, the Canny edge detection method is used.
S3.2: the result of step S3.1 is subjected to hough transform to find the circular region in the graph, and a predefined circle (x 0 ,y 0 ,r 0 ) The smallest distance is used as the lens area picture to obtain the circle (x) d ,y d ,r d );
Wherein (x) o ,y o ) Representing the center coordinates of the circle where the lens is located, r o Representing the radius;
obtaining a set S of a group of circles with the number of C through a circular fitting algorithm, traversing the set, and obtaining a set (x c ,y c ,r c ) Wherein (x) c ,y c ) Representing the center coordinates of the circle, r c The radius is indicated as such,
(1) Initialization of delta = + and infinity of the two points, d= + infinity is provided
Wherein: delta is used for measuring the radius difference, and D is used for measuring the circle center difference; traversing the fitted set of circles to find the circle with the smallest radius and center difference, i.e. the circle closest to the predefined circle
(2) Calculating r 0 -r c Absolute value delta of (2) c Calculation ofIf delta c <Delta and D c <D, update Δ=Δ c ,D c =D
D C Representing the distance between the center of a circle in the set and the predefined center; traversing each circle in the set, calculating a radius difference value and a circle center distance with a known circle, and finding a circle with the smallest difference value with the known circle in the set;
(3) Traversing the set S, wherein the parameter corresponding to the minimum value calculated in the step (2) is a circle where the lens is located, and obtaining the circle where the lens is located as (x) d ,y d ,r d );
Wherein (x) d ,y d ) Representing the center coordinates of the circle where the lens is located, r d Representing the radius;
s3.3: cutting out a lens area picture from the picture according to the result of the step S3.2, wherein the picture width and height are r d +d ', where d' is a predefined parameter, scaling the width and height to w×h, where w=h, by placing 0 pixels outside the circular area in the figure
Wherein W is the picture width and H is the picture height.
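By way of illustration, a minimal Python/OpenCV sketch of steps S3.1 to S3.3 is given below. The function name, the Canny and Hough thresholds, the margin d' and the output size W = H are assumptions chosen for the sketch rather than values fixed by the invention, and the crop extent r_d + d' is read here as the half-width of the square crop.

```python
import cv2
import numpy as np

def extract_lens_region(image, x0, y0, r0, d_prime=10, out_size=128):
    """Steps S3.1-S3.3: locate the lens circle, mask it, crop and resize."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # S3.1: Canny edge detection (an edge map like FIG. 5).
    edges = cv2.Canny(gray, 50, 150)
    # S3.2: Hough circle transform. cv2.HoughCircles performs its own
    # internal Canny stage, so it takes the grayscale image directly;
    # `edges` above only illustrates the intermediate S3.1 result.
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                               param1=150, param2=30)
    if circles is None:
        return None
    # Traverse the fitted set S for the circle closest to the predefined
    # circle (x0, y0, r0) in both radius and center position.
    delta = np.inf   # smallest radius difference seen so far
    dist = np.inf    # smallest center distance seen so far
    best = None
    for xc, yc, rc in circles[0]:
        delta_c = abs(r0 - rc)             # radius difference
        d_c = np.hypot(x0 - xc, y0 - yc)   # center distance D_c
        if delta_c < delta and d_c < dist:
            delta, dist = delta_c, d_c
            best = (int(xc), int(yc), int(rc))
    if best is None:
        return None
    xd, yd, rd = best
    # S3.3: set pixels outside the circle to 0, crop a square of
    # half-extent r_d + d' around the center, and scale to W x H.
    mask = np.zeros_like(gray)
    cv2.circle(mask, (xd, yd), rd, 255, thickness=-1)
    masked = cv2.bitwise_and(gray, gray, mask=mask)
    half = rd + d_prime
    crop = masked[max(yd - half, 0):yd + half, max(xd - half, 0):xd + half]
    return cv2.resize(crop, (out_size, out_size))
```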
The step S4 specifically comprises the following steps:
S4.1: read an input picture from the sample set and input the picture x cropped by the processing of step S3 into the network, obtaining an output x';
S4.2: compute the reconstruction error, Loss = ||x − x'||;
wherein ||x − x'|| denotes the L1 loss function, with calculation formula Loss = (1/N) Σ_{i∈P} |x_i − x'_i|,
wherein N is the number of training samples, x_i and x'_i denote the values of the i-th pixel in x and x' respectively, and P denotes the pixel set;
s4.3: updating parameters of an encoder and a decoder by adopting a gradient descent method;
s4.4: repeating steps S4.1-S4.3 until the model converges to obtain the encoder parameter E_θ.
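Steps S4.1 to S4.4 admit a short PyTorch sketch, assuming the encoder and decoder of step S2 are available as nn.Module objects and that a data loader yields the cropped clean-lens pictures; the SGD settings and epoch count are illustrative assumptions, not prescribed by the invention.

```python
import torch
import torch.nn as nn

def train_autoencoder(encoder, decoder, loader, epochs=100, lr=1e-3):
    """S4: train the self-coding network with the L1 reconstruction loss."""
    params = list(encoder.parameters()) + list(decoder.parameters())
    optimizer = torch.optim.SGD(params, lr=lr)  # gradient descent (S4.3)
    criterion = nn.L1Loss()                     # Loss = ||x - x'|| (S4.2)
    for _ in range(epochs):                     # repeat until convergence (S4.4)
        for x in loader:                        # cropped lens pictures (S4.1)
            x = x.view(x.size(0), -1)           # flatten to W*H for the encoder
            x_recon = decoder(encoder(x))       # x' = D(E(x))
            loss = criterion(x_recon, x)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return encoder.state_dict()                 # encoder parameters E_theta
```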
The step S5 specifically comprises the following steps:
S5.1: randomly select M pictures from the sample set and input them into the encoder with the parameters E_θ obtained in step S4, obtaining the feature set {f_1, f_2, …, f_M};
wherein: m is the number of pictures in the sample set, f_i represents the characteristics of the ith sample picture;
S5.2: take the feature set from step S5.1 as the input of a one-class SVM and train the classifier OSVM_φ.
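Step S5 can be sketched with scikit-learn's OneClassSVM, assuming the trained encoder from step S4; the kernel and nu values below are illustrative assumptions, since the invention only specifies that a one-class SVM is trained on the clean-sample features.

```python
import numpy as np
import torch
from sklearn.svm import OneClassSVM

def train_smudge_classifier(encoder, pictures):
    """S5: fit a one-class SVM on the features {f_1, ..., f_M}."""
    encoder.eval()
    with torch.no_grad():
        feats = [encoder(p.view(1, -1)).squeeze(0).numpy() for p in pictures]
    osvm = OneClassSVM(kernel="rbf", nu=0.05)  # trained on clean samples only
    osvm.fit(np.stack(feats))                  # learn the clean-feature boundary
    return osvm
```

Because a one-class SVM learns only the boundary of the clean-feature distribution, no dirty samples are needed at training time.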
Example 2
On the basis of Example 1,
Step S2: the encoder first passes the input image through fully connected layer L_1, compressing it from dimension W×H to 512, where W and H are the width and height of the image respectively; a ReLU activation function is then appended to introduce a nonlinear factor and improve the model's expressive power. Fully connected layer L_2 compresses the dimension further to 256, cascaded with a ReLU activation. Fully connected layer L_3 keeps the output dimension at 256, cascaded with a ReLU activation. Fully connected layer L_4 outputs dimension 128, cascaded with a ReLU activation. Fully connected layer L_5 keeps the output dimension at 128, cascaded with a ReLU activation. Finally, fully connected layer L_6 outputs dimension 64. The network model is shown in the following table:

| Numbering | Operation | Input dimension | Output dimension |
|---|---|---|---|
| 1 | Fully connected layer, ReLU | W×H | 512 |
| 2 | Fully connected layer, ReLU | 512 | 256 |
| 3 | Fully connected layer, ReLU | 256 | 256 |
| 4 | Fully connected layer, ReLU | 256 | 128 |
| 5 | Fully connected layer, ReLU | 128 | 128 |
| 6 | Fully connected layer | 128 | 64 |
The decoder, like the encoder, restores the features output by the encoder to the image dimension through several fully connected layers and ReLU activation functions. The network model is shown in the following table:

| Numbering | Operation | Input dimension | Output dimension |
|---|---|---|---|
| 1 | Fully connected layer, ReLU | 64 | 128 |
| 2 | Fully connected layer, ReLU | 128 | 128 |
| 3 | Fully connected layer, ReLU | 128 | 256 |
| 4 | Fully connected layer, ReLU | 256 | 256 |
| 5 | Fully connected layer, ReLU | 256 | 512 |
| 6 | Fully connected layer | 512 | W×H |
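The two tables translate directly into a fully connected PyTorch model; the sketch below assumes flattened W×H single-channel inputs, consistent with the table dimensions.

```python
import torch.nn as nn

def build_autoencoder(w, h):
    """Encoder/decoder matching the layer tables above (input W x H)."""
    encoder = nn.Sequential(
        nn.Linear(w * h, 512), nn.ReLU(),  # L1
        nn.Linear(512, 256), nn.ReLU(),    # L2
        nn.Linear(256, 256), nn.ReLU(),    # L3
        nn.Linear(256, 128), nn.ReLU(),    # L4
        nn.Linear(128, 128), nn.ReLU(),    # L5
        nn.Linear(128, 64),                # L6: 64-dimensional feature f
    )
    decoder = nn.Sequential(               # mirror of the encoder
        nn.Linear(64, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, 512), nn.ReLU(),
        nn.Linear(512, w * h),             # restore the image dimension
    )
    return encoder, decoder
```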
The algorithm is deployed to a device, illustrated in FIG. 3, whose memory stores the trained self-coding network model parameters and the parameters of the classifier OSVM_φ; an input interface receives video images from the camera, and a computing center classifies each input image and outputs the result, the computing center being a computing device running a Linux or Windows operating system.
An input picture is read, as shown in FIG. 4, and step S3 of the training stage is called to obtain the lens region picture I_roi: step S3.1 computes the Canny edge result shown in FIG. 5, and the computed lens circle is shown in FIG. 6, where the white circle marks the detected lens; the cropped local image is shown in FIG. 7. The encoder E_θ is then called with the cropped local lens image I_roi as its input to derive the feature f_roi, and the classifier OSVM_φ is invoked with f_roi as its input, outputting whether the lens is dirty.
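Combining the pieces, the deployed inference flow can be sketched as follows; extract_lens_region, the encoder and the OSVM refer to the illustrative helpers defined in the earlier sketches, and the pixel scaling is assumed to match whatever normalization was used during training.

```python
import torch

def is_lens_dirty(frame, encoder, osvm, x0, y0, r0):
    """Classify one camera frame; True means the lens is judged dirty."""
    roi = extract_lens_region(frame, x0, y0, r0)  # I_roi, per step S3
    if roi is None:
        raise ValueError("no lens circle found in the frame")
    x = torch.from_numpy(roi).float().view(1, -1) / 255.0
    with torch.no_grad():
        f_roi = encoder(x).numpy()                # feature f_roi
    # OneClassSVM.predict: +1 = inlier (clean), -1 = outlier (dirty).
    return osvm.predict(f_roi)[0] == -1
```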
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed; any modifications, equivalents, and alternatives falling within the spirit and scope of the invention are intended to be included within its scope.
Claims (5)
1. A method for identifying dirt in a lens module based on a self-encoder is characterized by comprising the following steps:
s1: collecting sample pictures of a plurality of clean lenses;
s2: constructing a self-coding neural network, wherein the neural network structure consists of an encoder and a decoder;
s3: preprocessing an input image, and intercepting a lens area in the image;
s4: training a self-coding network;
s5: training a lens smudge classifier.
2. The method for identifying dirt in a lens module based on a self-encoder as claimed in claim 1, wherein the step S2 is specifically:
the input x passes through an encoder E to obtain f=E (x), and then passes through a decoder D to obtain an output x' =D (f), wherein the encoder and the decoder are in symmetrical structures;
wherein: x is the picture input to the encoder E, x+' is the picture output by the decoder D.
3. The method for identifying dirt in a lens module based on a self-encoder as claimed in claim 1, wherein the step S3 is specifically:
S3.1: apply an edge detection algorithm to the input picture;
S3.2: apply the Hough transform to the result of step S3.1 to find the circular regions in the image, and take the circle closest to a predefined circle (x_0, y_0, r_0) as the lens region, obtaining the circle (x_d, y_d, r_d);
wherein (x_0, y_0) are the center coordinates of the predefined circle and r_0 is its radius;
a circle fitting algorithm yields a set S of C circles; the set is traversed, and for an element (x_c, y_c, r_c), wherein (x_c, y_c) are the center coordinates of the circle and r_c is its radius:
(1) initialize Δ = +∞ and D = +∞,
wherein Δ measures the radius difference and D measures the center difference;
(2) compute Δ_c = |r_0 − r_c| and the center distance D_c = √((x_0 − x_c)² + (y_0 − y_c)²); if Δ_c < Δ and D_c < D, update Δ = Δ_c and D = D_c,
wherein D_c is the distance between the center of the circle in the set and the predefined center;
(3) after traversing the set S, the parameters corresponding to the minimum values computed in step (2) give the circle where the lens is located, obtaining (x_d, y_d, r_d),
wherein (x_d, y_d) are the center coordinates of the circle where the lens is located and r_d is its radius;
S3.3: cut the lens region picture out of the image according to the result of step S3.2, the crop width and height being r_d + d', wherein d' is a predefined parameter; the pixels outside the circular region are set to 0 and the crop is scaled to W × H, wherein W = H,
Wherein W is the picture width and H is the picture height.
4. The method for recognizing dirt in a lens module based on a self-encoder as claimed in claim 1, wherein the step S4 specifically comprises:
S4.1: read an input picture from the sample set and input the picture x cropped by the processing of step S3 into the network, obtaining an output x';
S4.2: compute the reconstruction error, Loss = ||x − x'||;
wherein ||x − x'|| denotes the L1 loss function, with calculation formula Loss = (1/N) Σ_{i∈P} |x_i − x'_i|,
wherein N is the number of training samples, x_i and x'_i denote the values of the i-th pixel in x and x' respectively, and P denotes the pixel set;
s4.3: updating parameters of an encoder and a decoder by adopting a gradient descent method;
s4.4: repeating steps S4.1-S4.3 until the model converges to obtain the encoder parameter E_θ.
5. The method for recognizing contamination in a lens module based on a self-encoder as recited in claim 4, wherein step S5 specifically comprises:
S5.1: randomly select M pictures from the sample set and input them into the encoder with the parameters E_θ obtained in step S4, obtaining the feature set {f_1, f_2, …, f_M};
wherein: m is the number of pictures in the sample set, f_i represents the characteristics of the ith sample picture;
S5.2: take the feature set from step S5.1 as the input of a one-class SVM and train the classifier OSVM_φ.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310440421.5A | 2023-04-23 | 2023-04-23 | Self-encoder-based method for identifying dirt in lens module
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310440421.5A | 2023-04-23 | 2023-04-23 | Self-encoder-based method for identifying dirt in lens module
Publications (1)
Publication Number | Publication Date |
---|---|
CN116485749A (en) | 2023-07-25
Family
ID=87218930
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310440421.5A | Self-encoder-based method for identifying dirt in lens module | 2023-04-23 | 2023-04-23
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116485749A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116994074A (en) * | 2023-09-27 | 2023-11-03 | 安徽大学 | Camera dirt detection method based on deep learning |
- 2023-04-23: application CN202310440421.5A filed in CN; publication CN116485749A, status pending
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116994074A (en) * | 2023-09-27 | 2023-11-03 | 安徽大学 | Camera dirt detection method based on deep learning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111325713B (en) | Neural network-based wood defect detection method, system and storage medium | |
CN107609575B (en) | Calligraphy evaluation method, calligraphy evaluation device and electronic equipment | |
CN110378313B (en) | Cell cluster identification method and device and electronic equipment | |
CN111862194A (en) | Deep learning plant growth model analysis method and system based on computer vision | |
CN111462076A (en) | Method and system for detecting fuzzy area of full-slice digital pathological image | |
CN111369523B (en) | Method, system, equipment and medium for detecting cell stack in microscopic image | |
CN111062938B (en) | Plate expansion plug detection system and method based on machine learning | |
CN114549981A (en) | Intelligent inspection pointer type instrument recognition and reading method based on deep learning | |
CN113706490B (en) | Wafer defect detection method | |
CN112991306B (en) | Cleavage stage embryo cell position segmentation and counting method based on image processing | |
CN116485749A (en) | Self-encoder-based method for identifying dirt in lens module | |
CN113393426A (en) | Method for detecting surface defects of rolled steel plate | |
CN115861210B (en) | Transformer substation equipment abnormality detection method and system based on twin network | |
CN116228780B (en) | Silicon wafer defect detection method and system based on computer vision | |
CN117455917B (en) | Establishment of false alarm library of etched lead frame and false alarm on-line judging and screening method | |
CN116740728B (en) | Dynamic acquisition method and system for wafer code reader | |
CN113705564A (en) | Pointer type instrument identification reading method | |
CN117095246A (en) | Polarization imaging-based deep learning pointer instrument reading identification method | |
CN111950556A (en) | License plate printing quality detection method based on deep learning | |
CN112686162B (en) | Method, device, equipment and storage medium for detecting clean state of warehouse environment | |
US11988509B2 (en) | Portable field imaging of plant stomata | |
CN110189301B (en) | Foreign matter detection method for generator stator core steel sheet stacking platform | |
CN114862786A (en) | Retinex image enhancement and Ostu threshold segmentation based isolated zone detection method and system | |
CN108898107B (en) | Automatic partition naming method | |
CN109272540B (en) | SFR automatic extraction and analysis method of image of graphic card |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |