CN115393358A - Lens detection method and multi-station detection device - Google Patents

Lens detection method and multi-station detection device

Info

Publication number
CN115393358A
CN115393358A
Authority
CN
China
Prior art keywords
images
group
lens
defect
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211330558.7A
Other languages
Chinese (zh)
Other versions
CN115393358B (en)
Inventor
杜英
袁帅鹏
李雪梅
张瑞强
杨炳辉
蒋书民
曹彬
胡江洪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fitow Tianjin Detection Technology Co Ltd
Original Assignee
Fitow Tianjin Detection Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fitow Tianjin Detection Technology Co Ltd
Priority to CN202211330558.7A
Publication of CN115393358A
Application granted
Publication of CN115393358B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T3/02
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/20 - Image enhancement or restoration by the use of local operators
    • G06T5/30 - Erosion or dilatation, e.g. thinning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features

Abstract

The invention discloses a lens detection method and a multi-station detection device in the technical field of lens defect detection. The method comprises: S1, obtaining a lens image of the lens to be detected, the lens image comprising a first group of images and a second group of images; S2, analyzing the lens image, including S201, first defect-feature extraction; S202, second defect-feature extraction; S203, defect-feature merging; and S204, affine-transforming all defect features of the first group of images onto the defect-feature positions of the second group of images, dilating the defect features at the corresponding positions of the two groups, and performing a difference operation on the dilated features to obtain a difference image, in which non-zero pixels are identified as lens defects and pixels that change from non-zero to zero are identified as dust. By using this difference computation, dust can be identified quickly, which improves the accuracy of lens detection.

Description

Lens detection method and multi-station detection device
Technical Field
The invention belongs to the technical field of lens defect detection, and particularly relates to a lens detection method and a multi-station detection device.
Background
As is well known, a lens is a transparent component with one or more curved surfaces, made of an optical material such as glass or resin. Before a lens is used, it must be inspected to guarantee its quality. At present, the main quality defects of a lens include scratches, point particles, bright spots, chipped edges, bubbles, fibrous foreign matter, cracks, haze, film-layer foreign matter, edge hanging, the film layer not reaching the edge, demolding, and the like. In actual inspection, however, defects such as point particles and bright spots image very similarly to dust, so dust interference must be eliminated during detection. It is therefore important to design and develop a lens detection method and a multi-station detection device that overcome the above problems.
Disclosure of Invention
To solve the technical problems in the prior art, the invention provides a lens detection method and a multi-station detection device that can quickly identify dust through a difference operation, thereby improving the accuracy of lens detection.
A first object of the present invention is to provide a lens inspection method, comprising:
s1, obtaining a lens image of a lens to be detected, wherein the lens image comprises a first group of images and a second group of images;
s2, analyzing the lens image; the method comprises the following steps:
s201, primary defect feature extraction: respectively carrying out local threshold processing on the first group of images and the second group of images to obtain defect characteristics of the first group of images and defect characteristics of the second group of images;
s202, defect features are extracted again:
firstly, traversing and comparing each pixel in a first group of images, selecting the value with the maximum gray level of the pixel point at each position, and synthesizing into a new first comparison image; then, sequentially carrying out multiplication operation and global threshold processing on the first comparison image to obtain the defect characteristics of the first comparison image;
firstly, traversing and comparing each pixel in the second group of images, selecting the value with the maximum gray level of the pixel point at each position, and synthesizing into a new second comparison image; then, carrying out multiplication operation and global threshold processing on the second comparison image in sequence to obtain the defect characteristics of the second comparison image;
s203, defect feature merging: combining the defect features of the first group of images obtained in the step S201 and the defect features of the first comparison images obtained in the step S202 to obtain all defect features of the first group of images, and combining the defect features of the second group of images obtained in the step S201 and the defect features of the second comparison images obtained in the step S202 to obtain all defect features of the second group of images;
s204, affine transforming all defect features of the first group of images to all defect feature positions of the second group of images, respectively expanding the defect features of the corresponding positions of the first group of images and the second group of images, carrying out differential operation on the expanded defect features to obtain a difference image, regarding pixels which are not 0 in the difference image as defects of the lens, and regarding pixels which are converted from non-0 to 0 as dust.
Preferably, in S1, the first group of images is captured before the lens to be detected is dedusted, and the second group of images is captured after the lens to be detected is dedusted.
Preferably, in S1, the first group of images includes an image of the lens to be detected and a positioning-mark image, and the second group of images likewise includes an image of the lens to be detected and a positioning-mark image.
Preferably, the positioning marks are four positioning holes located around the lens to be detected, the four holes sitting at the four corners of a rectangular frame.
Preferably, the affine transformation proceeds as follows:
a. determining the reference-point coordinate difference: fitting the contour of the lens to be detected in the first group of images and the contour of the lens to be detected in the second group of images each to a circle, taking the circle center from the first group of images as the source-point coordinates and the circle center from the second group of images as the target-point coordinates, and computing the difference between the source-point and target-point coordinates to obtain the reference-point coordinate difference;
b. determining the angle difference: computing a first angle between the horizontal and the rectangular frame formed by the four positioning holes in the positioning-mark image of the first group of images, computing a second angle between the horizontal and the rectangular frame formed by the four positioning holes in the positioning-mark image of the second group of images, and taking the difference of the two angles to obtain the angle difference;
c. determining a transformation matrix from the reference-point coordinate difference and the angle difference;
d. affine-transforming all defect features of the first group of images onto the defect-feature positions of the second group of images using the transformation matrix.
Preferably, S2 further comprises: detecting, based on the diffraction principle, the defect of the lens film layer not reaching the edge.
Preferably, S2 further comprises: analyzing lens demolding and edge-hanging defects.
A second object of the present invention is to provide a multi-station inspection apparatus for inspecting a lens, comprising:
a first station: acquiring the first group of images, extracting the position of the lens to be detected and the positions of the positioning holes from the first group of images, and extracting the defect features on the lens to be detected;
a second station: acquiring the second group of images, extracting the position of the lens to be detected and the positions of the positioning holes from the second group of images, and extracting the defect features on the lens to be detected;
a third station: illuminating with a bowl light source to acquire a third image, and analyzing the third image to detect the defect of the lens film layer not reaching the edge;
a fourth station: illuminating with two inclined point light sources and photographing, through the lens, its projection on a background plate to detect lens demolding and edge-hanging defects;
an image processing module configured to perform the following steps:
first defect-feature extraction: performing local threshold processing on the first group of images and on the second group of images respectively to obtain the defect features of the first group of images and the defect features of the second group of images;
second defect-feature extraction:
traversing and comparing the pixels of the first group of images, taking the maximum gray value at each pixel position, and compositing these values into a new first comparison image; then performing a multiplication operation followed by global threshold processing on the first comparison image to obtain the defect features of the first comparison image;
traversing and comparing the pixels of the second group of images, taking the maximum gray value at each pixel position, and compositing these values into a new second comparison image; then performing a multiplication operation followed by global threshold processing on the second comparison image to obtain the defect features of the second comparison image;
defect-feature merging: merging the defect features of the first group of images with the defect features of the first comparison image to obtain all defect features of the first group of images, and merging the defect features of the second group of images with the defect features of the second comparison image to obtain all defect features of the second group of images; and
affine-transforming all defect features of the first group of images onto the defect-feature positions of the second group of images, dilating the defect features at the corresponding positions of the two groups, and performing a difference operation on the dilated features to obtain a difference image, in which non-zero pixels are identified as lens defects and pixels that change from non-zero to zero are identified as dust.
The invention has the following advantages and positive effects:
through the difference operation, dust interference can be quickly eliminated when identifying defect features, which improves the accuracy of lens detection;
by detecting multiple defect types at multiple stations simultaneously, the invention further improves both the efficiency and the accuracy of detection.
Detailed Description
In order to further explain the contents, features and effects of the present invention, the following embodiments are described in detail:
The technical solutions in the embodiments of the present invention are described clearly and completely below. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art from the technical solutions of the present invention without creative effort fall within the protection scope of the present invention.
A lens inspection method, comprising:
s1, obtaining a lens image of a lens to be detected, wherein the lens image comprises a first group of images and a second group of images; aiming at the same lens to be detected, two groups of different images are obtained under different conditions; the first purpose of this application is to get rid of the dust interference, detect the actual defect of lens, for this reason, as preferred embodiment, the first group image is the image before the lens that awaits measuring removes dust, and the second group image is the image after the lens that awaits measuring removes dust. In practical cases, the second set of images may also be images of the lens to be tested in a dust environment different from the first set of images; in order to facilitate subsequent image processing: the first group of images comprise images of the lens to be measured and the positioning mark images, and the second group of images comprise images of the lens to be measured and the positioning mark images. The positioning marks are four positioning holes positioned on the periphery of the lens to be measured, and the four positioning holes are positioned at four vertex angles of the rectangular frame.
S2, analyzing the lens image, the analysis comprising the following steps:
s201, respectively carrying out defect analysis on the first group of images and the second group of images, and carrying out local threshold processing to obtain defect characteristics of the first group of images and defect characteristics of the second group of images, wherein the defect characteristics are in a strip shape or a sheet shape, the area is large, and the brightness is low.
S202, comparing the first group of images pixel by pixel, taking at each pixel position the value from the brightest image, and compositing these values into a new image, named the first comparison image; performing a multiplication operation on the first comparison image to boost the brightness of point-like objects so that defects stand out more clearly, and then applying global threshold processing to obtain the defect features of the first comparison image. Likewise, comparing the second group of images pixel by pixel, taking at each pixel position the value from the brightest image, and compositing these values into a new image, named the second comparison image; performing a multiplication operation on the second comparison image to boost the brightness of point-like objects, and then applying global threshold processing to obtain the defect features of the second comparison image. The defect features extracted in this step appear as point particles with relatively small area and high brightness.
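A minimal sketch of this second extraction pass, assuming each group is a list of aligned 8-bit grayscale frames: the frames are fused by a per-pixel maximum, the fused comparison image is multiplied with itself (squared and rescaled) as one possible reading of the multiplication step, and a single global threshold yields the comparison-image defect mask. The gain and threshold values are assumptions.

```python
import cv2
import numpy as np

def extract_comparison_image_defects(image_group, global_thresh=200):
    """Second extraction pass (S202): per-pixel maximum composite, brightness
    boost by multiplication, then one global threshold for small bright dots."""
    # Per-pixel maximum across all frames of the group -> comparison image.
    comparison = np.maximum.reduce(image_group)

    # Multiplication step: squaring (rescaled to 8 bit, saturating) stretches
    # bright point-like features relative to the dimmer background.
    boosted = cv2.multiply(comparison, comparison, scale=1.0 / 255.0)

    # Global threshold keeps only the small, high-brightness features.
    _, defect_mask = cv2.threshold(boosted, global_thresh, 255, cv2.THRESH_BINARY)
    return comparison, defect_mask
```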
S203, merging the defect features of the first group of images obtained in S201 with the defect features of the first comparison image obtained in S202 to obtain all defect features of the first group of images, and merging the defect features of the second group of images obtained in S201 with the defect features of the second comparison image obtained in S202 to obtain all defect features of the second group of images.
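This merging step amounts to a pixel-wise OR of the two binary masks; a short sketch, assuming the S201 and S202 masks are 0/255 images of equal size:

```python
import cv2

def merge_defect_masks(local_mask, comparison_mask):
    """Union of the local-threshold mask (S201) and the global-threshold
    comparison-image mask (S202) gives all defect features of a group."""
    return cv2.bitwise_or(local_mask, comparison_mask)

# all_defects_first  = merge_defect_masks(first_local_mask,  first_comparison_mask)
# all_defects_second = merge_defect_masks(second_local_mask, second_comparison_mask)
```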
S204, affine-transforming all defect features of the first group of images onto the defect-feature positions of the second group of images, dilating all defect features at the corresponding positions of the two groups, and performing a difference operation on the dilated features to obtain a difference image, wherein non-zero pixels of the difference image correspond to defects of the lens and pixels that change from non-zero to zero correspond to dust.
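The sketch below illustrates the dilation and difference step, assuming the first-group mask has already been warped into the second-group image frame (see the affine transformation described next) and both masks are 0/255 images. It follows the overlap rule restated in the station workflow later in this description: features present at corresponding positions in both groups are treated as lens defects, while features that disappear after dedusting are treated as dust. The kernel size is an assumption.

```python
import cv2

def classify_defects_vs_dust(mask_before_aligned, mask_after, kernel_size=7):
    """Dilate both defect masks, difference them, and separate real lens
    defects from dust: features present before and after dedusting overlap
    (their difference drops to zero) and are kept as defects; features present
    only before dedusting have no counterpart afterwards and are dust."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (kernel_size, kernel_size))
    dil_before = cv2.dilate(mask_before_aligned, kernel)
    dil_after = cv2.dilate(mask_after, kernel)

    # Difference image: zero where the dilated features overlap, non-zero
    # where a feature exists in only one of the two masks.
    diff = cv2.absdiff(dil_before, dil_after)

    lens_defects = cv2.bitwise_and(dil_before, cv2.bitwise_not(diff))
    dust = cv2.bitwise_and(dil_before, diff)
    return lens_defects, dust
```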
The affine transformation proceeds as follows:
a. determining the reference-point coordinate difference: fitting the contour of the lens to be detected in the first group of images and the contour of the lens to be detected in the second group of images each to a circle, taking the circle center from the first group of images as the source-point coordinates and the circle center from the second group of images as the target-point coordinates, and computing the difference between the source-point and target-point coordinates to obtain the reference-point coordinate difference;
b. determining the angle difference: computing a first angle between the horizontal and the rectangular frame formed by the four positioning holes in the positioning-mark image of the first group of images, computing a second angle between the horizontal and the rectangular frame formed by the four positioning holes in the positioning-mark image of the second group of images, and taking the difference of the two angles to obtain the angle difference;
c. determining a transformation matrix from the reference-point coordinate difference and the angle difference;
d. affine-transforming all defect features of the first group of images onto the defect-feature positions of the second group of images using the transformation matrix.
When processing the images, the center point of the second group of images or of the first group of images is first translated according to the reference-point coordinate difference so that the fitted circles of the two groups coincide; the images are then rotated according to the angle difference so that the defect positions can coincide.
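A sketch of steps a to d under stated assumptions: the translation comes from the difference of the fitted circle centers and the rotation from the angle difference of the rectangles spanned by the four positioning holes. The use of cv2.minEnclosingCircle and cv2.minAreaRect, and the rotation sign convention, are illustrative choices rather than the patent's prescribed implementation.

```python
import cv2
import numpy as np

def build_alignment_matrix(src_lens_contour, dst_lens_contour,
                           src_hole_centers, dst_hole_centers):
    """Steps a-d: source/target points from circles fitted to the lens contour,
    rotation from the angle difference of the positioning-hole rectangles."""
    (sx, sy), _ = cv2.minEnclosingCircle(src_lens_contour)   # source point
    (tx, ty), _ = cv2.minEnclosingCircle(dst_lens_contour)   # target point

    # Angle of the rectangle formed by the four positioning holes, per station.
    src_angle = cv2.minAreaRect(np.asarray(src_hole_centers, np.float32))[2]
    dst_angle = cv2.minAreaRect(np.asarray(dst_hole_centers, np.float32))[2]

    # Rotate about the source centre by the angle difference (the sign may need
    # flipping depending on how the angles are measured), then shift the
    # source centre onto the target centre.
    matrix = cv2.getRotationMatrix2D((sx, sy), dst_angle - src_angle, 1.0)
    matrix[:, 2] += np.array([tx - sx, ty - sy])
    return matrix

def align_defect_mask(defect_mask_first, matrix, output_shape):
    """Warp the first-station defect mask onto the second-station image frame."""
    h, w = output_shape
    return cv2.warpAffine(defect_mask_first, matrix, (w, h),
                          flags=cv2.INTER_NEAREST)
```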
S2 further comprises: detecting, based on the diffraction principle, the defect of the lens film layer not reaching the edge.
S2 further comprises: analyzing lens demolding and edge-hanging defects.
A second object of the present invention is to provide a multi-station detection device for lens detection, comprising:
a first station: acquiring the first group of images, extracting the position of the lens to be detected and the positions of the positioning holes from the first group of images, and extracting the defect features on the lens to be detected;
a second station: acquiring the second group of images, extracting the position of the lens to be detected and the positions of the positioning holes from the second group of images, and extracting the defect features on the lens to be detected;
a third station: illuminating with a bowl light source to acquire a third image, and analyzing the third image to detect the defect of the lens film layer not reaching the edge;
a fourth station: illuminating with two inclined point light sources and photographing, through the lens, its projection on a background plate to detect lens demolding and edge-hanging defects;
the specific working process of the image processing module is as follows:
and respectively carrying out defect analysis on the first group of images and the second group of images, and carrying out local threshold processing to obtain defect characteristics of the first group of images and the second group of images, wherein the defect characteristics are in a strip shape or a sheet shape, the area is larger, and the brightness is lower.
And comparing the first group of images pixel by pixel, selecting the pixel value of the brightest image of each pixel point to synthesize a new image, multiplying the new image, enhancing the brightness of the point-like object to enable the defect contrast to be more obvious, and processing the global threshold value to obtain the defect characteristics of the first group of images. And comparing the second group of images pixel by pixel, selecting the pixel value of the image with the brightest pixel point to synthesize a new image, multiplying the new image to enhance the brightness of the point-like object, so that the defect contrast is more obvious, and performing global threshold processing to obtain the defect characteristics of the second group of images, wherein the defect characteristics are represented as point particles, the area is smaller, and the brightness is higher.
And combining the defect features of the first group of images and the defect features of the first group of images to obtain all the defect features of the first group of images, and combining the defect features of the second group of images and the defect features of the second group of images to obtain all the defect features of the second group of images.
Affine transforming all defect characteristics of the first group of images to all defect characteristic positions of the second group of images, respectively expanding the defect characteristics of the corresponding positions of the first group of images and the second group of images, and carrying out differential operation on the expanded defect characteristics to obtain a difference image, wherein non-0 pixel points on the difference image are defects of the lens, and the pixel points converted from non-0 to 0 are dust.
The preferred embodiment described above mainly comprises four detection stations:
1. The first station and the second station mainly detect defects such as scratches, point particles, bright spots, chipped edges, bubbles, fibrous foreign matter, cracks, haze and film-layer foreign matter. The point-particle and bright-spot defects detected at these two stations image very similarly to dust, so dust and defects cannot be distinguished from a single image. To avoid dust affecting the detection result, the first and second stations photograph the same lens before and after dedusting respectively, and a difference computation is performed on the two pictures. The specific steps are as follows:
1. and drawing at a first station, extracting the position of the lens and the position of the positioning hole on the clamping jaw, and extracting the defect characteristics on the lens.
2. And (3) removing dust from the lens, and removing or changing the position of the dust.
3. And drawing at a second station, extracting the positions of the lenses and the positions of the positioning holes on the clamping jaws, and extracting defect characteristics on the lenses.
4. And calculating a transformation matrix according to the positions of the lens and the positions of the positioning holes extracted in the first step and the third step, and performing affine transformation on the defect characteristics extracted in the first step to enable the defect characteristics of the first detection station to be overlapped with the defect characteristics of the second detection station.
5. And D, calculating difference of defect characteristics obtained in the third step and the fourth step, wherein if the positions of the defects extracted twice are overlapped, the defects are regarded as the defects of the lens, and if the positions of the defects are not overlapped, the defects are regarded as dust.
In steps 1 and 3, a combination of a local threshold and a global threshold is used when extracting the defect features of the lens: the local threshold extracts defects that are not visually obvious but have a large area, while the global threshold extracts defects that have a small area but obvious features.
In step 4, when the transformation matrix is calculated, the source-point coordinates are the center of the circle fitted to the lens contour extracted at the first detection station and the corresponding rotation angle is the angle of the rectangle formed by the four positioning holes at the first detection station; the target-point coordinates are the center of the circle fitted to the lens contour extracted at the second detection station and the corresponding rotation angle is the angle of the rectangle formed by the four positioning holes at the second detection station.
2. The third detection station is illuminated by a bowl light source to photograph the defect of the lens film layer not reaching the edge. A normally coated lens appears black under the bowl light, while a lens without coating appears white, so the defect can be detected by judging the image brightness of the lens region (a short brightness-check sketch is given after this list).
3. The fourth detection station is illuminated by two inclined point light sources; the camera photographs, through the lens, the projection of the lens on the background plate, on which the features of lens demolding and edge-hanging defects are more obvious.
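As an illustration of the brightness test at the third station (item 2 above), a minimal sketch assuming a binary mask of the lens region in the third image is available: the mean gray level inside the lens region is compared against a threshold, and a high mean brightness flags a film layer that does not reach the edge. The threshold value is an assumption.

```python
import cv2

def coating_edge_defect_suspected(third_image_gray, lens_region_mask,
                                  brightness_thresh=120):
    """Bowl-light check: a properly coated lens images dark (black) while an
    uncoated area images bright (white), so a high mean gray level inside the
    lens region indicates the film layer does not reach the edge."""
    mean_gray = cv2.mean(third_image_gray, mask=lens_region_mask)[0]
    return mean_gray > brightness_thresh
```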
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the present invention in any way, and all simple modifications, equivalent changes and modifications made to the above embodiment according to the technical spirit of the present invention are within the scope of the technical solution of the present invention.

Claims (8)

1. A method for inspecting a lens, comprising:
s1, obtaining a lens image of a lens to be detected, wherein the lens image comprises a first group of images and a second group of images;
s2, analyzing the lens image; the method comprises the following steps:
s201, primary defect feature extraction: respectively carrying out local threshold processing on the first group of images and the second group of images to obtain defect characteristics of the first group of images and defect characteristics of the second group of images;
s202, defect features are extracted again:
firstly, traversing and comparing the pixels of the first group of images, taking the maximum gray value at each pixel position, and compositing these values into a new first comparison image; then performing a multiplication operation followed by global threshold processing on the first comparison image to obtain the defect features of the first comparison image;
then traversing and comparing the pixels of the second group of images, taking the maximum gray value at each pixel position, and compositing these values into a new second comparison image; then performing a multiplication operation followed by global threshold processing on the second comparison image to obtain the defect features of the second comparison image;
S203, defect feature merging: merging the defect features of the first group of images obtained in step S201 with the defect features of the first comparison image obtained in step S202 to obtain all defect features of the first group of images, and merging the defect features of the second group of images obtained in step S201 with the defect features of the second comparison image obtained in step S202 to obtain all defect features of the second group of images;
S204, affine transforming all defect features of the first group of images onto the defect-feature positions of the second group of images, dilating the defect features at the corresponding positions of the two groups, and performing a difference operation on the dilated features to obtain a difference image, identifying non-zero pixels of the difference image as defects of the lens and pixels that change from non-zero to zero as dust.
2. The lens inspection method according to claim 1, wherein in S1 the first group of images is captured before the lens to be detected is dedusted and the second group of images is captured after the lens to be detected is dedusted.
3. The lens detection method according to claim 2, wherein in S1 the first group of images includes an image of the lens to be detected and a positioning-mark image, and the second group of images includes an image of the lens to be detected and a positioning-mark image.
4. The lens inspection method according to claim 3, wherein the positioning marks are four positioning holes located around the lens to be detected, the four positioning holes sitting at the four corners of a rectangular frame.
5. The lens detection method according to claim 4, wherein the affine transformation proceeds as follows:
a. determining the reference-point coordinate difference: fitting the contour of the lens to be detected in the first group of images and the contour of the lens to be detected in the second group of images each to a circle, taking the circle center from the first group of images as the source-point coordinates and the circle center from the second group of images as the target-point coordinates, and computing the difference between the source-point and target-point coordinates to obtain the reference-point coordinate difference;
b. determining the angle difference: computing a first angle between the horizontal and the rectangular frame formed by the four positioning holes in the positioning-mark image of the first group of images, computing a second angle between the horizontal and the rectangular frame formed by the four positioning holes in the positioning-mark image of the second group of images, and taking the difference of the two angles to obtain the angle difference;
c. determining a transformation matrix from the reference-point coordinate difference and the angle difference;
d. affine transforming all defect features of the first group of images onto the defect-feature positions of the second group of images using the transformation matrix.
6. The lens inspection method of claim 1, wherein S2 further comprises: detecting, based on the diffraction principle, the defect of the lens film layer not reaching the edge.
7. The lens inspection method of claim 1, wherein S2 further comprises: analyzing lens demolding and edge-hanging defects.
8. A multi-station detection device for lens detection, characterized by comprising:
a first station: acquiring a first group of images, extracting the position of the lens to be detected and the positions of the positioning holes from the first group of images, and extracting the defect features on the lens to be detected;
a second station: acquiring a second group of images, extracting the position of the lens to be detected and the positions of the positioning holes from the second group of images, and extracting the defect features on the lens to be detected;
a third station: illuminating with a bowl light source to acquire a third image, and analyzing the third image to detect the defect of the lens film layer not reaching the edge;
a fourth station: illuminating with two inclined point light sources and photographing, through the lens, its projection on a background plate to detect lens demolding and edge-hanging defects;
an image processing module configured to perform the following steps:
first defect-feature extraction: performing local threshold processing on the first group of images and on the second group of images respectively to obtain the defect features of the first group of images and the defect features of the second group of images;
second defect-feature extraction:
traversing and comparing the pixels of the first group of images, taking the maximum gray value at each pixel position, and compositing these values into a new first comparison image; then performing a multiplication operation followed by global threshold processing on the first comparison image to obtain the defect features of the first comparison image;
traversing and comparing the pixels of the second group of images, taking the maximum gray value at each pixel position, and compositing these values into a new second comparison image; then performing a multiplication operation followed by global threshold processing on the second comparison image to obtain the defect features of the second comparison image;
defect-feature merging: merging the defect features of the first group of images with the defect features of the first comparison image to obtain all defect features of the first group of images, and merging the defect features of the second group of images with the defect features of the second comparison image to obtain all defect features of the second group of images; and
affine transforming all defect features of the first group of images onto the defect-feature positions of the second group of images, dilating the defect features at the corresponding positions of the two groups, and performing a difference operation on the dilated features to obtain a difference image, identifying non-zero pixels of the difference image as defects of the lens and pixels that change from non-zero to zero as dust.
Application CN202211330558.7A; priority date 2022-10-28; filing date 2022-10-28; title: Lens detection method and multi-station detection device; status: Active; granted publication: CN115393358B.

Priority Applications (1)

Application Number: CN202211330558.7A (granted as CN115393358B); Priority Date: 2022-10-28; Filing Date: 2022-10-28; Title: Lens detection method and multi-station detection device

Applications Claiming Priority (1)

Application Number: CN202211330558.7A (granted as CN115393358B); Priority Date: 2022-10-28; Filing Date: 2022-10-28; Title: Lens detection method and multi-station detection device

Publications (2)

Publication Number: CN115393358A (application publication); Publication Date: 2022-11-25
Publication Number: CN115393358B (granted patent); Publication Date: 2023-01-31

Family

ID=84115018

Family Applications (1)

Application Number: CN202211330558.7A (Active, granted as CN115393358B); Title: Lens detection method and multi-station detection device; Priority Date: 2022-10-28; Filing Date: 2022-10-28

Country Status (1)

Country: CN; Publication: CN115393358B

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447851A (en) * 2015-11-12 2016-03-30 刘新辉 Glass panel sound hole defect detection method and system
CN111028213A (en) * 2019-12-04 2020-04-17 北大方正集团有限公司 Image defect detection method and device, electronic equipment and storage medium
CN111986190A (en) * 2020-08-28 2020-11-24 哈尔滨工业大学(深圳) Printed matter defect detection method and device based on artifact elimination
CN113034474A (en) * 2021-03-30 2021-06-25 无锡美科微电子技术有限公司 Test method for wafer map of OLED display
CN113252568A (en) * 2021-06-10 2021-08-13 菲特(天津)检测技术有限公司 Lens surface defect detection method, system, product and terminal based on machine vision
CN113822890A (en) * 2021-11-24 2021-12-21 中科慧远视觉技术(北京)有限公司 Microcrack detection method, device and system and storage medium
CN115082485A (en) * 2022-08-23 2022-09-20 南通华烨塑料工业有限公司 Method and system for detecting bubble defects on surface of injection molding product

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
Gang He et al.: "Research on Surface Defect Detection of Camera Module Lens Based on YOLOv5s-Small-Target", Electronics *
Han Xu et al.: "Annotation-free defect detection for glasses based on convolutional auto-encoder with skip connections", Materials Letters *
Milad Eshkevari et al.: "Automatic dimensional defect detection for glass vials based on machine vision: A heuristic segmentation method", Journal of Manufacturing Processes *
Xiang Yichuan: "Research on a Machine-Vision-Based Surface Defect Detection System for Optical Lenses", China Masters' Theses Full-text Database, Engineering Science and Technology II *
Zhang Xin et al.: "Discussion on Intelligent Inspection Technology for Spectacle Lens Surface Quality", Glass Enamel & Eyeglasses *
Cao Yu et al.: "An Improved Threshold Segmentation Algorithm Applied to Lens Defect Detection", Laser & Optoelectronics Progress *
Wang Guopeng et al.: "Real-Time Detection Method for Mobile Phone Lens Defects Based on the YOLOv2 Network Model", Automation & Information Engineering *

Also Published As

Publication Number: CN115393358B; Publication Date: 2023-01-31


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant