CN117094989B - Lens quality management method and system for optical sighting telescope - Google Patents

Lens quality management method and system for optical sighting telescope

Info

Publication number
CN117094989B
CN117094989B (Application CN202311331172.2A)
Authority
CN
China
Prior art keywords
result
feature
dark
detection image
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311331172.2A
Other languages
Chinese (zh)
Other versions
CN117094989A (en)
Inventor
闻智 (Wen Zhi)
Current Assignee
Nantong Pengsheng Machinery Co ltd
Original Assignee
Nantong Pengsheng Machinery Co ltd
Priority date
Filing date
Publication date
Application filed by Nantong Pengsheng Machinery Co ltd filed Critical Nantong Pengsheng Machinery Co ltd
Priority claimed from CN202311331172.2A
Publication of CN117094989A
Application granted
Publication of CN117094989B

Classifications

    • G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G01B 11/24: Measuring contours or curvatures using optical techniques
    • G01N 21/8851: Scan or image signal processing for detecting defects
    • G01N 21/958: Inspecting transparent materials or objects, e.g. windscreens
    • G06V 10/74: Image or video pattern matching; proximity measures in feature spaces
    • G06V 10/7715: Feature extraction, e.g. by transforming the feature space
    • G06V 10/806: Fusion of extracted features
    • G06V 10/82: Image or video recognition using neural networks
    • G01N 2021/8887: Defect detection based on image processing techniques
    • G06T 2207/10052: Image acquisition modality; images from lightfield camera

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Chemical & Material Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Eyeglasses (AREA)

Abstract

The invention provides a lens quality management method and system for an optical sighting telescope, relating to the technical field of data testing. It solves the prior-art technical problem of low lens quality caused by insufficient management and control of optical-sighting-telescope lens testing, realizing rationalized and accurate testing of the lens and improving lens quality.

Description

Lens quality management method and system for optical sighting telescope
Technical Field
The invention relates to the technical field of data testing, in particular to a lens quality management method and system for an optical sighting telescope.
Background
With the development of science and technology, particularly in the field of lens testing, an optical sighting telescope must undergo multiple detection procedures after manufacture, of which lens quality detection is among the most important. Traditional lens quality detection methods include visual inspection and magnified inspection with an industrial camera. In these traditional methods, however, fingerprints and dust strongly interfere with the detection results; fine scratches, faint fingerprints, and dust in particular are difficult to distinguish quickly even under instrument magnification. In other words, the prior art suffers from the technical problem that insufficient control over testing the lens of an optical sighting telescope results in low lens quality.
Disclosure of Invention
The application provides a lens quality management method and system for an optical sighting telescope, which address the prior-art technical problem that insufficient management and control of optical-sighting-telescope lens testing results in low lens quality.
In view of the foregoing, the present application provides a lens quality management method and system for an optical sighting telescope.
In a first aspect, the present application provides a lens quality management method for an optical sighting telescope, the method comprising: establishing a communication connection with an identification unit and reading the unique identification ID of a target lens; performing control initialization on the detection equipment, the control initialization comprising background alternation initialization, detection contour initialization and detection strategy initialization, and being triggered based on the unique identification ID; adjusting the background to a dark background, lighting the target lens with a side light source, and receiving the reflected light with a CCD camera arranged opposite the dark background to generate a dark detection image; adjusting the background to a light background and, after reconfiguring the CCD camera parameters, acquiring an image of the target lens to generate a light detection image; establishing a mapping relation between the dark detection image and the light detection image, and performing image feature extraction on the dark detection image and the light detection image respectively with a backbone network to generate an initial feature extraction result; mutually mapping the initial feature extraction results on the basis of the mapping relation to generate a subtended attention region, and performing neck-network enhanced feature extraction on the subtended attention region; performing feature similarity matching on the enhanced feature extraction result, and completing the mapping of abnormal features onto the dark detection image and the light detection image according to the feature similarity matching result; and recording the mapping result to the unique identification ID.
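Read as a pipeline, the first-aspect method can be sketched in Python as follows. This is an illustrative orchestration only: the function names, the callable stand-ins and the record layout are assumptions for the sketch, not anything specified by the patent, and the actual equipment control is outside its text.

```python
def lens_quality_pipeline(lens_id, capture_dark, capture_light,
                          extract_features, match_features):
    """Illustrative orchestration of the claimed steps. The four
    callables stand in for the two capture configurations, the
    backbone/neck networks and the similarity matching network;
    none of their names come from the patent."""
    record = {"id": lens_id}                  # unique identification ID
    dark_img = capture_dark()                 # dark background + side light
    light_img = capture_light()               # light background, new params
    dark_feat, light_feat = extract_features(dark_img, light_img)
    record["anomalies"] = match_features(dark_feat, light_feat)
    return record                             # mapping result recorded to ID

# Toy stand-ins so the sketch runs end to end.
result = lens_quality_pipeline(
    "LENS-0001",
    capture_dark=lambda: "dark-image",
    capture_light=lambda: "light-image",
    extract_features=lambda d, l: ({"scratch"}, {"scratch"}),
    match_features=lambda d, l: sorted(d & l),
)
print(result)
```

The point of the structure is that every downstream artifact (features, anomaly mapping) ends up keyed to the lens's unique identification ID, matching the final recording step of the claim.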
In a second aspect, the present application provides a lens quality management system for an optical sighting telescope, the system comprising: a connection establishment module for establishing a communication connection with an identification unit and reading the unique identification ID of a target lens; a control initialization module for performing control initialization on the detection equipment, the control initialization comprising background alternation initialization, detection contour initialization and detection strategy initialization, and being triggered based on the unique identification ID; a lighting processing module for adjusting the background to a dark background, lighting the target lens with a side light source, and receiving the reflected light with a CCD camera arranged opposite the dark background to generate a dark detection image; an image acquisition module for adjusting the background to a light background and, after reconfiguring the CCD camera parameters, acquiring an image of the target lens to generate a light detection image; an image feature extraction module for establishing a mapping relation between the dark detection image and the light detection image and performing image feature extraction on each with a backbone network to generate an initial feature extraction result; a mutual mapping module for mutually mapping the initial feature extraction results on the basis of the mapping relation to generate a subtended attention region and performing neck-network enhanced feature extraction on it; a mapping module for performing feature similarity matching on the enhanced feature extraction result and completing the mapping of abnormal features onto the dark detection image and the light detection image according to the feature similarity matching result; and an identification module for recording the mapping result to the unique identification ID.
One or more technical solutions provided in the present application have at least the following technical effects or advantages:
The application provides a lens quality management method and system for an optical sighting telescope, relating to the technical field of data testing. It solves the prior-art technical problem of low lens quality caused by insufficient management and control of optical-sighting-telescope lens testing, thereby realizing rationalized and accurate testing of the lens and improving lens quality.
Drawings
FIG. 1 is a flow chart of a method for lens quality management for an optical sighting telescope;
FIG. 2 is a schematic diagram of a feature similarity matching process in a lens quality management method for an optical sighting telescope;
FIG. 3 is a schematic view of a lens quality management system for an optical sighting telescope.
Reference numerals illustrate: the system comprises a connection establishment module 1, a control initialization module 2, a lighting processing module 3, an image acquisition module 4, an image feature extraction module 5, a mutual mapping module 6, a mapping module 7 and an identification module 8.
Detailed Description
The application provides a lens quality management method and system for an optical sighting telescope, which address the prior-art technical problem that insufficient management and control of optical-sighting-telescope lens testing results in low lens quality.
Example 1
As shown in fig. 1, an embodiment of the present application provides a lens quality management method for an optical sighting telescope, the method including:
step A100: establishing communication connection with the identification unit and reading the unique identification ID of the target lens;
in the application, the lens quality management method for the optical sighting telescope is applied to a lens quality management system of the optical sighting telescope, and the lens quality management system of the optical sighting telescope is in communication connection with an identification unit, and the identification unit is used for collecting optical sighting telescope parameters required to be subjected to quality test.
To ensure accuracy in the quality test of the optical sighting telescope, the ID of the lens currently to be quality-tested must first be read by the identification unit in communication with the system. The read ID information corresponds one-to-one with the optical-sighting-telescope lenses under test, and the identification ID of the target lens is unique: it is the number that uniquely identifies the target lens within the lens quality management system, and it later serves as an important reference basis for realizing lens quality management of the optical sighting telescope.
Step A200: performing control initialization on the detection equipment, wherein the control initialization comprises background alternate initialization, detection contour initialization and detection strategy initialization, and the control initialization is triggered based on the unique identification ID;
In this application, to make the quality detection of the optical sighting telescope accurate, control initialization must first be performed on the detection equipment. Control initialization refers to the processing by which the equipment's control outputs are restored to an undisturbed state, and it comprises background alternation initialization, detection contour initialization and detection strategy initialization. Background alternation initialization assigns a default value to the brightness level of the test background as it is changed during the quality test; detection contour initialization monitors the contour of the lens under test and assigns the monitored contour data a default value; and detection strategy initialization arranges the test items of the optical sighting telescope into a test procedure. The control initialization of the detection equipment is triggered by the unique identification ID read above, thereby realizing verification for quality control of the optical sighting telescope.
Step A300: adjusting the background to be a dark background, performing polishing treatment on the target lens through a side light source, and receiving reflected light through a CCD camera arranged opposite to the dark background to generate a dark detection image;
In this application, to test and manage the quality of the optical-sighting-telescope lens more accurately, the lens test background is adjusted on the basis of background alternation initialization. The test background is first adjusted to a dark background, i.e. a low-brightness dark area in the lens test scene, so that lowering the background brightness reduces interference with the lens test. Dark-field detection characterizes lens defects more accurately, although the detected position of a defect may be offset. The detection process is as follows: the target lens is lit by a side light source, with only the lens under test illuminated and no fill light applied elsewhere; the light reflected off the lens is captured by a CCD camera arranged opposite the dark background; and a dark detection image of the target lens is generated from the captured reflected light, laying a solid foundation for the subsequent lens quality management of the optical sighting telescope.
Step A400: adjusting the background to be an optical background, and after reconfiguring CCD camera parameters, executing image acquisition of the target lens to generate an optical detection image;
In this application, to test and manage the quality of the lens more accurately, the test background is then adjusted to a light background, i.e. a higher-brightness scene that separates the subject from the background, consisting mainly of the light illuminating the surroundings and the background of the photographed object. Bright-field detection locates defects on the lens more accurately, but renders them faint, so flaw recognition is less reliable on its own. The detection process is as follows: the CCD camera arranged opposite the background is reconfigured according to the brightness of the current light background (the reconfigured parameters may include the aperture value, exposure parameters and sensitivity), and an image of the target lens is then acquired and recorded as the light detection image for output, realizing the constraint on lens quality management of the optical sighting telescope.
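The complementarity of the two captures can be illustrated with a minimal NumPy sketch on synthetic data. The image size, noise levels and thresholds below are illustrative assumptions, not values from the patent: a scratch scatters the side light and appears bright on the dark detection image, while the same scratch appears dark on the light detection image, and because both captures share one camera pose the two images are pixel-aligned, which is what makes the later mapping relation possible.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 64, 64

# Simulated dark-field capture: with side lighting on a dark background,
# a defect scatters light and shows up as bright pixels.
dark_img = rng.normal(20, 3, (H, W))      # dim, near-uniform background
dark_img[30, 10:50] += 120                # a scratch scattering the side light

# Simulated bright-field capture of the same lens: the scratch now blocks
# light and shows up as dark pixels against the bright background.
light_img = rng.normal(200, 3, (H, W))
light_img[30, 10:50] -= 120

# Both captures share one camera pose, so pixel (r, c) in one image
# corresponds to pixel (r, c) in the other: the mapping relation that
# step A500 relies on.
dark_mask = dark_img > 80                 # bright-anomaly threshold (assumed)
light_mask = light_img < 140              # dark-anomaly threshold (assumed)

print(int(dark_mask.sum()), bool(np.array_equal(dark_mask, light_mask)))
# -> 40 True: both captures flag the same 40 scratch pixels
```

In the patent's scheme the two masks are not redundant: the dark capture characterizes the defect better while the light capture locates it better, which is why both are kept and later cross-mapped.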
Step A500: establishing a mapping relation between the dark detection image and the light detection image, and respectively executing image feature extraction of the dark detection image and the light detection image based on a backbone network to generate an initial feature extraction result;
Further, step A500 of the present application further includes:
step A510: performing gray level conversion on the dark detection image and the light detection image respectively, and extracting a gray level median region of the dark detection image and a gray level median region of the light detection image;
step A520: comparing the gray level dark detection image binary values by using the gray level median area of the dark detection image to generate initial abnormal positioning of a dark part;
step a530: comparing the gray scale light detection image binary values by using the gray scale median region of the light detection image to generate initial abnormal positioning of the light part;
step a540: and respectively executing detection image anomaly identification corresponding to the dark part initial anomaly positioning and the light part initial anomaly positioning through convolution anomaly characteristics to obtain initial characteristic extraction results.
In the application, to improve the accuracy of later quality detection of the target lens, its lens characteristics must first be determined. A mapping relation between the dark detection image and the light detection image is established on the basis of the target lens's basic parameters: for any value taken in the dark detection image there is a corresponding value in the light detection image, and vice versa. Image features are then extracted from the dark detection image and the light detection image respectively with a backbone network, the backbone network being the core part of a target detection network.
First, gray-level conversion is performed on the dark detection image and the light detection image respectively: each pixel's gray value in the source image is changed point by point according to a conversion relation determined by the target gray-level conversion condition. The gray median region of each converted image is then extracted. The gray median region of the dark detection image is obtained by averaging all gray values in the gray-converted dark detection image and marking the region whose gray values equal that mean; the gray median region of the light detection image is obtained in the same way from the gray-converted light detection image.
The gray dark detection image is then binary-compared against its gray median region: each pixel is labeled according to whether its gray value lies in the gray median region, and the regions whose gray values fall outside it yield the initial anomaly positioning of the dark part. Likewise, the gray light detection image is binary-compared against its gray median region, and the regions whose gray values fall outside it yield the initial anomaly positioning of the light part.
Detection-image anomaly identification corresponding to the dark-part and light-part initial anomaly positionings is then performed through convolution anomaly features. On the basis of the obtained initial anomaly positionings, the dark detection image and the light detection image are each divided into equal regions. For the dark detection image, the first region of the division is set as the starting point and identified as the zero region; the regions are then traversed, and the information in each region is matched against the convolution anomaly features to generate the dark detection anomaly features. The light detection image is processed in the same way to generate the light detection anomaly features. The initial feature extraction result of the target lens is determined from the dark and light detection anomaly features and serves as reference data for the later lens quality management of the optical sighting telescope.
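The thresholding idea of steps A510-A540 can be sketched in Python. The "gray median region" is read here as a tolerance band around the mean gray value, which is one plausible interpretation of the description rather than the patent's exact definition; the function name and the tolerance are illustrative assumptions.

```python
import numpy as np

def initial_anomaly_positioning(gray, tolerance=30.0):
    """Flag pixels whose gray value falls outside the image's gray
    'median region', modeled here as a band around the mean gray
    value (an assumed reading of the patent's description)."""
    center = gray.mean()
    in_median_region = np.abs(gray - center) <= tolerance
    return ~in_median_region        # True = initial anomaly candidate

# Tiny synthetic gray-converted dark detection image:
# uniform dim background plus one bright scratch.
gray = np.full((8, 8), 50.0)
gray[3, 2:6] = 200.0
mask = initial_anomaly_positioning(gray)
print(int(mask.sum()))              # 4 scratch pixels flagged
```

The same function applied to the gray light detection image would yield the light-part initial anomaly positioning; the subsequent equal-region traversal against convolution anomaly features then classifies what kind of defect each flagged region contains.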
Step A600: performing mutual mapping based on the mapping relation by using the initial feature extraction result, generating a subtended attention area, and performing the enhanced feature extraction of the neck network on the subtended attention area;
Further, step A600 of the present application further includes:
Step A610: setting a region expansion proportion;
Step A620: performing feature expansion on the initial feature extraction result according to the region expansion proportion to obtain a feature expansion result;
Step A630: performing mutual mapping according to the feature expansion result to generate the subtended attention region.
In the present application, to extract the effective features from the initial feature extraction result, the result is mutually mapped according to the mapping relation. First a region expansion proportion is set according to the mapping relation: the stronger the mapping relation, the higher the region expansion proportion. The initial feature extraction result is then expanded according to this proportion, so that features not captured in the initial extraction are picked up within the expanded region; the expanded result is recorded as the feature expansion result. The features in the feature expansion result are then mutually mapped against the features in the initial feature extraction result, and the subtended attention region is generated on this basis: the features that possess a mapping relation are locally attended to, and the attention shifts as the features to be detected change. Enhanced feature extraction by the neck network is then performed on the subtended attention region; the main role of the neck network is to further extract and integrate the features output by the backbone network, so that the lens quality can be determined accurately and improved afterwards.
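A minimal sketch of steps A610-A630 follows, assuming features are represented as rectangular boxes on a shared pixel grid. That representation, the union rule and the function names are assumptions for illustration; the patent does not specify how regions are encoded.

```python
def expand_region(box, proportion, shape):
    """Expand a (r0, c0, r1, c1) feature box by `proportion` of its size
    on each side, clipped to the image bounds (steps A610-A620)."""
    r0, c0, r1, c1 = box
    dh = int((r1 - r0) * proportion)
    dw = int((c1 - c0) * proportion)
    return (max(r0 - dh, 0), max(c0 - dw, 0),
            min(r1 + dh, shape[0]), min(c1 + dw, shape[1]))

def attention_region(dark_box, light_box, proportion, shape):
    """Mutually map the expanded dark-image and light-image boxes
    (same pixel grid) and take their union as the subtended
    attention region (step A630)."""
    d = expand_region(dark_box, proportion, shape)
    l = expand_region(light_box, proportion, shape)
    return (min(d[0], l[0]), min(d[1], l[1]),
            max(d[2], l[2]), max(d[3], l[3]))

# The same defect, located slightly differently in the two captures.
print(attention_region((40, 40, 50, 50), (42, 38, 52, 48), 0.2, (100, 100)))
# -> (38, 36, 54, 52)
```

The union of the two expanded boxes guarantees the neck network sees the defect even when the dark capture and the light capture disagree slightly about where it is, which is the stated motivation for the expansion proportion.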
Step A700: performing feature similarity matching according to the enhanced feature extraction result, and completing mapping of abnormal features on the dark detection image and the light detection image according to the feature similarity matching result;
further, as shown in fig. 2, step a700 of the present application further includes:
step a710: establishing a similar matching network, wherein the similar matching network comprises a characteristic similar constraint subunit and a position similar constraint subunit;
step A720: after the reinforced feature extraction result is subjected to optical feature and dark feature identification, synchronously inputting the optical feature and dark feature identification into the similarity matching network;
step a730: performing feature similarity recognition on the reinforced feature extraction result through the feature similarity constraint subunit, and outputting a feature similarity recognition result;
step a740: the position similarity constraint subunit executes the position similarity recognition of the light features and the dark features, and outputs a position similarity recognition result;
step a750: and finishing feature similarity matching according to the feature similarity recognition result and the position similarity recognition result.
Further, step a740 of the present application includes:
step A741: establishing an auxiliary authentication subunit, and coupling the auxiliary authentication subunit to the similarity matching network, wherein the auxiliary authentication subunit is a processing unit for discrimination and auxiliary correction;
step A742: after any feature similarity recognition result and position similarity recognition result are output, inputting them into the auxiliary authentication subunit for accurate judgment;
step a743: if the accurate judging result meets a preset threshold value, extracting position deviation data according to the position similarity identifying result;
step a744: and feeding the position deviation data back to the similarity matching network so as to perform control optimization of subsequent similarity matching.
In the present application, feature similarity matching is performed with the enhanced feature extraction result as the basic data. First, a similarity matching network is established, which comprises a feature similarity constraint subunit and a position similarity constraint subunit: the feature similarity constraint subunit is a unit used for constraining similar features during the quality test of the target lens, and the position similarity constraint subunit is a unit used for constraining the similar positions of features during the quality test of the target lens. Further, based on the enhanced feature extraction result, the light features and the dark features of the target lens contained therein are respectively identified, so that light features and dark features carrying identifications are distinguished and synchronously input into the similarity matching network. The feature similarity constraint subunit of the similarity matching network then performs feature similarity recognition on the identified light features and identified dark features, comparing attributes such as the color relationship in sequence within the images corresponding to the identified features, and outputs the feature similarity recognition result.
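Purely as a hedged illustration of the feature similarity recognition step (the descriptor vectors, the cosine metric, and the threshold value are assumptions, not the patented method):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature descriptor vectors,
    e.g. color-relationship and shape attributes of an identified
    light feature and an identified dark feature."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def feature_similarity_recognition(light_feats, dark_feats, threshold=0.9):
    """Compare each identified light feature against each identified dark
    feature in sequence and keep the index pairs whose descriptor
    similarity exceeds the threshold (the feature similarity result)."""
    matches = []
    for i, lf in enumerate(light_feats):
        for j, df in enumerate(dark_feats):
            s = cosine_similarity(lf, df)
            if s >= threshold:
                matches.append((i, j, s))
    return matches
```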
Further, position similarity recognition of the light features and the dark features is performed by the position similarity constraint subunit, which outputs a position similarity recognition result.
The auxiliary authentication subunit is a unit for performing secondary verification and correction on the output feature similarity recognition result and position similarity recognition result. It is coupled to the similarity matching network so that the subunit and the network cooperate and reinforce each other, completing tasks that neither single system could complete alone; to this end, the auxiliary authentication subunit is established as a processing unit for discrimination and auxiliary correction. After any feature similarity recognition result and position similarity recognition result are output, they are input into the auxiliary authentication subunit for accurate judgment. If the accurate judgment result meets a preset threshold, position deviation data, namely the deviation between corresponding feature positions in the dark detection image and the light detection image, are extracted from the position similarity recognition result and fed back to the similarity matching network, thereby completing control optimization of subsequent similarity matching.
Meanwhile, both the position similarity recognition result and the feature similarity recognition result are used to screen densely distributed defects of the target lens within the same range, and finally feature similarity matching is completed according to the feature similarity recognition result and the position similarity recognition result.
Further, feature mapping is performed, according to the feature similarity matching result, on the abnormal defect features and abnormal position features that may exist in the dark detection image and the light detection image, and the abnormal feature mapping result of the dark detection image and the light detection image of the target lens is determined, thereby ensuring better lens quality management of the optical sighting telescope at a later stage.
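The position similarity recognition and deviation feedback loop described above might be sketched as follows (the class name, the tolerance value, and the averaging rule for the feedback offset are illustrative assumptions):

```python
def position_similarity(dark_pos, light_pos, tol=5.0):
    """Position similarity recognition: a dark-image feature and a
    light-image feature are considered position-similar when their
    Euclidean distance is within the tolerance; the (dx, dy) pair is
    the position deviation data."""
    dx = light_pos[0] - dark_pos[0]
    dy = light_pos[1] - dark_pos[1]
    return (dx * dx + dy * dy) ** 0.5 <= tol, (dx, dy)

class SimilarityMatcher:
    """Minimal stand-in for the similarity matching network: accumulated
    position deviation data are fed back as a correction offset applied
    to later matches (the control optimization of subsequent matching)."""
    def __init__(self):
        self.offset = (0.0, 0.0)

    def feed_back(self, deviation):
        # average the new deviation into the running correction offset
        ox, oy = self.offset
        self.offset = ((ox + deviation[0]) / 2.0, (oy + deviation[1]) / 2.0)

    def corrected(self, dark_pos):
        # apply the learned offset to a dark-image feature position
        return (dark_pos[0] + self.offset[0], dark_pos[1] + self.offset[1])
```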
Further, step a730 of the present application includes:
step A731: judging whether the enhanced feature extraction result has a single feature result or not;
step a732: when the enhanced feature extraction result is a single feature result, generating a feature reserving instruction;
step A733: and carrying out position compensation on the single-feature result according to the received position deviation data to generate an abnormal recognition result.
In the present application, in order to ensure the accuracy of the position deviation data, it is first necessary to determine whether a single-feature result exists in the enhanced feature extraction result, where the single-feature result is the feature information of the target lens detected under the dark background. When the enhanced feature extraction result is a single-feature result, it is regarded as containing the dark feature, and a feature-retaining instruction is generated, that is, an instruction for performing a retaining operation on the single-feature result within the enhanced feature extraction result. Further, position compensation is performed on the single-feature result according to the position deviation data received from the position similarity recognition result, compensating the position information under the dark feature and thereby generating an abnormal recognition result, so that more accurate lens quality management of the optical sighting telescope can be achieved on this basis.
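A minimal sketch of the single-feature check and position compensation, assuming features are coordinate tuples and the deviation is a (dx, dy) pair (the function name and data shapes are illustrative, not from the disclosure):

```python
def process_enhanced_result(features, deviation):
    """If the enhanced feature extraction result contains exactly one
    feature (a dark-background detection), retain it and apply position
    compensation using the received position deviation data, producing
    the abnormal recognition result."""
    if len(features) != 1:
        return None  # not a single-feature result: no retain instruction
    x, y = features[0]
    dx, dy = deviation
    return {"pos": (x + dx, y + dy), "kind": "dark", "retained": True}
```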
Step A800: and recording the mapping result to the unique identification ID.
Further, step a800 of the present application further includes:
step a810: taking dark features in the feature similarity matching result as abnormal features, taking position coordinates of the light features as abnormal coordinates, and reconstructing an abnormal recognition result;
step A820: and recording the reconstruction anomaly identification result as a mapping result to the unique identification ID.
In the present application, in order to better manage the quality of the target lens of the optical sighting telescope, anomaly analysis needs to be performed on the target lens recorded under the unique identification ID. In this anomaly analysis, when the defect features are abnormal, the dark features in the feature similarity matching result are extracted as the abnormal features, and the position coordinates of the light features are extracted as the abnormal coordinates; the abnormal recognition result is reconstructed from these defect features and position coordinates, updating the abnormal features and abnormal coordinates. Finally, the reconstructed abnormal recognition result is recorded as the mapping result to the unique identification ID, thereby ensuring the accuracy of the target lens defects corresponding to the unique identification ID and the efficiency of lens quality management of the optical sighting telescope.
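The reconstruction and recording of the anomaly result under the unique identification ID can be sketched as follows, under the assumption that the registry is a plain dictionary keyed by lens ID (the field names are invented for illustration):

```python
def reconstruct_and_record(registry, uid, match_result):
    """Reconstruct the abnormal recognition result, taking dark features
    in the feature similarity matching result as the abnormal features
    and the light-feature coordinates as the abnormal coordinates, then
    record it under the lens's unique identification ID."""
    record = {
        "abnormal_features": match_result["dark_features"],
        "abnormal_coords": match_result["light_coords"],
    }
    registry[uid] = record
    return record
```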
In summary, the lens quality management method for an optical sighting telescope provided by the embodiments of the present application at least achieves the following technical effects: rationalized and precise testing of the optical sighting telescope lens is realized, and the lens quality is improved.
Example two
Based on the same inventive concept as the lens quality management method for an optical sighting telescope in the foregoing embodiments, as shown in fig. 3, the present application provides a lens quality management system for an optical sighting telescope, the system comprising:
the connection establishment module 1 is used for establishing communication connection with the identification unit and reading the unique identification ID of the target lens;
a control initialization module 2, wherein the control initialization module 2 is used for executing control initialization on the detection equipment, the control initialization comprises background alternate initialization, detection contour initialization and detection strategy initialization, and the control initialization is triggered based on the unique identification ID;
the illumination processing module 3 is used for adjusting the background into a dark background, performing illumination processing on the target lens through a side light source, receiving reflected light through a CCD camera arranged opposite to the dark background, and generating a dark detection image;
the image acquisition module 4 is used for adjusting the background into an optical background, and after reconfiguring CCD camera parameters, executing image acquisition of the target lens to generate an optical detection image;
the image feature extraction module 5 is used for establishing a mapping relation between the dark detection image and the light detection image, respectively executing image feature extraction of the dark detection image and the light detection image based on a backbone network, and generating an initial feature extraction result;
a mutual mapping module 6, where the mutual mapping module 6 is configured to perform mutual mapping based on the mapping relationship according to the initial feature extraction result, generate a subtended attention area, and perform neck network enhancement feature extraction on the subtended attention area;
the mapping module 7 is used for executing feature similarity matching according to the enhanced feature extraction result, and mapping the abnormal features on the dark detection image and the light detection image according to the feature similarity matching result;
the identification module 8 is used for recording the mapping result to the unique identification ID.
Further, the system further comprises:
the device comprises a unit building module, a position similarity constraint module and a position similarity constraint module, wherein the unit building module is used for building a similarity matching network, and the similarity matching network comprises a characteristic similarity constraint subunit and a position similarity constraint subunit;
the synchronous input module is used for synchronously inputting the optical characteristic and the dark characteristic identification of the enhanced characteristic extraction result into the similar matching network;
the first output module is used for carrying out feature similarity recognition on the reinforced feature extraction result through the feature similarity constraint subunit and outputting a feature similarity recognition result;
the second output module is used for executing the position similarity recognition of the light features and the dark features through the position similarity constraint subunit and outputting a position similarity recognition result;
and the similar matching module is used for completing the feature similar matching according to the feature similar recognition result and the position similar recognition result.
Further, the system further comprises:
the coupling module is used for establishing an auxiliary authentication subunit and coupling the auxiliary authentication subunit to the similar matching network, and the auxiliary authentication subunit is a processing unit for discrimination and auxiliary correction;
the third output module is used for inputting any feature similarity recognition result and position similarity recognition result, once output, into the auxiliary authentication subunit for accurate judgment;
the first judging module is used for extracting position deviation data according to the position similarity recognition result if the accurate judging result meets a preset threshold value;
and the control optimization module is used for feeding back the position deviation data to the similar matching network so as to perform control optimization of subsequent similar matching.
Further, the system further comprises:
the reconstruction module is used for reconstructing an abnormal recognition result by taking dark features in the feature similarity matching result as abnormal features and taking position coordinates of the light features as abnormal coordinates;
and the first mapping module is used for recording the reconstruction abnormal identification result as a mapping result to the unique identification ID.
Further, the system further comprises:
the gray level conversion module is used for respectively carrying out gray level conversion on the dark detection image and the light detection image and extracting a gray level median area of the dark detection image and a gray level median area of the light detection image;
the first comparison module is used for comparing the gray level dark detection image with the gray level median area of the dark detection image in a binary manner to generate initial abnormal positioning of a dark part;
the second comparison module is used for binary-comparing the gray-level light detection image with the gray-level median region of the light detection image to generate initial abnormal positioning of the light part;
the abnormal recognition module is used for respectively executing abnormal recognition of the detection image corresponding to the initial abnormal positioning of the dark part and the initial abnormal positioning of the light part through convolution abnormal characteristics to obtain an initial characteristic extraction result.
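A hedged sketch of the gray-level median comparison performed by the comparison modules, assuming the image is a list of rows of integer gray values and that the "gray-level median region" reduces to a tolerance band around the median (the band width is an invented parameter):

```python
def median_threshold_anomalies(gray, band=10):
    """Binary-compare a grayscale detection image against its gray-level
    median band: pixels falling outside [median - band, median + band]
    are flagged as initial anomaly positions (row, col)."""
    flat = sorted(v for row in gray for v in row)
    median = flat[len(flat) // 2]
    anomalies = []
    for r, row in enumerate(gray):
        for c, v in enumerate(row):
            if abs(v - median) > band:
                anomalies.append((r, c))
    return anomalies
```

Applied to the dark detection image this yields the dark-part initial anomaly positioning, and applied to the light detection image the light-part initial anomaly positioning.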
Further, the system further comprises:
the first expansion module is used for setting the area expansion proportion;
the second expansion module is used for carrying out feature expansion on the initial feature extraction result according to the region expansion proportion to obtain a feature expansion result;
and the second mapping module is used for carrying out mutual mapping according to the characteristic expansion result to generate a subtended attention area.
Further, the system further comprises:
the second judging module is used for judging whether the single feature result exists in the enhanced feature extraction result or not;
the instruction module is used for generating a feature preserving instruction when the enhanced feature extraction result is a single feature result;
and the compensation module is used for performing position compensation on the single-feature result according to the received position deviation data and generating an abnormal recognition result.
From the foregoing detailed description of the lens quality management method for an optical sighting telescope, those skilled in the art can clearly understand the lens quality management system for an optical sighting telescope in this embodiment. Since the device disclosed in the embodiment corresponds to the method disclosed in the embodiment, its description is relatively brief; for relevant details, reference may be made to the method section.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (6)

1. A lens quality management method for an optical sighting telescope, the method comprising:
establishing communication connection with the identification unit and reading the unique identification ID of the target lens;
performing control initialization on the detection equipment, wherein the control initialization comprises background alternate initialization, detection contour initialization and detection strategy initialization, and the control initialization is triggered based on the unique identification ID;
adjusting the background to be a dark background, performing illumination processing on the target lens through a side light source, and receiving reflected light through a CCD camera arranged opposite to the dark background to generate a dark detection image;
adjusting the background to be an optical background, and after reconfiguring CCD camera parameters, executing image acquisition of the target lens to generate an optical detection image;
establishing a mapping relation between the dark detection image and the light detection image, and respectively executing image feature extraction of the dark detection image and the light detection image based on a backbone network to generate an initial feature extraction result;
performing mutual mapping based on the mapping relation by using the initial feature extraction result, generating a subtended attention area, and performing the enhanced feature extraction of the neck network on the subtended attention area;
performing feature similarity matching according to the enhanced feature extraction result, and completing mapping of abnormal features on the dark detection image and the light detection image according to the feature similarity matching result;
recording the mapping result to the unique identification ID;
wherein recording the mapping result to the unique identification ID includes:
establishing a similar matching network, wherein the similar matching network comprises a characteristic similar constraint subunit and a position similar constraint subunit;
after light feature identifications and dark feature identifications are applied to the enhanced feature extraction result, synchronously inputting the identified light features and dark features into the similarity matching network;
performing feature similarity recognition on the reinforced feature extraction result through the feature similarity constraint subunit, and outputting a feature similarity recognition result;
the position similarity constraint subunit executes the position similarity recognition of the light features and the dark features, and outputs a position similarity recognition result;
finishing feature similarity matching according to the feature similarity recognition result and the position similarity recognition result;
taking dark features in the feature similarity matching result as abnormal features, taking position coordinates of the light features as abnormal coordinates, and reconstructing an abnormal recognition result;
and recording the reconstruction anomaly identification result as a mapping result to the unique identification ID.
2. The method of claim 1, wherein the method further comprises:
establishing an auxiliary authentication subunit, and coupling the auxiliary authentication subunit to the similarity matching network, wherein the auxiliary authentication subunit is a processing unit for discrimination and auxiliary correction;
after any feature similarity recognition result and position similarity recognition result are output, inputting them into the auxiliary authentication subunit for accurate judgment;
if the accurate judging result meets a preset threshold value, extracting position deviation data according to the position similarity identifying result;
and feeding the position deviation data back to the similarity matching network so as to perform control optimization of subsequent similarity matching.
3. The method of claim 1, wherein the method further comprises:
performing gray level conversion on the dark detection image and the light detection image respectively, and extracting a gray level median region of the dark detection image and a gray level median region of the light detection image;
comparing the gray level dark detection image binary values by using the gray level median area of the dark detection image to generate initial abnormal positioning of a dark part;
comparing the gray scale light detection image binary values by using the gray scale median region of the light detection image to generate initial abnormal positioning of the light part;
and respectively executing detection image anomaly identification corresponding to the dark part initial anomaly positioning and the light part initial anomaly positioning through convolution anomaly characteristics to obtain initial characteristic extraction results.
4. The method of claim 1, wherein the method further comprises:
setting a region expansion proportion;
performing feature expansion on the initial feature extraction result according to the region expansion proportion to obtain a feature expansion result;
and carrying out mutual mapping according to the characteristic expansion result to generate a subtended attention area.
5. The method of claim 2, wherein the method further comprises:
judging whether the enhanced feature extraction result has a single feature result or not;
when the enhanced feature extraction result is a single feature result, generating a feature reserving instruction;
and carrying out position compensation on the single-feature result according to the received position deviation data to generate an abnormal recognition result.
6. A lens quality management system for an optical sighting telescope, the system comprising:
the connection establishment module is used for establishing communication connection with the identification unit and reading the unique identification ID of the target lens;
the control initialization module is used for executing control initialization on the detection equipment, wherein the control initialization comprises background alternate initialization, detection contour initialization and detection strategy initialization, and the control initialization is triggered based on the unique identification ID;
the illumination processing module is used for adjusting the background into a dark background, performing illumination processing on the target lens through a side light source, receiving reflected light through a CCD camera arranged opposite to the dark background, and generating a dark detection image;
the image acquisition module is used for adjusting the background into an optical background, and after reconfiguring CCD camera parameters, executing image acquisition of the target lens to generate an optical detection image;
the image feature extraction module is used for establishing a mapping relation between the dark detection image and the light detection image, respectively executing image feature extraction of the dark detection image and the light detection image based on a backbone network, and generating an initial feature extraction result;
the mutual mapping module is used for carrying out mutual mapping based on the mapping relation by using the initial feature extraction result, generating a subtended attention area and carrying out the enhanced feature extraction of the neck network on the subtended attention area;
the mapping module is used for executing feature similarity matching according to the reinforced feature extraction result, and mapping of the abnormal features on the dark detection image and the light detection image is completed according to the feature similarity matching result;
the identification module is used for recording the mapping result to the unique identification ID;
the device comprises a unit building module, a position similarity constraint module and a position similarity constraint module, wherein the unit building module is used for building a similarity matching network, and the similarity matching network comprises a characteristic similarity constraint subunit and a position similarity constraint subunit;
the synchronous input module is used for synchronously inputting the optical characteristic and the dark characteristic identification of the enhanced characteristic extraction result into the similar matching network;
the first output module is used for carrying out feature similarity recognition on the reinforced feature extraction result through the feature similarity constraint subunit and outputting a feature similarity recognition result;
the second output module is used for executing the position similarity recognition of the light features and the dark features through the position similarity constraint subunit and outputting a position similarity recognition result;
the similarity matching module is used for completing feature similarity matching according to the feature similarity recognition result and the position similarity recognition result;
the reconstruction module is used for reconstructing an abnormal recognition result by taking dark features in the feature similarity matching result as abnormal features and taking position coordinates of the light features as abnormal coordinates;
and the first mapping module is used for recording the reconstruction abnormal identification result as a mapping result to the unique identification ID.
CN202311331172.2A 2023-10-16 2023-10-16 Lens quality management method and system for optical sighting telescope Active CN117094989B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311331172.2A CN117094989B (en) 2023-10-16 2023-10-16 Lens quality management method and system for optical sighting telescope

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311331172.2A CN117094989B (en) 2023-10-16 2023-10-16 Lens quality management method and system for optical sighting telescope

Publications (2)

Publication Number Publication Date
CN117094989A CN117094989A (en) 2023-11-21
CN117094989B true CN117094989B (en) 2024-01-26

Family

ID=88770128

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311331172.2A Active CN117094989B (en) 2023-10-16 2023-10-16 Lens quality management method and system for optical sighting telescope

Country Status (1)

Country Link
CN (1) CN117094989B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114612399A (en) * 2022-03-03 2022-06-10 深圳闪回科技有限公司 Picture identification system and method for mobile phone appearance mark
CN116386028A (en) * 2023-04-06 2023-07-04 扬州市管件厂有限公司 Image layering identification method and device for processing tee pipe fitting
CN116593137A (en) * 2023-07-14 2023-08-15 苏州然玓光电科技有限公司 Interferometer-based optical lens quality testing method and system
CN116645362A (en) * 2023-06-29 2023-08-25 日照鲁光电子科技有限公司 Intelligent quality detection method and system for silicon carbide wafer


Also Published As

Publication number Publication date
CN117094989A (en) 2023-11-21

Similar Documents

Publication Publication Date Title
WO2023134793A2 (en) Machine vision-based machine tool part online inspection method
CN100414558C (en) Automatic fingerprint distinguishing system and method based on template learning
CN107223258A (en) Image-pickup method and equipment
CN104143185A (en) Blemish zone detecting method
JP2019087181A (en) Device and method for image inspection
CN109086675A (en) A kind of recognition of face and attack detection method and its device based on optical field imaging technology
CN110532746B (en) Face checking method, device, server and readable storage medium
CN107169957A (en) A kind of glass flaws on-line detecting system and method based on machine vision
Liu et al. Iterating tensor voting: A perceptual grouping approach for crack detection on EL images
CN114937004B (en) Method for detecting surface air hole defects of mechanical part based on computer vision
CN105488486A (en) Face recognition method and device for preventing photo attack
CN117094989B (en) Lens quality management method and system for optical sighting telescope
CN116124783A (en) Flaw detection method and device for weaving blank cloth
CN114998942A (en) High-precision optical fingerprint identification method and system
CN112329845B (en) Method and device for changing paper money, terminal equipment and computer readable storage medium
Jöchl et al. Device (in) dependence of deep learning-based image age approximation
CN206805574U (en) Image capture device
CN110827281A (en) Camera module optical center detection method
CN116930192B (en) High-precision copper pipe defect detection method and system
US7333640B2 (en) Extraction of minutiae from a fingerprint image
CN117474916B (en) Image detection method, electronic equipment and storage medium
Li et al. ICSNET: iris center localization and segmentation in non-cooperative environment with visible illumination
Meghana et al. Retina Based Biometric Recognition System
CN117392130B (en) On-line fault diagnosis system based on infrared image
CN117078734A (en) System and method for identifying cable structure size

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant