CN110956624A - Image definition evaluation method for three-dimensional object - Google Patents

Image definition evaluation method for three-dimensional object

Info

Publication number
CN110956624A
Authority
CN
China
Prior art keywords
image
focusing ring
images
definition
interested
Prior art date
Legal status
Granted
Application number
CN201911212471.8A
Other languages
Chinese (zh)
Other versions
CN110956624B (en)
Inventor
谭治英
赵宝来
费仲文
徐孝彬
赵鸣晖
沙刘力
袁文强
Current Assignee
Changzhou Campus of Hohai University
Original Assignee
Changzhou Campus of Hohai University
Priority date
Filing date
Publication date
Application filed by Changzhou Campus of Hohai University
Priority to CN201911212471.8A
Publication of CN110956624A
Application granted
Publication of CN110956624B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/70: Denoising; Smoothing
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/13: Edge detection
    • G06T 7/136: Segmentation; Edge detection involving thresholding
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20024: Filtering details
    • G06T 2207/20032: Median filtering
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30168: Image quality inspection
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an image definition evaluation method for a three-dimensional object, which comprises the following steps: start the steering engine to rotate the focusing ring and trigger the camera to acquire a group of images at fixed time intervals; preprocess each image, where the preprocessing includes median filtering to remove noise and threshold segmentation to binarize the image; perform Canny edge detection on the preprocessed image; extract the gray values within a fixed range along the normal direction of each edge of interest and take the obtained gray values as a point set; perform curve fitting with the least-squares method; output the exponent coefficient D of each region of interest and take the average value as the definition evaluation index of the image; and compare the indices of all the images to find the clearest image and rotate the focusing ring to the corresponding position. The invention adopts a new evaluation function to evaluate regions of interest of the image locally, and can achieve accurate evaluation tailored to the specific requirements of a specific object.

Description

Image definition evaluation method for three-dimensional object
Technical Field
The invention relates to the field of computer vision and image processing, and in particular provides an image definition evaluation method for a three-dimensional object.
Background
With the application and development of machine vision, acquiring high-definition images is of great significance in the field of industrial inspection. In current industrial inspection, an experienced machine-vision engineer typically spends a long time adjusting the lens focusing ring to obtain a clear image, and focusing results vary because of human factors, so the imaging system often does not reach its optimal working state; research on automatic focusing technology is therefore of great significance. An important part of automatic focusing is the evaluation of image definition, and the quality of the image definition evaluation function directly affects the automatic focusing accuracy. Most image definition evaluation functions proposed at home and abroad target the whole image. For a three-dimensional object, however, different planes have different definitions, and different planes have different definition requirements depending on the specific application, so global definition evaluation of the whole image cannot meet this need. The invention performs local evaluation on regions of interest and can solve these problems.
Disclosure of Invention
In order to solve the above problems, the invention provides an image definition evaluation method based on a three-dimensional object, which uses an automatic focusing device. The automatic focusing device comprises a steering engine (servo), a camera, a focusing ring and a workbench; the focusing ring is mounted on the camera and is driven by the steering engine for focusing, the workbench is located below the focusing ring, and the object to be measured is placed on the workbench. The method is characterized by comprising the following steps:
(1) start the steering engine to rotate the focusing ring and trigger the camera to acquire a group of images at fixed time intervals, so that the image definition changes from blurred to clear and back to blurred, and record the scale value corresponding to each photo; the fixed time interval is set manually on an industrial personal computer;
(2) preprocess each image, where the preprocessing includes median filtering to remove noise and threshold segmentation to binarize the image;
(3) perform Canny edge detection on the preprocessed image and extract the image edges;
(4) extract the gray values within a fixed range along the normal direction of each edge of interest, where the fixed range is formed by extending the same number of pixels from the edge to both sides, and take the obtained gray values as a point set;
(5) perform curve fitting with the least-squares method to obtain the exponent coefficient D, which serves as the index for evaluating the image definition;
(6) output the exponent coefficient D of each region of interest and take the average value as the definition evaluation index of the image;
(7) compare the indices of all the images to find the clearest image and rotate the focusing ring to the corresponding position.
The curve fitting method in step (5) is as follows:
The curve function has the form $y = f(x; A, B, C, D, E)$, where $y$ denotes the gray value at position $x$ and $A, B, C, D, E$ are the parameters to be determined; the specific expression of $f$ appears only as an image in the original patent. Let the point set be $\{(x_1, y_1), (x_2, y_2), \dots, (x_n, y_n)\}$; the objective function is
$$L(A, B, C, D, E) = \sum_{i=1}^{n} \bigl[\, y_i - f(x_i; A, B, C, D, E) \,\bigr]^2 ,$$
and the nonlinear least-squares fitting method finds the parameters $A, B, C, D, E$ that minimize $L$. The invention fits the curve by calling the nlinfit function in MATLAB.
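For readers without MATLAB, a minimal sketch of an equivalent nonlinear least-squares fit in Python with scipy.optimize.curve_fit is given below. Since the patent gives the curve function only as an image, a five-parameter generalized logistic edge model is assumed here purely for illustration; with a different model the parameter ordering and interpretation would change accordingly.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed five-parameter edge model: a generalized logistic step whose
# steepness is governed by the exponent coefficient D. The patent's exact
# formula is shown only as an image, so this form is an illustrative guess.
def edge_model(x, A, B, C, D, E):
    return A + B / (1.0 + np.exp(-D * (x - C))) ** E

def fit_edge_profile(gray_profile):
    """Fit the assumed edge model to a 1-D gray-value profile and return D."""
    x = np.arange(len(gray_profile), dtype=float)
    y = np.asarray(gray_profile, dtype=float)
    # Rough initial guesses: offset, amplitude, edge position, steepness, shape.
    p0 = [float(y.min()), float(y.max() - y.min()), len(y) / 2.0, 0.5, 1.0]
    params, _ = curve_fit(edge_model, x, y, p0=p0, maxfev=10000)
    return params[3]  # exponent coefficient D, used as the definition index
```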
The invention has the advantage that it can perform local evaluation on a specific region and can therefore meet the requirement of accurate definition evaluation for specific needs.
Drawings
Fig. 1 shows an automatic focusing mechanism designed by the present invention.
FIG. 2 is a flowchart illustrating operation of an embodiment of the present invention.
Fig. 3 is a diagram of the function variation corresponding to different exponent coefficients D.
Fig. 4 shows images of the workpiece with different degrees of definition.
Fig. 5 is a preprocessed image.
Fig. 6 is an image after edge detection.
Fig. 7 shows the gray-value arrays within the fixed range along the normal direction of the four edges of interest in the left image of Fig. 4, together with the fitting results.
Fig. 8 shows the gray-value arrays within the fixed range along the normal direction of the four edges of interest in the right image of Fig. 4, together with the fitting results.
Fig. 9 is a graph of the definition evaluation indices of the group of images.
Detailed Description
Embodiment:
As shown in Fig. 1, an automatic focusing device is designed. The device comprises a steering engine 1, a camera 2, a focusing ring 3 and a workbench 4; the focusing ring 3 is mounted on the camera 2 and is driven by the steering engine 1 for focusing, the workbench 4 is located below the focusing ring, and the object 5 to be measured is placed on the workbench 4. The steering engine 1 is started to rotate the focusing ring 3, a group of images is acquired, the acquired images are evaluated to find the image with the highest definition, and the steering engine 1 then rotates the focusing ring 3 to the corresponding position, finally achieving automatic focusing.
The invention will now be described in more detail by way of example with reference to the accompanying drawings.
In this embodiment, a total of 23 images with different degrees of blur were acquired; two of them are shown in Fig. 4. The specific operation process of the invention is as follows:
the method comprises the following steps: the industrial personal computer controls the steering engine to drive the focusing ring to rotate, and triggers the camera to collect a group of images, so that the definition of the images is changed from fuzzy to clear and then fuzzy, and the scale value corresponding to each photo is recorded.
Step two: each image is preprocessed; the preprocessing includes median filtering to remove noise and threshold segmentation to binarize the image, as shown in Fig. 5.
Step three: Canny edge detection is performed on the preprocessed image and the image edges are extracted, as shown in Fig. 6.
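A minimal sketch of steps two and three, assuming OpenCV (cv2) and an 8-bit grayscale input; the median kernel size and the Canny thresholds are illustrative choices, not values taken from the patent.

```python
import cv2

def preprocess_and_detect_edges(gray):
    """Median-filter, binarize by threshold segmentation, then run Canny."""
    denoised = cv2.medianBlur(gray, 5)                            # median filtering to remove noise
    _, binary = cv2.threshold(denoised, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # Otsu threshold segmentation
    edges = cv2.Canny(binary, 50, 150)                            # Canny edge detection
    return binary, edges
```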
Step four: the gray values within a fixed range along the normal direction of each edge of interest are extracted. As illustrated in Fig. 3, the gray values are sorted from left to right or from top to bottom along the normal direction to form an array, and this array is fitted with the least-squares method. In this embodiment, 40 pixels are selected on each side of the edge, and the gray values of these 80 pixels are taken as the point set, as shown in Figs. 7 and 8.
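A sketch of the gray-value extraction in step four. The 40-pixel half-width follows the embodiment; estimating the edge normal from Sobel gradients is an assumption, since the patent does not state how the normal direction is obtained.

```python
import cv2
import numpy as np

def gray_profile_along_normal(gray, edge_point, half_width=40):
    """Sample gray values along the edge normal, half_width pixels to each side."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    r0, c0 = edge_point                        # (row, col) of a point on the edge of interest
    nx, ny = gx[r0, c0], gy[r0, c0]            # gradient direction approximates the edge normal
    norm = np.hypot(nx, ny) + 1e-12
    nx, ny = nx / norm, ny / norm
    ts = np.arange(-half_width, half_width)    # 80 sample offsets (40 pixels on each side)
    cols = np.clip(np.round(c0 + ts * nx).astype(int), 0, gray.shape[1] - 1)
    rows = np.clip(np.round(r0 + ts * ny).astype(int), 0, gray.shape[0] - 1)
    return gray[rows, cols].astype(float)      # the point set of 80 gray values
```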
Step five: curve fitting is performed on the point set. The curve function has the form $y = f(x; A, B, C, D, E)$ described above; its specific expression is given only as an image in the original patent. The larger the value of D, the larger the slope of the curve, the larger the gray-level change, and the sharper the image. As shown in Figs. 7 and 8, the gray values within the edge range of the blurred left image of Fig. 4 change gently and the slope of the fitted curve is small, while the gray values within the edge range of the sharp right image of Fig. 4 change steeply and the slope of the fitted curve is large.
Step six: the D values obtained from the four regions of each image are averaged to serve as the evaluation index of that image; the indices obtained for the 23 images are output and plotted, as shown in Fig. 9. It can be seen that the evaluation function has strong unimodality and high sensitivity. The focusing ring is rotated to the position where the image is clearest; when the workpiece is moved to an arbitrary position, the index obtained by the algorithm remains unchanged, which shows that the algorithm is highly stable.
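To make the overall flow concrete, the following sketch chains the helper functions defined in the earlier sketches (capture_focus_stack, preprocess_and_detect_edges, gray_profile_along_normal, fit_edge_profile). The use of four regions of interest follows the embodiment, while passing one representative edge point per region is an illustrative assumption.

```python
import numpy as np

def evaluate_focus_stack(images, scale_values, roi_edge_points):
    """Return the focusing-ring scale value of the sharpest image and all indices.

    Relies on preprocess_and_detect_edges, gray_profile_along_normal and
    fit_edge_profile from the sketches above; roi_edge_points holds one
    (row, col) edge point per region of interest (four in the embodiment).
    """
    indices = []
    for gray in images:
        _, edges = preprocess_and_detect_edges(gray)  # steps two and three; ROI edge points could be picked from `edges`
        d_values = [fit_edge_profile(gray_profile_along_normal(gray, p, half_width=40))
                    for p in roi_edge_points]          # steps four and five
        indices.append(float(np.mean(d_values)))       # step six: mean D is the image's evaluation index
    best = int(np.argmax(indices))                     # the clearest image has the largest index
    return scale_values[best], indices
```

The steering engine then rotates the focusing ring back to the returned scale value, which corresponds to step seven.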

Claims (2)

1. An image definition evaluation method based on a three-dimensional object, using an automatic focusing device, wherein the automatic focusing device comprises a steering engine, a camera, a focusing ring and a workbench, the focusing ring is connected to the camera, the focusing ring is driven by the steering engine for focusing, the workbench is located below the focusing ring, and an object to be measured is placed on the workbench, characterized in that the method comprises the following steps:
(1) starting the steering engine to rotate the focusing ring, triggering the camera to acquire a group of images at fixed time intervals so that the image definition changes from blurred to clear and back to blurred, and recording the scale value corresponding to each photo, the fixed time interval being set manually on an industrial personal computer;
(2) preprocessing each image, wherein the preprocessing comprises median filtering to remove noise and threshold segmentation to binarize the image;
(3) performing Canny edge detection on the preprocessed image and extracting the image edges;
(4) extracting the gray values within a fixed range along the normal direction of each edge of interest, wherein the fixed range is formed by extending the same number of pixels from the edge to both sides, and taking the obtained gray values as a point set;
(5) performing curve fitting with the least-squares method to obtain the exponent coefficient D as the index for evaluating the image definition;
(6) outputting the exponent coefficient D of each region of interest and taking the average value as the definition evaluation index of the image;
(7) comparing the indices of all the images to find the clearest image and rotating the focusing ring to the corresponding position.
2. The image definition evaluation method based on a three-dimensional object according to claim 1, characterized in that the curve fitting method in step (5) is as follows:
the curve function is of the form:
Figure FDA0002298512600000011
wherein y represents the gray value at the x position, A, B, C, D, E is the parameter to be determined; set point set as { (x)1,y1),(x2,y2),…,(xn,yn) The objective function is:
Figure FDA0002298512600000012
is provided with
Figure FDA0002298512600000013
The least squares nonlinear fitting method is used to find the parameter A, B, C, D, E so that L is minimized.
CN201911212471.8A 2019-12-02 2019-12-02 Image definition evaluation method for three-dimensional object Active CN110956624B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911212471.8A CN110956624B (en) 2019-12-02 2019-12-02 Image definition evaluation method for three-dimensional object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911212471.8A CN110956624B (en) 2019-12-02 2019-12-02 Image definition evaluation method for three-dimensional object

Publications (2)

Publication Number Publication Date
CN110956624A true CN110956624A (en) 2020-04-03
CN110956624B CN110956624B (en) 2023-09-01

Family

ID=69979238

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911212471.8A Active CN110956624B (en) 2019-12-02 2019-12-02 Image definition evaluation method for three-dimensional object

Country Status (1)

Country Link
CN (1) CN110956624B (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140029000A1 (en) * 2012-07-25 2014-01-30 Ningbo University Method for detecting parfocality of zoom-stereo microscope
CN106127775A (en) * 2016-06-28 2016-11-16 乐视控股(北京)有限公司 Measurement for Digital Image Definition and device

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112285876A (en) * 2020-11-04 2021-01-29 邱妙娜 Camera automatic focusing method based on image processing and bubble detection
CN113639630A (en) * 2021-04-01 2021-11-12 浙江大学台州研究院 Dimension measuring instrument system based on multi-template matching and automatic focusing functions
CN113639630B (en) * 2021-04-01 2024-07-19 浙江大学台州研究院 Dimension measuring instrument system based on multi-template matching and automatic focusing functions
CN113810616A (en) * 2021-09-27 2021-12-17 季华实验室 Aperture focal length adjusting method, system, electronic device and storage medium
CN113810616B (en) * 2021-09-27 2023-05-09 季华实验室 Aperture focal length adjustment method, system, electronic device and storage medium

Also Published As

Publication number Publication date
CN110956624B (en) 2023-09-01

Similar Documents

Publication Publication Date Title
CN108921176B (en) Pointer instrument positioning and identifying method based on machine vision
CN109489566B (en) Lithium battery diaphragm material slitting width detection method, detection system and device
CN107462223B (en) Automatic measuring device and method for sight distance of vehicle before turning on highway
CN109507192B (en) Magnetic core surface defect detection method based on machine vision
CN101441320B (en) High dynamic image acquisition device based on microscopic imaging detection and method thereof
CN110956624A (en) Image definition evaluation method for three-dimensional object
CN111083365B (en) Method and device for rapidly detecting optimal focal plane position
GB2569751A (en) Static infrared thermal image processing-based underground pipe leakage detection method
CN111462075B (en) Rapid refocusing method and system for full-slice digital pathological image fuzzy region
CN103458261B (en) Video scene variation detection method based on stereoscopic vision
CN102441581A (en) Machine vision-based device and method for online detection of structural steel section size
CN114549441B (en) Straw defect detection method based on image processing
CN105139391B (en) A kind of haze weather traffic image edge detection method
CN109182081A (en) A kind of unicellular separation system based on image processing model
CN106897995A (en) A kind of parts automatic identifying method towards Automatic manual transmission process
WO2021000948A1 (en) Counterweight weight detection method and system, and acquisition method and system, and crane
CN116894775B (en) Bolt image preprocessing method based on camera motion model recovery and super-resolution
CN111242888A (en) Image processing method and system based on machine vision
CN114529715B (en) Image identification method and system based on edge extraction
CN106534661A (en) Automatic focus algorithm accumulated based on strongest edge gradient Laplasse operator
CN102663781A (en) Sub-pixel level welding center extraction method based on visual sense
CN115661110A (en) Method for identifying and positioning transparent workpiece
CN113538545B (en) Monocular depth estimation method based on electro-hydraulic adjustable-focus lens and corresponding camera and storage medium
CN104966283A (en) Imaging layered registering method
Dogan et al. Automatic panorama with auto-focusing based on image fusion for microscopic imaging system

Legal Events

Code: Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant