CN112734916B - Color confocal parallel measurement three-dimensional morphology reduction method based on image processing


Info

Publication number
CN112734916B
CN112734916B (application CN202110097600.4A)
Authority
CN
China
Prior art keywords
image
centroid
processing
images
mask
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110097600.4A
Other languages
Chinese (zh)
Other versions
CN112734916A (en
Inventor
余卿
张雅丽
程方
王寅
尚文键
王翀
董声超
肖泽祯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaqiao University
Original Assignee
Huaqiao University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaqiao University filed Critical Huaqiao University
Priority to CN202110097600.4A
Publication of CN112734916A
Application granted
Publication of CN112734916B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007 Scaling based on interpolation, e.g. bilinear interpolation
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G06T5/70 Denoising; Smoothing
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/155 Segmentation; Edge detection involving morphological operators
    • G06T7/187 Segmentation; Edge detection involving region growing, region merging or connected component labelling
    • G06T7/60 Analysis of geometric attributes
    • G06T7/66 Analysis of geometric attributes of image moments or centre of gravity
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20132 Image cropping
    • G06T2207/20172 Image enhancement details
    • G06T2207/20192 Edge enhancement; Edge preservation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a color confocal parallel measurement three-dimensional morphology reduction method based on image processing. First, target extraction and image splicing are performed on the multiple images acquired by a line-scanning parallel color confocal system to obtain the spliced image to be processed; next, the centroid connected region of each measured point is obtained through morphological processing and a centroid extraction algorithm; finally, an "H value-height" conversion is applied to the centroid connected region of each measured point in the spliced image, the height of the corresponding measured point is obtained from its H value, and the three-dimensional surface topography of the object is reconstructed in combination with an interpolation fitting algorithm. By matting with a mask, the algorithm avoids the image noise caused by stray light and defocused light and can accurately extract the target light spot regions to be processed, improving processing precision; image splicing, meanwhile, greatly shortens the processing time and improves processing efficiency.

Description

Color confocal parallel measurement three-dimensional morphology reduction method based on image processing
Technical Field
The invention relates to the field of image processing in optical detection, in particular to a color confocal parallel measurement three-dimensional morphology reduction method based on image processing.
Background
When the color confocal parallel measuring device is combined with a color camera as the sensor to acquire images, stray light and defocused light often interfere with the image processing, and the quality of the acquired images varies, so it is difficult to recognize the centroids of all light spots in one pass in order to locate, extract and process the target measured points. In parallel measurement especially, the number of acquired images is large, the processing time is long, and the processing efficiency is low.
Patent application CN109373927A, filed 2018.09.28 and entitled "Color confocal three-dimensional topography measuring method and system", combines the color confocal technique with a color camera to obtain the three-dimensional topography of an object surface. That invention, however, addresses only a single-point color confocal measuring system; it does not apply the color conversion algorithm to parallel measurement and cannot handle multiple images and multiple measured points.
Patent application CN111288928A, filed 2020.03.12 and entitled "Method, device, equipment and storage medium for measuring three-dimensional topography of object surface", discloses an algorithm for color confocal parallel measurement, but the algorithm is time-consuming and inefficient, the influence of stray light and defocused light is easily introduced during processing, and the measurement precision is low.
Disclosure of Invention
The invention aims to solve the problems that the prior art cannot process multiple images of multiple measured objects, that the influence of stray light and interference light is easily introduced during image processing, and that the measurement precision is low.
To address these problems, the invention provides a color confocal parallel measurement three-dimensional morphology reduction method based on image processing, which comprises the following steps:
s1, inputting K images of K different measured points obtained during line scanning of the sample object, wherein K is an integer greater than or equal to 2;
s2, sequentially performing mask matting and image splicing on the K images to generate a spliced image;
s3, performing morphological opening processing on the spliced image, and extracting the centroid connected region of each measured point;
s4, performing color conversion on each centroid connected region to obtain the height value of each measured point;
and s5, combining a three-dimensional drawing command with two-dimensional data interpolation fitting to finally obtain the interpolated and fitted three-dimensional topography of the whole measured object surface.
As a further improvement of the image processing-based color confocal parallel measurement three-dimensional morphology reduction method, step S1 specifically includes: while the sample object performs the line scanning movement, the color camera takes pictures, obtaining K pictures corresponding to K different positions; the L light spots generated by line scanning are uniformly distributed in a straight line on each picture, giving K × L light spots to be processed in total; the K images are input into the computer.
As a further improvement of the image processing-based color confocal parallel measurement three-dimensional morphology reduction method of the present invention, step S2 includes the following steps:
s21, carrying out image preprocessing on the input K images, removing redundant background areas and reserving light spot areas to be processed;
s22, performing mask matting on the preprocessed image to obtain a target image required by splicing;
s23, generating a white negative film;
s24, setting an ROI area on the white film;
and S25, splicing the target images obtained in step S22 onto the white negative film obtained in step S24 to generate a spliced image containing the information of all measured points.
As a further improvement of the image processing-based color confocal parallel measurement three-dimensional morphology restoration method, in step S21, the image preprocessing includes rotating and cropping the image, so as to remove redundant background areas and only reserve the light spot area to be processed, thereby saving memory and improving processing efficiency.
As a further improvement of the image processing-based color confocal parallel measurement three-dimensional morphology reduction method, in step S22 the mask matting specifically includes: setting mask parameters, with the mask value at each target light spot position set non-null and all other positions set to 0; and executing the mask parameters to extract the required light-spot target pixels, shield redundant background pixels, and generate an image of light spots in linear arrangement.
As a further improvement of the image processing-based color confocal parallel measurement three-dimensional morphology reduction method of the present invention, in step S23: the size of the white negative film is calculated from the number K of input images and the total length of the L light spots in each image, so that the white negative film just covers all K × L measured light spots.
As a further improvement of the image processing-based color confocal parallel measurement three-dimensional morphology reduction method, step S25 specifically includes: superposing each image extracted by the mask in step S22 into the corresponding ROI area of the white negative film one by one, and splicing to generate a complete spliced image containing the information of all measured points.
As a further improvement of the image processing-based color confocal parallel measurement three-dimensional morphology reduction method of the present invention, in step S3: the morphological opening processing erodes the image first and then dilates the eroded result, removing image noise so that the denoised image tends to be smooth and the centroid is easier to identify; the process of extracting the centroid connected region of each measured point comprises: extracting the centroid coordinates of each measured point with a centroid extraction algorithm, and intercepting a corresponding circle in combination with the light spot radius of the image to obtain the centroid connected region of each measured point.
As a further improvement of the image processing-based color confocal parallel measurement three-dimensional morphology reduction method, step S4 specifically includes: converting the RGB space into the HSI space by using an RGB-HSI color space conversion algorithm, and extracting a hue parameter H value of each centroid connected region, wherein the H value conversion formula is as follows:
θ = arccos{ [(R - G) + (R - B)] / [2 √((R - G)² + (R - B)(G - B))] }

H = θ, if B ≤ G; H = 360° - θ, if B > G
The height information of each measured point is then obtained by combining the "H value-height" correspondence curve.
As a further improvement of the image processing-based color confocal parallel measurement three-dimensional morphology reduction method, in step S5 a bicubic interpolation method is adopted for fitting. Compared with the common bilinear and nearest-neighbour interpolation methods, bicubic interpolation requires more computation and is the most complex of the three, but it is also the most accurate: it gives a better interpolation fit, produces smoother edges, and loses the least image quality, so its effect is the best.
The invention has the following beneficial effects:
Compared with the traditional binarization centroid extraction algorithm, the algorithm uses a mask to extract the multiple target images, avoiding the influence of image noise caused by stray light and defocused light in the background area; obtaining the centroid connected region morphologically allows the target light spot regions to be extracted accurately, greatly improving the measurement precision; applying the color conversion algorithm to parallel measurement allows data processing of multiple images and multiple measured points; and image splicing raises the processing speed, so the algorithm can effectively process, in real time, the multiple color images obtained by color confocal parallel measurement and generate a three-dimensional topography map of the measured object surface containing all measured points.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings needed in the embodiments are briefly described below. The drawings described below obviously show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a general flowchart of the color confocal parallel measurement algorithm based on image processing according to an embodiment of the present invention.
Fig. 2 is a flowchart of an image stitching algorithm provided in the embodiment of the present invention.
Fig. 3 is an exemplary diagram of patterns after image stitching according to an embodiment of the present invention.
FIG. 4 is a graph showing the result of the reduction of the three-dimensional topography of the surface of the object according to the embodiment of the present invention.
Detailed Description
For better understanding of the technical solutions of the present invention, the following detailed descriptions of the embodiments of the present invention are provided with reference to the accompanying drawings.
The embodiment of the invention discloses a color confocal parallel measurement three-dimensional morphology reduction method based on image processing: a color confocal parallel measurement device monitors the focused light reflected from each position on the surface of the sample object, a color camera serves as the sensor to collect images, and a computer then performs three-dimensional morphology reduction of the object surface on the collected images.
Referring to fig. 1, the general flowchart of the color confocal parallel measurement three-dimensional morphology reduction method based on image processing, and to fig. 2, the flowchart of the image splicing algorithm, an embodiment of the color confocal parallel measurement algorithm based on image processing includes the following steps:
s1, inputting K images of K different measured points obtained during line scanning of the sample object, wherein K is an integer greater than or equal to 2 and can be, for example, 100, 500 or 1000;
s2, sequentially performing mask matting and image splicing on the K images to generate a spliced image;
s3, performing morphological opening processing on the spliced image, and extracting the centroid connected region of each measured point;
s4, performing color conversion on each centroid connected region to obtain the height value of each measured point;
and s5, combining a three-dimensional drawing instruction with two-dimensional data interpolation fitting to finally obtain the interpolated and fitted three-dimensional topography of the whole measured object surface.
The above steps are further explained below.
Step S1 specifically includes: while the sample object performs the line scanning motion in the direction perpendicular to the optical fiber bundle, the color camera takes pictures, obtaining K pictures corresponding to K different positions; the L light spots generated by line scanning are assumed to be uniformly distributed in a straight line on each picture, giving K × L light spots to be processed in total; the K images are input into the computer.
Step S2 includes the following five substeps:
and S21, performing image preprocessing on the input K images, wherein the image preprocessing comprises rotating and cutting the images to remove redundant background areas and reserve light spot areas to be processed so as to save memory and improve processing efficiency.
S22, performing mask matting on the preprocessed image to obtain the target image required for splicing; the mask matting specifically is: setting mask parameters, with the mask value at each target light spot position set non-null and all other positions set to 0; and executing the mask parameters to extract the required light-spot target pixels, shield redundant background pixels, and generate an image of light spots in linear arrangement.
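The mask matting in S22 can be sketched in a few lines of NumPy. This is a minimal illustration, not the patent's actual implementation; the array sizes and names are invented:

```python
import numpy as np

def mask_extract(frame, mask):
    """Keep only the pixels where the mask is non-zero; zero the background.

    `frame` is an H x W x 3 color image; `mask` is an H x W array that is
    non-zero at the target light-spot positions and 0 elsewhere.
    """
    out = np.zeros_like(frame)
    keep = mask != 0
    out[keep] = frame[keep]  # copy only the light-spot pixels
    return out

# Tiny demo: a uniform 4 x 4 frame and a hand-made two-pixel "spot" mask.
frame = np.full((4, 4, 3), 200, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1, 1] = mask[2, 2] = 1
spots = mask_extract(frame, mask)
```

Background pixels come out as 0 while the masked spot pixels keep their original color, which is the "shielding" behaviour the step describes.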
S23, generating a white negative film; the size of the white film is calculated from the number K of input images and the total length of L spots in each image, so that the white film just covers all K × L spots measured.
S24, setting the ROI areas on the white negative film generated in step S23.
And S25, superposing each image extracted by the mask in step S22 into the corresponding ROI area of the white negative film of step S24 one by one, splicing them into a complete spliced image containing the information of all measured points.
ROI is short for Region of Interest. In machine vision and image processing, the region to be processed, outlined on the image in the form of a box, circle, ellipse, irregular polygon or the like, is called the region of interest. In the invention, the ROI of each image moves linearly across the white negative film according to a fixed rule.
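Steps S23 to S25 can be illustrated with the following sketch; the strip height and the "ROI moves linearly" rule are assumptions made for the example, not the patent's exact layout:

```python
import numpy as np

def stitch_strips(strips):
    """Paste K masked light-spot strips onto one white negative film.

    Each strip is strip_h x strip_w x 3; the ROI of the k-th strip starts
    at row k * strip_h, i.e. the ROI moves linearly down the canvas.
    """
    strip_h, strip_w = strips[0].shape[:2]
    # White negative film sized to just cover all K strips.
    canvas = np.full((len(strips) * strip_h, strip_w, 3), 255, dtype=np.uint8)
    for k, strip in enumerate(strips):
        canvas[k * strip_h:(k + 1) * strip_h] = strip  # ROI of image k
    return canvas

# Three 2 x 5 strips with distinct gray levels stand in for masked images.
strips = [np.full((2, 5, 3), v, dtype=np.uint8) for v in (10, 20, 30)]
stitched = stitch_strips(strips)
```

The white canvas is allocated once from the known number of images and spot extent, so each masked image is pasted without resizing, which is what keeps the splicing step cheap.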
In the above step S3: the morphological opening processing erodes the image first and then dilates the eroded result, removing image noise so that the denoised image tends to be smooth and the centroid is easier to identify; the process of extracting the centroid connected region of each measured point comprises: extracting the centroid coordinates of each measured point with a centroid extraction algorithm, and intercepting a corresponding circle in combination with the light spot radius of the image to obtain the centroid connected region of each measured point.
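The erode-then-dilate order and the centroid step can be demonstrated with a hand-rolled 3 x 3 binary opening (a real pipeline would call a library morphology routine; this self-contained sketch only shows the mechanism the text describes):

```python
import numpy as np

def _filter3(img, reduce_fn, pad_val):
    """Apply a 3 x 3 minimum or maximum filter (binary erosion / dilation)."""
    p = np.pad(img, 1, constant_values=pad_val)
    h, w = img.shape
    stack = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return reduce_fn(stack, axis=0)

def opening(img):
    # Erode first (isolated noise pixels vanish), then dilate back.
    return _filter3(_filter3(img, np.min, 0), np.max, 0)

def centroid(binary):
    """Intensity-free centroid of the non-zero pixels (x, y order)."""
    ys, xs = np.nonzero(binary)
    return xs.mean(), ys.mean()

# A 3 x 3 "light spot" centred at (3, 3) plus one isolated noise pixel.
img = np.zeros((7, 7), dtype=np.uint8)
img[2:5, 2:5] = 1
img[6, 0] = 1          # noise to be removed by the opening
opened = opening(img)
cx, cy = centroid(opened)
```

The opening removes the single noise pixel but leaves the 3 x 3 spot intact, so the centroid lands at the spot centre.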
The step S4 is specifically: converting the RGB space into the HSI space by using an RGB-HSI color space conversion algorithm, and extracting a hue parameter H value of each centroid connected region, wherein the H value conversion formula is as follows:
θ = arccos{ [(R - G) + (R - B)] / [2 √((R - G)² + (R - B)(G - B))] }

H = θ, if B ≤ G; H = 360° - θ, if B > G
The height information of each measured point is then obtained by combining the "H value-height" correspondence curve.
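The H value can be computed with the textbook geometric RGB-to-HSI hue formula. The patent's own formula images are not reproduced in this text, so the sketch below assumes the standard geometric form:

```python
import math

def hue_deg(r, g, b):
    """Hue angle in degrees from the geometric RGB -> HSI conversion."""
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12  # avoid /0
    ratio = max(-1.0, min(1.0, num / den))  # clamp for acos
    theta = math.degrees(math.acos(ratio))
    return theta if b <= g else 360.0 - theta
```

For pure red, green and blue this yields hues of approximately 0°, 120° and 240°, matching the usual hue circle.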
In step S5, a bicubic interpolation method is used for fitting. Compared with the common bilinear and nearest-neighbour interpolation methods, bicubic interpolation requires more computation and is the most complex of the three, but it is also the most accurate: it gives a better interpolation fit, produces smoother edges, and loses the least image quality, so its effect is the best.
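For illustration, a 1-D cubic-convolution interpolant (Catmull-Rom kernel, a = -0.5) is the separable building block of bicubic interpolation; applied along rows and then columns, the 4-tap kernel becomes the 16-tap bicubic scheme. This is a sketch, not the patent's fitting code, which would normally call a library routine:

```python
import numpy as np

def cubic_kernel(x, a=-0.5):
    """Cubic convolution kernel; a = -0.5 gives the Catmull-Rom spline."""
    x = abs(x)
    if x < 1:
        return (a + 2) * x ** 3 - (a + 3) * x ** 2 + 1
    if x < 2:
        return a * x ** 3 - 5 * a * x ** 2 + 8 * a * x - 4 * a
    return 0.0

def cubic_interp(samples, t):
    """Evaluate the cubic interpolant of the 1-D `samples` at position t."""
    i = int(np.floor(t))
    val = 0.0
    for k in range(i - 1, i + 3):              # 4 neighbouring samples
        kk = min(max(k, 0), len(samples) - 1)  # clamp at the borders
        val += samples[kk] * cubic_kernel(t - k)
    return val
```

The kernel equals 1 at 0 and vanishes at the other integers, so the interpolant passes exactly through the samples, and it reproduces quadratics: interpolating the samples k² at t = 1.5 gives 2.25.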
In summary, the invention first uses the image splicing technique: the results acquired by the color camera are matted with a mask, the required target areas are cut out, and the target areas are then spliced together. The centroid is then extracted with a centroid extraction algorithm, the "H value-height" conversion is realized in combination with the color conversion algorithm, and the three-dimensional shape of the object surface is restored and interpolation-fitted. Compared with the traditional binarization centroid extraction algorithm, this algorithm extracts the multiple target images with a mask, avoiding the influence of stray light and defocused light in the background area; obtaining the centroid connected region morphologically greatly improves the measurement precision; applying the color conversion algorithm to parallel measurement allows data processing of multiple images and multiple measured points; and image splicing raises the processing speed, so the algorithm can effectively process, in real time, the multiple color images obtained by color confocal parallel measurement and generate a three-dimensional topography map of the measured object surface containing all measured points.
The above steps are explained below by way of example.
Specifically, the color confocal parallel measurement three-dimensional morphology reduction method based on image processing uses the character "E" on the surface of a 1-yuan coin as a standard reflection surface to obtain a three-dimensional morphology reduction image. Based on the "H value-height" calibration curve, the abscissa and ordinate of each measured point serve as the x and y coordinates of the object's three-dimensional morphology reduction curve, the height value of that point serves as its z coordinate, and the heights of all measured points are connected and two-dimensionally interpolation-fitted to form the curved-surface image. In this embodiment, during line scanning of the sample object, the test area is divided into 500 test positions, giving 500 images; each image contains 42 linearly arranged light spots generated by the line scanning, 500 × 42 measured points in total.
Referring to fig. 1, the processing flow of the present invention specifically includes the following steps:
s1, input image: 500 images of the sample object at 500 different positions obtained during the line scan are input.
In this embodiment, when the sample performs a one-dimensional motion in a direction perpendicular to the line scanning direction during the line scanning process, 500 color images of the sample at 500 different positions can be obtained by simultaneously photographing with the color camera. The subsequent image processing is performed on the basis of these 500 images. Each image has 42 linear spots generated by linear scanning, and the total number of the spots to be processed is 500 x 42.
S2, image splicing: the image splicing operation is performed on the input batch of images, specifically steps S21, S22, S23, S24 and S25; please refer to fig. 2.
S21, performing image preprocessing on the input batch images, specifically:
The image preprocessing comprises operations such as rotation and cropping; redundant background areas are removed and only the light spot areas to be processed are retained, saving memory and improving processing efficiency;
s22, performing mask matting on the preprocessed image to obtain a target image required by splicing, specifically:
When the mask parameters are applied, the operation is executed only on pixels whose mask value is non-null, and the values of all other pixels are set to 0; the required light-spot target pixels are thus extracted, redundant background pixels are shielded, and an image shaped like the fiber-bundle structure is generated directly. The inventor uses a self-made mask to extract the target image because the quality of the light spots at different positions in the image is not consistent, so a mask generated from the original image would not be accurate enough and would affect the accuracy of the target image.
S23, generating a white negative film of suitable size, specifically:
The negative film is white, and its size is calculated from the number of input images and the total length of the 42 light spots contained in each preprocessed image, so that it just covers all 500 × 42 measured light spots.
S24, setting the ROI of the white negative film.
S25, carrying out image splicing on the image extracted by the mask to generate a spliced image containing information of all measured points, specifically:
The images extracted by the mask are superposed one by one into the corresponding ROI areas of the white negative film; the ROI of each image moves linearly across the white negative film according to a fixed rule, so splicing generates one complete image containing the information of all measured points. The character "E" formed by splicing the experimentally obtained images can be faintly seen in the spliced image shown in fig. 3. All subsequent processing is performed on this spliced "E".
S3, extracting a centroid connected region: the spliced image is subjected to morphological processing, and the centroid connected region is extracted, specifically:
Through the morphological opening processing, the image is eroded first and the eroded result is then dilated, removing image noise so that the denoised image tends to be smooth and centroid recognition is easier. Then, the centroid coordinates of each measured point are extracted with a centroid extraction algorithm, and a corresponding circle is intercepted in combination with the light spot radius of the image to obtain the centroid connected region of each measured point;
S4, color space conversion: color conversion is performed on each centroid connected region to obtain the corresponding height value, specifically:
The RGB space is converted into the HSI space with an RGB-HSI color space conversion algorithm and the H value of each centroid connected region is extracted: an H function is constructed from the geometric RGB-HSI conversion formula, converting the RGB color information acquired by the color camera into the wavelength-related hue parameter H. The H value conversion formula is as follows:
θ = arccos{ [(R - G) + (R - B)] / [2 √((R - G)² + (R - B)(G - B))] }

H = θ, if B ≤ G; H = 360° - θ, if B > G
Then, the "H value-height" correspondence curve obtained in an earlier calibration experiment gives the "H value-height" conversion relation, so the height information of each measured point is obtained once its H value is known.
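Once the H value of a centroid region is known, the height can be read off the calibration curve, for example by piecewise-linear lookup. The calibration pairs below are invented for illustration; the real curve comes from the earlier calibration experiment:

```python
import numpy as np

# Hypothetical "H value - height" calibration pairs (illustrative only).
h_cal = np.array([20.0, 60.0, 100.0, 140.0, 180.0])  # hue, degrees
z_cal = np.array([0.0, 25.0, 50.0, 75.0, 100.0])     # height, micrometres

def height_from_h(h):
    """Map a measured H value to a height via the calibration curve."""
    return float(np.interp(h, h_cal, z_cal))
```

Each measured point's H value is mapped through this curve to give the z coordinate used in the subsequent interpolation fitting.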
S5, two-dimensional interpolation fitting and three-dimensional shape generation: a three-dimensional drawing instruction is combined with two-dimensional data interpolation fitting to finally obtain the interpolated and fitted three-dimensional topography of the whole measured object surface, as shown in fig. 4, specifically:
According to the actual position coordinates of each measured point, a three-dimensional drawing instruction is combined with two-dimensional data interpolation fitting to finally obtain the interpolated and fitted three-dimensional topography of the whole measured object surface. The inventor adopts a bicubic interpolation method for the fitting. Compared with the common bilinear and nearest-neighbour interpolation methods, bicubic interpolation requires more computation and is the most complex of the three, but it is also the most accurate: it gives a better interpolation fit, produces smoother edges, and loses the least image quality, so its effect is the best.

Claims (5)

1. A color confocal parallel measurement three-dimensional morphology reduction method based on image processing is characterized by comprising the following steps:
s1, inputting K images of K different measured points obtained during line scanning of the sample object, wherein K is an integer greater than or equal to 2;
s2, sequentially performing mask matting and image splicing on the K images to generate a spliced image;
s3, performing morphological opening processing on the spliced image, and extracting the centroid connected region of each measured point;
s4, performing color conversion on each centroid connected region to obtain the height value of each measured point;
s5, combining a three-dimensional drawing command with two-dimensional data interpolation fitting to finally obtain the interpolated and fitted three-dimensional topography of the whole measured object surface;
wherein the step S2 includes the steps of:
S21, performing image preprocessing on the K input images, removing redundant background regions, and retaining the light spot regions to be processed;
S22, performing mask matting on the preprocessed images to obtain the target images required for splicing;
S23, generating a white negative film;
S24, setting ROI regions on the white negative film;
S25, splicing each target image obtained in step S22 onto the white negative film obtained in step S24 to generate a spliced image containing the information of all measured points;
in step S21, the image preprocessing includes rotating and cropping the images;
in step S22, the mask matting specifically includes: setting mask parameters, with the mask value at the target light spot positions set to non-zero and all other positions set to 0; and applying the mask parameters to extract the required light spot target pixels, shield the redundant background pixels, and generate a linearly arranged spot image;
in step S23, the size of the white negative film is calculated from the number K of input images and the number L of light spots in each image, so that the white negative film just covers all K × L measured light spots;
step S25 specifically includes: superposing each mask-extracted image from step S22 onto its corresponding ROI region of the white negative film one by one, and splicing them to generate a complete spliced image containing the information of all measured points.
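Steps S22 through S25 (mask matting, white negative generation, and ROI splicing) can be sketched with NumPy. The image sizes, the spot-strip location in the mask, and the pixel values below are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Hypothetical 3-channel spot images, 8x8 px each; in the method these
# would be the K preprocessed camera frames from step S21.
K, H, W = 3, 8, 8
images = [np.full((H, W, 3), 50 + 60 * k, dtype=np.uint8) for k in range(K)]

# Mask matting (S22): non-zero where the spot line lies, zero elsewhere.
mask = np.zeros((H, W), dtype=bool)
mask[3:5, :] = True                      # assumed 2-row spot-strip location

# White negative (S23): sized so the K extracted strips fit exactly.
canvas = np.full((K * 2, W, 3), 255, dtype=np.uint8)

# Paste each masked strip into its ROI on the white negative (S24-S25).
for k, img in enumerate(images):
    strip = img[mask].reshape(2, W, 3)   # keep spot pixels, drop background
    canvas[2 * k:2 * (k + 1), :, :] = strip

print(canvas.shape)  # (6, 8, 3)
```

The boolean mask plays the role of the "mask parameters": indexing with it extracts only the target spot pixels, and writing into a row slice of the white canvas corresponds to superposing each extraction onto its ROI.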
2. The image-processing-based color confocal parallel measurement three-dimensional morphology restoration method according to claim 1, wherein the step S1 specifically comprises:
when the sample object performs a line-scanning movement, the color camera takes pictures, obtaining K pictures corresponding to K different positions; the L light spots generated by line scanning are uniformly distributed in a straight line on each picture, so the total number of light spots to be processed is K × L; the K images are input into the computer.
3. The image-processing-based color confocal parallel measurement three-dimensional morphology restoration method according to claim 1, wherein in step S3:
the morphological opening operation processing comprises: eroding the image, and then dilating the eroded result; the process of extracting the centroid connected region of each measured point comprises: extracting the centroid coordinates of each measured point with a centroid extraction algorithm, and intercepting a corresponding circle in combination with the light spot radius of the image to obtain the centroid connected region of each measured point.
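The opening operation and centroid extraction of step S3 can be sketched with `scipy.ndimage`; this stands in for whatever toolbox the inventors used, and the spot positions and sizes below are hypothetical:

```python
import numpy as np
from scipy import ndimage

# Hypothetical binary spot image: two bright 3x3 spots plus one noise pixel.
img = np.zeros((20, 20), dtype=bool)
img[4:7, 4:7] = True       # spot 1
img[12:15, 12:15] = True   # spot 2
img[0, 19] = True          # isolated noise pixel

# Opening = erosion followed by dilation (step S3): the isolated pixel is
# eroded away, while the 3x3 spots survive and regain their shape.
opened = ndimage.binary_opening(img, structure=np.ones((3, 3)))

# Label the surviving connected regions and extract each centroid.
labels, n = ndimage.label(opened)
centroids = ndimage.center_of_mass(opened, labels, range(1, n + 1))
print(n, centroids)  # 2 spots, centroids at (5.0, 5.0) and (13.0, 13.0)
```

A circle of the known spot radius around each centroid would then be intercepted to form the centroid connected region passed to the color conversion of step S4.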
4. The image-processing-based color confocal parallel measurement three-dimensional morphology restoration method according to claim 1, wherein the step S4 specifically comprises:
converting the RGB space into the HSI space by using an RGB-HSI color space conversion algorithm, and extracting a hue parameter H value of each centroid connected region, wherein the H value conversion formula is as follows:
θ = arccos{ [(R − G) + (R − B)] / [2 √((R − G)² + (R − B)(G − B))] }
H = θ, if B ≤ G;  H = 2π − θ, if B > G
and combining the H value-height correspondence curve to obtain the height information of each measured point.
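A minimal sketch of the hue extraction in step S4, using the standard RGB-to-HSI hue formula; the H-value-to-height calibration curve is specific to the measurement system, so it is only indicated in a comment rather than implemented:

```python
import math

def rgb_to_hue(r, g, b):
    """Hue angle in degrees per the standard RGB -> HSI conversion."""
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        return 0.0                  # achromatic pixel: hue is undefined
    ratio = max(-1.0, min(1.0, num / den))   # clamp rounding error
    theta = math.degrees(math.acos(ratio))
    return theta if b <= g else 360.0 - theta

# The hue of each centroid connected region would then be mapped to a
# height via the system's H-value-to-height calibration curve.
print(rgb_to_hue(255, 0, 0))   # 0.0   (red)
print(rgb_to_hue(0, 255, 0))   # 120.0 (green)
print(rgb_to_hue(0, 0, 255))   # 240.0 (blue)
```

In practice the hue would be averaged over all pixels of a centroid connected region before the height lookup, which suppresses per-pixel noise.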
5. The image-processing-based color confocal parallel measurement three-dimensional morphology restoration method according to claim 1, wherein in step S5, a bicubic interpolation method is used for fitting.
CN202110097600.4A 2021-01-25 2021-01-25 Color confocal parallel measurement three-dimensional morphology reduction method based on image processing Active CN112734916B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110097600.4A CN112734916B (en) 2021-01-25 2021-01-25 Color confocal parallel measurement three-dimensional morphology reduction method based on image processing


Publications (2)

Publication Number Publication Date
CN112734916A CN112734916A (en) 2021-04-30
CN112734916B true CN112734916B (en) 2022-08-05

Family

ID=75595285

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110097600.4A Active CN112734916B (en) 2021-01-25 2021-01-25 Color confocal parallel measurement three-dimensional morphology reduction method based on image processing

Country Status (1)

Country Link
CN (1) CN112734916B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113731836B (en) * 2021-08-04 2023-05-26 华侨大学 Urban solid waste on-line sorting system based on deep learning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103942794A (en) * 2014-04-16 2014-07-23 南京大学 Image collaborative cutout method based on confidence level
CN104776815A (en) * 2015-03-23 2015-07-15 中国科学院上海光学精密机械研究所 Color three-dimensional profile measuring device and method based on Dammann grating
CN109800641A (en) * 2018-12-14 2019-05-24 天津大学 Method for detecting lane lines based on threshold adaptive binaryzation and connected domain analysis
CN111220090A (en) * 2020-03-25 2020-06-02 宁波五维检测科技有限公司 Line focusing differential color confocal three-dimensional surface topography measuring system and method
CN111288928A (en) * 2020-03-12 2020-06-16 华侨大学 Object surface three-dimensional topography feature measuring method, device, equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SG11201605108VA (en) * 2014-01-09 2016-07-28 Zygo Corp Measuring topography of aspheric and other non-flat surfaces
CN211876977U (en) * 2020-03-25 2020-11-06 宁波五维检测科技有限公司 Line focusing differential color confocal three-dimensional surface topography measuring system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
3D MR image restoration by combining local genetic algorithm with adaptive pre-conditioning; T. Jiang et al.; IEEE; 2000-12-31; full text *
Research on real-time measurement method of three-dimensional topography and software design; Zhang Hengkang; China Master's Theses Full-text Database; 2012-07-15; full text *

Also Published As

Publication number Publication date
CN112734916A (en) 2021-04-30

Similar Documents

Publication Publication Date Title
RU2680765C1 (en) Automated determination and cutting of non-singular contour of a picture on an image
CN109961399B (en) Optimal suture line searching method based on image distance transformation
CN111604909A (en) Visual system of four-axis industrial stacking robot
JP6899189B2 (en) Systems and methods for efficiently scoring probes in images with a vision system
CN107392849B (en) Target identification and positioning method based on image subdivision
CN111354047B (en) Computer vision-based camera module positioning method and system
CN114240845B (en) Light cutting method surface roughness measurement method applied to cutting workpiece
CN115131587A (en) Template matching method of gradient vector features based on edge contour
CN111932673A (en) Object space data augmentation method and system based on three-dimensional reconstruction
CN112184804B (en) High-density welding spot positioning method and device for large-volume workpiece, storage medium and terminal
CN116433672B (en) Silicon wafer surface quality detection method based on image processing
CN112489042A (en) Metal product printing defect and surface damage detection method based on super-resolution reconstruction
CN112734916B (en) Color confocal parallel measurement three-dimensional morphology reduction method based on image processing
CN109241948A (en) A kind of NC cutting tool visual identity method and device
CN112329880A (en) Template fast matching method based on similarity measurement and geometric features
CN115953550A (en) Point cloud outlier rejection system and method for line structured light scanning
CN111540063A (en) Full-automatic high-precision splicing method based on multi-station laser point cloud data
CN113705564B (en) Pointer type instrument identification reading method
CN114092499A (en) Medicine box dividing method
CN113723314A (en) Sugarcane stem node identification method based on YOLOv3 algorithm
CN116596987A (en) Workpiece three-dimensional size high-precision measurement method based on binocular vision
CN115112098B (en) Monocular vision one-dimensional two-dimensional measurement method
CN116125489A (en) Indoor object three-dimensional detection method, computer equipment and storage medium
CN115601616A (en) Sample data generation method and device, electronic equipment and storage medium
CN115035071A (en) Visual detection method for black spot defect of PAD light guide plate

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant