CN112734916A - Color confocal parallel measurement three-dimensional morphology reduction algorithm based on image processing - Google Patents
- Publication number
- CN112734916A (application No. CN202110097600.4A)
- Authority
- CN
- China
- Prior art keywords
- image
- processing
- dimensional
- parallel measurement
- centroid
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G — PHYSICS; G06 — COMPUTING; CALCULATING OR COUNTING; G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T3/4007 — Interpolation-based scaling, e.g. bilinear interpolation
- G06T3/4038 — Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
- G06T5/30 — Erosion or dilatation, e.g. thinning
- G06T5/70
- G06T7/11 — Region-based segmentation
- G06T7/155 — Segmentation; Edge detection involving morphological operators
- G06T7/187 — Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
- G06T7/66 — Analysis of geometric attributes of image moments or centre of gravity
- G06T2207/10024 — Color image
- G06T2207/20132 — Image cropping
- G06T2207/20192 — Edge enhancement; Edge preservation
Abstract
The invention provides a color confocal parallel measurement three-dimensional morphology reduction algorithm based on image processing. First, target extraction and image stitching are performed on the multiple images acquired by a line-scanning parallel color confocal system to obtain the stitched image to be processed; second, the centroid connected region of each measured point is obtained through morphological processing and a centroid extraction algorithm; finally, an "H value-height" conversion is applied to the centroid connected region of each measured point in the stitched image, the height of each measured point is obtained from its H value, and the three-dimensional surface topography of the object is reconstructed with an interpolation fitting algorithm. By matting with a mask, the algorithm avoids the image noise caused by stray light and defocused light and can accurately extract the target light spot regions to be processed, improving processing precision; meanwhile, image stitching greatly shortens the processing time and improves processing efficiency.
Description
Technical Field
The invention relates to the field of image processing in optical inspection, and in particular to a color confocal parallel measurement three-dimensional morphology reduction algorithm based on image processing.
Background
When the color confocal parallel measurement device is combined with a color camera serving as the image-acquisition sensor, stray light and defocused light often interfere during image processing, and the quality of the acquired images varies, making it difficult to locate, extract, and process the centroids of all light spots for the measured points in a single pass. In parallel measurement especially, the number of collected images is large, so processing takes long and efficiency is low.
Patent application CN109373927A (filed 2018.09.28, "Color confocal three-dimensional topography measuring method and system") combines color confocal technology with a color camera to obtain the three-dimensional topography of an object surface. However, it addresses only a single-point color confocal measuring system; its color conversion algorithm is not applied to parallel measurement and cannot handle multiple images and multiple measured points.
Patent application CN111288928A (filed 2020.03.12, "Method, device, equipment and storage medium for measuring three-dimensional topography of object surface") discloses an algorithm for color confocal parallel measurement, but the algorithm is time-consuming and inefficient, easily introduces the influence of stray light and defocused light during processing, and has low measurement precision.
Disclosure of Invention
The invention aims to solve the problems that the prior art cannot process multiple images of multiple measured objects, easily introduces the influence of stray light and interference light during image processing, and has low measurement precision.
To address these problems, the invention provides a color confocal parallel measurement three-dimensional morphology reduction algorithm based on image processing, comprising the following steps:
S1, input K images of K different measured points obtained during the line scanning of the sample object, where K is an integer greater than or equal to 2;
S2, perform mask matting and image stitching on the K images in sequence to generate a stitched image;
S3, perform morphological opening on the stitched image and extract the centroid connected region of each measured point;
S4, perform color conversion on each centroid connected region to obtain the height value of each measured point;
S5, combine a three-dimensional drawing instruction with two-dimensional data interpolation fitting to obtain the interpolated three-dimensional topography of the whole measured object surface.
As a further improvement of the image processing-based color confocal parallel measurement three-dimensional morphology reduction algorithm, step S1 specifically comprises: as the sample object undergoes line-scanning motion, the color camera takes pictures to obtain K images corresponding to K different positions; the L light spots generated by line scanning are uniformly distributed along a straight line on each image, giving K × L light spots to be processed in total; the K images are input into the computer.
As a further improvement of the image processing-based color confocal parallel measurement three-dimensional topography reduction algorithm of the present invention, step S2 includes the following steps:
S21, preprocess the K input images, removing redundant background regions and retaining the light spot regions to be processed;
S22, perform mask matting on the preprocessed images to obtain the target images required for stitching;
S23, generate a white negative film;
S24, set ROI regions on the white negative film;
S25, stitch each target image obtained in step S22 onto the ROI regions of the white negative film set in step S24, generating a stitched image containing the information of all measured points.
As a further improvement of the image processing-based color confocal parallel measurement three-dimensional morphology reduction algorithm, in step S21, the image preprocessing comprises rotating and cropping the images to remove redundant background regions and retain only the light spot regions to be processed, saving memory and improving processing efficiency.
As a further improvement of the image processing-based color confocal parallel measurement three-dimensional topography reduction algorithm, in step S22, the mask matting specifically comprises: setting the mask parameters so that the mask value at each target light spot position is non-zero and all other positions are 0; applying the mask to extract the required light spot target pixels while shielding the redundant background pixels, generating an image of spots in linear arrangement.
As a further improvement of the image processing-based color confocal parallel measurement three-dimensional morphology reduction algorithm of the present invention, in step S23: the size of the white negative film is calculated from the number K of input images and the total length of the L light spots in each image, so that the white negative film exactly covers all K × L measured light spots.
As a further improvement of the image processing-based color confocal parallel measurement three-dimensional morphology reduction algorithm, step S25 specifically comprises: superposing each mask-extracted image from step S22 one by one onto its corresponding ROI region of the white negative film, stitching them into a complete image containing the information of all measured points.
As a further improvement of the image processing-based color confocal parallel measurement three-dimensional morphology reduction algorithm of the present invention, in step S3: the morphological opening first erodes the image and then dilates the eroded result, removing noise points so that the denoised image becomes smoother and its centroids easier to identify. Extracting the centroid connected region of each measured point comprises: extracting the centroid coordinates of each measured point with a centroid extraction algorithm, then intercepting a circle of the image's light spot radius around each centroid to obtain the centroid connected region of each measured point.
As a further improvement of the image processing-based color confocal parallel measurement three-dimensional morphology reduction algorithm, step S4 specifically comprises: converting from RGB space to HSI space with an RGB-HSI color space conversion algorithm and extracting the hue parameter H of each centroid connected region. By the standard geometric conversion, H = θ when B ≤ G and H = 360° − θ when B > G, where θ = arccos( ((R-G) + (R-B)) / (2·sqrt((R-G)² + (R-B)(G-B))) ).
The height information of each measured point is then obtained from the "H value-height" correspondence curve.
As a further improvement of the color confocal parallel measurement three-dimensional morphology reduction algorithm based on image processing, in step S5, bicubic interpolation is used for the fitting. Although its calculation amount is larger and the algorithm more complex than the common bilinear and nearest-neighbor interpolation methods, bicubic interpolation is the most accurate of the three, yields a better interpolation fitting result, produces smoother edges, and loses the least quality in the processed image.
The invention has the following beneficial effects:
compared with the traditional binarization centroid extraction algorithm, the algorithm uses a mask to extract a plurality of target images, and avoids the influence of image noise caused by stray light and defocused light in a background area; the centroid connected region is obtained by morphology, a target light spot region required to be processed can be accurately extracted, and the measurement precision is greatly improved; the color conversion algorithm is applied to parallel measurement, and data processing can be performed on a plurality of images and a plurality of measured points; meanwhile, the processing speed is improved to a certain extent by the application of image splicing, and the algorithm can effectively process a plurality of color images obtained by color confocal parallel measurement in real time to generate a measured object surface three-dimensional topography map containing all measured points.
Drawings
To illustrate the technical solution of the present invention more clearly, the drawings needed in the embodiments are briefly described below. The drawings described represent only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is the general flowchart of the image processing-based color confocal parallel measurement algorithm provided by an embodiment of the present invention.
Fig. 2 is a flowchart of an image stitching algorithm provided in the embodiment of the present invention.
Fig. 3 is an exemplary diagram of patterns after image stitching according to an embodiment of the present invention.
FIG. 4 is a graph showing the result of the reduction of the three-dimensional topography of the surface of the object according to the embodiment of the present invention.
Detailed Description
To better explain the technical solutions of the present invention, the embodiments of the invention are described in detail below with reference to the accompanying drawings.
The embodiment of the invention discloses a color confocal parallel measurement three-dimensional morphology reduction algorithm based on image processing. A color confocal parallel measurement device monitors the focused light reflected from each position on the sample surface, a color camera serves as the sensor to collect images, and a computer restores the three-dimensional topography of the surface from the collected images.
Referring to fig. 1, the general flowchart of the image processing-based color confocal parallel measurement three-dimensional morphology reduction algorithm, and fig. 2, the flowchart of the image stitching algorithm, in an embodiment of the present invention the algorithm comprises the following steps:
S1, input K images of K different measured points obtained during the line scanning of the sample object, where K is an integer greater than or equal to 2 (e.g., 100, 500, or 1000);
S2, perform mask matting and image stitching on the K images in sequence to generate a stitched image;
S3, perform morphological opening on the stitched image and extract the centroid connected region of each measured point;
S4, perform color conversion on each centroid connected region to obtain the height value of each measured point;
S5, combine a three-dimensional drawing instruction with two-dimensional data interpolation fitting to obtain the interpolated three-dimensional topography of the whole measured object surface.
The above steps are further explained below.
Wherein, step S1 specifically comprises: as the sample object undergoes line-scanning motion perpendicular to the optical fiber bundle, the color camera takes pictures to obtain K images corresponding to K different positions; assuming the L light spots generated by line scanning are uniformly distributed along a straight line on each image, there are K × L light spots to be processed in total; the K images are input into the computer.
Wherein, step S2 comprises the following five sub-steps:
and S21, performing image preprocessing on the input K images, wherein the image preprocessing comprises rotating and cutting the images to remove redundant background areas and reserve light spot areas to be processed so as to save memory and improve processing efficiency.
S22, performing mask matting on the preprocessed image to obtain a target image required by splicing; the mask cutout specifically is: setting mask parameters, setting the mask value of the target light spot position to be non-null, and setting other positions to be 0; and executing mask parameters, extracting required light spot target pixels, shielding redundant background pixels, and generating a shape image similar to a linear arrangement.
S23, generating a white negative film; the size of the white film is calculated from the number K of input images and the total length of L spots in each image, so that the white film just covers all K × L spots measured.
S24, the ROI area of the white film generated in the step S23 is set.
And S25, superposing each image stripped by the mask in the step S22 on the ROI corresponding to the white film in the step S24 one by one, and splicing to generate a complete spliced image containing all measured point information.
ROI stands for Region of Interest. In machine vision and image processing, the region to be processed is outlined on the image as a box, circle, ellipse, irregular polygon, or similar shape and is called the region of interest. In the invention, the ROI of each image shifts linearly across the white negative film according to a fixed rule.
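As an illustration of the matting-and-stitching steps S22 to S25, the sketch below applies a mask and pastes each masked strip into a linearly shifting ROI on a white canvas. The array shapes, the one-strip-per-row layout, and the function names are assumptions for illustration, not the patent's exact geometry.

```python
import numpy as np

def mask_extract(image, mask):
    """Step S22 (sketch): keep only pixels where the mask is nonzero; zero the background."""
    return np.where(mask[..., None] > 0, image, 0)

def stitch(strips):
    """Steps S23-S25 (sketch): paste each masked strip into its ROI on a white
    negative; the ROI shifts linearly down the canvas, one strip height per image."""
    k = len(strips)
    h, w, c = strips[0].shape
    canvas = np.full((k * h, w, c), 255, dtype=strips[0].dtype)  # white negative film
    for i, strip in enumerate(strips):
        roi = canvas[i * h:(i + 1) * h]
        np.copyto(roi, strip, where=strip > 0)  # copy spot pixels, keep white background
    return canvas
```

Copying only the nonzero (spot) pixels keeps the negative white where the mask shielded the background, which is what lets the stitched image stay free of stray-light noise from the original backgrounds.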
In the above step S3: the morphological opening first erodes the image and then dilates the eroded result, removing noise points so that the denoised image becomes smoother and its centroids easier to identify. Extracting the centroid connected region of each measured point comprises: extracting the centroid coordinates of each measured point with a centroid extraction algorithm, then intercepting a circle of the image's light spot radius around each centroid to obtain the centroid connected region of each measured point.
Step S4 specifically comprises: converting from RGB space to HSI space with an RGB-HSI color space conversion algorithm and extracting the hue parameter H of each centroid connected region. By the standard geometric conversion, H = θ when B ≤ G and H = 360° − θ when B > G, where θ = arccos( ((R-G) + (R-B)) / (2·sqrt((R-G)² + (R-B)(G-B))) ).
The height information of each measured point is then obtained from the "H value-height" correspondence curve.
In step S5, bicubic interpolation is used for the fitting. Although its calculation amount is larger and the algorithm more complex than the common bilinear and nearest-neighbor interpolation methods, bicubic interpolation is the most accurate of the three, yields a better interpolation fitting result, produces smoother edges, and loses the least quality in the processed image.
In summary, the invention first uses a mask to matte the results acquired by the color camera, cutting out the required target regions, and then stitches those target regions together. A centroid extraction algorithm then extracts the centroids, a color conversion algorithm realizes the "H value-height" conversion, and the three-dimensional surface topography of the object is restored by interpolation fitting. Compared with the traditional binarization centroid extraction algorithm, the algorithm extracts the multiple target images with a mask, avoiding the influence of stray light and defocused light in the background regions; obtaining the centroid connected regions morphologically greatly improves measurement precision; the color conversion algorithm is applied to parallel measurement, so multiple images and multiple measured points can be processed; and image stitching speeds up processing, so the algorithm can effectively process, in real time, the multiple color images obtained by color confocal parallel measurement and generate a three-dimensional topography map of the measured surface containing all measured points.
The above steps are explained below by way of example.
Specifically, on the basis of the "H value-height" calibration curve, the image processing-based color confocal parallel measurement three-dimensional morphology reduction algorithm takes the abscissa and ordinate of each measured point as the x and y coordinates of the object's three-dimensional topography and the height value of that measured point as its z coordinate; connecting the heights of all measured points and applying two-dimensional interpolation fitting yields the surface image. In this embodiment, the test area of the sample object is divided into 500 test positions during line scanning, yielding 500 images; each image contains 42 linearly arranged light spots generated by the line scan, for 500 × 42 measured points in total.
Referring to fig. 1, the calibration process of the present invention specifically includes the following steps:
S1, input images: the 500 images of the sample object at 500 different positions obtained during the line scan are input.
In this embodiment, as the sample moves one-dimensionally perpendicular to the line scanning direction, the color camera simultaneously takes pictures, obtaining 500 color images of the sample at 500 different positions. All subsequent image processing is performed on these 500 images. Each image contains 42 light spots generated by the line scan, so there are 500 × 42 light spots to be processed in total.
S2, image splicing: the image stitching operation is performed on the input batch images, and specifically comprises the following steps: s21, S22, S23, S24 and S25, please refer to fig. 2.
S21, performing image preprocessing on the input batch images, specifically:
the image preprocessing comprises operations such as rotation, cutting and the like, redundant background areas are removed, and only light spot areas to be processed are reserved, so that the memory is saved, and the processing efficiency is improved;
S22, perform mask matting on the preprocessed images to obtain the target images required for stitching. When the mask is applied, the operation is executed only on pixels whose mask value is non-zero, and all other pixels are set to 0; this extracts the required light spot target pixels, shields the redundant background pixels, and directly generates an image shaped like the fiber bundle structure. The inventor mattes the target images with a hand-built mask because the light spot quality varies across positions in the image, so a mask generated from the original image would not be accurate enough and would degrade the accuracy of the target images.
S23, generating a negative film with proper size and white color, specifically:
the color of the negative is white and the size of the negative is calculated from the number of input images and the total length of 42 light spots contained in each image after preprocessing, so that the negative just covers all 500 x 42 light spots obtained by measurement.
S24, setting the ROI of the white negative film.
S25, carrying out image splicing on the image extracted by the mask to generate a spliced image containing information of all measured points, specifically:
the images extracted by the mask are superposed into the corresponding ROI areas of the white negative film one by one, the ROI areas of the images are linearly changed in the white negative film according to a certain rule, so that a complete spliced image containing information of all measured points is generated by splicing, and an 'E' character formed by splicing a plurality of images obtained by experiments can be seen from the image in a hidden way, wherein the image is a spliced image which is shown in a figure 3. The following processing is all performed on the basis of the spliced "E" word.
S3, extracting a centroid connected region: the spliced image is subjected to morphological processing, and the centroid connected region is extracted, specifically:
through morphological open operation processing, namely, the image is corroded firstly, and then the corroded result is expanded, so that the noise point of the image is removed, the denoised image tends to be smooth, and the centroid recognition is better carried out. Then, extracting the centroid coordinate of each measured point by using a centroid extraction algorithm, and intercepting a corresponding circle by combining the light spot radius of the image to obtain a centroid communication area of each measured point;
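The opening-plus-centroid step can be sketched with SciPy's ndimage module; the binary threshold and the 3 × 3 structuring element are illustrative assumptions (the patent fixes neither):

```python
import numpy as np
from scipy import ndimage

def centroid_regions(gray, spot_radius, thresh=0):
    """Step S3 (sketch): opening (erosion then dilation) removes noise specks,
    then each surviving spot's centroid is found and a disc of the light spot
    radius around it is kept as the centroid connected region."""
    opened = ndimage.binary_opening(gray > thresh, structure=np.ones((3, 3)))
    labels, n = ndimage.label(opened)
    centroids = ndimage.center_of_mass(opened, labels, np.arange(1, n + 1))
    yy, xx = np.indices(gray.shape)
    regions = [(yy - cy) ** 2 + (xx - cx) ** 2 <= spot_radius ** 2
               for cy, cx in centroids]
    return centroids, regions
```

Isolated noise pixels vanish under the 3 × 3 erosion, while spots larger than the structuring element survive the erode-dilate round trip, which is the smoothing behavior the description attributes to the opening.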
S4, color space conversion: perform color conversion on each centroid connected region to obtain the corresponding height value. An RGB-HSI color space conversion algorithm converts the RGB space to the HSI space and extracts the H value of each centroid connected region; an H function constructed from the geometric RGB-HSI conversion formula converts the RGB color information acquired by the color camera into the wavelength-related hue parameter H.
Wherein, by the standard geometric conversion, the H value formula is: H = θ when B ≤ G and H = 360° − θ when B > G, where θ = arccos( ((R-G) + (R-B)) / (2·sqrt((R-G)² + (R-B)(G-B))) ).
Then, combining the "H value-height" correspondence curve obtained in an earlier calibration experiment gives the "H value-height" conversion relation, so the height information of each measured point follows from its H value.
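A sketch of the S4 conversion: the standard geometric RGB to HSI hue formula (assumed here, since the exact H expression is not reproduced in this text) plus a linear-interpolation lookup on a hypothetical "H value-height" calibration curve:

```python
import numpy as np

def hue_deg(rgb):
    """Standard geometric RGB->HSI hue in degrees (assumed form of the
    patent's H-value formula)."""
    r, g, b = (float(v) for v in rgb)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12  # avoid divide-by-zero
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    return theta if b <= g else 360.0 - theta

def height_from_hue(h, calib_h, calib_z):
    """Look up height on the pre-calibrated 'H value - height' curve by
    linear interpolation between calibration samples (curve is hypothetical)."""
    return float(np.interp(h, calib_h, calib_z))
```

For example, pure green (0, 255, 0) maps to a hue of 120 degrees, which the calibration curve then translates into a height.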
S5, two-dimensional interpolation fitting and three-dimensional shape generation: the three-dimensional topography of the whole measured object surface after interpolation fitting is finally obtained by combining the three-dimensional drawing instruction with two-dimensional data interpolation fitting, as shown in fig. 4, the steps are specifically as follows:
and according to the actual position coordinate information of each measured point, utilizing a three-dimensional drawing instruction to combine with two-dimensional data interpolation fitting to finally obtain the three-dimensional appearance of the whole measured object surface after interpolation fitting. The inventor adopts a bicubic interpolation method for fitting, and compared with a common bilinear interpolation method and a nearest neighbor interpolation method, the bicubic interpolation method has the advantages of large calculated amount, most complex algorithm, most accurate calculation, better interpolation fitting result, capability of generating smoother edges, least loss of the quality of the processed image and optimal effect.
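The S5 fitting can be sketched with SciPy; `griddata`'s 'cubic' (Clough-Tocher) interpolant stands in here for the bicubic fit described, so treat this as an approximation of the step, not the patent's exact routine:

```python
import numpy as np
from scipy.interpolate import griddata

def fit_surface(x, y, z, grid_n=50):
    """Step S5 (sketch): interpolate scattered (x, y, height) samples onto a
    dense regular grid, ready for a 3-D surface plot."""
    xi = np.linspace(np.min(x), np.max(x), grid_n)
    yi = np.linspace(np.min(y), np.max(y), grid_n)
    grid_x, grid_y = np.meshgrid(xi, yi)
    grid_z = griddata((x, y), z, (grid_x, grid_y), method='cubic')
    return grid_x, grid_y, grid_z
```

The dense grid can then be handed to any 3-D surface-plot routine to render the reconstructed topography.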
Claims (10)
1. A color confocal parallel measurement three-dimensional morphology reduction algorithm based on image processing is characterized by comprising the following steps:
S1, input K images of K different measured points obtained during the line scanning of the sample object, where K is an integer greater than or equal to 2;
S2, perform mask matting and image stitching on the K images in sequence to generate a stitched image;
S3, perform morphological opening on the stitched image and extract the centroid connected region of each measured point;
S4, perform color conversion on each centroid connected region to obtain the height value of each measured point;
S5, combine a three-dimensional drawing instruction with two-dimensional data interpolation fitting to obtain the interpolated three-dimensional topography of the whole measured object surface.
2. The image-processing-based color confocal parallel measurement three-dimensional topography reduction algorithm according to claim 1, wherein step S1 specifically comprises:
as the sample object undergoes line-scanning motion, the color camera takes pictures to obtain K images corresponding to K different positions; the L light spots generated by line scanning are uniformly distributed along a straight line on each image, giving K × L light spots to be processed in total; the K images are input into the computer.
3. The image processing-based color confocal parallel measurement three-dimensional topography reduction algorithm according to claim 1, wherein the step S2 comprises the steps of:
s21, carrying out image preprocessing on the input K images, removing redundant background areas and retaining the light spot areas to be processed;
s22, performing mask matting on the preprocessed image to obtain a target image required by splicing;
s23, generating a white negative film;
s24, setting an ROI area on the white film;
and S25, splicing each target image obtained in the step S22 on the white negative film obtained in the step S24 to generate a spliced image containing information of all measured points.
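A minimal sketch of steps S22–S25 (mask matting each frame, then splicing onto a white negative film), assuming NumPy arrays for the frames and boolean masks; the helper name `stitch_spot_rows` and the one-strip-per-frame ROI layout are illustrative assumptions, not the patent's exact scheme:

```python
import numpy as np

def stitch_spot_rows(images, masks):
    """Splice K masked spot-row frames onto one white canvas.

    images: list of K HxWx3 uint8 frames, each holding one line of L spots
    masks:  list of K HxW boolean arrays, True (nonzero) at spot pixels
    """
    K = len(images)
    h, w = images[0].shape[:2]
    # white "negative film" sized so it just covers all K*L spots (S23/S24)
    canvas = np.full((K * h, w, 3), 255, dtype=np.uint8)
    for i, (img, m) in enumerate(zip(images, masks)):
        roi = canvas[i * h:(i + 1) * h]   # ROI strip reserved for frame i (a view)
        roi[m] = img[m]                   # mask matting: copy only spot pixels (S22/S25)
    return canvas
```

Because each ROI is a NumPy view into the canvas, the masked assignment writes the spot pixels directly into the spliced image while the background stays white.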
4. The image-processing-based color confocal parallel measurement three-dimensional morphology reduction algorithm according to claim 3, wherein in the step S21, the image preprocessing comprises rotating and cropping the image.
5. The image-processing-based color confocal parallel measurement three-dimensional topography reduction algorithm according to claim 3, wherein in the step S22, the mask matting specifically comprises:
setting mask parameters, with the mask value at each target light spot position set to a nonzero value and all other positions set to 0; and applying the mask parameters to extract the required light spot target pixels and shield the redundant background pixels, generating an image of the spots arranged in a line.
6. The image processing-based color confocal parallel measurement three-dimensional topography reduction algorithm according to claim 3, wherein in step S23:
the size of the white negative film is calculated from the number K of input images and the total length of the L light spots in each image, so that the white negative film just covers all K × L measured light spots.
7. The image-processing-based color confocal parallel measurement three-dimensional topography reduction algorithm according to claim 3, wherein the step S25 specifically comprises:
superposing each image extracted by the mask in step S22 onto the corresponding ROI area of the white negative film one by one, and splicing to generate a complete spliced image containing the information of all measured points.
8. The image-processing-based color confocal parallel measurement three-dimensional topography reduction algorithm according to claim 1, wherein in step S3:
the morphological opening operation processing comprises: eroding the image and then dilating the eroded result; the process of extracting the centroid connected region of each measured point comprises: extracting the centroid coordinates of each measured point by using a centroid extraction algorithm, and intercepting a corresponding circle by combining the light spot radius in the image to obtain the centroid connected region of each measured point.
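The opening (erosion followed by dilation) and centroid steps can be sketched in NumPy as follows; the 3×3 cross structuring element and the helper names are assumptions for illustration, not the patent's exact choices:

```python
import numpy as np

def binary_open(b):
    """Morphological opening with a 3x3 cross: erosion, then dilation."""
    def shift_stack(x):
        # center pixel plus its 4 neighbors (zero-padded at the borders)
        p = np.pad(x, 1)
        return np.stack([p[1:-1, 1:-1], p[:-2, 1:-1], p[2:, 1:-1],
                         p[1:-1, :-2], p[1:-1, 2:]])
    eroded = shift_stack(b).all(axis=0)    # erosion: structuring element must fit
    return shift_stack(eroded).any(axis=0) # dilation of the eroded result

def centroid(region):
    """Intensity-weighted centroid: first moments divided by the zeroth moment."""
    region = np.asarray(region, float)
    total = region.sum()
    ys, xs = np.indices(region.shape)
    return (ys * region).sum() / total, (xs * region).sum() / total
```

Opening removes isolated noise pixels smaller than the structuring element while larger spot blobs survive, which is why it precedes the centroid extraction.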
9. The image-processing-based color confocal parallel measurement three-dimensional topography reduction algorithm according to claim 1, wherein the step S4 is specifically:
converting the RGB space into the HSI space by using an RGB-HSI color space conversion algorithm, and extracting the hue parameter H value of each centroid connected region, wherein the H value conversion formula is as follows: θ = arccos{ [(R−G) + (R−B)] / [2·√((R−G)² + (R−B)(G−B))] }, with H = θ when B ≤ G and H = 360° − θ when B > G;
and combining the corresponding H value-height curve to obtain the height information of each measured point.
10. The image-processing-based color confocal parallel measurement three-dimensional morphology reduction algorithm according to claim 1, wherein in step S5, a bicubic interpolation method is used for fitting.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110097600.4A CN112734916B (en) | 2021-01-25 | 2021-01-25 | Color confocal parallel measurement three-dimensional morphology reduction method based on image processing |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110097600.4A CN112734916B (en) | 2021-01-25 | 2021-01-25 | Color confocal parallel measurement three-dimensional morphology reduction method based on image processing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112734916A true CN112734916A (en) | 2021-04-30 |
CN112734916B CN112734916B (en) | 2022-08-05 |
Family
ID=75595285
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110097600.4A Active CN112734916B (en) | 2021-01-25 | 2021-01-25 | Color confocal parallel measurement three-dimensional morphology reduction method based on image processing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112734916B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113731836A (en) * | 2021-08-04 | 2021-12-03 | 华侨大学 | Urban solid waste online sorting system based on deep learning |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103942794A (en) * | 2014-04-16 | 2014-07-23 | 南京大学 | Image collaborative cutout method based on confidence level |
US20150192769A1 (en) * | 2014-01-09 | 2015-07-09 | Zygo Corporation | Measuring Topography of Aspheric and Other Non-Flat Surfaces |
CN104776815A (en) * | 2015-03-23 | 2015-07-15 | 中国科学院上海光学精密机械研究所 | Color three-dimensional profile measuring device and method based on Dammann grating |
CN109800641A (en) * | 2018-12-14 | 2019-05-24 | 天津大学 | Method for detecting lane lines based on threshold adaptive binaryzation and connected domain analysis |
CN111220090A (en) * | 2020-03-25 | 2020-06-02 | 宁波五维检测科技有限公司 | Line focusing differential color confocal three-dimensional surface topography measuring system and method |
CN111288928A (en) * | 2020-03-12 | 2020-06-16 | 华侨大学 | Object surface three-dimensional topography feature measuring method, device, equipment and storage medium |
CN211876977U (en) * | 2020-03-25 | 2020-11-06 | 宁波五维检测科技有限公司 | Line focusing differential color confocal three-dimensional surface topography measuring system |
2021
- 2021-01-25 CN CN202110097600.4A patent/CN112734916B/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150192769A1 (en) * | 2014-01-09 | 2015-07-09 | Zygo Corporation | Measuring Topography of Aspheric and Other Non-Flat Surfaces |
CN103942794A (en) * | 2014-04-16 | 2014-07-23 | 南京大学 | Image collaborative cutout method based on confidence level |
CN104776815A (en) * | 2015-03-23 | 2015-07-15 | 中国科学院上海光学精密机械研究所 | Color three-dimensional profile measuring device and method based on Dammann grating |
CN109800641A (en) * | 2018-12-14 | 2019-05-24 | 天津大学 | Method for detecting lane lines based on threshold adaptive binaryzation and connected domain analysis |
CN111288928A (en) * | 2020-03-12 | 2020-06-16 | 华侨大学 | Object surface three-dimensional topography feature measuring method, device, equipment and storage medium |
CN111220090A (en) * | 2020-03-25 | 2020-06-02 | 宁波五维检测科技有限公司 | Line focusing differential color confocal three-dimensional surface topography measuring system and method |
CN211876977U (en) * | 2020-03-25 | 2020-11-06 | 宁波五维检测科技有限公司 | Line focusing differential color confocal three-dimensional surface topography measuring system |
Non-Patent Citations (2)
Title |
---|
T. JIANG et al.: "3D MR image restoration by combining local genetic algorithm with adaptive pre-conditioning", 《IEEE》 *
ZHANG HENGKANG: "Research on Real-time Measurement Method of Three-dimensional Topography and Software Design", 《China Master's Theses Full-text Database》 *
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113731836A (en) * | 2021-08-04 | 2021-12-03 | 华侨大学 | Urban solid waste online sorting system based on deep learning |
Also Published As
Publication number | Publication date |
---|---|
CN112734916B (en) | 2022-08-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109544456B (en) | Panoramic environment sensing method based on two-dimensional image and three-dimensional point cloud data fusion | |
CN111604909A (en) | Visual system of four-axis industrial stacking robot | |
CN107392849B (en) | Target identification and positioning method based on image subdivision | |
CN115294099B (en) | Method and system for detecting hairline defect in steel plate rolling process | |
CN114240845B (en) | Light cutting method surface roughness measurement method applied to cutting workpiece | |
JP2021168143A (en) | System and method for efficiently scoring probe in image by vision system | |
CN115131587A (en) | Template matching method of gradient vector features based on edge contour | |
CN111354047B (en) | Computer vision-based camera module positioning method and system | |
CN114331986A (en) | Dam crack identification and measurement method based on unmanned aerial vehicle vision | |
CN112734916B (en) | Color confocal parallel measurement three-dimensional morphology reduction method based on image processing | |
CN112489042A (en) | Metal product printing defect and surface damage detection method based on super-resolution reconstruction | |
CN116433672A (en) | Silicon wafer surface quality detection method based on image processing | |
CN115861351A (en) | Edge detection method, defect detection method and detection device | |
CN112329880A (en) | Template fast matching method based on similarity measurement and geometric features | |
CN115953550A (en) | Point cloud outlier rejection system and method for line structured light scanning | |
CN114549669B (en) | Color three-dimensional point cloud acquisition method based on image fusion technology | |
CN116503462A (en) | Method and system for quickly extracting circle center of circular spot | |
CN115775236A (en) | Surface tiny defect visual detection method and system based on multi-scale feature fusion | |
CN111540063A (en) | Full-automatic high-precision splicing method based on multi-station laser point cloud data | |
CN113705564B (en) | Pointer type instrument identification reading method | |
CN116596987A (en) | Workpiece three-dimensional size high-precision measurement method based on binocular vision | |
CN116817796A (en) | Method and device for measuring precision parameters of curved surface workpiece based on double telecentric lenses | |
CN116125489A (en) | Indoor object three-dimensional detection method, computer equipment and storage medium | |
CN115112098A (en) | Monocular vision one-dimensional two-dimensional measurement method | |
CN115187744A (en) | Cabinet identification method based on laser point cloud |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||