CN115065761B - Multi-lens scanning device and scanning method thereof - Google Patents


Info

Publication number
CN115065761B
CN115065761B (application CN202210670745.3A)
Authority
CN
China
Prior art keywords
scanning
image
resolution
focus
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210670745.3A
Other languages
Chinese (zh)
Other versions
CN115065761A (en)
Inventor
张凌 (Zhang Ling)
陈天君 (Chen Tianjun)
Current Assignee
Zhongyi Qihang Digital Technology Beijing Co ltd
Original Assignee
Zhongyi Qihang Digital Technology Beijing Co ltd
Priority date
Filing date
Publication date
Application filed by Zhongyi Qihang Digital Technology Beijing Co ltd
Priority to CN202210670745.3A
Publication of CN115065761A
Application granted
Publication of CN115065761B


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 - Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/04 - Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformation in the plane of the image
    • G06T3/40 - Scaling the whole image or part thereof
    • G06T3/4053 - Super resolution, i.e. output image resolution higher than sensor resolution
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/20 - Image enhancement or restoration by the use of local operators
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 - Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/04 - Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa
    • H04N1/0402 - Scanning different formats; Scanning with different densities of dots per unit length, e.g. different numbers of dots per inch (dpi); Conversion of scanning standards
    • H04N1/042 - Details of the method used
    • H04N1/044 - Tilting an optical element, e.g. a refractive plate
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 - Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/04 - Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa
    • H04N1/0402 - Scanning different formats; Scanning with different densities of dots per unit length, e.g. different numbers of dots per inch (dpi); Conversion of scanning standards
    • H04N1/042 - Details of the method used
    • H04N1/0443 - Varying the scanning velocity or position
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 - Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/04 - Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa
    • H04N1/047 - Detection, control or error compensation of scanning velocity or position

Abstract

The invention relates to a multi-lens scanning device comprising at least a scanning unit and an illumination unit. A plurality of scanning units are suspended side by side above a platform body by a suspension adjusting assembly, and the spacing between adjacent scanning units is set so that their scannable areas at least partially overlap. Each scanning unit can therefore use at least part of the strip-shaped in-focus image data it acquires to supplement and correct the in-focus image edge-area data acquired by its neighbors. The scanning units arranged side by side transmit the acquired image data to a processing unit for selective image processing. The invention also relates to a scanning method for the multi-lens scanning device, which effectively improves the spatial resolution of the full-focus image, realizes full-focus imaging with high space-time resolution, and achieves the aim of acquiring a high-resolution full-focus image over a large depth of field.

Description

Multi-lens scanning device and scanning method thereof
Technical Field
The present invention relates to the field of scanning devices, and in particular, to a multi-lens scanning device and a scanning method thereof.
Background
Currently, many methods exist for acquiring multi-focus images with a scanning device; the most common are manual focusing, mechanical auto-focusing, and focusing based on special-purpose devices. Manual focusing acquires partially focused images at different focal depths by adjusting the focus position by hand. The focus position can be adapted to actual demands, which gives good scene adaptability, but the uncertainty of manual operation means focusing precision cannot be guaranteed, the number of partially focused images that can be acquired is limited, the focal-length adjustment is time-consuming and labor-intensive, and image-acquisition efficiency is low. Mechanical auto-focusing controls the focal-length change through a preset program: the position of the sensor or of the main imaging lens is adjusted by mechanical movement so that focus is reached accurately and partially focused images are acquired. However, multiple exposures are taken as the sensor (or main-lens) position changes, so the background and noise of each partially focused image are inconsistent, which seriously degrades the quality of the final full-focus fusion.
Traditional single-image super-resolution methods based on deep learning must downsample a high-resolution image to obtain the corresponding low-resolution image, and a training set of high/low-resolution image pairs is built through a large number of such downsampling operations; the precondition for a reasonable training set is therefore that high-resolution images are available in advance. In this setting, deep-learning-based fusion algorithms place high demands on hardware, acquiring a large training set is difficult, and noise easily appears when reconstructing high-frequency information, which affects the quality of full-focus fusion. Alternatively, a low-resolution image can be reconstructed from a single-exposure light-field image while a high-resolution image is captured with an ordinary single-lens reflex (SLR) camera. In that reconstruction, the input is a low-resolution image obtained by subsampling the original high-resolution image and the output is the high-resolution image reconstructed by deep learning, so input and output are structurally correlated. In practice, however, it is difficult to give the SLR camera and the light-field camera the same field angle and focus position, so the training set formed from such high/low-resolution image pairs is difficult to acquire.
Patent document CN113689334A discloses a laser imaging apparatus and a laser imaging control method intended to reduce the large laser-imaging error of low-resolution images and improve laser-imaging accuracy. The disclosed apparatus raises the resolution of the original binary dot-matrix image with an image-interpolation algorithm: the pixel pitch between adjacent pixels is reduced, and, for the same number of raised edge pixels in a curved or oblique-line image, the offset of those pixels is reduced, improving the precision of laser imaging. However, although the interpolation algorithm raises the resolution of the original binary dot-matrix image, the spatial and temporal resolution of the actually acquired image remains low, and the output image still exhibits a large imaging error.
Therefore, to remedy the low spatial resolution of light-field imaging and the poor fusion quality of edge pixels when multi-focus images undergo full-focus fusion, a multi-lens scanning device and scanning method are needed that can perform single-image super-resolution on low-resolution light-field images and effectively carry out full-focus fusion of multiple partially focused images.
Furthermore, on the one hand, those skilled in the art may understand the above differently; on the other hand, because the inventors studied numerous documents and patents while making the present invention, the text does not list all such details and contents in full. This by no means implies that the invention lacks these prior-art features; on the contrary, the invention may possess them, and the applicant reserves the right to add related prior art to the background section.
Disclosure of Invention
To overcome the defects of the prior art, the technical scheme of the invention provides a multi-lens scanning device comprising at least a placement table for fixing the object to be scanned, a scanning unit, and an illumination unit. The scanning unit is connected to the platform body through a frame and completes the scan of the object on the placement table by directional translation. A plurality of scanning units are suspended side by side above the platform body by a suspension adjusting assembly, and the spacing between adjacent scanning units is set so that the scannable areas of two adjacent units at least partially overlap; each scanning unit can thus use at least part of the strip-shaped in-focus image data it acquires to supplement and correct the in-focus image edge-area data acquired by its neighbors. The scanning units arranged side by side transmit the acquired image data to a processing unit for selective image processing, so that a full-focus image with high space-time resolution is obtained. The advantage is that, with several scanning units arranged side by side at set working positions, a single scanning unit only needs to acquire clear image data for at least part of a strip-shaped area. The units are placed at equal intervals so that part of the clearly imaged area of any one unit overlaps the blurred imaging area of its neighbor, making the scanning areas of two adjacent units at least partially overlap. The edge-area data of adjacent units' in-focus images can therefore be mutually verified, supplemented, and corrected, and the stitched large-format scan obtained by full-focus fusion has higher resolution.
During image processing, a deep neural network capable of upsampling the low-resolution in-focus image is introduced, so that its resolution is raised before full-focus fusion. The network can reconstruct the high-frequency detail of a high-resolution image, and the multi-focus images are then fused with a guided filter, yielding a full-focus image with rich high-frequency detail and sharp edges, thereby achieving large-range depth-of-field extension and the acquisition of a high-quality, large-depth-of-field full-focus image.
According to a preferred embodiment, the plurality of scanning units acquire image data of the surface of a large-format object to be scanned while undergoing directional translation, and transmit the acquired data to the processing unit in ordered arrangement; under control of the control unit, the processing unit selectively performs guided-filter-based full-focus fusion and/or deep-neural-network-based full-focus imaging with high space-time resolution. The advantage is that the working height of a scanning unit can be adjusted according to initial information such as the thickness of the object, and the working height of single or multiple scanning units can also be adjusted selectively according to the surface relief in different strip areas. In addition, through guided-filter full-focus fusion and the deep neural network, the processing unit can reconstruct high-frequency detail while raising the resolution of the low-resolution in-focus partial images, achieving a better fusion result, and information verification of the overlapping image parts can be completed accurately when strip images are stitched, so the final full-focus image has higher resolution.
According to a preferred embodiment, the guided-filter-based full-focus fusion performed by the processing unit stitches and fuses the ordered strip image data while preserving, to the greatest extent, the edge information of the image acquired by each single scanning unit, so that two adjacent strips supplement and correct each other's edge information by mutual verification while being stitched into the whole image. Applying a guided-filter-based full-focus fusion algorithm to multi-focus images preserves edge information to the maximum extent and realizes high-speed, high-quality fusion. To improve multi-focus acquisition quality, light-field imaging is used to capture the multi-focus images in a single exposure, ensuring consistent background information and complete fusion input, while offering low cost, a simple system structure, and a wide depth-extension range. Combining the guided-filter fusion algorithm with light-field imaging forms a single-exposure light-field full-focus fusion system that achieves high-quality, large-depth-of-field full-focus image acquisition.
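The guided-filter fusion step can be sketched as follows. This is a minimal numpy-only illustration of the general technique (per-pixel sharpness decision maps refined by a guided filter, as in guided-filter image fusion), not the patent's actual implementation; all function names and parameter values are illustrative.

```python
import numpy as np

def box_filter(x, r):
    """Mean over a (2r+1)x(2r+1) window via integral images, edge-padded."""
    k = 2 * r + 1
    xp = np.pad(x, r, mode="edge").astype(float)
    c = np.pad(xp.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def guided_filter(guide, p, r=4, eps=1e-3):
    """Edge-preserving smoothing of p steered by the guide image."""
    mI, mp = box_filter(guide, r), box_filter(p, r)
    var_I = box_filter(guide * guide, r) - mI * mI
    cov_Ip = box_filter(guide * p, r) - mI * mp
    a = cov_Ip / (var_I + eps)
    b = mp - a * mI
    return box_filter(a, r) * guide + box_filter(b, r)

def fuse_full_focus(stack, r=4, eps=1e-3):
    """Fuse a focal stack: winner-take-all sharpness maps, each refined
    by a guided filter with its own source image as the guide."""
    def sharpness(img):
        ip = np.pad(img, 1, mode="edge")
        lap = ip[:-2, 1:-1] + ip[2:, 1:-1] + ip[1:-1, :-2] + ip[1:-1, 2:] - 4 * img
        return box_filter(np.abs(lap), 2)          # smoothed |Laplacian|
    s = np.stack([sharpness(im) for im in stack])
    hard = (s == s.max(axis=0)).astype(float)      # binary decision maps
    w = np.stack([guided_filter(im, h, r, eps) for im, h in zip(stack, hard)])
    w = np.clip(w, 0, None)
    w /= w.sum(axis=0) + 1e-8                      # normalize weights per pixel
    return (w * np.stack(stack)).sum(axis=0)
```

Refining the binary decision maps with a guided filter is what preserves edges: the weight transitions are forced to follow the intensity edges of each source image rather than the blocky argmax boundary.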
According to a preferred embodiment, the processing unit adds a convolutional deep neural network that raises the spatial resolution of the low-resolution light-field portion of the in-focus image, completing single-image super-resolution; any scanning unit supplements the edge region of its acquired low-resolution light-field in-focus image with the image edge information acquired by its neighbors, so that the image information missing from the edge region is restored and the resolution of the single image improves. Introducing a deep-neural-network-based full-focus imaging system with high space-time resolution yields image data with both high temporal and high spatial resolution: the added convolutional network upsamples the low-resolution light-field portion of the in-focus image, effectively raising the resolution of the multi-focus images, and the guided-filter-based fusion algorithm then produces a full-focus image with high space-time resolution, realizing high-resolution, large-depth-of-field acquisition of the surface of the object to be scanned. The single-exposure, guided-filter light-field fusion approach overcomes the low precision, high complexity, inconsistent background information, and high-frequency information loss of traditional multi-focus fusion, acquires full-focus images in a single exposure, and effectively captures large-depth-of-field image information.
On the basis of guided-filter full-focus fusion, the convolutional deep neural network upsamples the low-resolution light-field portion of the in-focus image, effectively improving the spatial resolution of the full-focus image and fulfilling the aim of collecting high-resolution full-focus images over a large depth of field.
According to a preferred embodiment, the processing unit performs single-image super-resolution of the light-field low-resolution image by upsampling the low-resolution light-field portion of the in-focus image. Introducing a convolutional deep neural network for this upsampling effectively raises the resolution of the multi-focus image and overcomes the drawback that the low spatial resolution of existing light-field imaging prevents the guided filter from producing a high-resolution full-focus image.
According to a preferred embodiment, the processing unit supplements the edge region of the in-focus image acquired by a designated scanning unit with the image edge information acquired by the adjacent scanning unit, so that image information missing from the edge region of the designated unit's image is restored.
According to a preferred embodiment, the convolutional deep neural network of the processing unit comprises at least a compression module, a reconstruction module, and a loss module. The data output by the compression module is upsampled in the reconstruction module, which mixes and reduces channels in its backbone by adding convolutional layers, so the overall structure can flexibly realize upsampling with a pixel-shuffle strategy. The convolutional compression module compresses the image to a certain extent, and this compression reduces the gap between the bilinear-interpolation blur kernel and the real image blur kernel, easing the difficulty of acquiring a training data set. In addition, the reconstruction module adopts a structure based on a Residual Dense Network (RDN) for better performance. Finally, a pretrained convolutional neural network is used as the loss function to achieve better detail preservation.
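The pixel-shuffle strategy mentioned above can be illustrated in a few lines of numpy. This is a generic sketch of the standard depth-to-space rearrangement used in super-resolution reconstruction heads, not the patent's network; the function name is illustrative.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Depth-to-space: rearrange (C*r*r, H, W) feature maps into a
    (C, H*r, W*r) image, trading channel depth for spatial resolution."""
    c2, h, w = x.shape
    assert c2 % (r * r) == 0, "channel count must be divisible by r*r"
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    # interleave the r*r sub-channels into each r x r output pixel block
    return x.transpose(0, 3, 1, 4, 2).reshape(c, h * r, w * r)
```

In a reconstruction module of this kind, the final convolution emits C·r² channels and this rearrangement produces the r-times-upsampled output, so the upsampling factor is set purely by the channel count.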
According to a preferred embodiment, the illumination unit can adjust its light-exit angle according to the working position of the scanning unit and the surface condition of the object to be scanned, so that the scanning unit obtains shadow-free surface information. The illumination unit comprises at least a first and a second illumination module arranged on either side of the scanning unit, which construct two illumination rays at an included angle to supplement light on the object surface. Arranging illumination units that emit light at symmetrical included angles on both sides of the scanning unit lets light be supplemented according to the actual needs of the object surface, so the scanning unit can conveniently and rapidly acquire surface information, especially image data requiring faithful color reproduction and three-dimensional representation.
The application also provides a scanning method of the multi-lens scanning device, which at least comprises the following steps:
s1: positioning and placing the large-format article to be scanned on a placement table after the completion position is reset, and enabling the large-format article to be scanned to pass through a scannable area of a scanning unit at a stable speed along with the movement of the placement table by controlling the directional translation of the placement table 3, so that the scanning unit can complete the collection of a plurality of linear image information in a block dividing manner;
S2: acquiring light field information of a region to be detected by combining light field imaging;
s3: decoding the acquired light field information and acquiring a low-resolution multi-focus image by using a digital refocusing algorithm;
s4: upsampling the multi-focus image by using a deep neural network to obtain a high-resolution light field multi-focus image;
s5: and performing full-focus fusion processing on the high-resolution multi-focus image by using a full-focus fusion algorithm based on a guide filter, and synthesizing the high-resolution large-depth-of-field full-focus image. The method has the advantages that the edge information can be reserved to the maximum extent by applying the full-focus fusion algorithm based on the guide filter to carry out full-focus fusion on the multi-focus image, and high-speed and high-quality full-focus fusion is realized. In order to improve the quality of multi-focus image acquisition, the multi-focus image acquisition is carried out under single exposure by adopting a light field imaging method, so that the consistency of the background information of the multi-focus image and the completeness of the full-focus fusion input information are ensured, and meanwhile, the method has the advantages of low cost, simple system structure and wide depth expansion range, and the high-quality multi-focus image acquisition is effectively realized. The single exposure light field full-focus fusion processing system based on the guide filter is formed by combining a fusion algorithm based on the guide filter with light field imaging, so that high-quality and large-depth full-focus image acquisition is realized.
According to a preferred embodiment, the high-resolution light-field multi-focus image is obtained by upsampling the low-resolution light-field image with a convolutional neural network that incorporates a convolutional compression module. The deep-neural-network-based full-focus imaging system with high space-time resolution yields image data with both high temporal and high spatial resolution: the convolutional network upsamples the low-resolution light-field portion of the in-focus image, effectively raising the resolution of the multi-focus images, and the guided-filter-based fusion algorithm then produces a full-focus image with high space-time resolution, realizing high-resolution, large-depth-of-field acquisition of the surface of the object to be scanned. The single-exposure, guided-filter light-field fusion approach overcomes the low precision, high complexity, inconsistent background information, and high-frequency information loss of traditional multi-focus fusion, and large-depth-of-field image information can be obtained from a single-exposure full-focus acquisition. On the basis of guided-filter full-focus fusion, the convolutional deep neural network upsamples the low-resolution light-field portion of the in-focus image, effectively improving the spatial resolution of the full-focus image and realizing high-resolution full-focus acquisition over a large depth of field.
Drawings
FIG. 1 is a schematic perspective view of a preferred multi-lens scanning apparatus according to the present application;
FIG. 2 is a schematic workflow diagram of a scanning method of a preferred multi-lens scanning device according to the present application;
FIG. 3 is a schematic view of a preferred multi-lens scanning apparatus according to the present application;
fig. 4 is a schematic diagram of a scanning unit arrangement of a preferred multi-lens scanning device according to the present application.
List of reference numerals
1: a platform body; 2: a frame; 3: a placement table; 4: a scanning unit; 5: an illumination unit; 6: a suspension adjustment assembly; 7: a purge unit; 8: a processing unit; 9: a control unit; 10: a scanning motion body; 51: a first illumination module; 52: a second illumination module; 53: a mounting bracket; 54: a rotating mounting base; 55: a rotation driving unit; 56: an induction control module; 61: a first adjustment mechanism; 62: a second adjustment mechanism.
Detailed Description
The following detailed description refers to the accompanying drawings.
Example 1
The application provides a scanning method for a multi-lens scanning device that combines a convolutional neural network with light-field imaging to acquire full-focus images with high temporal and high spatial resolution. Following the image-processing flow of the processing unit 8 in FIG. 2, the method comprises the following steps:
S1: the large-format article to be scanned is placed on the placement table 3, and the scanning unit 4 scans the large-format article to be scanned placed on the placement table 3 by controlling the directional translation of the scanning motion main body 10, so that the scanning unit 4 can complete the collection of a plurality of linear image information in a block dividing manner;
s2: acquiring light field information of a region (scene) to be detected by combining light field imaging;
s2: acquiring light field information of a region (scene) to be detected by combining light field imaging;
s3: decoding the acquired light field information and acquiring a low-resolution multi-focus image by using a digital refocusing algorithm;
s4: upsampling the multi-focus image by using a deep neural network to obtain a high-resolution light field multi-focus image;
s5: and performing full-focus fusion processing on the high-resolution multi-focus image by using a full-focus fusion algorithm based on a guide filter, and synthesizing the high-resolution large-depth-of-field full-focus image.
Preferably, the illumination units 5 are arranged on both sides of the scanning unit 4 so that light can be supplemented without dead angles within the scannable area defined by the lens of the scanning unit 4, and shadows formed on the object by surface irregularities can be eliminated during scanning by adjusting the relative irradiation angle between the illumination unit 5 and the scanning unit 4. Preferably, the suspension adjusting assembly 6 can adjust the suspension height of the scanning unit 4 according to the thickness, surface flatness, and similar properties of the scanned object, so that the lens focus of the scanning unit 4 always lies on the object surface and accurate image data are acquired.
Preferably, light-field information acquisition refers to capturing both the intensity and the angle information of scene rays in a single exposure. This operation has high temporal resolution, but the number of microlens units and corresponding sensor elements limits the spatial resolution of the light-field image. Upsampling the low-resolution light-field image with the convolutional neural network effectively improves image resolution, and the guided-filter-based fusion algorithm finally yields a full-focus image with high space-time resolution. Preferably, refocusing is equivalent to scanning the actual scene, each scanning position being a sharply focused position; in digital light-field refocusing, the scanning step can be set small enough for the actual conditions, so that enough partially focused images are obtained within the same acquisition depth range. The focal scan is precise and the number of multi-focus images is large, and these advantages of light-field refocusing guarantee the completeness of the multi-focus image information required by the later full-focus fusion algorithm. The deep-neural-network-based upsampling method reconstructs more detail and recovers high-frequency information to a greater extent while keeping the focusing characteristics of the original partially focused images.
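Digital refocusing of this kind is commonly implemented as shift-and-add over sub-aperture views. The numpy sketch below follows the standard light-field refocusing formulation, in which `alpha` parameterizes the virtual focal plane; it is a generic illustration, not the patent's code.

```python
import numpy as np

def refocus(views, uv, alpha):
    """Shift-and-add refocusing: translate each sub-aperture view by
    (1 - 1/alpha) times its aperture offset (u, v), then average.
    Sweeping alpha scans the virtual focal plane, producing the focal stack."""
    shift = 1.0 - 1.0 / alpha
    acc = np.zeros_like(views[0], dtype=float)
    for img, (u, v) in zip(views, uv):
        dy, dx = int(round(u * shift)), int(round(v * shift))
        acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / len(views)
```

The text's point about scanning precision falls out directly: `alpha` is a continuous parameter, so the focal step between consecutive slices can be made as small as desired without any mechanical motion.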
Preferably, a convolution compression module is introduced into the convolutional neural network adopted by the application to compress the image to a certain extent; compression reduces the difference between the bilinear interpolation kernel and the real image blur kernel, which resolves the training data set problem. In addition, when the reconstruction module is constructed, a structure based on a Residual Dense Network (RDN) is selected so that the reconstruction module performs better. Finally, the application also uses a pre-trained convolutional neural network as a loss function to achieve a better detail-preserving up-sampling effect. Therefore, the convolutional neural network provided by the application can realize high-quality up-sampling and retains the high-frequency detail information of the light field digital refocusing image well, so that the high spatio-temporal resolution full-focus imaging technique based on the deep neural network can acquire full-focus images with both high temporal resolution and high spatial resolution, and thereby achieve scene information acquisition with large depth of field and high resolution.
Example 2
The application also provides a multi-lens scanning device which comprises a platform main body 1, a gantry 2, a placing table 3, a scanning unit 4, an illumination unit 5, a suspension adjustment assembly 6, a purge unit 7 and a scanning motion body 10.
According to a specific embodiment shown in fig. 1 and 2, a placing table 3 for holding an object to be scanned is provided on the platform main body 1. The gantry 2 is disposed at both sides of the platform main body 1 such that the scanning motion body 10 mounted on the gantry 2 is supported directly above the platform main body 1. An illumination unit 5, a suspension adjustment assembly 6 and a purge unit 7 are installed in the scanning motion body 10, the suspension adjustment assembly 6 also being connected with the scanning unit 4. The gantry 2 can drive the scanning motion body 10 to reciprocate above the placing table 3, so that the scanning unit 4 can acquire the surface information of an entire large-format article placed on the placing table 3. The scanning unit 4 scans the object placed on the placing table 3. The scanning unit 4 is movably connected to the scanning motion body 10 through the suspension adjustment assembly 6, which allows it to adjust its working position relative to the placing table 3: while the scanning motion body 10 drives the scanning unit 4 back and forth above the placing table 3, the scanning unit 4 acquires the surface information of the object passing through its scanning area, and at the same time adapts its working position to the uneven surface of the object, so that clear scanning data are obtained by accurately positioning the lens focus on the object surface.
As shown in fig. 4, the plurality of scanning units 4 can scan and collect information over the area to be scanned in parallel. Each individual scanning unit 4 linearly scans at least part of a strip-shaped region of the large-format area, obtaining high spatio-temporal resolution images together with image colors, so that the scanning units 4 arranged side by side, each adjusting its own working position simultaneously, complete the image data collection of the whole large-format area; the data of the whole scanning area are then integrated by splicing the strip-shaped image data collected by the plurality of scanning units 4. Preferably, any one scanning unit 4 keeps the focal point of its camera on the surface of the object while acquiring a linear rectangular image, so that at least part of the acquired image information representing the object surface can serve as optimal information. Preferably, the strip-shaped area scanned by a single scanning unit 4 partially overlaps the strip-shaped area of the adjacent scanning unit 4; that is, the scanning area of a single scanning unit 4 comprises at least an accurate area directly below it and a blurred area covering part of the accurate scanning area of the adjacent scanning unit 4. The processing unit 8 can therefore splice the processed high spatio-temporal resolution images by overlapping the clear image output by one scanning unit 4 with the blurred image output by the adjacent scanning unit 4 corresponding to the same surface area of the article, thereby forming a complete surface image of the large-format article to be scanned.
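The splicing of overlapping strips can be illustrated with a simple feathered blend over the shared columns. This is an assumed minimal sketch of strip stitching, not the patent's actual splicing algorithm (which also weighs clear against blurred regions):

```python
import numpy as np

def stitch_strips(strips, overlap):
    """Stitch side-by-side strip images whose edges share `overlap`
    columns, linearly feathering the shared region so one unit's strip
    blends smoothly into its neighbour's.

    strips  : list of 2-D arrays of equal height, left to right
    overlap : number of shared columns between adjacent strips
    """
    result = strips[0].astype(np.float64)
    # weights ramp 1 -> 0 for the left strip; the right strip gets 1 - ramp
    ramp = np.linspace(1.0, 0.0, overlap)
    for nxt in strips[1:]:
        nxt = nxt.astype(np.float64)
        blended = result[:, -overlap:] * ramp + nxt[:, :overlap] * (1.0 - ramp)
        result = np.hstack([result[:, :-overlap], blended, nxt[:, overlap:]])
    return result
```

For N strips of width W the stitched result is N*W - (N-1)*overlap columns wide, matching the overlapping strip layout described above.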
Specifically, the scanning unit 4 includes groups of scanning modules that are parallel to each other and all arranged laterally. A single scanning module group is formed by arranging a plurality of scanning modules side by side in the lateral direction. The scanning height of the scanning module group is adjusted according to the scanning operation, and the working height of each scanning module is adjusted based on the material thickness and the like of the object to be scanned, so that the scanning module always keeps the set distance and depth of field relative to the object surface. Any scanning module of the group can adjust its focus position according to the fluctuation of the object surface, so that it always collects a sharply focused image. Preferably, the scanning modules of the scanning module group may employ line-scan cameras.
Preferably, the platform main body 1 is connected through a guide rail structure with the placing table 3 on which the object to be scanned rests. The platform main body 1 can be fixedly supported in an arbitrary installation space, so that it provides a stable and flat working environment for the placing table 3. A storage box at the same height as the placing table 3 is also provided at the side of the platform main body 1. A film base roller arranged in the storage box can lay a protective film over the objects to be scanned.
Preferably, the illumination unit 5 comprises at least a first illumination module 51, a second illumination module 52 and a mounting bracket 53 for carrying the illumination modules. Further preferably, both ends of the first and second illumination modules 51, 52 are connected to the mounting bracket 53 through rotary mounting seats 54 that can be inserted into the ends of the mounting bracket 53, so that the first and second illumination modules 51, 52 can rotate relative to the mounting bracket 53 as required. Preferably, by rotating on the mounting bracket 53, the rotary mounting seat 54 turns the working ends of the first and second illumination modules 51 and 52, thereby adjusting the angles and relative positions between the light they emit and the scanning unit 4. Preferably, the rotary mounting seat 54 is also movably connected with a rotation driving unit 55 capable of controlling its rotation. Preferably, the rotation driving unit 55 turns the rotary mounting seat 54 through a preset angle under adjustable control, so that the first illumination module 51 and the second illumination module 52 irradiate the surface of the object to be scanned at a set angle. With the rotation driving unit 55 in place, the rotary mounting seat 54 can drive the first illumination module 51 and the second illumination module 52 to an adjusted rotation angle, limiting their working positions and allowing a user to adjust the exit angle of the illumination light quickly and accurately. Preferably, the end of the mounting bracket 53 remote from the first illumination module 51 or the second illumination module 52 is detachably connected to the scanning motion body 10. Preferably, the rotation driving unit 55 may employ a servo motor whose operating state can be accurately controlled.
Preferably, the illumination unit 5 further comprises an induction control module 56 capable of actively acquiring the position of the object to be scanned and accurately switching the first illumination module 51 and the second illumination module 52 on or off. Preferably, induction control modules 56 can be fixedly installed on the sides of the mounting bracket 53 near the feeding end and the discharging end respectively, with their working ends facing the placing table 3. Specifically, the induction control module 56 at the feeding end switches the first illumination module 51 and the second illumination module 52 on when it first senses the object to be scanned, illuminating the imaging area of the scanning unit 4; while the two illumination modules are working, the induction control module 56 at the discharging end switches them off when it again detects the bare surface of the placing table 3 not covered by the object to be scanned. By providing the induction control modules 56, the electric energy consumed by supplementary lighting can be reduced to a certain extent, and the service life of the illumination unit 5 is effectively prolonged.
Preferably, the suspension adjustment assembly 6 includes a first adjustment mechanism 61 and a second adjustment mechanism 62. Two first adjustment mechanisms 61 disposed in parallel are connected to the scanning motion body 10. Preferably, the working end of the first adjustment mechanism 61 is linearly movable in the vertical direction perpendicular to the working surface of the placing table 3. A second adjustment mechanism 62 is fixedly mounted on the working end of the first adjustment mechanism 61. Preferably, the plurality of second adjustment mechanisms 62 correspond to the scanning units 4 arranged side by side, so that each second adjustment mechanism 62 can drive the scanning unit 4 connected to it in a secondary adjustment of the suspension height, positioning the focus of each scanning unit 4 on the uneven surface of the article to be scanned.
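The two-stage height scheduling can be sketched as a coarse carriage height from the nominal object thickness (first adjustment mechanism 61) plus a per-unit trim that follows the local surface relief (second adjustment mechanisms 62). All names and the control law itself are illustrative assumptions, not the patent's actual controller:

```python
def suspension_targets(thickness, reliefs, working_distance):
    """Two-stage suspension height scheduling sketch.

    thickness        : nominal thickness of the scanned object
    reliefs          : per-unit local surface relief above that thickness
    working_distance : distance each lens must keep above the surface
    All heights are measured from the placing table in one length unit.
    Stage 1: one coarse height shared by the whole module group.
    Stage 2: a per-unit trim so each lens follows the local relief.
    """
    coarse = thickness + working_distance        # shared carriage height
    fine_trims = list(reliefs)                   # per-unit secondary trim
    targets = [coarse + t for t in fine_trims]   # absolute lens heights
    return coarse, targets
```

The fine trims are exactly the surface undulations, so the matrix of lenses reproduces the relief of the object, as described for the second adjustment mechanisms 62.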
Preferably, a purge unit 7 capable of purging the working surface of the placing table 3 is further mounted on the scanning motion body 10. The purge unit 7 is disposed upstream of the illumination unit 5, so that when the scanning motion body 10 translates, the purge unit 7 first removes dust, impurities and moisture from the surface of the object placed on the placing table 3 before the scanning unit 4 acquires image information. Preferably, the purge unit 7 can adjust its blowing volume according to the scanning requirement.
As shown in fig. 2, after the scanning unit 4 acquires the surface data of the object, the processing unit 8 processes these data to generate an image of high resolution and sharpness. Preferably, the processing unit 8 can selectively perform a guided-filter-based full-focus fusion process and/or a deep-neural-network-based high spatio-temporal resolution full-focus imaging operation under the control of the control unit 9. Preferably, the control unit 9 can also transmit the image data processed by the processing unit 8 to a terminal or a display screen for display.
Example 3
This embodiment is a further improvement of embodiment 2, and the repeated contents will not be described again.
In the prior art, a single shooting operation must sacrifice the accuracy of some parameter when acquiring image information. For example, there is a trade-off between spatial resolution and angular resolution: spatial resolution tends to be sacrificed while angular resolution is acquired, so that only low-resolution images can be realized. These drawbacks of the prior art limit the application of light field imaging in rapid imaging techniques.
Preferably, limited by the imaging system architecture, the acquired scene information is only sharp within a certain range; beyond this range objects are blurred by defocus. This range is defined as the depth of field. Because of this limitation, objects within the depth of field are in sharp focus while objects outside it are blurred, so the acquired images are only partially in focus. In machine-vision applications such as object detection and classification recognition, the requirements on image depth of field are extremely high, and only a large depth of field ensures that as many objects as possible in the scene are in focus. Since the limited depth of field affects both the acquisition depth and the imaging quality of scene information, conventional cameras typically require a trade-off between depth of field and signal-to-noise ratio: a camera has a fixed, single focal length, and the degree of blurring outside the depth of field depends on the focal length and the aperture size. Increasing the depth of field by reducing the aperture lowers the signal-to-noise ratio, whereas enlarging the aperture raises the signal-to-noise ratio but shrinks the depth of field; reducing the aperture size is therefore not the best way to increase the depth of field, especially in a dark-field environment, where the inevitably degraded imaging of a small aperture seriously affects the result. To mitigate this, the exposure time is often increased to raise the image intensity; however, long-exposure imaging performs poorly in dynamic and high-speed imaging and cannot realize fast, sharp, large-depth-of-field imaging of dynamic scenes.
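The aperture/depth-of-field trade-off above can be made concrete with the textbook thin-lens approximation (this formula is standard optics, not taken from the patent): for subject distance much larger than the focal length, total depth of field is roughly 2·N·c·u²/f².

```python
def depth_of_field(f_mm, N, c_mm, u_mm):
    """Approximate total depth of field (thin lens, u >> f):
        DOF ~= 2 * N * c * u^2 / f^2
    f_mm : focal length, N : f-number, c_mm : circle of confusion
    diameter, u_mm : subject distance (all lengths in mm).
    Doubling N (halving the aperture diameter) doubles the DOF but cuts
    the gathered light by 4x, which is the signal-to-noise trade-off
    described in the text.
    """
    return 2.0 * N * c_mm * u_mm ** 2 / f_mm ** 2
```

For a 50 mm lens at f/8, a 0.03 mm circle of confusion and a 1 m subject distance, the approximation gives about 192 mm of depth of field, doubling to about 384 mm at f/16.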
In order to effectively increase the imaging depth of field, a full-focus imaging method has been developed, namely an imaging method in which objects with various depths in a scene are in focus.
Further, the multi-focus image acquisition methods of the related art have drawbacks: manual focusing has low accuracy, is slow, and yields low image-acquisition efficiency; automatic focusing based on a mechanical structure improves accuracy relative to manual focusing, but the number of multi-focus images it yields is limited. Multi-focus image acquisition methods based on specific optical devices each have advantages and disadvantages: for example, a focusing method based on an electrically controlled liquid-crystal zoom lens adds a micro-actuation structure and increases system complexity; the DMD-based partial focusing method has a higher cost; and the spectral scanning camera and the color-filter-aperture method are limited by their core devices and cannot realize fast, high-precision acquisition of large numbers of multi-focus images, which is unfavorable for multi-focus image fusion. Furthermore, the four methods described above share a common disadvantage: a limited focal depth range.
Preferably, in order to solve the problems of existing full-focus fusion algorithms, the processing unit 8 of the present application applies a guided-filter-based full-focus fusion algorithm to fuse the multi-focus images. The guided filter is a typical nonlinear filter that preserves edge information to the maximum extent and enables high-speed, high-quality full-focus fusion. To improve the quality of multi-focus image acquisition, the light field imaging method acquires the multi-focus images in a single exposure, ensuring consistent multi-focus background information and complete full-focus fusion input; it also has the advantages of low cost, a simple system structure and a wide depth-extension range, effectively realizing high-quality multi-focus image acquisition. Combining the guided-filter fusion algorithm with light field imaging forms a single-exposure, guided-filter-based light field full-focus fusion processing system, realizing high-quality, large-depth full-focus image acquisition.
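A minimal sketch of guided-filter-based full-focus fusion follows. The guided filter itself is the standard local linear model (q = a·I + b over box windows); the focus measure (a Laplacian-like response) and the weighted-sum fusion rule are assumptions for illustration, not the patent's exact algorithm:

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1)^2 window via an integral image, edge-padded."""
    k = 2 * r + 1
    xp = np.pad(x, r, mode='edge')
    c = np.cumsum(np.cumsum(xp, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # prepend a zero row/column
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def guided_filter(I, p, r=2, eps=1e-4):
    """Edge-preserving filter: output q = mean(a) * I + mean(b), where
    a, b fit p linearly from guide I in each local window."""
    mI, mp = box_mean(I, r), box_mean(p, r)
    cov = box_mean(I * p, r) - mI * mp
    var = box_mean(I * I, r) - mI * mI
    a = cov / (var + eps)
    b = mp - a * mI
    return box_mean(a, r) * I + box_mean(b, r)

def fuse_all_in_focus(images, r=2, eps=1e-4):
    """Fuse partially focused images: a per-image focus measure is
    smoothed by guided filtering (source image as guide), and the
    normalized maps weight a pixel-wise sum."""
    lap = lambda x: np.abs(4 * x
                           - np.roll(x, 1, 0) - np.roll(x, -1, 0)
                           - np.roll(x, 1, 1) - np.roll(x, -1, 1))
    weights = [guided_filter(im, lap(im), r, eps) for im in images]
    w = np.clip(np.stack(weights), 1e-8, None)
    w /= w.sum(axis=0)
    return (w * np.stack(images)).sum(axis=0)
```

Because the weight maps are filtered with the source images as guides, the fusion boundaries follow real image edges, which is the edge-preserving property the text attributes to the guided filter.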
Preferably, to address the defect that spatial resolution is sacrificed while angular resolution is acquired during scanning, leaving the acquired image with low spatial resolution, the processing unit 8 introduces a high spatio-temporal resolution full-focus imaging system based on a deep neural network to acquire image data with both high temporal and high spatial resolution. For the existing scanning unit 4, when multi-focus images are acquired, the intensity information and angle information of the scene light are captured simultaneously in a single exposure; although the acquired information has high temporal resolution, the multi-focus images obtained through light field imaging are limited by the number of microlens units and corresponding sensor elements, and their spatial resolution is often low, so a high-resolution full-focus image cannot be obtained directly. Therefore, single-image super-resolution of the low-resolution light field image is realized by adding a convolutional deep neural network. Preferably, a given scanning unit 4 supplements the low-resolution light field portion it acquired in the edge area of the in-focus image with the image edge information acquired by the adjacent scanning unit 4, so that the missing image information of the edge area is filled in to improve the single-image resolution. Preferably, the network up-samples the low-resolution light field portion of the in-focus image, effectively improving the resolution of the multi-focus images; finally, a full-focus image of high spatio-temporal resolution is obtained through the guided-filter-based full-focus fusion algorithm, realizing high-resolution, large-depth-of-field acquisition of scene information.
Preferably, by performing light field full-focus fusion based on a single exposure with a guided filter, the processing unit 8 overcomes the low precision, high complexity, inconsistent background information and high-frequency information loss of traditional multi-focus fusion methods, acquires full-focus images in a single exposure, and effectively captures scene information with a large depth of field. Further preferably, on the basis of the guided-filter full-focus fusion, the processing unit 8 uses the convolutional deep neural network to up-sample the low-resolution light field portion of the in-focus image, effectively improving the spatial resolution of the full-focus image, thereby realizing full-focus imaging with high spatio-temporal resolution and achieving the goal of collecting high-resolution full-focus images over a large depth of field.
Preferably, a convolutional deep neural network configured for single-image super-resolution of the low-resolution light field image performs the up-sampling operation, effectively improving the resolution of the multi-focus images and overcoming the defect that the low spatial resolution of multi-focus images obtained by existing light field imaging prevents the guided filter from producing a high-resolution full-focus image; the multi-focus images after single-image super-resolution are then processed by the guided-filter-based full-focus fusion operation to obtain a full-focus image with high temporal and spatial resolution. Preferably, super-resolution processing reconstructs a high-resolution image from at least one low-resolution image, so that the original image acquired by the scanning unit 4 can, after processing, be output as a high-resolution image carrying high-quality perceptual information.
Preferably, compared with a conventional depth-value indexing method and a wavelet fusion algorithm, the guided filter module has better edge-preserving capability and better ensures that the high-frequency detail information of the fused result is not lost, so it achieves a better fusion effect than existing image fusion. Further preferably, with the guided filter module in place to improve the temporal resolution of multi-focus image collection, a convolutional deep neural network capable of single-image super-resolution is introduced to up-sample the low-resolution light field images, effectively improving the resolution of the multi-focus images and overcoming the defect that their low spatial resolution prevents the guided filter from producing high-resolution full-focus images.
Preferably, the construction process of a low-resolution image includes: first convolving the high-resolution image with a blur kernel, then downsampling the result, and finally adding a noise factor. A difficulty of the single-image super-resolution algorithm is that the same low-resolution input may correspond to a plurality of different high-resolution output images.
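The three-step construction of a low-resolution training input (blur, downsample, add noise) can be sketched directly. The box blur kernel, scale factor and Gaussian noise model are illustrative assumptions; the patent does not specify the kernel or noise type:

```python
import numpy as np

def degrade(hr, scale=2, blur_r=1, noise_sigma=0.01, rng=None):
    """Build a low-resolution image from a high-resolution one, following
    the three steps in the text:
      (1) convolve with a blur kernel (here a (2*blur_r+1)^2 box kernel),
      (2) downsample by `scale`,
      (3) add a noise factor (Gaussian, std = noise_sigma).
    """
    rng = rng or np.random.default_rng(0)
    k = 2 * blur_r + 1
    # (1) blur: average over a k x k neighbourhood of the edge-padded image
    hp = np.pad(hr, blur_r, mode='edge')
    blurred = sum(np.roll(np.roll(hp, i, 0), j, 1)
                  for i in range(-blur_r, blur_r + 1)
                  for j in range(-blur_r, blur_r + 1))
    blurred = blurred[blur_r:-blur_r, blur_r:-blur_r] / (k * k)
    # (2) decimate, then (3) add noise
    lr = blurred[::scale, ::scale]
    return lr + rng.normal(0.0, noise_sigma, lr.shape)
```

The ill-posedness noted in the text follows directly: distinct high-resolution inputs can degrade to the same low-resolution output, so the inverse mapping is one-to-many.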
Preferably, the convolutional neural network provided above may include a compression module, a reconstruction module and a loss module. Preferably, the compression module has three sub-modules, each consisting of at least two convolution layers and one max-pooling layer. Preferably, the data output by the compression module are up-sampled in the reconstruction module, and the spatial smoothness of the finally generated image is improved after passing through the loss module. Preferably, the reconstruction module consists mainly of Residual Dense Blocks (RDB); image fusion and mixing are realized in the main structure by adding a convolution layer and reducing channels, so that the whole structure can flexibly realize up-sampling by adopting a pixel-shuffle strategy. Preferably, in the loss module, a pre-trained Visual Geometry Group network with 16 weight layers (VGG-16) takes the input and outputs the ReLU activation values at different depth layers while the L1 loss is calculated; the function of the total loss is to improve the spatial smoothness of the generated image.
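The pixel-shuffle up-sampling named above is a fixed rearrangement, easily shown without any deep-learning framework: a feature map with C·r² channels is reshaped so that channel depth becomes spatial resolution. This is the standard operation (as in sub-pixel convolution), sketched here in numpy:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) feature map into (C, H*r, W*r).

    No learned parameters: out[c, y*r+i, x*r+j] = in[c*r*r + i*r + j, y, x],
    i.e. each group of r*r channels becomes one r x r spatial block, which
    is how the reconstruction module trades channels for resolution.
    """
    c2, h, w = x.shape
    c = c2 // (r * r)
    return (x.reshape(c, r, r, h, w)
             .transpose(0, 3, 1, 4, 2)   # -> (C, H, r, W, r)
             .reshape(c, h * r, w * r))
```

A preceding convolution only has to produce r² times as many channels; the shuffle then yields the r-times-larger image, which keeps the up-sampling factor flexible as the text notes.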
Example 4
CN104614339 discloses a three-dimensional terahertz imaging method for oil paintings, intended to provide a scientific basis for repairmen drawing up a restoration scheme. However, according to that patent, after the oil painting is reconstructed into a three-dimensional image from the position information of points in the different paint layers of the imaging area and the light intensities of the corresponding reflected signals, the repairman facing the three-dimensional image on a display screen cannot carry out the restoration directly: he must first practice repeatedly on a printed duplicate before attempting any actual restoration of the genuine, possibly valuable, article. Yet, referring to the scan diagram shown in fig. 3 of that patent, the three-dimensional image determined by the scan cannot be printed to give an image close to the original; it can only give a true three-dimensional image for judging each trace of handwriting and its pigment in the relevant depth direction. By contrast, with the teachings of embodiments 1 through 3 of the present application, finer images can be scanned and printed, so that the repairman performs restoration exercises on the printed matter while comparing it with the three-dimensional image formed according to CN104614339.
In the present application, when a plurality of scanning units 4 as shown in figs. 1 and 3 acquire information over a large-format area to be scanned (an oil painting) in parallel, each scanning unit 4 determines its own height position from the three-dimensional image determined by the three-dimensional terahertz imaging method (CN104614339); in other words, the height position of each scanning unit 4 during scanning is dynamically adjusted in real time according to a predetermined three-dimensional image.
The scanning units 4 mounted on the suspension adjustment assembly 6 may also preferably be arranged in a matrix, so that in a single scanning operation they can accurately acquire all the data of the object surface within a corresponding rectangular area. Specifically, when scanning the uneven surface within one rectangular area of the object, the control unit 9 determines the height positions of the scanning units 4 at the different positions of that area from the three-dimensional image determined by the three-dimensional terahertz imaging method (CN104614339). By controlling the suspension adjustment assembly 6, the control unit 9 raises or lowers each scanning unit 4 to adjust its working height, so that the plurality of scanning units 4 arranged in a matrix mirror the undulations of the object surface and the focus of each scanning unit 4 is positioned on the object surface at its corresponding position.
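Deriving per-unit heights from the pre-acquired three-dimensional image can be sketched as: partition the surface map into the rectangular patch covered by each matrix-arranged unit, and set that unit's height to the patch's surface height plus the lens working distance. The patch-mean rule and all names are illustrative assumptions, not the patent's exact control law:

```python
import numpy as np

def unit_heights(depth_map, rows, cols, working_distance):
    """Per-unit target heights from a pre-acquired 3-D surface map.

    depth_map        : 2-D array of surface heights over the area
    rows, cols       : layout of the matrix of scanning units
    working_distance : distance each lens keeps above the surface
    Each unit's patch is the rectangular block of the map it covers;
    its target height is the patch's mean surface height plus the
    working distance, so the lens matrix follows the surface relief.
    """
    h, w = depth_map.shape
    targets = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = depth_map[i * h // rows:(i + 1) * h // rows,
                              j * w // cols:(j + 1) * w // cols]
            targets[i, j] = patch.mean() + working_distance
    return targets
```

Recomputing this table for the next rectangular area during each stepping translation gives the real-time dynamic adjustment described in the following paragraphs.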
Preferably, the control unit 9 may dynamically adjust the height positions of the plurality of scanning units 4 in real time according to the movement of the scanning motion body 10. After the scanning units 4 collect the data of one rectangular area of the object surface, the scanning motion body 10 drives them in a stepping translation, so that the matrix of scanning units 4 can scan a rectangular area not yet covered. Preferably, before the scanning units 4 perform a scan, the control unit 9 adjusts their height positions based on the pre-acquired three-dimensional image of the object surface in that area. In addition, when the processing unit 8 processes the collected surface data, it corrects the image collected by each scanning unit 4 according to the predetermined three-dimensional image, in particular the viewing range and depth of field, so that printing equipment associated with the scanning device can accurately reproduce the surface shape and color of the original object.
Specifically, while the scanning motion body 10 drives the scanning units 4 through a stepping motion after they complete the scanning of one rectangular area, the control unit 9 can locate, within the whole three-dimensional image of the scanned object received in advance, the three-dimensional image of the next rectangular area to be scanned. The height positions of the matrix of scanning units 4 are thus adjusted while the scanning motion body 10 moves, and as soon as it has carried the scanning units 4 to the next area, they complete the scanning of that area immediately; the scanning units 4 can therefore accurately finish scanning a rectangular area within the short dwell time after each single stepping translation of the scanning motion body 10. Throughout the continuous movement of the scanning motion body 10, the control unit 9 correspondingly controls the scanning units 4 to make adaptive working-position adjustments, so that the continuous movement of the scanning motion body 10 and the real-time dynamic adjustment of the scanning units 4 together accomplish the large-format scanning work.
Preferably, the matrix arrangement of the scanning units 4 can be matched to the movement of the scanning motion body 10, so that the period in which the plurality of scanning units 4 adjust their respective working heights according to the pre-received three-dimensional image determined by the three-dimensional terahertz imaging method (CN104614339) corresponds exactly to the movement period of the stepping scanning motion body 10. Each scanning unit 4 can thus set its scanning height in advance to the height required for the corresponding surface position, and then complete the scanning of one rectangular area of the object in a stationary state. After one rectangular area has been scanned, the scanning motion body 10 drives the scanning units 4 through the next step in the unchanged direction of motion while the scanning units 4 synchronously adjust their height positions and complete high-precision focusing in advance; once the scanning motion body 10 has translated by one rectangle width, the scanning units 4 rapidly complete high-precision scanning and data collection within the dwell time of the stepping gap.
In the scanning mode of the application, the plurality of scanning units 4 arranged in a matrix scan synchronously, and the scan can be completed in the instant the scanning motion body 10 dwells. The dwell gaps between successive stepping motions of the scanning motion body 10 are therefore shortened as far as possible, so that the concatenated stepping motions approximate a continuous motion and the height adjustment of the scanning units 4 is likewise a continuous process; compared with the high-speed scanning mode of the prior art, the scanning precision is greatly improved while the scanning efficiency is guaranteed.
Compared with the prior art, in which the scanning of the object surface is completed by continuous high-speed movement of either the object or the scanning device, setting the scanning motion body 10 to a stepping motion lets the scanning units 4 scan the object surface in a stationary state, greatly improving the focusing precision of the scanning units 4 and the accuracy of the scanning data; scanning in a stationary state yields markedly higher quality than scanning a moving surface. Moreover, immediately after a scan the processing unit 8 can verify whether shadows or the like exist in the scanned image, and the control unit 9 adjusts the motion state of the scanning motion body 10 according to the verification result so that the scanning units 4 can rescan the area at once, avoiding the laborious work of verifying and correcting only after all areas have been scanned and the images spliced. The scanning of one rectangular area is a block scan performed by the plurality of scanning units 4; the process is extremely brief, so no particular scanning time needs to be set, and only the scanning working instant needs to match the stepping movement period of the scanning motion body 10, so that the working periods of the scanning units and the stepping period of the scanning motion body 10 coincide.
Preferably, when the scanning unit 4 needs to rescan the same area, the control unit 9 extends the dwell time of the scanning movement body 10 by one step period, and at the same time adjusts the working position of the illumination unit 5 to eliminate the scanning shadow present in that area of the article surface. The scanning unit 4 performs the second scan after the illumination unit 5 has completed this readjustment.
Preferably, the working position of the illumination unit 5 is likewise derived from the three-dimensional image determined by the three-dimensional terahertz imaging method (CN 104614339). Specifically, while adjusting the height position of the scanning unit 4, the control unit 9 adaptively adjusts the illumination height and angle of the illumination unit 5 according to the three-dimensional image of the object to be scanned, so that the illumination unit 5 eliminates by illumination any shadows that may exist on the surface of the rectangular area being scanned. Because the illumination unit 5 and the scanning unit 4 move synchronously according to the three-dimensional image received in advance by the control unit 9, high-precision scanned images can be acquired efficiently and accurately, and the time consumed by the adjustment operations of conventional step-by-step scanning is avoided. In particular, the height adjustment of the matrix-arranged scanning units 4 can overlap with the single-step movement time of the scanning movement body 10, so that the scanning of each rectangular area is completed while the scanning movement body 10 maintains a quasi-continuous motion and the scanning units 4 undulate continuously, which greatly improves scanning efficiency while effectively ensuring scanning precision.
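The coordination described above — step, dwell, scan, immediately verify, and rescan after relighting — can be sketched as the following control loop. This is an illustrative sketch only: all names (`scan_surface`, `has_shadow`, etc.) are hypothetical, and the hardware interactions are replaced by simple stand-ins; the patent does not disclose an implementation.

```python
# Illustrative sketch of the step-dwell-scan-verify loop described above.
# All names are hypothetical; hardware I/O is replaced by stand-ins so the
# control logic can run anywhere.

def has_shadow(image):
    """Stand-in for the processing unit's shadow check: flag any zero pixel."""
    return any(p == 0 for row in image for p in row)

def scan_block(step, lighting_ok):
    """Stand-in for one synchronized matrix scan of a rectangular block."""
    # A shadowed block contains a zero pixel until lighting is readjusted.
    return [[255, 255], [255, 255 if lighting_ok else 0]]

def scan_surface(n_steps, shadowed_steps):
    """Step the moving body; scan at each dwell; rescan once if a shadow is found."""
    images, rescans = [], 0
    for step in range(n_steps):
        lighting_ok = step not in shadowed_steps
        img = scan_block(step, lighting_ok)
        if has_shadow(img):                # processing unit verifies immediately
            rescans += 1                   # control unit extends the dwell...
            img = scan_block(step, True)   # ...and rescans after relighting
        images.append(img)
    return images, rescans
```

The point of the loop is that verification and correction happen inside the dwell of each step, so no block ever reaches the stitching stage with a shadow in it.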
It should be noted that the above-described embodiments are exemplary, and that a person skilled in the art, in light of the present disclosure, may devise various solutions that likewise fall within the scope of the present disclosure. It should be understood by those skilled in the art that the present description and drawings are illustrative and do not limit the claims. The scope of the invention is defined by the claims and their equivalents. Throughout this document, the word "preferably" merely denotes one alternative and is not to be construed as a necessary feature, so the applicant reserves the right to forego or delete the relevant preferred feature at any time.

Claims (10)

1. A multi-lens scanning device comprising at least a placing table (3) for fixing and placing an object to be scanned, a scanning unit (4) and an illumination unit (5), wherein the scanning unit (4) is connected to a platform main body (1) through a rack (2), the scanning unit (4) completes the scanning of the object to be scanned placed on the placing table (3) by directional translation, and at the same time the scanning unit (4) can adjust its working position according to the uneven surface of the object to be scanned, characterized in that a plurality of scanning units (4) are suspended side by side above the platform main body (1) by a suspension adjusting component (6), and the distance between two adjacent scanning units (4) is set such that their scannable areas at least partially overlap, so that each scanning unit (4) can supplement and correct the in-focus image edge-area data acquired by the scanning units (4) adjacent to it;
The plurality of scanning units (4) arranged side by side can transmit the acquired image data to the processing unit (8) for selective image processing, so that a full-focus image with high spatio-temporal resolution is acquired.
2. The multi-lens scanning device according to claim 1, characterized in that the plurality of scanning units (4) complete the acquisition of image data of the large-format surface of the object to be scanned while the directional translation takes place, and the scanning units (4) transmit the image data they acquire to the processing unit (8) in an orderly arrangement, the processing unit (8) selectively performing a guided-filter-based full-focus fusion process and/or a deep-neural-network-based high spatio-temporal resolution full-focus imaging operation under the control of the control unit (9).
3. The multi-lens scanning device as claimed in claim 2, characterized in that the processing unit (8) performs the guided-filter-based full-focus fusion process in such a way that the edge information of the images acquired by each single scanning unit (4) is retained to the maximum extent, and the strip-shaped image data are arranged in order, so that two adjacent strips of image data supplement and correct each other's edge information by mutual verification while the whole image is stitched.
4. The multi-lens scanning apparatus as claimed in claim 3, characterized in that the processing unit (8) employs a convolutional deep neural network to increase the spatial resolution of the low-resolution light-field partial in-focus image, so as to accomplish single-image super-resolution of the low-resolution image, wherein,
any one scanning unit (4) supplements the edge area of the low-resolution light-field partial in-focus image it has acquired with the image edge information acquired by the adjacent scanning units (4), so that the image information missing from the image edge area is filled in to improve the resolution of the single image.
5. The multi-lens scanning device as claimed in claim 4, characterized in that the processing unit (8) accomplishes single-image super-resolution of the low-resolution light-field image by upsampling the low-resolution light-field part of the in-focus image.
6. The multi-lens scanning device as claimed in claim 1, characterized in that the processing unit (8) supplements the edge region of the low-resolution light-field partial in-focus image acquired by a designated scanning unit (4) with the image edge information acquired by the adjacent scanning units (4), so that the image information missing from the edge region of the image acquired by the designated scanning unit (4) is filled in.
7. The multi-lens scanning apparatus as claimed in claim 6, characterized in that the convolutional deep neural network of the processing unit (8) comprises at least a compression module, a reconstruction module and a loss module, wherein the data output by the compression module can be upsampled in the reconstruction module, and the reconstruction module mixes and reduces channels by adding convolutional layers to its backbone structure, so that the whole structure can flexibly implement upsampling by adopting a pixel-shuffling strategy.
8. The multi-lens scanning device according to claim 7, characterized in that the illumination unit (5) can adjust its light-emission angle according to the working position of the scanning unit (4) and the surface condition of the object to be scanned, so that the scanning unit (4) obtains shadow-free surface information of the object to be scanned;
the illumination unit (5) comprises at least a first illumination module (51) and a second illumination module (52) arranged on either side of the scanning unit (4), and the first illumination module (51) and the second illumination module (52) can construct two illumination rays at an included angle to supplement light on the surface of the object to be scanned.
9. A scanning method of a multi-lens scanning device, the scanning method comprising at least the steps of:
S1: a large-format article to be scanned is placed on the placing table (3); the scanning unit (4) scans the large-format article placed on the placing table (3) under the controlled directional translation of the scanning motion body (10), so that the scanning unit (4) collects a plurality of linear image strips block by block, while the scanning unit (4) can adjust its own working position according to the uneven surface of the article to be scanned;
S2: acquiring the light field information of the region to be measured by means of light field imaging;
S3: decoding the acquired light field information and obtaining low-resolution multi-focus images using a digital refocusing algorithm;
S4: upsampling the multi-focus images with a deep neural network to obtain high-resolution light-field multi-focus images;
S5: performing full-focus fusion on the high-resolution multi-focus images with a guided-filter-based full-focus fusion algorithm to synthesize a high-resolution, large-depth-of-field full-focus image.
10. The scanning method of claim 9, wherein the high-resolution light-field multi-focus image is obtained by upsampling the low-resolution light-field image via a convolutional neural network structure with a convolutional compression module.
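Claims 7 and 10 describe upsampling in the reconstruction module via a pixel-shuffling strategy: a convolution widens the channel count by a factor of r², and the extra channels are then rearranged into an r× larger spatial grid. A minimal sketch of that rearrangement in NumPy follows (the function name is our own; the patent does not disclose an implementation, and this shows only the channel-to-space layout, not the full network):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) array into (C, H*r, W*r).

    Each group of r*r channels is spread over an r x r spatial
    neighborhood, so a convolution that produced C*r*r channels
    effectively upsamples the image by a factor of r per axis.
    """
    c_r2, h, w = x.shape
    assert c_r2 % (r * r) == 0, "channel count must be divisible by r*r"
    c = c_r2 // (r * r)
    # (C, r, r, H, W) -> (C, H, r, W, r) -> (C, H*r, W*r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)
    return x.reshape(c, h * r, w * r)
```

The rearrangement matches the sub-pixel convolution layout used by `torch.nn.PixelShuffle`: output pixel (h·r+i, w·r+j) of channel c is taken from input channel c·r²+i·r+j at position (h, w), so no pixel values are interpolated — only relocated.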
CN202210670745.3A 2022-06-13 2022-06-13 Multi-lens scanning device and scanning method thereof Active CN115065761B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210670745.3A CN115065761B (en) 2022-06-13 2022-06-13 Multi-lens scanning device and scanning method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210670745.3A CN115065761B (en) 2022-06-13 2022-06-13 Multi-lens scanning device and scanning method thereof

Publications (2)

Publication Number Publication Date
CN115065761A CN115065761A (en) 2022-09-16
CN115065761B true CN115065761B (en) 2023-09-12

Family

ID=83199443

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210670745.3A Active CN115065761B (en) 2022-06-13 2022-06-13 Multi-lens scanning device and scanning method thereof

Country Status (1)

Country Link
CN (1) CN115065761B (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5301244A (en) * 1991-07-18 1994-04-05 Eastman Kodak Company Computer input scanner incorporating multiple scanning modes
US5532846A (en) * 1995-06-29 1996-07-02 Agfa Division, Bayer Corporation Method and apparatus for positioning a focusing lens
WO2004097888A2 (en) * 2003-04-25 2004-11-11 Cxr Limited X-ray sources
JP3185858U (en) * 2013-05-02 2013-09-05 テスト リサーチ, インク. Automatic optical detection equipment
CN205961256U (en) * 2016-08-23 2017-02-15 北京龙日艺通数码印刷有限公司 Novel platform formula scanner
CN109064437A (en) * 2018-07-11 2018-12-21 中国人民解放军国防科技大学 Image fusion method based on guided filtering and online dictionary learning
CN109523458A (en) * 2018-05-24 2019-03-26 湖北科技学院 A kind of high-precision sparse angular CT method for reconstructing of the sparse induction dynamic guiding filtering of combination
AU2020100199A4 (en) * 2020-02-08 2020-03-19 Cao, Sihua MR A medical image fusion method based on two-layer decomposition and improved spatial frequency
CN111182238A (en) * 2019-11-15 2020-05-19 北京超放信息技术有限公司 High-resolution mobile electronic equipment imaging device and method based on scanning light field
CN111415297A (en) * 2020-03-06 2020-07-14 清华大学深圳国际研究生院 Imaging method of confocal microscope
CN111866370A (en) * 2020-05-28 2020-10-30 北京迈格威科技有限公司 Method, device, equipment, medium, camera array and assembly for synthesizing panoramic deep image
CN112686829A (en) * 2021-01-11 2021-04-20 太原科技大学 4D light field full-focus image acquisition method based on angle information
WO2021203883A1 (en) * 2020-04-10 2021-10-14 杭州思看科技有限公司 Three-dimensional scanning method, three-dimensional scanning system, and computer readable storage medium
WO2021253326A1 (en) * 2020-06-18 2021-12-23 深圳先进技术研究院 Domain transform-based method for reconstructing positron emission tomography image

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10848654B2 (en) * 2018-12-17 2020-11-24 Vergent Research Pty Ltd Oblique scanning aerial cameras
CN115442515B (en) * 2019-03-25 2024-02-02 华为技术有限公司 Image processing method and apparatus


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Application of super-resolution technology based on sub-pixel scanning in high-resolution X-ray microscopy; Zou Jing; Geng Xingjie; Liao Keliang; Xu Linyan; Hu Xiaodong; Acta Photonica Sinica (Issue 12); full text *

Also Published As

Publication number Publication date
CN115065761A (en) 2022-09-16

Similar Documents

Publication Publication Date Title
US7876948B2 (en) System for creating microscopic digital montage images
CN108873290B (en) Apparatus and method for scanning microscope slides
US20050041113A1 (en) Method and apparatus for recording a sequence of images using a moving optical element
JPH11164094A (en) Double lens type converging device for double plane type flat scanner
CN105530399B (en) Indoor footprint harvester based on linearly polarized light glancing incidence formula scan imaging method
CN108196357A (en) A kind of multi-angle illumination light source and the Fourier stacking imaging system based on this light source
CN110648301A (en) Device and method for eliminating imaging reflection
CN201681056U (en) Industrial high-resolution observation device for X-ray negative films
CN115065761B (en) Multi-lens scanning device and scanning method thereof
MXPA06010076A (en) Method and arrangement for imaging a primarily two-dimensional target.
CN115052077B (en) Scanning device and method
CN208937902U (en) Microfilm scanner
CN117054447A (en) Method and device for detecting edge defects of special-shaped glass
CN115103079B (en) Linear scanning device and scanning method thereof
CN113204107B (en) Three-dimensional scanning microscope with double objective lenses and three-dimensional scanning method
EP0967505A2 (en) Autofocus process and system with fast multi-region sampling
CN108259697A (en) High-definition picture scanning means
CN1504046A (en) Method for high resolution incremental imaging
DE19528244A1 (en) Copier
CN207677837U (en) Plane historical relic high-definition picture scanning system
CN217406629U (en) Time-sharing stroboscopic light source device of line scanning camera based on luminosity stereo
CN114152623B (en) Method and device for acquiring surface image of object with high light reflection surface
US20230232124A1 (en) High-speed imaging apparatus and imaging method
JP7235861B2 (en) High-throughput optical tomography imaging method and imaging system
CN115753775A (en) High-speed microscopic image acquisition method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant