CN115065761A - Multi-lens scanning device and scanning method thereof - Google Patents

Multi-lens scanning device and scanning method thereof

Info

Publication number
CN115065761A
Authority
CN
China
Prior art keywords
scanning
image
resolution
focus
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210670745.3A
Other languages
Chinese (zh)
Other versions
CN115065761B (en)
Inventor
张凌 (Zhang Ling)
陈天君 (Chen Tianjun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongyi Qihang Digital Technology Beijing Co., Ltd.
Original Assignee
Zhongyi Qihang Digital Technology Beijing Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongyi Qihang Digital Technology Beijing Co., Ltd.
Priority to CN202210670745.3A
Publication of CN115065761A
Application granted
Publication of CN115065761B
Legal status: Active


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00: Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/04: Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformation in the plane of the image
    • G06T3/40: Scaling the whole image or part thereof
    • G06T3/4053: Super resolution, i.e. output image resolution higher than sensor resolution
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/20: Image enhancement or restoration by the use of local operators
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00: Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/04: Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa
    • H04N1/0402: Scanning different formats; Scanning with different densities of dots per unit length, e.g. different numbers of dots per inch (dpi); Conversion of scanning standards
    • H04N1/042: Details of the method used
    • H04N1/044: Tilting an optical element, e.g. a refractive plate
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00: Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/04: Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa
    • H04N1/0402: Scanning different formats; Scanning with different densities of dots per unit length, e.g. different numbers of dots per inch (dpi); Conversion of scanning standards
    • H04N1/042: Details of the method used
    • H04N1/0443: Varying the scanning velocity or position
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00: Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/04: Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa
    • H04N1/047: Detection, control or error compensation of scanning velocity or position

Abstract

The invention relates to a multi-lens scanning device comprising at least a scanning unit and an illumination unit. A plurality of scanning units are suspended side by side above a platform main body by a suspension adjustment assembly, and the spacing between adjacent scanning units is set so that their scannable regions at least partially overlap. Each scanning unit can therefore use the data of at least part of its strip-shaped in-focus image to supplement and correct the data acquired by the adjacent scanning units in the edge regions of their in-focus images. The scanning units arranged side by side transmit the acquired image data to a processing unit for selective image processing. The invention also relates to a scanning method of the multi-lens scanning device, which effectively improves the spatial resolution of the all-in-focus image, realizes all-in-focus imaging with high spatio-temporal resolution, and achieves the aim of acquiring a high-resolution all-in-focus image over a large depth-of-field range.

Description

Multi-lens scanning device and scanning method thereof
Technical Field
The invention relates to the technical field of scanning equipment, in particular to a multi-lens scanning device and a scanning method thereof.
Background
At present there are many methods by which a scanning device can acquire multi-focus images; the most common are manual focusing, automatic focusing based on a mechanical structure, and focusing based on dedicated devices. Manual focusing acquires partially focused images at different focal depths by adjusting the focus position by hand. It can adapt the focus position to actual requirements and therefore suits many scenes, but the uncertainty of manual operation means the focusing accuracy cannot be guaranteed, the number of partially focused images that can be acquired is limited, and the repeated refocusing is time-consuming and labour-intensive, so image acquisition efficiency is low. Automatic focusing based on a mechanical structure controls the change of focal length with a preset program: the position of the sensor or of the main imaging lens is adjusted by mechanical motion to achieve relatively accurate focusing and to acquire partially focused images. However, because multiple exposures are taken as the sensor (or main lens) position changes, the background and noise information of each partially focused image is inconsistent, which severely degrades the quality of the final all-in-focus fusion.
A conventional deep-learning single-image super-resolution method must first subsample a high-resolution image to obtain the corresponding low-resolution image, and build a training data set of high/low-resolution pairs through a large number of such subsampling operations; establishing a reasonable training set therefore presupposes that the high-resolution images are available in advance. Moreover, when a low-resolution image is reconstructed from a single-exposure light-field image, the matching high-resolution image must be acquired with a separate single-lens reflex camera. In the usual reconstruction setting, the input is a low-resolution image obtained by subsampling the original high-resolution image and the output is the high-resolution image reconstructed by deep learning, so the output is intrinsically correlated with the input. In practice, however, it is difficult to give the SLR camera and the light-field camera the same field angle and focus position, so the required training set of matched high- and low-resolution image pairs is difficult to acquire.
Patent document CN113689334A discloses a laser imaging apparatus and a laser imaging control method intended to reduce the large laser imaging error of low-resolution images and to improve laser imaging accuracy. The disclosed apparatus raises the resolution of the original binary dot-matrix image with an image interpolation algorithm: the distance between adjacent pixels decreases, and with the same number of protruding pixels at the edge of a curved or oblique line image, the offset of those edge pixels is reduced and imaging accuracy improves. Although interpolation raises the nominal resolution of the dot-matrix image, the spatial and temporal resolution of the actually acquired image remains low, and the output image still shows large imaging errors.
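The limitation noted above can be seen directly: interpolation raises the pixel count without adding real detail, because every new pixel is a weighted average of existing ones. The sketch below (an illustrative helper, not code from the cited patent) implements plain bilinear upscaling.

```python
import numpy as np

def bilinear_upscale(img, scale):
    """Bilinear interpolation: more pixels, but no new high-frequency detail."""
    h, w = img.shape
    # sample coordinates in the source grid, clipped to the image border
    ys = np.clip((np.arange(h * scale) + 0.5) / scale - 0.5, 0, h - 1)
    xs = np.clip((np.arange(w * scale) + 0.5) / scale - 0.5, 0, w - 1)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    a = img[np.ix_(y0, x0)]; b = img[np.ix_(y0, x1)]
    c = img[np.ix_(y1, x0)]; d = img[np.ix_(y1, x1)]
    top = a * (1 - wx) + b * wx
    bot = c * (1 - wx) + d * wx
    return top * (1 - wy) + bot * wy
```

Since the output is a convex combination of input pixels, edges are smoothed rather than sharpened, which is exactly why the learning-based upsampling discussed later is needed.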
Therefore, to overcome the low spatial resolution of light-field imaging and the poor fusion quality at edge-position pixels when multi-focus images undergo all-in-focus fusion, a multi-lens scanning device and scanning method are needed that can perform single-image super-resolution on low-resolution light-field images and effectively fuse a plurality of partially focused images into an all-in-focus image.
Furthermore, because persons skilled in the art may understand the prior art differently, and because the inventors studied a large number of documents and patents whose details cannot all be listed here for reasons of space, the present invention should not be taken as lacking the features of that prior art; rather, it was developed with all of it in view, and the applicant reserves the right to add related prior art to this background section.
Disclosure of Invention
In view of the defects of the prior art, the technical solution of the present invention provides a multi-lens scanning device. The device includes at least a flat laying platform for fixing and placing the object to be scanned, and a scanning unit and an illumination unit connected to the platform main body through a rack. The scanning unit scans the object placed on the flat laying platform by directional translation. A plurality of scanning units are suspended side by side above the platform main body by a suspension adjustment assembly, and the spacing between adjacent scanning units is set so that their scannable regions at least partially overlap; each scanning unit can therefore use at least part of its strip-shaped in-focus image data to supplement and correct the edge-region data of the in-focus images acquired by its neighbours. The scanning units arranged side by side transmit the acquired image data to a processing unit for selective image processing, so that an all-in-focus image with high spatio-temporal resolution is obtained. The advantage of arranging a plurality of scanning units side by side at set working positions is that a single scanning unit only needs to acquire clear image data for at least a partial strip-shaped area. The scanning units are placed at equal intervals so that the clearly imaged area of any one unit overlaps the blurred imaged area of its neighbour; since the scanning areas of two adjacent units at least partially overlap, the edge-region data of adjacent in-focus images can be verified, supplemented and corrected, and the stitched large-format scan obtained by all-in-focus fusion has high resolution.
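The equal-interval, overlapping arrangement described above can be sketched numerically. The function names and the field-width and overlap parameters below are illustrative assumptions, not values from the patent:

```python
def unit_positions(n_units, field_width_mm, overlap_mm):
    """Centre positions of n scanning units placed at equal intervals so that
    adjacent scannable regions overlap by overlap_mm."""
    if not 0 < overlap_mm < field_width_mm:
        raise ValueError("overlap must be positive and smaller than the field width")
    pitch = field_width_mm - overlap_mm      # centre-to-centre spacing
    return [i * pitch for i in range(n_units)]

def total_coverage(n_units, field_width_mm, overlap_mm):
    """Width of the stitched strip covered by all units together."""
    pitch = field_width_mm - overlap_mm
    return field_width_mm + (n_units - 1) * pitch
```

For example, four units with a 100 mm field and 20 mm overlap sit at 0, 80, 160 and 240 mm and jointly cover a 340 mm strip, with each 20 mm overlap band imaged by two units for mutual verification.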
During image processing, a deep neural network capable of upsampling the low-resolution in-focus images is introduced, so that their resolution is raised by the network before all-in-focus fusion. The network can reconstruct the high-frequency details of the high-resolution image; all-in-focus fusion of the multi-focus images is then performed with a guided filter, yielding an all-in-focus image with rich high-frequency detail and sharp edges, and thereby achieving a wide extension of the depth of field and the acquisition of a high-quality, large depth-of-field all-in-focus image.
According to a preferred embodiment, a plurality of the scanning units acquire the image data of a large-format surface to be scanned while translating directionally, and transmit the acquired data to the processing unit in ordered arrangement; under the control of the control unit, the processing unit selectively performs guided-filter-based all-in-focus fusion and/or deep-neural-network-based all-in-focus imaging with high spatio-temporal resolution. The advantage of the adjustable scanning units is that their working height can be set from initial information such as the thickness of the object to be scanned, and the working height of one or more units can also be adjusted selectively according to the surface relief of different strip-shaped areas. In addition, by introducing the guided-filter all-in-focus fusion and the deep neural network, the processing unit can reconstruct the high-frequency details of the high-resolution partial images while raising the resolution of the low-resolution in-focus parts, which improves the all-in-focus fusion result, allows the information in overlapping image regions to be verified accurately during strip stitching, and gives the final all-in-focus image higher resolution.
According to a preferred embodiment, the processing unit performs the guided-filter-based all-in-focus fusion by stitching and fusing the sequentially arranged strip-image data in a way that preserves, to the greatest extent, the edge information of the image acquired by each single scanning unit; two adjacent strips thus supplement and correct each other through their respective edge information, in a mutually verified manner, while the whole image is stitched and fused. The advantage is that a guided-filter-based all-in-focus fusion algorithm preserves edge information to the greatest extent and achieves fast, high-quality fusion. To improve the quality of multi-focus image acquisition, the multi-focus images are acquired in a single exposure by light-field imaging, which guarantees consistent background information across the multi-focus images and complete input for the fusion, while offering low cost, a simple system structure and a wide depth-extension range. Combining the guided-filter fusion algorithm with light-field imaging forms a single-exposure light-field all-in-focus fusion system based on the guided filter, realizing the acquisition of high-quality, large depth-of-field all-in-focus images.
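A minimal guided filter in the sense used above can be sketched as follows. This is the standard textbook formulation with a naive box filter, offered as an illustration under stated assumptions rather than the patent's actual implementation:

```python
import numpy as np

def box(img, r):
    """Naive (2r+1) x (2r+1) mean filter with edge padding."""
    pad = np.pad(img, r, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

def guided_filter(I, p, r=2, eps=1e-4):
    """Filter p using I as the guide; edges present in I survive in the output."""
    mI, mp = box(I, r), box(p, r)
    cov_Ip = box(I * p, r) - mI * mp
    var_I = box(I * I, r) - mI * mI
    a = cov_Ip / (var_I + eps)       # per-window linear coefficient on the guide
    b = mp - a * mI
    return box(a, r) * I + box(b, r)
```

In all-in-focus fusion the guided filter is typically applied to the per-image weight maps, smoothing them while keeping them aligned with image edges, which is what preserves edge information during stitching and fusion.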
According to a preferred embodiment, the processing unit adds a convolutional deep neural network that raises the spatial resolution of the low-resolution light-field part of the in-focus image, completing single-image super-resolution of the low-resolution image. Any one of the scanning units supplements the edge area of the low-resolution light-field part of its in-focus image with the image edge information acquired by the adjacent scanning unit, so that the image information missing from the edge area is filled in and the resolution of the single image is improved. The advantage is that a deep-neural-network-based all-in-focus imaging system with high spatio-temporal resolution yields image data with both high temporal and high spatial resolution: the added convolutional network performs single-image super-resolution of the light-field low-resolution image by upsampling the low-resolution light-field part of the in-focus image, effectively raising the resolution of the multi-focus images, and the guided-filter-based fusion algorithm finally produces an all-in-focus image with high spatio-temporal resolution, realizing high-resolution, large depth-of-field acquisition of the surface of the object to be scanned. The single-exposure, guided-filter-based light-field all-in-focus fusion overcomes the low accuracy, high complexity, inconsistent background information and loss of high-frequency information of conventional multi-focus fusion methods, acquiring the all-in-focus image in a single exposure and effectively capturing large depth-of-field image information.
On the basis of the guided-filter all-in-focus fusion, the convolutional deep neural network upsamples the low-resolution light-field part of the in-focus image, effectively raising the spatial resolution of the all-in-focus image and achieving the goal of acquiring a high-resolution all-in-focus image over a large depth-of-field range.
According to a preferred embodiment, the processing unit performs single-image super-resolution of the light-field low-resolution image by upsampling the low-resolution light-field portion of the in-focus image. The advantage is that a convolutional deep neural network capable of single-image super-resolution upsamples the low-resolution light-field image, effectively raising the resolution of the multi-focus images and overcoming the defect that, owing to the low spatial resolution of multi-focus images obtained by conventional light-field imaging, a high-resolution all-in-focus image cannot be obtained with a guided filter.
According to a preferred embodiment, the processing unit supplements the low-resolution light-field part acquired by a designated scanning unit in the edge region of its in-focus image with the edge information of the image acquired by the adjacent scanning unit, so that the image information missing from that edge region is filled in.
According to a preferred embodiment, the convolutional deep neural network of the processing unit comprises at least a compression module, a reconstruction module and a loss module. The data output by the compression module are upsampled in the reconstruction module, which mixes and reduces channels by adding convolutional layers to its trunk structure, so that the whole network can flexibly realize upsampling with a pixel-shuffle strategy. The advantage is that the convolutional compression module compresses the image to a certain degree, and this compression reduces the difference between the bilinear-interpolation blur kernel and the real image blur kernel, overcoming the difficulty of acquiring a training data set in the prior art. In addition, the reconstruction module achieves better performance by adopting a structure based on a residual dense network (RDN). Finally, the application also uses a pre-trained convolutional neural network as a loss function to better preserve detail.
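The pixel-shuffle strategy mentioned above rearranges channel groups into spatial blocks, so an r-fold upsampling needs r*r output channels per image channel. A NumPy sketch (the module structure is the patent's; this code is an illustrative stand-in for the usual deep-learning-framework layer):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) array into (C, H*r, W*r):
    each group of r*r channels becomes one r x r spatial block."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # -> (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)
```

Because the rearrangement is a pure reshape, the preceding convolutions decide what each sub-pixel looks like, which is what makes the strategy flexible for different upsampling factors.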
According to a preferred embodiment, the illumination unit can adjust the exit angle of its light according to the working position of the scanning unit and the surface condition of the object to be scanned, so that the scanning unit acquires shadow-free surface information of the object. The illumination unit includes at least a first illumination module and a second illumination module arranged on the two sides of the scanning unit, and the two modules can form two illumination beams at an included angle to supplement light on the surface of the object to be scanned. The advantage of placing, on both sides of the scanning unit, illumination units that emit light at symmetrical included angles is that light can be supplemented according to the actual needs of the object surface, allowing the scanning unit to acquire surface information quickly and accurately, in particular image data that require colour restoration and three-dimensional representation.
The present application further provides a scanning method of a multi-lens scanning device, the scanning method at least includes the following steps:
s1: positioning the large-format article to be scanned on the flat laying platform after its position has been zeroed, and controlling the directional translation of the flat laying platform 3 so that the article passes through the scannable area of the scanning unit at a steady speed, allowing the scanning unit to acquire a plurality of linear image strips segment by segment;
s2: collecting light field information of a region to be detected by combining light field imaging;
s3: decoding the acquired light field information and acquiring a multi-focus image with low resolution by using a digital refocusing algorithm;
s4: upsampling the multi-focus image with a deep neural network to obtain a high-resolution light-field multi-focus image;
s5: performing all-in-focus fusion on the high-resolution multi-focus images with a guided-filter-based fusion algorithm and synthesizing a high-resolution, large depth-of-field all-in-focus image. The advantage is that the guided-filter-based all-in-focus fusion algorithm preserves edge information to the greatest extent and achieves fast, high-quality fusion. To improve the quality of multi-focus image acquisition, the multi-focus images are acquired in a single exposure by light-field imaging, which guarantees consistent background information across the multi-focus images and complete input for the fusion, while offering low cost, a simple system structure and a wide depth-extension range. Combining the guided-filter fusion algorithm with light-field imaging forms a single-exposure light-field all-in-focus fusion system based on the guided filter, realizing the acquisition of high-quality, large depth-of-field all-in-focus images.
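The core of steps s3 to s5, picking at each pixel the partially focused image that is sharpest there, can be miniaturized as follows. The Laplacian focus measure and the per-pixel argmax below are a simple stand-in for the patent's guided-filter weighting, used only to show the shape of the computation:

```python
import numpy as np

def sharpness(img):
    """Absolute Laplacian response as a per-pixel focus measure."""
    p = np.pad(img, 1, mode='edge')
    lap = (4.0 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1]
           - p[1:-1, :-2] - p[1:-1, 2:])
    return np.abs(lap)

def fuse_all_in_focus(stack):
    """stack: (N, H, W) partially focused images.
    Returns an (H, W) image taking each pixel from the sharpest input."""
    measures = np.stack([sharpness(f) for f in stack])
    best = measures.argmax(axis=0)                     # (H, W) index map
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

A production fusion would smooth the decision map (e.g. with the guided filter) instead of taking a hard argmax, avoiding seams at region boundaries.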
According to a preferred embodiment, the high-resolution light-field multi-focus image is obtained by upsampling the low-resolution light-field image with a convolutional neural network that incorporates a convolutional compression module. The advantage is that the deep-neural-network-based all-in-focus imaging system yields image data with high temporal and spatial resolution: the added convolutional network performs single-image super-resolution of the light-field low-resolution image by upsampling the low-resolution light-field part of the in-focus image, effectively raising the resolution of the multi-focus images, and the guided-filter-based fusion algorithm finally produces an all-in-focus image with high spatio-temporal resolution, realizing high-resolution, large depth-of-field acquisition of the surface of the object to be scanned. The single-exposure, guided-filter-based light-field all-in-focus fusion overcomes the low accuracy, high complexity, inconsistent background information and high-frequency information loss of conventional multi-focus fusion methods, and can capture large depth-of-field image information in a single exposure. On the basis of the guided-filter fusion, the convolutional deep neural network upsamples the low-resolution light-field part of the in-focus image, effectively raising the spatial resolution of the all-in-focus image and realizing high-resolution all-in-focus acquisition over a large depth-of-field range.
Drawings
Fig. 1 is a schematic perspective view of a preferred multi-lens scanning device according to the present invention;
FIG. 2 is a schematic flowchart of a scanning method of a preferred multi-lens scanning device according to the present invention;
FIG. 3 is a schematic structural diagram of a preferred multi-lens scanning device according to the present invention;
fig. 4 is a schematic view of a scanning unit arrangement structure of a preferred multi-lens scanning device according to the present invention.
List of reference numerals
1: platform main body; 2: frame; 3: flat laying platform; 4: scanning unit; 5: illumination unit; 6: suspension adjustment assembly; 7: purging unit; 8: processing unit; 9: control unit; 10: scanning motion body; 51: first illumination module; 52: second illumination module; 53: mounting bracket; 54: rotating mounting seat; 55: rotation driving unit; 56: induction control unit; 61: first adjustment mechanism; 62: second adjustment mechanism.
Detailed Description
The following detailed description is made with reference to the accompanying drawings.
Example 1
The application provides a scanning method of a multi-lens scanning device that combines a convolutional neural network with light-field imaging to realize all-in-focus image acquisition with high temporal and high spatial resolution. Following the image-processing flow of the processing unit 8 in fig. 2, the scanning method is divided into the following steps:
s1: placing the large-format article to be scanned on the placing table 3 and controlling the directional translation of the scanning motion body 10 so that the scanning unit 4 scans the article placed on the placing table 3, allowing the scanning unit 4 to acquire a plurality of linear image strips segment by segment;
s2: collecting light field information of a region (scene) to be detected by combining light field imaging;
s3: decoding the acquired light field information and acquiring a multi-focus image with low resolution by using a digital refocusing algorithm;
s4: upsampling the multi-focus image with a deep neural network to obtain a high-resolution light-field multi-focus image;
s5: performing all-in-focus fusion on the high-resolution multi-focus images with a guided-filter-based fusion algorithm and synthesizing a high-resolution, large depth-of-field all-in-focus image.
Preferably, the illumination units 5 are disposed on both sides of the scanning unit 4 so as to supplement light without dead angles within the scannable area defined by the lens of the scanning unit 4; by adjusting the relative illumination angle between the illumination unit and the scanning unit 4 during scanning, shadows formed on the object by surface irregularities can be eliminated. Preferably, the suspension adjustment assembly 6 can adjust the suspension height of the scanning unit 4 according to the thickness, surface flatness and similar properties of the scanned object, so that the lens focus of the scanning unit 4 always lies on the object surface and accurate image data are obtained.
Preferably, light-field information acquisition means capturing the intensity information and the angle information of the scene light simultaneously in a single exposure. This operation has high temporal resolution but, limited by the number of microlens units and corresponding sensor pixels, the spatial resolution of light-field imaging is low. A convolutional neural network is used to upsample the low-resolution light-field image, which effectively raises the image resolution, and the guided-filter-based fusion algorithm finally yields an all-in-focus image with high spatio-temporal resolution. Preferably, the refocusing process is equivalent to scanning the actual scene, each scanning position being a position of sharp focus; in light-field digital refocusing the scanning step can be made small enough for the actual conditions, so that enough partially focused images are acquired within the same depth range. The focus-scanning accuracy is high and the number of multi-focus images is large, and this advantage of light-field refocusing guarantees the completeness of the multi-focus image information required by the later all-in-focus fusion algorithm. The deep-neural-network-based upsampling method can reconstruct more detail and recover high-frequency information to a greater extent while keeping the focusing characteristics of the original partially focused images.
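Digital refocusing as described above is, in its simplest "shift-and-sum" form, an alignment of sub-aperture views. The sketch below is illustrative only (horizontal shifts, wrap-around via `np.roll`, integer slopes); sweeping the slope parameter corresponds to the focal-plane scan in the text:

```python
import numpy as np

def refocus(views, us, slope):
    """Shift each sub-aperture view by u*slope pixels and average.
    views: (U, H, W) stack of sub-aperture images; us: angular index per view.
    Objects whose disparity matches `slope` come out sharp; others blur."""
    acc = np.zeros(views.shape[1:], dtype=float)
    for u, v in zip(us, views):
        acc += np.roll(v, int(round(u * slope)), axis=1)
    return acc / len(us)
```

Evaluating `refocus` over a range of slope values produces the stack of partially focused images that the later fusion steps consume; a finer slope step yields more focal planes, matching the remark that the scanning step can be made small enough for the required depth coverage.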
Preferably, a convolutional compression module is introduced into the convolutional neural network adopted by the application; it compresses the image to a certain degree, and this compression reduces the difference between the bilinear-interpolation blur kernel and the real image blur kernel, resolving the training-data-set problem. In addition, the reconstruction module achieves better performance by adopting a structure based on a residual dense network (RDN). Finally, the application uses a pre-trained convolutional neural network as a loss function to achieve upsampling that better preserves detail. The convolutional network can therefore realize high-quality upsampling and retains the high-frequency detail of the light-field digitally refocused image well, so the deep-neural-network-based all-in-focus imaging technique with high spatio-temporal resolution can obtain an all-in-focus image with high temporal and high spatial resolution, realizing large depth-of-field, high-resolution acquisition of scene information.
Example 2
The application also provides a multi-lens scanning device, which comprises a platform main body 1, a frame 2, a placing table 3, a scanning unit 4, an illumination unit 5, a suspension adjustment assembly 6, a purging unit 7 and a scanning motion main body 10.
According to a specific embodiment shown in fig. 1 and 2, a placing table 3 for placing an object to be scanned is provided on the platform main body 1. The frame 2 is provided at both sides of the platform main body 1 so that the scanning motion main body 10 located above the frame 2 can be supported right above the platform main body 1. The scanning motion main body 10 is internally provided with an illumination unit 5, a suspension adjustment assembly 6 and a purging unit 7, wherein the suspension adjustment assembly 6 is also connected with the scanning unit 4. The frame 2 can drive the scanning motion main body 10 to reciprocate in translation above the placing table 3, so that the scanning unit 4 can acquire the surface information of an entire large-format object placed on the placing table 3. The scanning unit 4 scans the object placed on the placing table 3. The scanning unit 4 is movably connected to the scanning motion main body 10 through the suspension adjustment assembly 6, so that it can adjust its working position relative to the placing table 3 by means of the movable suspension adjustment assembly 6. While the scanning motion main body 10 drives the scanning unit 4 to reciprocate in translation above the placing table 3, the scanning unit 4 acquires the surface information of the object passing through its scanning area; meanwhile, the scanning unit 4 can adjust its working position according to the uneven surface of the object, so that its focus is accurately positioned on the object surface and clear scanning data are obtained.
As shown in fig. 4, a plurality of scanning units 4 can scan and acquire information of the area to be scanned in parallel. That is, a single scanning unit 4 performs a scanning operation by linearly scanning at least part of a strip-shaped area of the large-format area to be scanned and acquiring its high spatio-temporal resolution image and image color, so that the plurality of scanning units 4 arranged side by side, each adjusting its working position independently, can together complete image data acquisition over the whole large-format area; data integration for the whole scanning area is then completed by stitching the strip-shaped image data acquired by the plurality of scanning units 4. Preferably, any one of the scanning units 4 can always position the focus of its camera at the surface of the object while acquiring one linear rectangular image, so that at least part of the collected image information representing the object surface can serve as optimal information. Preferably, the strip-shaped area scanned by a single scanning unit 4 partially overlaps the strip-shaped area of an adjacent scanning unit 4; that is, the scanning area of a single scanning unit 4 at least includes a precise area directly below it and a blurred area covering part of the precise scanning area of the adjacent scanning unit 4. The processing unit 8 can then complete the stitching of high spatio-temporal resolution images by overlapping the sharp image output by one scanning unit 4 with the blurred image output by the adjacent scanning unit 4 corresponding to the same surface area of the object, thereby forming a complete surface image of the large-format object to be scanned.
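The strip-stitching step can be sketched with a simple linear-feathering blend across the shared columns of two adjacent strips. This is a minimal illustration of combining overlapping strip data; the function name, the fixed-width overlap and the linear weighting are our assumptions, not the patent's actual stitching algorithm:

```python
import numpy as np

def stitch_strips(left, right, overlap):
    """Blend two horizontally adjacent strips sharing `overlap` columns.

    A linear feathering ramp weights the left strip's output against the
    right strip's output inside the shared (overlapping) region, so the
    seam between the precise area of one unit and the blurred area of
    its neighbour is smoothed.
    """
    h, wl = left.shape
    _, wr = right.shape
    out = np.zeros((h, wl + wr - overlap))
    out[:, :wl - overlap] = left[:, :wl - overlap]   # left-only region
    out[:, wl:] = right[:, overlap:]                 # right-only region
    # Weight falls from 1 to 0 across the overlap for the left strip.
    w = np.linspace(1.0, 0.0, overlap)
    out[:, wl - overlap:wl] = (left[:, wl - overlap:] * w
                               + right[:, :overlap] * (1.0 - w))
    return out
```

In practice a per-unit registration step would precede the blend; here the strips are assumed already aligned.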
Specifically, the scanning unit 4 comprises groups of scanning modules that are parallel to each other and all arranged transversely. A single scanning module group is formed by arranging a plurality of scanning modules transversely. The scanning height of the scanning module group is adjusted according to the scanning operation, and the working height of each scanning module is adjusted based on the material thickness and the like of the object to be scanned, so that each scanning module always keeps the set distance and depth of field relative to the object surface during scanning. Any scanning module of the group can adjust its focus position according to the undulation of the object surface, so that it always acquires relatively clear in-focus images. Preferably, the scanning modules of the scanning module group may employ linear (line-scan) cameras.
Preferably, the platform main body 1 is connected through a guide rail connecting structure with the placing table 3 on which the object to be scanned is placed. The platform main body 1 can be fixedly supported in any arrangement space, so that it provides a stable and flat working environment for the placing table 3. A storage box at the same height as the placing table 3 is further provided at the side of the platform main body 1. A film base roller capable of laying a protective film over the object to be scanned is arranged in the storage box.
Preferably, the illumination unit 5 comprises at least a first illumination module 51, a second illumination module 52 and a mounting bracket 53 for mounting the illumination modules. Further preferably, both ends of the first and second illumination modules 51 and 52 are connected to the mounting bracket 53 through a rotary mounting seat 54 that can be inserted into an end of the mounting bracket 53, so that the first and second illumination modules 51 and 52 can rotate relative to the mounting bracket 53 as required. Preferably, by rotating on the mounting bracket 53, the rotary mounting seat 54 rotates the working ends of the first and second illumination modules 51 and 52, so as to adjust the angle and the relative position between the light they emit and the scanning unit 4. Preferably, the rotary mounting seat 54 is also movably connected with a rotation driving unit 55 capable of controlling its rotation. Preferably, the rotation driving unit 55 adjustably controls the rotary mounting seat 54 to perform a predetermined rotation, so that the first and second illumination modules 51 and 52 can irradiate light onto the surface of the object to be scanned at a predetermined angle. With the rotation driving unit 55, the rotary mounting seat 54 can drive the first and second illumination modules 51 and 52 to adjust their rotation angle and fix their working positions, so that a user can conveniently and accurately adjust the exit angle of the illumination light. Preferably, the end of the mounting bracket 53 remote from the first illumination module 51 or the second illumination module 52 is detachably attached to the scanning motion main body 10. Preferably, the rotation driving unit 55 may employ a servo motor capable of accurately controlling its operating state.
Preferably, the illumination unit 5 further includes a sensing control module 56 capable of actively acquiring the position of the object to be scanned so as to accurately control the first illumination module 51 and the second illumination module 52 to be turned on or off. Preferably, sensing control modules 56 can be fixedly installed on the sides of the mounting bracket 53 close to the feed end and the discharge end respectively, with the working end of each sensing control module 56 facing the placing table 3. Specifically, the sensing control module 56 arranged at the feed end controls the first and second illumination modules 51 and 52 to start working when it first senses an object to be scanned, illuminating the imaging area of the scanning unit 4; the sensing control module 56 arranged at the discharge end controls the first and second illumination modules 51 and 52 to stop working when, with the modules in the working state, it detects that the surface of the placing table 3 is no longer covered by the object to be scanned. Arranging the sensing control modules 56 reduces, to a certain extent, the electric energy consumed by supplementary illumination and effectively prolongs the service life of the illumination unit 5.
Preferably, the suspension adjustment assembly 6 includes a first adjustment mechanism 61 and a second adjustment mechanism 62. Two first adjustment mechanisms 61 arranged in parallel are connected to the scanning motion main body 10. Preferably, the working end of the first adjustment mechanism 61 is linearly movable in the vertical direction perpendicular to the working surface of the placing table 3. A second adjustment mechanism 62 is fixedly mounted on the working end of the first adjustment mechanism 61. Preferably, the plurality of second adjustment mechanisms 62 correspond to the scanning units 4 arranged side by side, so that each second adjustment mechanism 62 can drive the scanning unit 4 connected to it to perform a secondary adjustment of the suspension height, allowing the focus of each scanning unit 4 to be positioned on the uneven surface of the object to be scanned.
Preferably, a purging unit 7 capable of purging the working surface of the placing table 3 is further mounted on the scanning motion main body 10. The purging unit 7 is disposed upstream of the illumination unit 5, so that while the scanning motion main body 10 performs its translational movement, the purging unit 7 removes dust, impurities and water from the surface of the object placed on the placing table 3 before the scanning unit 4 acquires the image information. Preferably, the purging unit 7 can adjust its purge flow according to the scanning requirement.
As shown in fig. 2, the data acquired by the scanning unit 4 from the surface of the object are processed by the processing unit 8, thereby generating an image with high resolution and sharpness. Preferably, the processing unit 8 is capable of selectively performing a guided-filter-based all-focus fusion process and/or a deep-neural-network-based high spatio-temporal resolution all-focus imaging operation under the control of the control unit 9. Preferably, the control unit 9 is also capable of transmitting the image data processed by the processing unit 8 to a terminal or a display screen for display.
Example 3
This embodiment is a further improvement of embodiment 2, and repeated contents are not described again.
In the prior art, a single shooting operation must sacrifice the accuracy of some parameter when acquiring image information. For example, there is a trade-off between spatial resolution and angular resolution: angular resolution is often obtained at the expense of spatial resolution, so that only a low-resolution image can be realized. This shortcoming of the prior art limits the application of light field imaging in fast imaging techniques.
Preferably, due to the structural limitations of the imaging system, the acquired scene information can only be imaged clearly within a certain range, and objects beyond that range are blurred due to defocus; this range is defined as the depth of field. Because of the limited depth of field, objects within it are in sharp focus while objects outside it form blurred images, so every acquired image is only a partial in-focus image. In machine-vision applications such as object detection and classification, the requirement on image depth of field is extremely high: only with a large depth of field can as many objects as possible in the scene be in sharp focus. The limited depth of field affects the acquisition depth and imaging quality of scene information. A traditional camera usually has to balance depth of field against signal-to-noise ratio: it has a fixed, single focal length, and the degree of blurring outside the depth of field depends on the focal length and the aperture size. Increasing the depth of field by reducing the aperture lowers the signal-to-noise ratio; conversely, enlarging the aperture raises the signal-to-noise ratio but reduces the depth of field. Reducing the aperture size is therefore not the best choice for increasing the depth of field; especially in a dark-field environment, small-aperture imaging inevitably weakens the image intensity and seriously degrades the imaging result.
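The depth-of-field versus aperture trade-off described above follows from the standard thin-lens approximation. The sketch below uses the common approximation for total depth of field when the subject distance is well below the hyperfocal distance; the function name and the default circle-of-confusion value are illustrative assumptions, not figures from the patent:

```python
def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Approximate total depth of field (thin lens, subject distance
    much smaller than the hyperfocal distance):

        DOF ~ 2 * u^2 * N * c / f^2

    where u is subject distance, N the f-number, c the acceptable
    circle of confusion and f the focal length. Doubling N (halving
    the aperture diameter) doubles the depth of field, at the cost of
    a quarter of the light, hence the signal-to-noise trade-off the
    text describes.
    """
    return 2.0 * subject_mm**2 * f_number * coc_mm / focal_mm**2
```

For instance, a 50 mm lens at 1 m focusing distance gives roughly twice the depth of field at f/8 as at f/4, while admitting a quarter of the light.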
To address such problems, increasing the exposure time is often adopted to raise the image intensity. However, this approach performs poorly in dynamic and high-speed imaging and cannot realize fast, sharp, large depth-of-field imaging of a dynamic scene. In order to effectively increase the imaging depth of field, all-focus imaging methods were developed, i.e. imaging methods in which objects at every depth in the scene are in focus.
In addition, the multi-focus image acquisition methods in the prior art suffer from low manual-focusing precision, low speed and low image acquisition efficiency; automatic focusing methods based on mechanical structures improve the precision compared with manual focusing, but the number of multi-focus images remains limited. Multi-focus image acquisition methods based on specific optical devices each have advantages and disadvantages. For example, the focusing method based on an electrically controlled liquid crystal zoom lens adds a micro-actuation structure and increases system complexity. The partial-focusing method based on a DMD is costly, and the spectral scanning camera and the color-filter-aperture method are limited by their core devices; none of these can realize fast, high-precision acquisition of a large number of multi-focus images, which is unfavorable for multi-focus image fusion. Furthermore, the four methods described above share a common disadvantage: the focusing depth range is limited.
Preferably, in order to solve the problems of existing all-focus fusion algorithms, the processing unit 8 of the present application applies the guided-filter-based all-focus fusion algorithm to fuse the multi-focus images. The guided filter is a typical nonlinear filter that retains edge information to the maximum extent and realizes high-speed, high-quality all-focus fusion. In order to improve the multi-focus image acquisition quality, multi-focus images are acquired in a single exposure with the light field imaging method, which guarantees both the consistency of the multi-focus image background information and the completeness of the all-focus fusion input; the method also has the advantages of low cost, a simple system structure and a wide depth-extension range, effectively realizing high-quality multi-focus image acquisition. Combining the guided-filter-based fusion algorithm with light field imaging forms a single-exposure, guided-filter-based light field all-focus fusion processing system, thereby acquiring a high-quality, large depth-of-field all-focus image.
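A compact numpy sketch of guided-filter-refined multi-focus fusion follows. The box-filter guided filter is the standard formulation; the Laplacian sharpness measure, the weight normalisation and all parameter values are our illustrative choices rather than the patent's exact pipeline:

```python
import numpy as np

def box(x, r):
    """Mean filter over a (2r+1)^2 window via cumulative sums (edge-padded)."""
    pad = np.pad(x, r, mode='edge')
    c = pad.cumsum(0).cumsum(1)
    c = np.pad(c, ((1, 0), (1, 0)))          # zero row/col for window sums
    k = 2 * r + 1
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def guided_filter(I, p, r=4, eps=1e-3):
    """Edge-preserving filtering of p guided by I: q = mean(a)*I + mean(b)."""
    mI, mp = box(I, r), box(p, r)
    a = (box(I * p, r) - mI * mp) / (box(I * I, r) - mI * mI + eps)
    b = mp - a * mI
    return box(a, r) * I + box(b, r)

def fuse_all_focus(imgs, r=4):
    """Blend partial in-focus images: weight each by a focus measure
    refined with the guided filter, so weight maps follow image edges."""
    def lap(x):                               # Laplacian magnitude = sharpness
        l = np.zeros_like(x)
        l[1:-1, 1:-1] = np.abs(x[:-2, 1:-1] + x[2:, 1:-1]
                               + x[1:-1, :-2] + x[1:-1, 2:]
                               - 4 * x[1:-1, 1:-1])
        return l
    w = np.stack([guided_filter(i, lap(i), r) for i in imgs])
    w = np.clip(w, 1e-6, None)
    w /= w.sum(0)                             # per-pixel weights sum to 1
    return (w * np.stack(imgs)).sum(0)
```

Using each image itself as the guide keeps the refined weight maps aligned with its edges, which is the edge-retention property the text attributes to the guided filter.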
Preferably, in order to overcome the low spatial resolution of acquired images caused by sacrificing spatial resolution for angular resolution during scanning, the processing unit 8 acquires image data with both high temporal and high spatial resolution by introducing a deep-neural-network-based high spatio-temporal resolution all-focus imaging system. When the existing scanning unit 4 collects multi-focus images, the intensity and angle information of the scene light are obtained simultaneously in a single exposure; although the collected image information thus has very high temporal resolution, the multi-focus images obtained through light field imaging are limited by the number of microlens units and corresponding sensor pixels, so their spatial resolution is often low and a high-resolution all-focus image cannot be obtained directly. Therefore, single-image super-resolution of the low-resolution light field images is realized by adding a convolutional deep neural network. Preferably, the scanning unit 4 is configured so that the edge regions of its low-resolution light field partial in-focus images are supplemented with the image edge information acquired by the adjacent scanning unit 4, so that the image information missing from the edge regions is filled in to improve the resolution of the single image. Preferably, the network up-samples the low-resolution light field partial in-focus images, effectively improving the resolution of the multi-focus images; finally, an all-focus image with high spatio-temporal resolution is obtained through the guided-filter-based all-focus fusion algorithm, realizing high-resolution, large depth-of-field acquisition of scene information.
Preferably, by performing the single-exposure, guided-filter-based light field all-focus fusion processing, the processing unit 8 overcomes the low precision, high complexity, inconsistent background information and loss of high-frequency information of conventional multi-focus fusion methods, realizes all-focus image acquisition in a single exposure, and effectively acquires large depth-of-field scene information. Further preferably, on the basis of the guided-filter-based all-focus fusion, the processing unit 8 up-samples the low-resolution light field partial in-focus images with the convolutional deep neural network, effectively improving the spatial resolution of the all-focus image, thereby realizing high spatio-temporal resolution all-focus imaging and achieving the goal of collecting a high-resolution all-focus image over a large depth-of-field range.
Preferably, the convolutional deep neural network used for single-image super-resolution up-samples the low-resolution light field images, effectively improving the resolution of the multi-focus images and overcoming the defect that the multi-focus images obtained by existing light field imaging have too low a spatial resolution for the guided filter to yield a high-resolution all-focus image. The multi-focus images that have undergone single-image super-resolution can then be processed by the guided-filter-based all-focus fusion operation to obtain an all-focus image with both high temporal and high spatial resolution. Preferably, super-resolution processing reconstructs a high-resolution image from at least one low-resolution image, so that the original images acquired by the scanning unit 4 can be processed into high-resolution output images carrying high-quality perceptual information.
Preferably, compared with the conventional depth-value indexing method and the wavelet fusion algorithm, the guided filter module has better edge-retention capability and better guarantees that the high-frequency detail information of the fused result image is not lost, so its fusion effect is better than that of existing image fusion. Further preferably, with the guided filter module introduced to improve the temporal resolution of multi-focus image acquisition, a convolutional deep neural network capable of single-image super-resolution is introduced to up-sample the low-resolution light field images, effectively improving the resolution of the multi-focus images and overcoming the defect that the multi-focus images acquired by existing light field imaging have too low a spatial resolution to yield a high-resolution all-focus image with the guided filter.
Preferably, the construction process of a low-resolution image comprises: the high-resolution image is first convolved with a blur kernel, the result is then down-sampled, and finally a noise term is added. A difficulty of single-image super-resolution algorithms is that the same low-resolution input may correspond to multiple different high-resolution output images.
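The three-step degradation model just described (blur, decimate, add noise) can be sketched directly. The function name, the edge padding and the odd-kernel assumption are ours; the model itself is the classic one stated in the text:

```python
import numpy as np

def degrade(hr, kernel, scale, noise_sigma, rng=None):
    """Classic SR degradation model: LR = downsample(HR * k) + n.

    `kernel` is an odd-sized 2-D blur kernel applied to the
    high-resolution image `hr` (edge padding keeps the output the same
    size), `scale` is the integer decimation factor, and Gaussian noise
    with standard deviation `noise_sigma` is added last.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    kh, kw = kernel.shape
    pad = np.pad(hr, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode='edge')
    blurred = np.zeros_like(hr, dtype=np.float64)
    for i in range(kh):              # direct 2-D correlation
        for j in range(kw):          # (equals convolution for symmetric k)
            blurred += kernel[i, j] * pad[i:i + hr.shape[0],
                                          j:j + hr.shape[1]]
    lr = blurred[::scale, ::scale]   # decimation by `scale`
    return lr + rng.normal(0.0, noise_sigma, lr.shape)
```

The ill-posedness noted above follows immediately: many distinct `hr` inputs map to the same `lr` output once blurring and decimation discard high-frequency content.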
Preferably, the convolutional neural network set forth above may include a compression module, a reconstruction module and a loss module. Preferably, the compression module has three sub-modules, each consisting of at least two convolutional layers and one max-pooling layer. Preferably, the data output by the compression module are up-sampled in the reconstruction module, and the spatial smoothness of the finally generated image is improved after passing through the loss module. Preferably, the reconstruction module is mainly composed of residual dense blocks (RDBs); image feature fusion is realized by adding convolutional layers in its backbone structure and reducing channels, and the whole structure can flexibly realize up-sampling by adopting a pixel-shuffling strategy. Preferably, in the loss module, a pre-trained 16-weight-layer visual geometry group network (VGG-16) labels the input and outputs the ReLU activation values at different depth layers while the L1 loss is calculated; the effect of the total loss is to improve the spatial smoothness of the generated image.
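The pixel-shuffling strategy mentioned above can be sketched in plain numpy. This mirrors the channel-to-space rearrangement used by sub-pixel convolution upsampling (the same ordering as common deep-learning `PixelShuffle` layers); shapes and the function name are illustrative:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Sub-pixel rearrangement used for up-sampling in the
    reconstruction module: feature maps of shape (C*r*r, H, W) become
    images of shape (C, H*r, W*r), so each group of r*r channels
    supplies one r x r block of output pixels.
    """
    c2, h, w = x.shape
    assert c2 % (r * r) == 0, "channel count must be divisible by r*r"
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)
```

The appeal of this strategy is that the convolutions stay at low resolution; only this cheap rearrangement produces the high-resolution grid.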
Example 4
CN104614339 discloses a three-dimensional terahertz imaging method for oil paintings, intended to provide a scientific basis for a restorer to draw up a restoration scheme. However, according to that patent, after the oil painting is reconstructed into a three-dimensional image based on the position information of points in the different paint layers of the imaging area and the corresponding reflected light intensities, the restorer does not carry out the restoration directly against the three-dimensional image on a display screen, but needs to practise repeatedly on a printed copy before daring to perform the actual restoration on a genuine or even rare work. Referring to the scanned image shown in fig. 3 of that patent, however, the scanned three-dimensional image cannot provide a print close to the original; it can only provide a true three-dimensional image for judging each brush stroke and its pigment in the relevant depth direction. By virtue of the contents of embodiments 1 to 3 of the present invention, finer images can be scanned and printed, so that a restorer can perform restoration practice on the printed matter while collating it with the three-dimensional image formed according to CN104614339.
In the present application, when the plurality of scanning units 4 shown in fig. 1 and 3 acquire information of a large-format area to be scanned (an oil painting) in parallel, each scanning unit 4 determines its own height position according to the three-dimensional image determined by the three-dimensional terahertz imaging method (CN104614339); in other words, the height position of each scanning unit 4 during scanning is dynamically adjusted in real time according to the predetermined three-dimensional image.
Preferably, the scanning units 4 mounted on the suspension adjustment assembly 6 may also adopt a matrix arrangement, so that all data of the object surface in the corresponding rectangular area can be accurately acquired in a single scan. Specifically, when an uneven surface in a rectangular area of the object is scanned, the control unit 9 determines the height positions of the scanning units 4 at the different positions of the rectangular area according to the three-dimensional image determined by the three-dimensional terahertz imaging method (CN104614339), and changes the lifting of the scanning units 4 by controlling the operation of the suspension adjustment assembly 6. Each scanning unit 4 thus adjusts its working height under the control of the control unit 9, so that the plurality of scanning units 4 arranged in a matrix assume a height profile corresponding to the object surface and the focus of each scanning unit 4 is positioned on the surface at the corresponding position.
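The per-unit height assignment from a pre-acquired surface map can be sketched as follows. The assumption that each unit should sit at its sub-area's mean surface height plus a fixed lens working distance, along with the function name, is our illustrative simplification of the control described:

```python
import numpy as np

def unit_heights(depth_map, rows, cols, working_distance):
    """Target height for each scanning unit in a rows x cols matrix.

    `depth_map` holds the surface height (e.g. sampled from the
    terahertz three-dimensional image) over the rectangular area. Each
    unit is assigned the mean surface height under its sub-area plus
    the lens working distance, so its focal plane lands on the local
    surface.
    """
    h, w = depth_map.shape
    heights = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = depth_map[i * h // rows:(i + 1) * h // rows,
                              j * w // cols:(j + 1) * w // cols]
            heights[i, j] = patch.mean() + working_distance
    return heights
```

A real controller would also clamp heights to actuator limits and check that the surface relief within one sub-area stays inside the unit's depth of field.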
Preferably, the control unit 9 can dynamically adjust the height positions of the plurality of scanning units 4 in real time according to the movement of the scanning motion main body 10. After the scanning units 4 acquire the data of one rectangular area of the object surface, the scanning motion main body 10 drives them in a stepping translation, so that the matrix-arranged scanning units 4 can scan the next rectangular area for which no data have yet been acquired. Preferably, before the scanning units 4 scan, the control unit 9 adjusts their height positions according to the previously acquired three-dimensional image of the object surface in that area. In addition, after the scanning units 4 collect the surface data and send them to the processing unit 8, the processing unit 8 corrects the image collected by each scanning unit 4 according to the predetermined three-dimensional image, in particular the viewing range and the depth of field, so that a printing device associated with the scanning device can accurately print a surface shape and color consistent with the original object.
Specifically, after the scanning units 4 complete the scanning operation in one rectangular area, while the scanning motion main body 10 drives them in the stepping movement, the control unit 9 locates the three-dimensional image of the next rectangular area to be scanned within the pre-received overall three-dimensional image of the scanned object. The height position adjustment of each of the matrix-arranged scanning units 4 is thus completed while the scanning motion main body 10 moves, and the scanning units 4 can carry out the scanning operation immediately after being moved to the next rectangular area, accurately completing the scan of that area within the short dwell time after the scanning motion main body 10 finishes a single stepping translation. During the continuous movement of the scanning motion main body 10, the control unit 9 correspondingly controls the scanning units 4 to perform adaptive working-position adjustment, so that the continuous movement of the scanning motion main body 10 and the real-time dynamic adjustment of the scanning units 4 together complete the large-format scanning work.
Preferably, the matrix arrangement of the scanning units 4 can be adapted to the motion of the scanning motion main body 10, so that the adjustment period in which each scanning unit 4 adjusts its working height according to the received three-dimensional image determined by the three-dimensional terahertz imaging method (CN104614339) exactly matches the motion period of the stepping scanning motion main body 10. Each scanning unit 4 can thus adjust its required scanning height to the corresponding surface position of the object in advance, and then complete the scanning of one rectangular area of the object in a static state. After one rectangular area is scanned, the scanning motion main body 10 drives the scanning units 4 in a further step in the same direction while the scanning units 4 synchronously adjust their height positions and complete high-precision focusing in advance; therefore, once the scanning motion main body 10 has translated the scanning units 4 by one rectangle width, they can rapidly complete high-precision scanning and data acquisition within the dwell time of the stepping gap. In the scanning mode of this application, the matrix-arranged scanning units 4 scan synchronously, and the scan is completed in the instant the scanning motion main body 10 pauses, so that the dwell gaps between its stepping motions are shortened as much as possible. The successive stepping motions thus approximate one continuous motion, and the height adjustment of the scanning units 4 is likewise a continuous process; compared with the high-speed scanning modes of the prior art, the scanning precision is greatly improved while the scanning efficiency is maintained.
Compared with the prior art, in which the scanning of the object surface is completed by continuous high-speed motion of either the object or the scanning device, the present scanning device, by having the scanning motion main body 10 move in a stepping mode, enables the scanning units 4 to scan the object surface while it is effectively static. This greatly improves the focusing precision and the accuracy of the scanning data; in particular, compared with scanning a surface in motion, scanning in a static state achieves higher scanning quality and makes it possible to verify immediately after each scan whether the scanned image contains shadows or the like. The control unit 9 adjusts the motion state of the scanning motion main body 10 according to the verification result, so that the scanning units 4 can immediately rescan the area, avoiding the complicated work of verification and correction after all areas have been scanned and the images stitched. The scanning of one rectangular area is performed block-wise by the plurality of scanning units 4; the scanning process takes an extremely short time, so no special scanning duration needs to be set. Only the scanning working time points need to match the stepping movement period of the scanning motion main body 10, so that their working periods overlap.
Preferably, when the scanning units 4 need to perform a secondary scan of the same area, the control unit 9 increases the dwell time of the scanning motion main body 10 by one step length; at this time, the control unit 9 eliminates the scanning shadow on that area of the object surface by adjusting the working position of the illumination unit 5. The scanning units 4 perform the secondary scan after the illumination unit 5 completes its secondary adjustment.
Preferably, the working position of the illumination unit 5 is also adjusted according to a three-dimensional image determined by a three-dimensional terahertz imaging method (CN104614339). Specifically, while adjusting the height position of the scanning unit 4, the control unit 9 adjusts the illumination height and angle of the illumination unit 5 in accordance with the three-dimensional image of the object to be scanned, so that the illumination unit 5 eliminates, by illumination, shadows that may exist on the surface of the rectangular region of the object. Because the illumination unit 5 and the scanning unit 4 move synchronously according to the three-dimensional image received in advance by the control unit 9 through the three-dimensional terahertz imaging method (CN104614339), a high-precision scanned image can be acquired efficiently and accurately, and the time consumed by conventional step-by-step scanning adjustment is avoided. In particular, the height-position adjustment of the scanning units 4 arranged in a matrix can overlap with the time of a single stepping motion of the scanning motion main body 10, so that the scanning of a rectangular area is completed within the dwell gap between successive motions of the scanning motion main body 10. The continuous motion of the scanning motion main body 10 and the continuous height-position adjustment of the scanning units 4 greatly improve the scanning efficiency while effectively ensuring the scanning accuracy.
It should be noted that the above-mentioned embodiments are exemplary, and that those skilled in the art, having the benefit of the present disclosure, may devise various arrangements that, although not explicitly described herein, embody the principles of the invention and fall within its scope. It should be understood by those skilled in the art that the present specification and figures are illustrative only and do not limit the claims. The scope of the invention is defined by the claims and their equivalents. Throughout this document, a feature introduced by "preferably" is an optional feature only and should not be understood as essential; the applicant reserves the right to disclaim or delete any such preferred feature at any time.

Claims (10)

1. A multi-lens scanning device, comprising at least a flat laying platform (3) for fixing and placing an object to be scanned, and a scanning unit (4) and an illumination unit (5) connected to a platform main body (1) through a frame (2), the scanning unit (4) completing the scanning of the object to be scanned placed on the flat laying platform (3) by directional translation, characterized in that a plurality of the scanning units (4) are suspended side by side above the platform main body (1) by means of suspension adjustment assemblies (6), and the spacing between adjacent scanning units (4) is set such that the scannable areas of two adjacent scanning units (4) at least partially overlap, enabling each scanning unit (4) to supplement and correct, with at least part of the strip-shaped in-focus image data it acquires, the in-focus image edge-area data acquired by the adjacent scanning unit (4);
the plurality of scanning units (4) arranged side by side transmit the acquired image data to a processing unit (8) for selective image processing, so that an all-focus image with high spatial-temporal resolution is acquired.
2. The multi-lens scanning device according to claim 1, wherein the plurality of scanning units (4) complete the acquisition of image data of the large-width surface of the object to be scanned while performing directional translation, the scanning units (4) transmit the image data they acquire to the processing unit (8) in an ordered arrangement, and the processing unit (8), under the control of the control unit (9), selectively performs guided-filter-based all-focus fusion processing and/or a deep-neural-network-based high spatial-temporal resolution all-focus imaging operation.
3. The multi-lens scanning device according to claim 2, characterized in that the processing unit (8) performs the guided-filter-based all-focus fusion processing to splice and fuse the ordered strip-shaped image data in such a way that the edge information of the image collected by each single scanning unit (4) is retained to the maximum extent, so that two adjacent strips of image data supplement and correct each other through their respective edge information in a mutually verified manner while the whole image is spliced and fused.
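The guided-filter-based all-focus fusion referred to in claim 3 can be illustrated with a minimal sketch: a hard per-pixel sharpness decision between two differently focused views is refined by a guided filter so that the fusion weights follow real image edges. This is an illustrative Python/NumPy implementation only, not the patented one; the window sizes, the gradient-energy focus measure, and the function names `guided_filter` and `fuse_all_focus` are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, p, r=4, eps=1e-3):
    """Edge-preserving smoothing of p, steered by the guide image (He et al.)."""
    mean = lambda x: uniform_filter(x, size=2 * r + 1, mode='nearest')
    m_g, m_p = mean(guide), mean(p)
    var_g = mean(guide * guide) - m_g * m_g
    cov_gp = mean(guide * p) - m_g * m_p
    a = cov_gp / (var_g + eps)   # local linear coefficient per window
    b = m_p - a * m_g
    return mean(a) * guide + mean(b)

def fuse_all_focus(img_a, img_b, r=4, eps=1e-3):
    """Fuse two differently focused views, keeping the sharper content of each."""
    def focus_measure(img):
        gy, gx = np.gradient(img)
        return uniform_filter(gx * gx + gy * gy, size=7, mode='nearest')
    # hard decision map: 1 where img_a is sharper, 0 where img_b is sharper
    w = (focus_measure(img_a) >= focus_measure(img_b)).astype(float)
    # refine the map with the guided filter so its edges align with image structure
    w = np.clip(guided_filter(img_a, w, r, eps), 0.0, 1.0)
    return w * img_a + (1.0 - w) * img_b
```

The same weighted-sum step extends to more than two strips by normalizing one refined weight map per source image.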
4. The multi-lens scanning device according to claim 3, characterized in that the processing unit (8) adds a convolutional deep neural network to achieve single-image super-resolution of the low-resolution image, so as to increase the spatial resolution of the low-resolution light-field portions in the in-focus image,
wherein any one of the scanning units (4) supplements the low-resolution light-field portion it collects in the edge area of the in-focus image with the image edge information collected by the adjacent scanning unit (4), so that the image information missing from the image edge area is supplemented and the resolution of the single image is improved.
5. The multi-lens scanning device according to claim 4, characterized in that the processing unit (8) performs single-image super-resolution of the low-resolution light-field image by upsampling the low-resolution light-field portion in the in-focus image.
6. The multi-lens scanning device according to claim 1, characterized in that the processing unit (8) supplements the low-resolution light-field part acquired by the designated scanning unit (4) in the edge region of the in-focus image by using the edge information of the image acquired by the adjacent scanning unit (4), so that the image information missing from the edge region of the image acquired by the designated scanning unit (4) is supplemented.
7. The multi-lens scanning device according to claim 6, characterized in that the convolutional deep neural network of the processing unit (8) comprises at least a compression module, a reconstruction module and a loss module, wherein the data output by the compression module is upsampled in the reconstruction module, and the reconstruction module mixes and reduces channels by adding convolutional layers in its backbone structure, so that the whole structure can flexibly realize upsampling using a pixel-shuffle strategy.
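The pixel-shuffle strategy mentioned in claim 7 upsamples by rearranging groups of channels into spatial sub-pixel positions, avoiding transposed convolutions. A minimal NumPy sketch of the rearrangement itself (the function name and array layout are illustrative, not taken from the patent):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) feature map into (C, H*r, W*r).

    Each group of r*r channels supplies the r x r sub-pixel block of one
    coarse position, which is how a reconstruction module can realize
    upsampling with ordinary convolutions followed by this reshuffle.
    """
    c_rr, h, w = x.shape
    assert c_rr % (r * r) == 0, "channel count must be divisible by r*r"
    c = c_rr // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# a 4-channel 2x2 feature map becomes a 1-channel 4x4 map at scale r=2
feat = np.arange(16, dtype=float).reshape(4, 2, 2)
up = pixel_shuffle(feat, 2)
```

With `r=2`, output pixel `up[0, 0, 1]` is taken from channel 1 at coarse position (0, 0), matching the channel-to-sub-pixel layout used by sub-pixel convolution layers.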
8. The multi-lens scanning device according to claim 7, characterized in that the illumination unit (5) is capable of adjusting its light emergence angle according to the working position of the scanning unit (4) and the surface condition of the object to be scanned, so that the scanning unit (4) acquires shadow-free surface information of the object;
the illumination unit (5) comprises at least a first illumination module (51) and a second illumination module (52) arranged on either side of the scanning unit (4), and the first illumination module (51) and the second illumination module (52) can form two illumination beams at an included angle to supplement light on the surface of the object to be scanned.
9. A scanning method of a multi-lens scanning device is characterized by at least comprising the following steps:
s1: placing large-breadth articles to be scanned on a placing table (3), and controlling the directional translation of a scanning motion main body (10) to enable a scanning unit (4) to scan the large-breadth articles to be scanned placed on the placing table (3), so that the scanning unit (4) can complete the acquisition of a plurality of linear image information in a plate dividing block mode;
s2: collecting light field information of a region to be detected by combining light field imaging;
s3: decoding the acquired light field information and acquiring a multi-focus image with low resolution by using a digital refocusing algorithm;
s4: the multi-focus image is up-sampled by using a depth neural network, and a high-resolution light field multi-focus image is obtained;
s5: and performing full-focus fusion processing on the high-resolution multi-focus image by using a full-focus fusion algorithm based on a guide filter and synthesizing the full-focus image with high resolution and large depth of field.
10. The scanning method according to claim 9, wherein the high-resolution light-field multi-focus image is obtained by upsampling the low-resolution light-field image through a convolutional neural network structure having a convolutional compression module.
CN202210670745.3A 2022-06-13 2022-06-13 Multi-lens scanning device and scanning method thereof Active CN115065761B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210670745.3A CN115065761B (en) 2022-06-13 2022-06-13 Multi-lens scanning device and scanning method thereof


Publications (2)

Publication Number Publication Date
CN115065761A true CN115065761A (en) 2022-09-16
CN115065761B CN115065761B (en) 2023-09-12

Family

ID=83199443

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210670745.3A Active CN115065761B (en) 2022-06-13 2022-06-13 Multi-lens scanning device and scanning method thereof

Country Status (1)

Country Link
CN (1) CN115065761B (en)

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5301244A (en) * 1991-07-18 1994-04-05 Eastman Kodak Company Computer input scanner incorporating multiple scanning modes
US5532846A (en) * 1995-06-29 1996-07-02 Agfa Division, Bayer Corporation Method and apparatus for positioning a focusing lens
WO2004097888A2 (en) * 2003-04-25 2004-11-11 Cxr Limited X-ray sources
JP3185858U (en) * 2013-05-02 2013-09-05 テスト リサーチ, インク. Automatic optical detection equipment
CN205961256U (en) * 2016-08-23 2017-02-15 北京龙日艺通数码印刷有限公司 Novel platform formula scanner
CN109064437A (en) * 2018-07-11 2018-12-21 中国人民解放军国防科技大学 Image fusion method based on guided filtering and online dictionary learning
CN109523458A (en) * 2018-05-24 2019-03-26 湖北科技学院 A kind of high-precision sparse angular CT method for reconstructing of the sparse induction dynamic guiding filtering of combination
AU2020100199A4 (en) * 2020-02-08 2020-03-19 Cao, Sihua MR A medical image fusion method based on two-layer decomposition and improved spatial frequency
CN111182238A (en) * 2019-11-15 2020-05-19 北京超放信息技术有限公司 High-resolution mobile electronic equipment imaging device and method based on scanning light field
US20200195813A1 (en) * 2018-12-17 2020-06-18 Paul Lapstun Oblique Scanning Aerial Cameras
CN111415297A (en) * 2020-03-06 2020-07-14 清华大学深圳国际研究生院 Imaging method of confocal microscope
CN111866370A (en) * 2020-05-28 2020-10-30 北京迈格威科技有限公司 Method, device, equipment, medium, camera array and assembly for synthesizing panoramic deep image
CN112686829A (en) * 2021-01-11 2021-04-20 太原科技大学 4D light field full-focus image acquisition method based on angle information
WO2021203883A1 (en) * 2020-04-10 2021-10-14 杭州思看科技有限公司 Three-dimensional scanning method, three-dimensional scanning system, and computer readable storage medium
WO2021253326A1 (en) * 2020-06-18 2021-12-23 深圳先进技术研究院 Domain transform-based method for reconstructing positron emission tomography image
US20220014684A1 (en) * 2019-03-25 2022-01-13 Huawei Technologies Co., Ltd. Image display method and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Dai Guangzhi; Chen Tiequn; Xue Jiaxiang; Yao Ping: "High-definition ultrasonic micro-scanning imaging nondestructive testing system", Application Research of Computers, no. 06 *
Zou Jing; Geng Xingjie; Liao Keliang; Xu Linyan; Hu Xiaodong: "Application of sub-pixel-scanning super-resolution technology in high-resolution X-ray microscopy", Acta Photonica Sinica, no. 12 *

Also Published As

Publication number Publication date
CN115065761B (en) 2023-09-12

Similar Documents

Publication Publication Date Title
US5408294A (en) 3D photographic printer with direct key-subject alignment
CN108873290B (en) Apparatus and method for scanning microscope slides
JPH0918667A (en) Device to scan original documents
JPH11164094A (en) Double lens type converging device for double plane type flat scanner
JPH0918670A (en) Flatbed scanner
CN105530399B (en) Indoor footprint harvester based on linearly polarized light glancing incidence formula scan imaging method
JPH0918669A (en) Device to scan original documents
CN110648301A (en) Device and method for eliminating imaging reflection
JPH0918671A (en) Flatbed scanner
US5067020A (en) Dual sensor film scanner having coupled optics and a video viewfinder
AU2005222107B2 (en) Method and arrangement for imaging a primarily two-dimensional target
JPH11168607A (en) Single lamp illumination system for double planar type flatbed scanner
CN115065761B (en) Multi-lens scanning device and scanning method thereof
CN208937902U (en) Microfilm scanner
CN117054447A (en) Method and device for detecting edge defects of special-shaped glass
CN115052077B (en) Scanning device and method
JPH0918659A (en) Flatbed scanner
CN115103079B (en) Linear scanning device and scanning method thereof
CN113204107B (en) Three-dimensional scanning microscope with double objective lenses and three-dimensional scanning method
JPS62188952A (en) Film image information reader
JPH0918658A (en) Scanner
CN220913429U (en) Optical imaging device
CN217406629U (en) Time-sharing stroboscopic light source device of line scanning camera based on luminosity stereo
FR2589300A1 (en) DEVICE FOR TRANSFORMING A REPRESENTATION OF A COLOR IMAGE INTO AN ELECTRICAL SIGNAL, AND INVERSE
US20230232124A1 (en) High-speed imaging apparatus and imaging method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant