CN114363600A - Remote rapid 3D projection method and system based on structured light scanning

Info

Publication number
CN114363600A
Authority
CN
China
Prior art keywords
picture
projector
gray
pixel
pixel point
Prior art date
Legal status
Granted
Application number
CN202210250572.XA
Other languages
Chinese (zh)
Other versions
CN114363600B (en)
Inventor
于洋
吴雷
Current Assignee
Shitian Technology Tianjin Co ltd
Original Assignee
Shitian Technology Tianjin Co ltd
Priority date
Filing date
Publication date
Application filed by Shitian Technology Tianjin Co ltd filed Critical Shitian Technology Tianjin Co ltd
Priority to CN202210250572.XA priority Critical patent/CN114363600B/en
Publication of CN114363600A publication Critical patent/CN114363600A/en
Application granted granted Critical
Publication of CN114363600B publication Critical patent/CN114363600B/en
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/363 Image reproducers using image projection screens
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B 11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B 11/254 Projection of a pattern, viewing through a pattern, e.g. moiré
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent


Abstract

The invention relates to the technical field of projection and image processing, and discloses a remote rapid 3D projection method based on structured light scanning, which comprises the following steps. Step S1: processing the shot pictures through a constructed neural network to obtain pictures free of moire fringes, and performing Gray code decoding on the moire-free pictures; step S2: processing the decoded pictures to generate mark tables, and determining the mapping relation between projector pixel points and picture pixel points according to the mark tables corresponding to different Gray code bits; step S3: generating a two-dimensional mapping picture of the projector's viewing angle according to the mapping relation; step S4: editing with the two-dimensional mapping picture as the manuscript, and outputting the edited content to the projected object through a projector to realize 3D projection. Because the method computes the two-dimensional mapping picture of the projector's viewing angle and applies the artistic operations directly to that picture, no distortion adjustment of the output picture is needed, and the operation is simple.

Description

Remote rapid 3D projection method and system based on structured light scanning
Technical Field
The invention relates to the technical field of projection and image processing, in particular to a remote rapid 3D projection method and system based on structured light scanning.
Background
The main applications of projection mapping on the market today fall into two categories: fine projection for 2D planar content, and 3D projection technology for 3D stereoscopic content.
There are currently two main implementations of 3D projection for 3D stereoscopic content. The first relies on manual adjustment of the distortion of the output picture, similar to the fine projection of 2D planar content. The second uses several high-precision cameras to accurately determine the spatial relationship between the projector and the projected object and to reconstruct the three-dimensional scene of the projected object, thereby realizing real-time tracking and fully automatic distortion correction. In both projection precision and robustness to changes in projector position, the second scheme far outperforms the first. It nevertheless has significant disadvantages. First, a relatively accurate three-dimensional model of the projected object must be prepared in advance, which greatly reduces the technology's application scenarios, because precise 3D modeling of large buildings or other equipment requires both specialized equipment and a significant amount of time to reach a satisfactory result. In addition, the hardware required for such 3D projection is very expensive, needing at least 2 high-resolution industrial cameras plus other auxiliary equipment, so the cost is high.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide a remote rapid 3D projection method and system based on structured light scanning, which are simple in 3D projection operation, low in hardware cost and wide in application scene.
In order to achieve the above purpose, the invention provides the following technical scheme:
a remote fast 3D projection method based on structured light scanning comprises the following steps:
step S0: the method comprises the steps that pictures coded by Gray codes are projected onto a scanned object, and shooting is carried out one by one to obtain shot pictures;
step S1: calculating the shot picture through a constructed neural network to obtain a picture without moire fringes, and carrying out Gray code decoding on the picture without moire fringes;
step S2: processing the decoded picture to generate a mark table, and determining a mapping relation between a projector pixel point and a picture pixel point according to the mark table corresponding to different Gray code bits;
step S3: generating a two-dimensional mapping picture of the projector's viewing angle according to the mapping relation;
step S4: and receiving the content edited by taking the two-dimensional mapping picture as a manuscript, remotely transmitting the content to a projector, and outputting the content to a projected object to realize 3D projection.
In the present invention, further, the step S2 includes:
converting the picture into a gray scale image, generating a mark table according to the gray scale difference value of the positive and negative gray code picture in the gray scale image, and determining picture pixel points mapped and corresponding to the projector pixel points according to the mark table;
the method comprises the steps that a mark table comprises determined pixel points and uncertain pixel points, gray code values corresponding to each pixel in a picture are determined according to the mark table of the determined pixel points, the gray code values are converted into decimal numbers to obtain pixel points of a projector corresponding to the pixel points in each picture, the corresponding uncertain pixel points pass through the determined pixel points to verify missing information and determine filling positions.
In the present invention, further, the generating a mark table according to the gray difference of the positive and negative gray code pictures includes:
comparing the gray difference between the positive and negative Gray code pictures with set positive and negative gray thresholds: a pixel whose gray difference is larger than the positive threshold is recorded with state value 1 under that Gray code, a pixel whose gray difference is smaller than the negative threshold is recorded as 0, and an uncertain point whose gray difference lies between the two thresholds is recorded as 2; repeating this operation for Gray codes of different bit numbers generates mark tables comprising states 0, 1 and 2, wherein states 0 and 1 represent determined pixel points and state 2 represents uncertain pixel points.
In the present invention, further, determining, according to the mark table, the picture pixel point mapped to a projector pixel point includes:
if one projector pixel point only corresponds to one picture pixel point, determining the picture pixel point as a corresponding projector pixel point;
and if one projector pixel point corresponds to more than one picture pixel point, screening the picture pixel points to determine the unique picture pixel point corresponding to that projector pixel point.
In the present invention, further, the step in which the uncertain pixel points verify missing information against the determined pixel points and determine the filling positions comprises:
step S200: determining whether only the highest-order Gray code of the uncertain pixel point is in state 2, and if so, executing step S201;
step S201: finding the projector pixel points corresponding to the picture pixel point whose one Gray code bit is missing, and judging whether those projector pixel points already have corresponding picture pixels; if so, the corresponding missing position is not filled, and if not, step S202 is performed;
step S202: judging the projector pixel point whose mapping relation needs to be determined according to the horizontal and vertical coordinates of the picture pixel points at the missing positions, and finally determining the position coordinates to be filled.
In the present invention, further, the method for constructing the neural network in step S1 includes:
step S10: generating a feature map of a related image by using convolution on an original picture in the data set;
step S11: pooling the feature maps to realize down-sampling, and obtaining a feature map with low resolution;
step S12: performing multilayer convolution operation on the low-resolution feature map and performing equal input and output transformation;
step S13: after deconvolution operation is carried out on the transformed characteristic graph, integration is carried out to obtain a picture output by the neural network;
step S14: and comparing the image output by the neural network with the corresponding image without moire fringes, acquiring an error function, and performing back propagation by using a gradient descent algorithm to correct the whole neural network.
In the present invention, further, the step S13 includes
Step S13-1: performing deconvolution operation according to the following formula to obtain a feature map with the size equal to that of the original image:
o = s(i - 1) - 2p + k
wherein o is the output size, s is the stride, i is the input size, p is the padding, and k is the convolution kernel size;
step S13-2: converting all the feature maps of the step S13-1 into 3-channel feature maps;
step S13-3: adding the 3-channel feature maps of the step S13-2 pixel by pixel and averaging them to obtain the moire-removed image generated by the neural network.
In the present invention, further, the method for acquiring the data set comprises:
and taking the picture without moire fringes as a true value, displaying the true value on different displays, and shooting the true value by using a camera to obtain an original picture corresponding to the picture without moire fringes.
In the present invention, preferably, the screening method for determining the unique picture pixel point corresponding to a projector pixel point is: calculating the average Euclidean distance between each candidate picture pixel point and the other candidates, keeping the pixel points whose average is smaller than 3, and selecting the point with the smallest average as the unique picture pixel point.
A remote fast 3D projection system based on structured light scanning, comprising:
the structured light scanning module is used for projecting horizontal and vertical Gray codes as well as positive and negative Gray codes through a projector and collecting the corresponding photos through a camera, projecting one Gray code picture and shooting one photo at a time;
the two-dimensional mapping picture generation module is used for performing Gray code decoding on the collected photos, removing moire fringes through a neural network, processing the moire-free pictures to generate a number of mark tables, determining the mapping relation between each picture pixel point and the corresponding projector pixel point according to the mark tables corresponding to different Gray code bits, and generating the two-dimensional mapping picture of the projector's viewing angle according to the mapping relation;
and the artistic effect processing and output module is used for receiving the picture or video edited with the generated two-dimensional mapping picture as the manuscript and remotely transmitting it to the on-site projector for picture output.
Compared with the prior art, the invention has the beneficial effects that:
the 3D projection implementation method provided by the invention uses the structured light technology to calculate the two-dimensional mapping picture of the projector visual angle, directly carries out corresponding artistic operation on the corresponding two-dimensional mapping picture, does not need to carry out any distortion adjustment on the picture, is simple to operate, and solves the problems that the three-dimensional modeling of a projection object, the correction on the image and the limited application scene are required in advance in the prior art.
In addition, the method also adopts a deep learning mode to process the collected pictures, and removes Moire patterns in the pictures by constructing a neural network so as to prevent interference on subsequent picture identification and processing. Meanwhile, the mapping relation between the projector and the camera is determined by setting mapping tables of three states of 0, 1 and 2, in order to improve the accuracy of the mapping relation and to improve the difficulty in identifying high-order gray codes, the scheme further processes the pixel point of the state 2 in the mapping table, and verifies whether the estimated pixels are reasonable or not through the determined pixels, so that the problem that the state of each pixel is difficult to judge in the process of actually generating the mark table due to different reflection degrees of different objects is solved.
Moreover, the invention can be realized only by adding one camera outside the projector, and the hardware price is low, thereby being beneficial to saving the cost. Meanwhile, the creation of the system mainly operates on the two-dimensional mapping chart, so that after the two-dimensional mapping chart is generated, an artist can create the mapping chart remotely and send the picture or video content to field equipment remotely after creation, and therefore manual work is not needed to adjust the equipment and survey the field, manpower resources are saved, and the whole 3D projection is enabled to be more convenient to achieve.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic diagram of a hardware part of a remote fast 3D projection system based on structured light scanning according to the present invention;
FIG. 2 is a schematic flow chart of a remote fast 3D projection method based on structured light scanning according to the present invention;
FIG. 3 is a partial flowchart of step S2 in a remote fast 3D projection method based on structured light scanning according to the present invention;
FIG. 4 is a schematic flow chart of a method for constructing a neural network in a remote fast 3D projection method based on structured light scanning according to the present invention;
FIG. 5 is a flowchart illustrating step S13 in the remote fast 3D projection method based on structured light scanning according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When a component is referred to as being "connected" to another component, it can be directly connected to the other component or intervening components may also be present. When a component is referred to as being "disposed on" another component, it can be directly on the other component or intervening components may also be present. The terms "vertical," "horizontal," "left," "right," and the like as used herein are for illustrative purposes only.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Referring to FIG. 1, a preferred embodiment of the present invention provides a remote fast 3D projection system based on structured light scanning, which includes a hardware portion and a software portion. The hardware portion includes a projector, a camera, a PC or Android module, and a projected object; the projector may be a 3LCD or DLP projector, the camera may be an ordinary RGB camera with a recommended resolution of 2K or higher, and the PC or Android module, the projector and the camera can communicate remotely through wireless transmission technology.
The software part comprises a structured light scanning module, a two-dimensional mapping image generation module and an artistic effect processing and output module, wherein the structured light scanning module is mainly used for collecting projection pictures.
For better understanding of the present solution, a description is first made of a structured optical gray code scanning technique, as follows:
gray code is a special binary code that is often used for encoding in structured light three-dimensional vision. Compared with the common binary code, the gray code has the advantage that the codes of adjacent numbers are different by only one bit, which is very important for decoding, and the error rate of optical decoding can be reduced. The structured light scanning module firstly calculates a group of Gray code pictures matched with the projector according to the resolution of the projector, and specifically comprises the following steps:
(a) Gray code picture generation
The Gray code pictures comprise two groups: horizontal Gray codes and vertical Gray codes. The horizontal Gray codes are used to determine the mapping relation between the vertical direction of the projector and the vertical direction of the camera, and the vertical Gray codes are used to determine the mapping relation between the horizontal direction of the projector and the horizontal direction of the camera. The number of horizontal Gray code pictures depends on the vertical resolution of the projector. Since the Gray code is a binary code, an n-bit Gray code can encode 2^n pixels. For example, 2^1 = 2, so a 1-bit Gray code can encode 2 pixels; likewise, 2^5 = 32, so a 5-bit Gray code can encode 32 pixels. Therefore, for each resolution, n-bit Gray code pictures whose code capacity is equal to or higher than that resolution must be generated. For a common 1920 × 1080 projector, the horizontal Gray codes need 2^11 = 2048 codes, but since 2048 exceeds 1080, only the first 1080 portion is retained.
(b) Role and generation of Gray code reverse pictures
Because the actual environment is affected by the reflectivity of the projected object and other environmental factors, the gray value of the white Gray code regions in the photo acquired by the camera is not always larger than that of the black regions, as it would be in an ideal picture, and errors result. Therefore, Gray code inverse pictures are used to increase decoding accuracy. The inverse picture is obtained by negating every pixel of the original Gray code picture, i.e. every original 0 takes 1 and every original 1 takes 0, so that everything originally black in the image becomes white and everything originally white becomes black. In the positive and negative Gray codes described herein, the negative Gray code is the inverse picture of the positive Gray code.
(c) Generation and action of special pictures
After a group of horizontal Gray code pictures and a group of vertical Gray code pictures have been prepared, all-white and all-black test charts equal to the projector resolution must also be prepared. These two test patterns serve, on the one hand, to determine the projection range of the projector within the camera's view by comparing the full-white and full-black photos; on the other hand, the exposure time is adjusted with the full-white projection as the reference, since a full-white picture is the brightest and the easiest to overexpose. A sketch of generating all of these patterns follows.
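As a concrete illustration of steps (a) to (c), the following is a minimal NumPy sketch that generates the horizontal and vertical Gray code patterns, their inverse pictures, and the all-white/all-black test images. The function names and the 1920 × 1080 default resolution are illustrative assumptions, not code from the patent.

```python
import numpy as np

def int_to_gray(n: np.ndarray) -> np.ndarray:
    """Convert integer indices to their Gray code values."""
    return n ^ (n >> 1)

def gray_code_patterns(width=1920, height=1080):
    """Generate positive and inverse Gray code patterns plus test charts."""
    n_bits_v = int(np.ceil(np.log2(height)))   # e.g. 11 bits cover 1080 rows
    n_bits_h = int(np.ceil(np.log2(width)))    # e.g. 11 bits cover 1920 columns
    patterns = []
    # Horizontal stripes encode the row index (vertical mapping).
    rows = int_to_gray(np.arange(height))
    for bit in range(n_bits_v - 1, -1, -1):
        stripe = ((rows >> bit) & 1).astype(np.uint8) * 255
        img = np.repeat(stripe[:, None], width, axis=1)
        patterns.append(img)            # positive pattern
        patterns.append(255 - img)      # inverse (negative) pattern
    # Vertical stripes encode the column index (horizontal mapping).
    cols = int_to_gray(np.arange(width))
    for bit in range(n_bits_h - 1, -1, -1):
        stripe = ((cols >> bit) & 1).astype(np.uint8) * 255
        img = np.repeat(stripe[None, :], height, axis=0)
        patterns.append(img)
        patterns.append(255 - img)
    white = np.full((height, width), 255, np.uint8)   # exposure reference
    black = np.zeros((height, width), np.uint8)       # projection-range test
    return patterns, white, black
```

Note that only `height` rows exist, so the 2^11 = 2048-code capacity is truncated to the first 1080 codes automatically, matching the text above.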
In conclusion, the structured light scanning module in this scheme projects one Gray code picture and shoots one photo at a time: the horizontal, vertical and black-and-white Gray code pictures are projected through the projector in turn, and the corresponding photos are collected by the camera.
Further, the two-dimensional mapping picture generation module performs Gray code decoding on the collected photos, removes the moire fringes through a neural network, processes the moire-free pictures to generate a number of mark tables, determines the mapping relation between each picture pixel point and the corresponding projector pixel point according to the mark tables corresponding to different Gray code bits, and generates the two-dimensional mapping picture of the projector's viewing angle according to the mapping relation.
In addition, an artist can create remotely with the generated two-dimensional mapping picture as the manuscript. Through the artistic effect processing and output module, the system receives the picture or video edited with the two-dimensional mapping picture as the manuscript, packs it and sends it remotely to the on-site projector for picture output to realize the 3D projection effect. Therefore, no one needs to adjust the equipment or survey the actual scene on site, human resources are saved, and the whole 3D projection is more convenient to realize.
The invention realizes the 3D projection by calculating the two-dimensional mapping picture of the visual angle of the projector by using the structured light technology and directly carrying out corresponding artistic operation on the corresponding two-dimensional mapping picture without carrying out any distortion adjustment on the picture. Therefore, the method is simple to operate, and the problems that three-dimensional modeling of the projection object and subsequent image correction processes need to be carried out in advance and application scenes are limited in the prior art are solved.
Moreover, the invention can be realized only by adding one camera outside the projector, and the hardware price is low, thereby being beneficial to saving the cost. Meanwhile, because the creation of the system mainly operates on the two-dimensional map, after the two-dimensional map is generated, the artist can create the map remotely and send the picture or video content to the field device after creation.
In another embodiment provided by the present invention, as shown in fig. 2, a remote fast 3D projection method based on structured light scanning includes:
step S0: the method comprises the steps that pictures coded by Gray codes are projected onto a scanned object, and shooting is carried out one by one to obtain shot pictures;
step S1: calculating the shot picture through a constructed neural network to obtain a picture without moire fringes, and carrying out Gray code decoding on the picture without moire fringes;
step S2: processing the decoded picture to generate a mark table, and determining a mapping relation between a projector pixel point and a picture pixel point according to the mark table corresponding to different Gray code bits;
step S3: generating a two-dimensional mapping picture of the projector's viewing angle according to the mapping relation;
step S4: and receiving the content edited by taking the two-dimensional mapping picture as a manuscript and remotely transmitting the edited content to a projector for outputting in a video streaming mode so as to realize 3D projection.
Specifically, the required projection pictures are acquired and collected through the structured light module: the Gray-code-encoded pictures are projected onto the scanned object and photographed one by one to obtain the shot pictures, that is, the horizontal, vertical and black-and-white Gray codes are projected onto the scanned object through the projector, and the corresponding photos are acquired by the camera. Gray code decoding is then performed on the shot pictures; in theory, the camera pixel point corresponding to each projector pixel can be found from the black and white (light and dark, gray value) regions of the horizontal and vertical Gray codes. However, in the actually acquired pictures, especially the high-order Gray code pictures, the spacing between black and white lines is small and their density high, so moire fringes are easily produced, and they greatly interfere with the recognition of black and white in the picture. Therefore, the shot pictures are processed by deep learning to remove the moire fringes they contain.
In an embodiment provided by the present invention, as shown in fig. 4, a method for constructing a neural network includes:
step S10: the feature map of the associated image is generated using convolution on the original picture in the data set.
Step S11: pooling the feature maps to realize down-sampling, and obtaining a feature map with low resolution;
step S12: performing multilayer convolution operation on the low-resolution feature map and performing equal input and output transformation;
step S13: after deconvolution operation is carried out on the transformed characteristic graph, integration is carried out to obtain a picture output by the neural network;
step S14: and comparing the image output by the neural network with the corresponding image without moire fringes, acquiring an error function, and performing back propagation by using a gradient descent algorithm to correct the whole neural network.
Specifically, in this embodiment, we take the generation of the moire-removed picture output by the neural network as an example to explain the above steps:
first, in step S10, two sets of feature maps, i.e., feature map set a and feature map set B, are acquired.
For an original picture of H (pixel height) × W (pixel width) × 3 (RGB, hence three channels) in the data set, 32 randomly generated kernels of size 3 × 3 are convolved with the original image. Since the pixel size of the image must be maintained, padding is set to 1 on the premise that the stride is 1 × 1. The concrete formula of the convolution operation is as follows:
o = (i + 2p - k) / s + 1
where o is the output picture width or height, i is the input picture width or height, k is the kernel width or height, p is the padding, and s is the stride.
After the convolution operation, a group A of H × W × 32 feature maps is obtained; the operation is then repeated, and a group B of H × W × 32 feature maps is produced from the original image with another 32 randomly generated 3 × 3 kernels.
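A minimal PyTorch sketch of this step under the sizes just described; the variable names are assumptions. Two independently and randomly initialized 3 × 3 convolutions turn the H × W × 3 photo into feature map groups A and B:

```python
import torch
import torch.nn as nn

conv_a = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3,
                   stride=1, padding=1)        # padding 1 keeps H and W fixed
conv_b = nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1)

photo = torch.randn(1, 3, 1080, 1920)          # one captured photo, NCHW layout
feat_a = conv_a(photo)                         # group A: 1 x 32 x 1080 x 1920
feat_b = conv_b(photo)                         # group B: 1 x 32 x 1080 x 1920
```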
Next, in step S11, the two picture sets, feature map group A and feature map group B, are down-sampled by pooling.
The low-resolution feature maps of group A are obtained with max pooling; each pooling uses a 2 × 2 window, so each application halves the resolution of the feature maps. Applying the pooling repeatedly yields feature map groups at 1/2, 1/4, 1/8 and 1/16 of the original size, named A2, A4, A8 and A16 respectively. Feature group B is pooled with mean pooling, which likewise yields feature groups at 1/2, 1/4, 1/8 and 1/16 of the original size, named B2, B4, B8 and B16.
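Continuing the sketch above, step S11's pooling pyramid can be written as follows: max pooling for group A, mean pooling for group B, each 2 × 2 window halving the resolution, repeated down to 1/16 size (A2..A16, B2..B16 in the text):

```python
import torch.nn.functional as F

a_scales = {1: feat_a}   # feat_a, feat_b come from the previous sketch
b_scales = {1: feat_b}
for factor in (2, 4, 8, 16):
    a_scales[factor] = F.max_pool2d(a_scales[factor // 2], kernel_size=2)
    b_scales[factor] = F.avg_pool2d(b_scales[factor // 2], kernel_size=2)
```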
Again, in step S12, after the pooling is complete, 5 layers of convolution must be performed on each feature map group. Taking feature map group A, which consists of 32 feature maps of H × W pixels, as an example: to keep the input and output of each convolution layer consistent, that is, to perform an equal input-output transformation, 32 kernels of size 3 × 3 × 32 are used to convolve the group A feature maps with stride 1 and padding 1, so that the input and output of every convolution layer keep the H × W × 32 format, and the 5 convolution layers are applied in this manner.
For example, the A2 feature map group is of size H/2 × W/2 × 32; it is convolved with 3 × 3 × 32 kernels with stride 1 and padding 1 and likewise passed through 5 convolution layers. The remaining feature groups are processed in the same way, so that every feature group undergoes a 5-layer convolution operation, as sketched below.
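A sketch of step S12, continuing the variables above: five convolution layers that keep each group's H × W × 32 format unchanged. The interleaved activation is an assumption; the patent does not name one.

```python
def equal_io_stack(depth=5, channels=32):
    """Build 5 convolution layers whose input and output shapes are equal."""
    layers = []
    for _ in range(depth):
        layers.append(nn.Conv2d(channels, channels, kernel_size=3,
                                stride=1, padding=1))  # 32 kernels of 3x3x32
        layers.append(nn.ReLU())  # activation assumed, not specified in the text
    return nn.Sequential(*layers)

a16_refined = equal_io_stack()(a_scales[16])   # shape stays H/16 x W/16 x 32
```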
Again, as shown in FIG. 5, regarding step S13, the deconvolution of the transformed feature maps includes:
step S13-1: performing deconvolution operation according to the following formula to obtain a feature map with the size equal to that of the original image:
o = s(i - 1) - 2p + k
where o is the output size, s is the stride, i is the input size, p is the padding, and k is the convolution kernel size;
step S13-2: converting all the feature maps of step S13-1 into 3-channel feature maps;
step S13-3: adding the 3-channel feature maps of step S13-2 pixel by pixel and averaging them to obtain the moire-removed image generated by the neural network.
After each 5-layer convolution operation, the feature map groups whose resolution is smaller than the original image must be deconvolved to restore the original resolution. For example, the A2 feature map group needs one deconvolution; since each deconvolution doubles the size of the feature maps, s must be set to 2, p to 1 and k to 4, so that after the deconvolution the picture size of the A2 group is restored to the original image size. In the same way, the A4 feature map group needs this operation twice, the A8 group 3 times, and the A16 group 4 deconvolution operations to restore feature maps of the original size.
Then, after every feature map group has been restored to the original size, these feature maps must be integrated. The integration is divided into two steps. The first part is a channel-reducing convolution on the original-size feature map groups: each group is convolved with 3 kernels of width and height 3 × 3, with padding set to 1 on the premise that the stride is 1 × 1, so that the group becomes 3-channel. The same operation is performed on all feature map groups, so that all 10 groups are uniformly converted into 3-channel feature map groups. Finally, all the feature map groups are added pixel by pixel and averaged to obtain the moire-removed image generated by the neural network.
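A sketch of step S13, continuing the variables above: transposed convolutions with k = 4, s = 2, p = 1 double the resolution each pass (by o = s(i - 1) - 2p + k), a 3 × 3 convolution reduces each group to 3 channels, and a pixel-wise average yields the output. A single shared upsampling module is used here only for brevity; separate layers per stage would fit the text equally well.

```python
upsample = nn.ConvTranspose2d(32, 32, kernel_size=4, stride=2, padding=1)
to_rgb = nn.Conv2d(32, 3, kernel_size=3, stride=1, padding=1)

def restore(feat, times):
    for _ in range(times):
        feat = upsample(feat)                  # each pass doubles H and W
    return feat

# A, A2..A16 and B, B2..B16: ten groups in total, all restored to full size.
groups = [restore(a_scales[f], n) for n, f in enumerate((1, 2, 4, 8, 16))] \
       + [restore(b_scales[f], n) for n, f in enumerate((1, 2, 4, 8, 16))]
output = torch.stack([to_rgb(g) for g in groups]).mean(dim=0)  # de-moired image
```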
Finally, in step S14, since the neural network needs to be trained, the error between the generated picture and the moire-free picture is averaged pixel by pixel to obtain an error function, and back propagation with a gradient descent algorithm is used to correct the whole neural network.
In a preferred embodiment provided by the present invention, a large number of data pairs, i.e. pictures with moire and the same pictures without moire, are required to train the neural network. Such data set resources are scarce, which would greatly limit the method. To obtain the data required for training, the invention takes a group of moire-free pictures as the true values, presents them on different displays and photographs them with a camera. In this way a large amount of paired real data, with moire fringes and with moire removed, can be manufactured; under training on this large amount of data, the initially random values in the convolution kernels of the neural network are corrected step by step, and finally a usable neural network and its related parameters are obtained. A training sketch follows.
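A training sketch for step S14 with the screen-photo data set described above. The `network` object is assumed to wrap the pieces sketched earlier; the loader name, L1 loss choice and optimizer settings are assumptions, since the patent only specifies a pixel-wise averaged error and gradient-descent back propagation.

```python
optimizer = torch.optim.SGD(network.parameters(), lr=1e-3)

for moire_photo, clean_picture in paired_dataset:    # hypothetical data loader
    predicted = network(moire_photo)                 # forward pass
    loss = (predicted - clean_picture).abs().mean()  # mean pixel-wise error
    optimizer.zero_grad()
    loss.backward()                                  # back propagation
    optimizer.step()                                 # gradient-descent update
```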
In practical use, an image containing moire fringes can then simply be passed through the trained neural network to obtain the clean result. It should be noted that, because the construction and training of the neural network are both directed at this system and its purpose is to remove moire from the Gray code pictures, the neural network removes moire particularly well in the application scenario of this embodiment.
In the invention, further, after the moire-removed pictures are obtained, the positive and negative photos of each group of Gray codes are compared pixel by pixel to obtain the projector pixel point corresponding to each picture pixel point.
Specifically, the step S2 includes:
converting the pictures into gray-scale images, generating mark tables according to the gray difference between the positive and negative Gray code pictures, and determining the picture pixel points mapped to the projector pixel points according to the mark tables: the gray difference between the positive and negative Gray codes in the picture is compared with set positive and negative gray thresholds; a pixel whose gray difference is larger than the positive threshold is recorded with state value 1 under that Gray code, and a pixel whose gray difference is smaller than the negative threshold is recorded as 0.
Specifically, in one embodiment of the present invention, all photos are converted from color images to gray-scale images, so that the value stored in each pixel becomes a gray value, which better represents the brightness of the object in the photo. The brightness depends on the one hand on the intensity of the projected light falling on the object (the Gray codes contain only the two colors black and white; the white parts are the strongest light the projector casts, the black parts the weakest), and on the other hand on the reflection coefficient of the object itself. Because a part that is white in the positive Gray code picture is certainly black in the negative picture, the same pixel in a pair of positive and negative Gray code projection photos is necessarily exposed once to a white projector picture and once to a black one.
In theory, the brightness of the object at a given pixel is always larger in the white-picture state than in the black-picture state, i.e. the value in the gray-scale image is larger. Therefore, the photos in the positive and negative Gray code states are compared to obtain the gray difference of the positive and negative Gray codes, and this difference is compared with a set value, which in this embodiment is 10: if the gray difference is greater than 10 gray levels, the state value of the pixel under this Gray code is 1; conversely, if the gray difference is smaller than -10 gray levels, the state value is recorded as 0. After this operation has been performed on every pixel, a mark table containing only the states 0 and 1 is generated for each group of Gray codes (positive and negative). Repeating the operation for all Gray codes of different bit numbers yields n mark tables containing only the states 0 and 1 (n depends on the projector's pixels). According to the principle of Gray codes, the Gray code value of each pixel in the picture can be determined by combining this group of mark tables; converting the Gray codes into decimal numbers then gives the projector pixel point corresponding to each picture pixel point, as sketched below.
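A NumPy sketch of the mark-table generation and Gray decoding just described; the ±10 threshold follows this embodiment, while the function and variable names are illustrative. State-2 pixels are omitted here and handled separately further below.

```python
import numpy as np

def mark_table(pos_photo_gray, neg_photo_gray, threshold=10):
    """One positive/negative gray-scale photo pair -> per-pixel states 1, 0 or 2."""
    diff = pos_photo_gray.astype(np.int16) - neg_photo_gray.astype(np.int16)
    table = np.full(diff.shape, 2, np.uint8)       # 2 = uncertain point
    table[diff > threshold] = 1
    table[diff < -threshold] = 0
    return table

def gray_tables_to_decimal(tables):
    """Decode n mark tables (most significant bit first) to decimal indices."""
    binary = tables[0].astype(np.int32)
    decimal = binary.copy()
    for plane in tables[1:]:
        binary = binary ^ plane                    # Gray -> binary, bit by bit
        decimal = decimal * 2 + binary
    return decimal
```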
In this embodiment, the camera used in the system has a resolution of 4K (4096 × 2160) and the projector is a high-definition projector (1920 × 1080), so the system can be adapted to various projectors. Since few projectors on the market are 4K, the area covered by one projector pixel will in most cases contain several camera pixel points. Determining the camera pixel point corresponding to each projector pixel point is therefore divided into the following cases:
if one projector pixel point only corresponds to one picture pixel point, determining the picture pixel point as a corresponding projector pixel point;
and if one projector pixel point corresponds to more than one picture pixel point, screening the picture pixel points to determine the unique picture pixel point corresponding to that projector pixel point. The screening method, sketched below, is: calculate the average Euclidean distance between each candidate picture pixel point and the other candidates, keep the pixel points whose average is smaller than 3, and select the point with the smallest average as the unique picture pixel point.
The threshold of 3 pixels is chosen because, by the distribution of corresponding points, the calculated average should not exceed the square root of the ratio of the camera resolution (4K) to the projector resolution (1080P), plus 1, where the added 1 provides some fault tolerance.
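A sketch of this screening rule; the function name and the `(x, y)` candidate format are assumptions:

```python
def select_unique_pixel(candidates):
    """Pick the unique camera pixel among candidates, or None to discard."""
    pts = np.asarray(candidates, dtype=np.float64)  # (n, 2) camera coordinates
    if len(pts) == 1:
        return tuple(pts[0].astype(int))
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    mean_d = dists.sum(axis=1) / (len(pts) - 1)     # mean distance to the others
    if (mean_d < 3).any():                          # threshold from the text
        return tuple(pts[mean_d.argmin()].astype(int))
    return None                                     # no candidate qualifies
```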
In another preferred embodiment provided by the present invention, because different objects reflect to different degrees and the various direct and indirect reflections of white pixels also affect black pixels, in practice it is often difficult, when generating a mark table, to determine whether the state of a certain pixel is 1 or 0; that is, the gray difference obtained after projecting a pair of positive and negative Gray codes lies between -10 and 10 gray levels. If no threshold were set and the state value were decided directly by the sign of the difference, many mis-mapped points would be produced by factors such as differing object reflectivities and projection angles.
Therefore, in the present embodiment the mark table contains determined pixel points and uncertain pixel points; as in the embodiment above, the points marked state 1 and state 0 are the determined pixel points, obtained in the same way as described. The uncertain pixel points that cannot be decided are marked state 2, and for them the missing information is verified against the determined pixel points and the filling positions are determined. That is, in theory the projector pixel corresponding to each camera pixel can be calculated from the mark tables of the different Gray code bits, but for the points marked 2 in a mark table the corresponding camera point cannot be found directly.
In the invention, furthermore, the higher the Gray code bit, the narrower the black and white stripes, so the probability of an unrecognizable pixel is higher, and state 2 appears most often in the highest-order Gray code picture. As shown in FIG. 3, to solve the problem that the highest-order Gray code cannot be recognized, the following method is adopted; specifically, the procedure by which the uncertain pixel points verify missing information against the determined pixel points and determine the filling positions is:
step S200: determining whether only the highest-order Gray code of the uncertain pixel point is in state 2, and if so, executing step S201;
step S201: finding the projector pixel points corresponding to the picture pixel point whose one Gray code bit is missing, and judging whether those projector pixel points already have corresponding picture pixels; if so, the corresponding missing position is not filled, and if not, step S202 is performed;
step S202: judging the projector pixel point whose mapping relation needs to be determined according to the horizontal and vertical coordinates of the picture pixel points at the missing positions, and finally determining the position coordinates to be filled.
Specifically, once it is determined that only the highest-order Gray code is in state 2: with a complete code, one picture pixel corresponds to one projector pixel, but every missing Gray code bit turns this into one picture pixel corresponding to 2 projector pixels (if the bit is missing from a vertical Gray code photo, the two candidates are adjacent in the horizontal direction; if from a horizontal one, in the vertical direction), so uncertainty arises. To resolve it, the picture pixel missing one Gray code bit is found and denoted C_defect, and it is judged whether its missing information is unidirectional, i.e. horizontal information only or vertical information only. If so, the two projector pixels it could correspond to are determined and checked for existing corresponding points: if both already have corresponding points, C_defect is discarded; if only one projector pixel has a corresponding point, it is judged whether that point's coordinate in the missing direction is smaller than the corresponding coordinate of the current picture pixel; if so, the missing information of C_defect is set to the component of the projector pixel without a corresponding point, and otherwise C_defect is discarded.
If the missing information is not unidirectional, i.e. both horizontal and vertical information are missing, four projector pixels are candidates. Following the principle above, a position is not filled whenever the candidate projector pixels already have unambiguous corresponding picture pixels. The validity of the pixel whose mapping position must be determined is then checked and constrained by the horizontal and vertical coordinates of the picture pixels at the unambiguous corresponding positions, and finally the position to fill is determined. For example, if none of the four projector pixels has an exact mapping pixel, there is no constraint, and in this case the upper-left position is filled uniformly. The remaining cases are likewise constrained by the accurate mappings and, where the constraints allow, filled toward the upper left as far as possible.
For example, when a pixel C_defect is missing only horizontal pixel information, it corresponds to two projector pixels P_left and P_right; it is then checked whether these two pixels already have corresponding points, and if both do, this defective highest-order pixel C_defect is discarded.
If only one point has a correspondence, say P_left with picture point C_left, it is checked whether the horizontal coordinate of C_left is smaller than the horizontal coordinate of the current picture pixel; if so, the projector horizontal pixel value of this picture pixel is set to the horizontal component of P_right, and otherwise the pixel C_defect is discarded. In principle, the mapping points calculated directly from the Gray codes are accurate, so the other, inferred points must be verified against them: when the picture lands on other objects the projection is distorted, but the horizontal and vertical order between pixels is not affected, so the pixels that have been determined can verify whether the inferred pixels are reasonable.
Similarly, if a C_defect pixel is missing only the last bit of the horizontal Gray code map, it corresponds to the two pixels P_up and P_down, and the subsequent processing parallels the horizontal judgment above. A sketch of the horizontal case follows.
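A sketch of the one-bit-missing verification for the horizontal case just described. C_defect, P_left and P_right follow the text; the dictionary `proj_to_cam` (projector pixel to confirmed camera pixel, both as `(x, y)` tuples) and the mirrored P_right branch are assumptions inferred from the symmetric description.

```python
def resolve_horizontal_defect(c_defect, p_left, p_right, proj_to_cam):
    """Return the projector pixel to fill for C_defect, or None to discard."""
    c_left = proj_to_cam.get(p_left)
    c_right = proj_to_cam.get(p_right)
    if c_left is not None and c_right is not None:
        return None                   # both already mapped: discard C_defect
    if c_left is not None:
        # Distortion never reorders pixels horizontally, so a confirmed point
        # mapped to P_left must lie to the left of the current picture pixel.
        return p_right if c_left[0] < c_defect[0] else None
    if c_right is not None:           # mirrored branch, assumed by symmetry
        return p_left if c_right[0] > c_defect[0] else None
    return p_left                     # unconstrained: fill toward the upper left
```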
The mapping relation between each picture pixel point and the corresponding projector pixel point, i.e. the mapping relation between the projector and the camera, is thus obtained through the above technique. On this basis, the camera shoots a photo while the projector outputs no signal; then, according to the mark-table mapping relation, the point in that photo corresponding to each projector pixel is found, and finally a picture of the projector's viewing angle is generated, as sketched below.
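A sketch of generating the projector-view picture: sample the photo taken with the projector dark at each projector pixel's mapped camera coordinate. The mapping array layout is an assumption (`mapping[y, x] = (cam_x, cam_y)`, with -1 marking unmapped pixels).

```python
def projector_view(camera_photo, mapping, proj_w=1920, proj_h=1080):
    """Build the 2D projector-view image from the recovered mapping."""
    view = np.zeros((proj_h, proj_w, 3), np.uint8)   # unmapped pixels stay black
    for y in range(proj_h):
        for x in range(proj_w):
            cam_x, cam_y = mapping[y, x]
            if cam_x >= 0 and cam_y >= 0:            # -1 marks unmapped pixels
                view[y, x] = camera_photo[cam_y, cam_x]
    return view
```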
The artist or creator then creates directly with the generated projector-view picture as the manuscript, transmits the created content remotely to the projector, and outputs it directly through the projector to realize the desired 3D projection effect. Specifically, a transparent layer is added on the generated projector-view picture, the effects or animation to be added are drawn directly on the transparent layer, and all undrawn parts are left black. After the creation is finished it is uploaded to the system, where the artistic effect processing and output module receives it and transmits it to the projector, which outputs the picture as a video stream.
The above description is intended to describe in detail the preferred embodiments of the present invention, but the embodiments are not intended to limit the scope of the claims of the present invention, and all equivalent changes and modifications made within the technical spirit of the present invention should fall within the scope of the claims of the present invention.

Claims (10)

1. A remote rapid 3D projection method based on structured light scanning is characterized by comprising the following steps:
step S0: the method comprises the steps that pictures coded by Gray codes are projected onto a scanned object, and shooting is carried out one by one to obtain shot pictures;
step S1: calculating the shot picture through a constructed neural network to obtain a picture without moire fringes, and carrying out Gray code decoding on the picture without moire fringes;
step S2: processing the decoded picture to generate a mark table, and determining a mapping relation between a projector pixel point and a picture pixel point according to the mark table corresponding to different Gray code bits;
step S3: generating a two-dimensional mapping picture of the visual angle of the projector according to the mapping relation;
step S4: and receiving the content edited by taking the two-dimensional mapping picture as a manuscript, remotely transmitting the content to a projector, and outputting the content to a projected object to realize 3D projection.
2. The method for remote fast 3D projection based on structured light scanning as claimed in claim 1, wherein said step S2 comprises:
converting the picture into a gray-scale image, generating a marking table according to the gray-scale difference value of the positive and negative gray code picture, and determining picture pixel points mapped and corresponding to the projector pixel points according to the marking table;
the mark table comprises determined pixel points and uncertain pixel points; the Gray code value corresponding to each pixel in the picture is determined according to the mark tables of the determined pixel points and converted into a decimal number to obtain the projector pixel point corresponding to each picture pixel point; for the uncertain pixel points, the missing information is verified against the determined pixel points and the filling positions are determined.
3. The method of claim 2, wherein the generating the mark table according to the gray difference of the positive and negative gray code pictures comprises:
comparing the gray difference between the positive and negative Gray code pictures in the picture with set positive and negative gray thresholds: a pixel whose gray difference is larger than the positive threshold is recorded with state value 1 under that Gray code, a pixel whose gray difference is smaller than the negative threshold is recorded as 0, and an uncertain point whose gray difference lies between the two thresholds is recorded as 2; repeating this operation for Gray codes of different bit numbers generates mark tables comprising states 0, 1 and 2, wherein states 0 and 1 represent determined pixel points and state 2 represents uncertain pixel points.
4. The method of claim 2, wherein determining the picture pixel point mapped by the projector pixel point according to the mark table comprises:
if one projector pixel point only corresponds to one picture pixel point, determining the picture pixel point as a corresponding projector pixel point;
and if one projector pixel point corresponds to more than one picture pixel point, screening the picture pixel points to determine the unique picture pixel point corresponding to that projector pixel point.
5. The method of claim 4, wherein the step in which the corresponding uncertain pixel points verify missing information against the determined pixel points and determine the filling positions comprises:
step S200: determining whether only the highest-order Gray code of the uncertain pixel point is in state 2, and if so, executing step S201;
step S201: finding the projector pixel points corresponding to the picture pixel point whose one Gray code bit is missing, and judging whether those projector pixel points already have corresponding picture pixels; if so, the corresponding missing position is not filled, and if not, step S202 is performed;
step S202: judging the projector pixel point whose mapping relation needs to be determined according to the horizontal and vertical coordinates of the picture pixel points at the missing positions, and finally determining the position coordinates to be filled.
6. The method for remote fast 3D projection based on structured light scanning as claimed in claim 1, wherein the method for constructing neural network of step S1 comprises:
step S10: generating a feature map of a related image by using convolution on an original picture in the data set;
step S11: pooling the feature maps to realize down-sampling, and obtaining a feature map with low resolution;
step S12: performing multilayer convolution on the low-resolution feature maps with an equal input-output transformation;
step S13: after deconvolution operation is carried out on the transformed characteristic graph, integration is carried out to obtain a picture output by the neural network;
step S14: and comparing the image output by the neural network with the corresponding image without moire fringes, acquiring an error function, and performing back propagation by using a gradient descent algorithm to correct the whole neural network.
7. The method according to claim 6, wherein the step S13 comprises:
step S13-1: performing a deconvolution operation according to the following formula to obtain a feature map equal in size to the original picture:
o = s(i - 1) - 2p + k

wherein o is the output image size, s is the stride, i is the input size, p is the padding, and k is the convolution kernel size;
step S13-2: converting all the feature maps of step S13-1 into 3-channel feature maps;
step S13-3: and adding the 3-channel feature maps of step S13-2 pixel by pixel and averaging them to obtain the moire-removed picture generated by the neural network.
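Steps S13-1 to S13-3 could look like the following sketch, assuming one shared transposed convolution and a 1x1 convolution for the 3-channel conversion; both are assumptions, since the patent specifies only the output-size formula and the pixel-wise average.

```python
import torch
import torch.nn as nn

def integrate_feature_maps(feature_maps, in_ch, s=2, k=2, p=0):
    """S13-1: deconvolve each map to the original size (o = s*(i-1) - 2p + k);
    S13-2: convert each result to 3 channels; S13-3: pixel-wise average."""
    deconv = nn.ConvTranspose2d(in_ch, in_ch, k, stride=s, padding=p)
    to_rgb = nn.Conv2d(in_ch, 3, kernel_size=1)          # 3-channel conversion
    rgb = [to_rgb(deconv(f)) for f in feature_maps]
    return torch.stack(rgb).mean(dim=0)                  # the moire-removed picture

# e.g. four 32x32 feature maps -> one 3-channel 64x64 output:
out = integrate_feature_maps([torch.rand(1, 32, 32, 32)] * 4, in_ch=32)
```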
8. The remote rapid 3D projection method based on structured light scanning as claimed in claim 4, wherein the screening method for determining the unique picture pixel point corresponding to a pixel point in the projector is: calculating, for each candidate picture pixel point, the average value of the Euclidean distances to the other candidate pixel points, retaining the pixel points whose average value is smaller than 3, and selecting among them the point with the minimum average value as the unique picture pixel point.
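A NumPy sketch of this screening follows. The threshold of 3 pixels comes from the claim; the reading that points below the threshold are retained (rather than discarded) is an interpretation, and the function name is illustrative.

```python
import numpy as np

def select_unique_pixel(candidates, threshold=3.0):
    """Among picture pixels mapped to one projector pixel, keep those whose
    mean Euclidean distance to the others is below `threshold`, then return
    the one with the smallest mean (the most central point of the cluster)."""
    pts = np.asarray(candidates, dtype=float)           # shape (n, 2)
    if len(pts) == 1:
        return tuple(pts[0])
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    mean_d = d.sum(axis=1) / (len(pts) - 1)             # self-distance is 0
    kept = np.flatnonzero(mean_d < threshold)
    if kept.size == 0:
        return None                                     # no sufficiently clustered point
    return tuple(pts[kept[np.argmin(mean_d[kept])]])

# select_unique_pixel([(10, 10), (11, 10), (10, 11)]) -> (10.0, 10.0)
```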
9. The method of claim 6, wherein the data set is acquired by a method comprising the following steps:
taking pictures without moire fringes as true values, displaying them on different displays, and shooting them with a camera to obtain the original pictures corresponding to the moire-free pictures.
10. A remote rapid 3D projection system based on structured light scanning, comprising:
the structured light scanning module is used for projecting horizontal and vertical Gray codes and positive and negative Gray codes through a projector, and collecting the corresponding photos through a camera in a project-one-Gray-code-picture, shoot-one-photo manner;
the two-dimensional mapping picture generation module is used for performing Gray code decoding on the collected photos, removing moire fringes through a neural network, processing the moire-free pictures to generate a plurality of mark tables, determining the mapping relation between each picture pixel point and the corresponding pixel point in the projector according to the mark tables of different Gray code bit numbers, and generating the two-dimensional mapping picture of the projector viewing angle according to the mapping relation;
and the artistic effect processing and output module is used for receiving the picture or video edited and processed with the generated two-dimensional mapping picture as the manuscript, and remotely transmitting it to the on-site projector for picture output.
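For orientation only, the three modules of claim 10 might compose as in the following skeleton; every callable is a hypothetical stand-in for a component the patent describes in prose, not an API it defines.

```python
from typing import Callable, Sequence

def remote_projection_pipeline(
    patterns: Sequence,          # Gray-code pictures to project
    project_and_shoot: Callable, # structured light scanning module round trip
    demoire: Callable,           # neural network of claim 6
    build_mark_tables: Callable, # thresholding of claim 3
    decode_mapping: Callable,    # decoding and screening of claims 2, 4 and 5
):
    """End-to-end flow of claim 10; all names are illustrative."""
    photos = [project_and_shoot(p) for p in patterns]  # one photo per Gray-code picture
    clean = [demoire(ph) for ph in photos]             # moire removal
    tables = build_mark_tables(clean)                  # one mark table per Gray-code bit
    return decode_mapping(tables)                      # 2D mapping picture (projector view)
```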
CN202210250572.XA 2022-03-15 2022-03-15 Remote rapid 3D projection method and system based on structured light scanning Active CN114363600B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210250572.XA CN114363600B (en) 2022-03-15 2022-03-15 Remote rapid 3D projection method and system based on structured light scanning

Publications (2)

Publication Number Publication Date
CN114363600A (en) 2022-04-15
CN114363600B (en) 2022-06-21

Family

ID=81094398

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210250572.XA Active CN114363600B (en) 2022-03-15 2022-03-15 Remote rapid 3D projection method and system based on structured light scanning

Country Status (1)

Country Link
CN (1) CN114363600B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5832106A (en) * 1996-05-22 1998-11-03 Electronics And Telecommunications Research Institute Method for camera calibration of range imaging system by use of neural network
CN103226830A (en) * 2013-04-25 2013-07-31 北京大学 Automatic matching correction method of video texture projection in three-dimensional virtual-real fusion environment
CN103400409A (en) * 2013-08-27 2013-11-20 华中师范大学 3D (three-dimensional) visualization method for coverage range based on quick estimation of attitude of camera
CN103533318A (en) * 2013-10-21 2014-01-22 北京理工大学 Building outer surface projection method
CN104168467A (en) * 2014-09-02 2014-11-26 四川大学 Method for achieving projection display geometric correction by applying time series structure light technology
US20140379114A1 (en) * 2013-06-25 2014-12-25 Roland Dg Corporation Projection image correction system and projection image correction method
CN105026997A (en) * 2014-02-18 2015-11-04 松下电器(美国)知识产权公司 Projection system, semiconductor integrated circuit, and image correction method
US20160054118A1 (en) * 2014-03-06 2016-02-25 Panasonic Intellectual Property Corporation Of America Measurement system, measurement method, and vision chip
CN111161166A (en) * 2019-12-16 2020-05-15 西安交通大学 Image moire eliminating method based on depth multi-resolution network
WO2022043746A1 (en) * 2020-08-25 2022-03-03 Artec Europe S.A R.L. Systems and methods of 3d object reconstruction using a neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LIU Yongjiu: "Research on High-Speed Real-Time Three-Dimensional Measurement Methods for Moving Objects Based on Structured Light Projection", China Doctoral Dissertations Full-text Database, Information Science and Technology Series *
ZHANG Dezheng: "Research on Sea Surface Target Detection and Movement Prediction in Satellite Images Based on Deep Learning", China Master's Theses Full-text Database, Engineering Science and Technology II Series *
LI Tong: "Perceptually Consistent Image Moire Removal Method", China Master's Theses Full-text Database, Information Science and Technology Series *

Also Published As

Publication number Publication date
CN114363600B (en) 2022-06-21

Similar Documents

Publication Publication Date Title
US11257272B2 (en) Generating synthetic image data for machine learning
CN100377171C (en) Method and apparatus for generating deteriorated numeral image
US6983082B2 (en) Reality-based light environment for digital imaging in motion pictures
JP5830546B2 (en) Determination of model parameters based on model transformation of objects
CN103828359B (en) For producing the method for the view of scene, coding system and solving code system
CN102970504A (en) Image projecting device, image processing device, image projecting method, and computer-readable recording medium
CN107734271B (en) 1,000,000,000 pixel video generation method of high dynamic range
KR20070008652A (en) Method for extracting raw data of a photographed image
JPWO2008108071A1 (en) Image processing apparatus and method, image processing program, and image processor
CN111768452A (en) Non-contact automatic mapping method based on deep learning
CN110766767B (en) Method, system and device for acquiring Gray code structured light image
CN107516333A (en) Adaptive De Bruijn color structured light coding methods
CN116527863A (en) Video generation method, device, equipment and medium based on virtual reality
CN114697623A (en) Projection surface selection and projection image correction method and device, projector and medium
CN113284037A (en) Ceramic watermark carrier recovery method based on deep neural network
US10740916B2 (en) Method and device for improving efficiency of reconstructing three-dimensional model
CN101799924A (en) Method for calibrating projector by CCD (Charge Couple Device) camera
CN114363600B (en) Remote rapid 3D projection method and system based on structured light scanning
CN117795553A (en) Information acquisition system, calibration method and device thereof and computer readable storage medium
JP7489253B2 (en) Depth map generating device and program thereof, and depth map generating system
CN107370952A (en) Image capturing method and device
Gu et al. 3dunderworld-sls: an open-source structured-light scanning system for rapid geometry acquisition
CN116957931A (en) Method for improving image quality of camera image based on nerve radiation field
CN111336949A (en) Spatial coding structured light three-dimensional scanning method and system
CN109446945A (en) Threedimensional model treating method and apparatus, electronic equipment, computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant