CN111986309A - System and method for generating special film Pre-vis based on three-dimensional scanning - Google Patents
- Publication number
- CN111986309A (application CN202010722088.3A)
- Authority
- CN
- China
- Prior art keywords
- data
- unit
- point cloud
- dimensional
- dimensional model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects (G: Physics; G06: Computing; G06T: Image data processing or generation, in general)
- G06T7/10—Segmentation; Edge detection (under G06T7/00—Image analysis)
- G06T2200/08—Indexing scheme for image data processing or generation, involving all processing steps from image acquisition to 3D model generation
- G06T2207/10028—Range image; Depth image; 3D point clouds (under G06T2207/10—Image acquisition modality)
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management (Y02D: Climate change mitigation technologies in ICT)
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Processing Or Creating Images (AREA)
- Image Processing (AREA)
Abstract
A system and a method for generating a special film Pre-vis based on three-dimensional scanning relate to the technical field of generating models from photo scans. The system comprises an acquisition module, a three-dimensional model generation module and an output module. Acquiring on-site image data of the scene to be generated makes the generated scene more accurate and reduces the cycle and difficulty of traditional production: producers no longer need to rebuild complex scenes by hand, which saves a large amount of production cost. The scanned scene also provides an accurate reference for subsequent post-production of the project; rework that problems of scale, fidelity and the like would otherwise cause can be resolved simply by producing the three-dimensional scan asset data again from the scanned model, saving project cost. The film's director and executive director can enter the shot-production stage of the project in advance, and because what is previewed matches the real scene, the camera path or track does not need to be adjusted later according to actual production, greatly improving production efficiency.
Description
Technical Field
The invention relates to the technical field of generating models from photo scans, and in particular to a system and a method for generating a special film Pre-vis based on three-dimensional scanning.
Background
At present, more and more films use Pre-viz (previsualization) technology: by producing a virtual scene model corresponding to the real scene, the look of the film can be seen directly at each stage of production, saving production time.
Disclosure of Invention
The embodiment of the invention provides a system and a method for generating a special film Pre-vis based on three-dimensional scanning, which reproduce the scene to be filmed from a three-dimensional model and scanned textures generated from the real location; a camera preview is then performed to complete the early visual preview of the film, thereby shortening the production cycle.
The system for generating the special film Pre-vis based on three-dimensional scanning comprises: an acquisition module, a three-dimensional model generation module and an output module;
the acquisition module is used for acquiring image data of a scene to be generated, processing the image data and transmitting the processed image data to the three-dimensional model generation module;
the three-dimensional model generation module is used for receiving the image data transmitted by the acquisition module and processing the image data to obtain three-dimensional model data and map data;
and the output module is used for receiving the three-dimensional model data and the map data transmitted by the three-dimensional model generation module, and processing the three-dimensional model data and the map data to generate a preview image file.
Further, the acquisition module comprises an acquisition unit and a segmentation unit; the acquisition unit is used for acquiring image data of the scene to be generated and transmitting the acquired image data to the segmentation unit, and the segmentation unit is used for receiving the image data acquired by the acquisition unit, processing it to obtain single-frame picture data, and transmitting the single-frame picture data to the three-dimensional model generation module.
Further, the image data of the scene to be generated acquired by the acquisition unit includes video data and picture data.
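The patent does not name a tool for the video-to-frame segmentation. As one possible realization (a minimal sketch assuming OpenCV; the one-in-ten sampling stride and file paths are illustrative, not prescribed by the patent), the segmentation unit could be implemented like this:

```python
# Minimal sketch of the segmentation unit: split captured video into
# single-frame JPG pictures for photogrammetric reconstruction.
# The sampling stride and paths are illustrative assumptions.
import os
import cv2

def segment_video(video_path: str, out_dir: str, every_n_frames: int = 10) -> int:
    """Extract every n-th frame of the drone video as a JPG picture."""
    os.makedirs(out_dir, exist_ok=True)
    capture = cv2.VideoCapture(video_path)
    saved = index = 0
    while True:
        ok, frame = capture.read()
        if not ok:  # end of stream
            break
        if index % every_n_frames == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:05d}.jpg"), frame)
            saved += 1
        index += 1
    capture.release()
    return saved
```

A fixed stride keeps neighbouring frames overlapping enough for photo alignment while avoiding hundreds of near-duplicate pictures.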
Further, the three-dimensional model generation module comprises a coarse point cloud generation unit, a dense point cloud generation unit, a three-dimensional model generation unit, a quadrilateral face model generation unit and a map baking unit. The coarse point cloud generation unit is used for receiving the single-frame picture data transmitted by the segmentation unit, processing the picture data to generate coarse point cloud data, and transmitting the generated coarse point cloud data to the dense point cloud generation unit. The dense point cloud generation unit is used for receiving the coarse point cloud data, processing it to obtain dense point cloud data, and transmitting the dense point cloud data to the three-dimensional model generation unit. The three-dimensional model generation unit is used for receiving the dense point cloud data, processing it to obtain first three-dimensional model data, and transmitting the first three-dimensional model data to the quadrilateral face model generation unit. The quadrilateral face model generation unit is used for receiving the first three-dimensional model data, processing it to obtain quadrilateral face model data, and transmitting the quadrilateral face model data to the map baking unit. The map baking unit is used for receiving the quadrilateral face model data, processing it to obtain second three-dimensional model data of the scene to be generated together with baked map data, and transmitting the second three-dimensional model data and the baked map data to the output module.
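The patent specifies these stages (photo alignment, coarse and dense point clouds, mesh, quad-face model, baked maps) but not the software that performs them. In a commercial photogrammetry package such as Agisoft Metashape, whose Python API uses the same "photo alignment" vocabulary, the pipeline could be scripted roughly as below; the calls shown are from that API, vary by version, and are purely an illustration of the module, not part of the patent. Quad-face retopology is typically done in a separate DCC tool and is omitted here.

```python
# Hypothetical sketch of the three-dimensional model generation module,
# scripted with the Agisoft Metashape Python API (assumed tool; a Pro
# licence is required, and call names differ slightly between versions).
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(["frames/frame_00000.jpg", "frames/frame_00001.jpg"])  # etc.

chunk.matchPhotos()          # "photo alignment": match features across photos
chunk.alignCameras()         # -> camera poses + coarse (sparse) point cloud
chunk.buildDepthMaps()       # per-photo depth maps for densification
chunk.buildDenseCloud()      # dense point cloud (buildPointCloud() in v2.x)
chunk.buildModel()           # first three-dimensional model (triangle mesh)
chunk.buildUV()              # UV layout so the texture can be baked
chunk.buildTexture()         # baked texture map from the source photos
chunk.exportModel("asset.obj")   # model + map handed to the output module
doc.save("scan_project.psx")
```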
Further, the output module comprises a combination unit, a three-dimensional shot generation unit and an export unit. The combination unit is configured to receive the second three-dimensional model data and the baked map data transmitted by the map baking unit, process them to obtain three-dimensional scan asset data, and transmit the three-dimensional scan asset data to the three-dimensional shot generation unit. The three-dimensional shot generation unit is configured to receive the three-dimensional scan asset data, process it to obtain complete scene data of the scene to be generated, generate shot data, and transmit the shot data to the export unit. The export unit is configured to receive the shot data generated by the three-dimensional shot generation unit and process it to obtain a preview image file.
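The combination, shot-generation and export steps are likewise tool-agnostic in the patent. A hedged sketch in Blender's bpy API (file names, the scale factor and the simple dolly move are all assumptions) shows how the three units could map onto one script:

```python
# Hypothetical sketch of the output module in Blender's bpy API:
# combine a scanned asset, animate a preview camera, export the shot.
import bpy

# Combination unit: import the textured scan asset and normalise its scale.
bpy.ops.wm.obj_import(filepath="asset.obj")   # Blender 3.2+/4.x OBJ importer
asset = bpy.context.selected_objects[0]
asset.scale = (0.01, 0.01, 0.01)              # assumed unit conversion

# Three-dimensional shot generation unit: a simple dolly move past the scene.
cam_data = bpy.data.cameras.new("previs_cam")
cam = bpy.data.objects.new("previs_cam", cam_data)
bpy.context.scene.collection.objects.link(cam)
bpy.context.scene.camera = cam
for frame, x in ((1, -5.0), (120, 5.0)):      # start and end of the move
    cam.location = (x, -10.0, 2.0)
    cam.keyframe_insert(data_path="location", frame=frame)

# Export unit: render the animated preview to image files.
scene = bpy.context.scene
scene.frame_start, scene.frame_end = 1, 120
scene.render.filepath = "//previs_"           # frame number is appended
bpy.ops.render.render(animation=True)
```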
In a second aspect, an embodiment of the present invention provides a method for generating a special film Pre-vis based on three-dimensional scanning, including the following steps:
S1, data acquisition: the acquisition unit acquires image data of the scene to be generated and transmits the acquired image data to the segmentation unit; the segmentation unit receives the image data acquired by the acquisition unit, processes it to obtain single-frame picture data, and transmits the single-frame picture data to the coarse point cloud generation unit;
S2, three-dimensional model generation: the coarse point cloud generation unit receives the single-frame picture data transmitted by the segmentation unit, processes the picture data to generate coarse point cloud data, and transmits it to the dense point cloud generation unit; the dense point cloud generation unit processes the coarse point cloud data to obtain dense point cloud data and transmits it to the three-dimensional model generation unit; the three-dimensional model generation unit processes the dense point cloud data to obtain first three-dimensional model data and transmits it to the quadrilateral face model generation unit; the quadrilateral face model generation unit processes the first three-dimensional model data to obtain quadrilateral face model data and transmits it to the map baking unit; the map baking unit processes the quadrilateral face model data to obtain second three-dimensional model data of the scene to be generated and baked map data, and transmits both to the combination unit;
and S3, preview image file output: the combination unit receives the second three-dimensional model data and the baked map data transmitted by the map baking unit, processes them to obtain three-dimensional scan asset data, and transmits the asset data to the three-dimensional shot generation unit; the three-dimensional shot generation unit processes the three-dimensional scan asset data to obtain complete scene data of the scene to be generated and generates shot data, which it transmits to the export unit; the export unit processes the shot data to obtain the preview image file.
The technical solution provided by the embodiments of the invention has at least the following beneficial effects:
(1) The scene generated from on-site image data of the scene to be generated is more accurate, and the cycle and difficulty of traditional production are reduced; producers do not need to rebuild complex scenes by hand, saving a large amount of production cost.
(2) An accurate reference is provided for subsequent post-production of the project; rework that problems of scale, fidelity and the like would otherwise cause can be resolved simply by producing the three-dimensional scan asset data again from the scanned model, saving project production cost.
(3) The film's director and executive director can enter the shot-production stage of the project in advance; because what is previewed is what will be obtained, the camera path or track does not need to be adjusted later according to actual production, greatly increasing production efficiency.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a schematic structural diagram of a system for generating a special film Pre-vis based on three-dimensional scanning according to an embodiment of the present invention;
fig. 2 is a flowchart of a method for generating a special film Pre-vis based on three-dimensional scanning according to an embodiment of the present invention.
Reference numerals:
1-acquisition module; 101-acquisition unit; 102-segmentation unit; 2-three-dimensional model generation module; 201-coarse point cloud generation unit; 202-dense point cloud generation unit; 203-three-dimensional model generation unit; 204-quadrilateral face model generation unit; 205-map baking unit; 3-output module; 301-combination unit; 302-three-dimensional shot generation unit; 303-export unit.
DETAILED DESCRIPTION OF EMBODIMENT(S) OF THE INVENTION
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Example one
As shown in fig. 1, an embodiment of the present invention provides a system for generating a special film Pre-vis based on three-dimensional scanning, comprising: an acquisition module 1, a three-dimensional model generation module 2 and an output module 3;
The acquisition module 1 is used for acquiring image data of the scene to be generated, processing it and transmitting it to the three-dimensional model generation module 2. The acquisition module 1 comprises an acquisition unit 101 and a segmentation unit 102. The acquisition unit 101 acquires the image data of the scene to be generated, which includes video data and picture data, and transmits the acquired image data to the segmentation unit 102; the segmentation unit 102 receives that image data, processes it to obtain single-frame picture data, and transmits the single-frame picture data to the three-dimensional model generation module 2;
Specifically, the acquisition unit 101, which includes an unmanned aerial vehicle, acquires the image data of the scene to be generated. The acquired image data is divided into video data and picture data: directly acquired picture data needs no processing, while video data must be imported into the segmentation unit 102 and segmented to obtain single-frame picture data.
The three-dimensional model generation module 2 is configured to receive the image data transmitted by the acquisition module 1 and process it to obtain three-dimensional model data and map data. The module comprises a coarse point cloud generation unit 201, a dense point cloud generation unit 202, a three-dimensional model generation unit 203, a quadrilateral face model generation unit 204 and a map baking unit 205. The coarse point cloud generation unit 201 receives the single-frame picture data transmitted by the segmentation unit 102, processes it to generate coarse point cloud data, and transmits the coarse point cloud data to the dense point cloud generation unit 202. The dense point cloud generation unit 202 processes the coarse point cloud data to obtain dense point cloud data and transmits it to the three-dimensional model generation unit 203. The three-dimensional model generation unit 203 processes the dense point cloud data to obtain first three-dimensional model data and transmits it to the quadrilateral face model generation unit 204. The quadrilateral face model generation unit 204 processes the first three-dimensional model data to obtain quadrilateral face model data and transmits it to the map baking unit 205. The map baking unit 205 processes the quadrilateral face model data to obtain second three-dimensional model data of the scene to be generated and baked map data, and transmits both to the output module 3;
Specifically, after the picture data acquired by the acquisition module 1 is imported into the three-dimensional model generation module 2, the coarse point cloud generation unit 201 generates coarse point cloud data of the target model through a photo-alignment function and inputs it into the dense point cloud generation unit 202. The dense point cloud generation unit 202 tidies the coarse point cloud data, eliminating the unnecessary parts and retaining the complete coarse point cloud from which the dense point cloud is to be generated; after tidying, it generates dense point cloud data from the tidied coarse point cloud and inputs the dense point cloud data into the three-dimensional model generation unit 203. The three-dimensional model generation unit 203 likewise tidies the dense point cloud data, eliminating useless parts and retaining the dense point cloud from which a model can be generated, and generates first three-dimensional model data from it. The first three-dimensional model data is input into the quadrilateral face model generation unit 204, which processes it to obtain quadrilateral face model data; the quadrilateral face model data is input into the map baking unit 205, which processes it to obtain second three-dimensional model data, bakes the picture data acquired by the acquisition module 1 onto the second three-dimensional model as texture-map information, and outputs the second three-dimensional model data together with the baked map data.
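The "tidying" described above, discarding unnecessary points before densification and meshing, amounts to outlier removal followed by surface reconstruction. A sketch of that cleanup using Open3D follows; the library choice and every parameter value are assumptions, since the patent prescribes neither:

```python
# Illustrative point-cloud cleanup corresponding to the "tidy and
# eliminate unnecessary parts" steps, using Open3D (assumed library;
# parameter values are illustrative, not taken from the patent).
import open3d as o3d

pcd = o3d.io.read_point_cloud("coarse_cloud.ply")

# Discard stray points far from their neighbours before further processing.
pcd_clean, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Reconstruct a triangle mesh (the "first three-dimensional model")
# from the cleaned cloud via Poisson surface reconstruction.
pcd_clean.estimate_normals()
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd_clean, depth=9)
o3d.io.write_triangle_mesh("first_model.ply", mesh)
```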
The output module 3 is configured to receive the three-dimensional model data and map data transmitted by the three-dimensional model generation module 2 and process them to generate a preview image file. The output module 3 comprises a combination unit 301, a three-dimensional shot generation unit 302 and an export unit 303. The combination unit 301 receives the second three-dimensional model data and the baked map data transmitted by the map baking unit 205, processes them to obtain three-dimensional scan asset data, and transmits the asset data to the three-dimensional shot generation unit 302. The three-dimensional shot generation unit 302 processes the three-dimensional scan asset data to obtain complete scene data of the scene to be generated and generates shot data, which it transmits to the export unit 303. The export unit 303 receives the shot data generated by the three-dimensional shot generation unit 302 and processes it to obtain a preview image file;
Specifically, the combination unit 301 scales the second three-dimensional model data and baked map data produced by the three-dimensional model generation module 2 to a consistent scale, then applies the baked texture map to the imported second three-dimensional model data to form three-dimensional scan asset data. These steps are repeated until all of the scanned second three-dimensional model data and baked map data belonging to one scene have been input into the combination unit 301 and arranged and scaled to form complete three-dimensional scan asset scene data. The complete scene data is imported into the three-dimensional shot generation unit 302, which processes it to obtain shot data and transmits the shot data to the export unit 303; the export unit 303 receives the shot data generated by the three-dimensional shot generation unit 302 and processes it to obtain a preview image file.
The invention reproduces the scene to be filmed from a three-dimensional model and scanned textures generated from the real location, then completes the film's early visual preview with a camera preview, shortening the production cycle. Generating the scene from on-site image data makes it more accurate and reduces the cycle and difficulty of traditional production; producers do not need to rebuild complex scenes by hand, saving a large amount of production cost. It provides an accurate reference for subsequent post-production of the project: rework that problems of scale, fidelity and the like would otherwise cause can be resolved simply by producing the three-dimensional scan asset data again from the scanned model, saving project cost. The film's director and executive director can enter the shot-production stage of the project in advance; what is previewed is what will be obtained, so the camera path or track does not need to be adjusted later according to actual production, greatly increasing production efficiency.
Example two
The embodiment of the invention also discloses a method for generating the special film Pre-vis based on three-dimensional scanning, which, as shown in fig. 2, comprises the following steps:
S1, data acquisition: the acquisition unit 101 acquires image data of the scene to be generated and transmits the acquired image data to the segmentation unit 102; the segmentation unit 102 receives the image data, processes it to obtain single-frame picture data, and transmits the single-frame picture data to the coarse point cloud generation unit 201;
Specifically, the acquisition unit 101 includes an unmanned aerial vehicle, and the acquired image data falls into two types. The first type is photos shot on site by the drone: taking the object whose three-dimensional model is to be generated as the target point, photos are shot in a circle around the target from three tiers of viewpoints, looking down, level, and looking up, finally yielding some 500 JPG photos of the object; picture data acquired this way needs no further processing. The second type is video shot on site by the same drone following the same pattern as the photo shooting but recorded as footage; the several video files obtained must be imported into the segmentation unit 102 and segmented into single frames, each stored as a JPG picture, to obtain the single-frame picture data;
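The three-tier orbit (downward-looking, level and upward-looking viewpoints around the target) can be expressed as camera poses on three rings. The sketch below generates such drone waypoints; the radius, ring heights, pitch angles and shot counts are illustrative assumptions, not values taken from the patent:

```python
# Illustrative waypoint generator for the three-tier orbital capture
# (downward, level, upward viewpoints). All numeric values are assumptions.
import math

def orbit_waypoints(target=(0.0, 0.0, 10.0), radius=30.0, shots_per_ring=56):
    """Yield (x, y, z, pitch_deg) camera poses on three rings around target."""
    tx, ty, tz = target
    # Ring heights chosen so the camera looks down at, level with, and up at
    # the target; pitch is the assumed camera tilt for each ring.
    for dz, pitch in ((20.0, -35.0), (0.0, 0.0), (-8.0, 25.0)):
        for i in range(shots_per_ring):
            a = 2.0 * math.pi * i / shots_per_ring
            yield (tx + radius * math.cos(a),
                   ty + radius * math.sin(a),
                   tz + dz,
                   pitch)

poses = list(orbit_waypoints())
print(len(poses))  # 3 rings x 56 shots = 168; denser rings approach ~500 photos
```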
S2, three-dimensional model generation: the coarse point cloud generation unit 201 receives the single-frame picture data transmitted by the segmentation unit 102, processes the picture data to generate coarse point cloud data, and transmits it to the dense point cloud generation unit 202; the dense point cloud generation unit 202 processes the coarse point cloud data to obtain dense point cloud data and transmits it to the three-dimensional model generation unit 203; the three-dimensional model generation unit 203 processes the dense point cloud data to obtain first three-dimensional model data and transmits it to the quadrilateral face model generation unit 204; the quadrilateral face model generation unit 204 processes the first three-dimensional model data to obtain quadrilateral face model data and transmits it to the map baking unit 205; the map baking unit 205 processes the quadrilateral face model data to obtain second three-dimensional model data of the scene to be generated and baked map data, and transmits both to the combination unit 301;
Specifically, the three-dimensional model generation module 2 is configured to convert planar information into three-dimensional information. The picture data acquired by the acquisition module 1 is imported into the module, where the coarse point cloud generation unit 201 generates coarse point cloud data of the target model through a photo-alignment function and inputs it into the dense point cloud generation unit 202. The dense point cloud generation unit 202 tidies the coarse point cloud data, deleting the unnecessary parts and retaining the complete coarse point cloud from which the dense point cloud is to be generated, then generates dense point cloud data and inputs it into the three-dimensional model generation unit 203. The three-dimensional model generation unit 203 tidies the dense point cloud data, deleting useless parts and retaining the dense point cloud from which a model can be generated, and generates first three-dimensional model data. The first three-dimensional model data is input into the quadrilateral face model generation unit 204, which processes it to obtain quadrilateral face model data; the map baking unit 205 then processes the quadrilateral face model data to obtain second three-dimensional model data, bakes the picture data acquired by the acquisition module 1 onto the second three-dimensional model as texture-map information, and outputs the second three-dimensional model data together with the baked map data.
S3, preview image file output: the combination unit 301 receives the second three-dimensional model data and the baked map data transmitted by the map baking unit 205, processes them to obtain three-dimensional scan asset data, and transmits the asset data to the three-dimensional shot generation unit 302; the three-dimensional shot generation unit 302 processes the three-dimensional scan asset data to obtain complete scene data of the scene to be generated and generates shot data, which it transmits to the export unit 303; the export unit 303 processes the shot data to obtain the preview image file.
Generating the scene from on-site image data of the scene to be generated makes it more accurate and reduces the cycle and difficulty of traditional production; producers do not need to rebuild complex scenes by hand, saving a large amount of production cost. An accurate reference is provided for subsequent post-production of the project: rework that problems of scale, fidelity and the like would otherwise cause can be resolved simply by producing the three-dimensional scan asset data again from the scanned model, saving project production cost. The film's director and executive director can enter the shot-production stage of the project in advance, and the camera path or track need not be adjusted later according to actual production, greatly improving production efficiency.
It should be understood that the specific order or hierarchy of steps in the processes disclosed is an example of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged without departing from the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not intended to be limited to the specific order or hierarchy presented.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, invention lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate preferred embodiment of the invention.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. Of course, the processor and the storage medium may reside as discrete components in a user terminal.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in memory units and executed by processors. The memory unit may be implemented within the processor or external to the processor, in which case it can be communicatively coupled to the processor via various means as is known in the art.
What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, to the extent that the term "includes" is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification of the claims is intended to mean a "non-exclusive or".
Claims (6)
1. A system for generating a special film Pre-vis based on three-dimensional scanning, characterized by comprising: an acquisition module, a three-dimensional model generation module and an output module;
the acquisition module is used for acquiring image data of a scene to be generated, processing the image data and transmitting the processed image data to the three-dimensional model generation module;
the three-dimensional model generation module is used for receiving the image data transmitted by the acquisition module, processing the image data and then acquiring three-dimensional model data and mapping data;
and the output module is used for receiving the three-dimensional model data and the map data transmitted by the three-dimensional model generation module, and processing the three-dimensional model data and the map data to generate a preview image file.
2. The system for generating a special film Pre-vis based on three-dimensional scanning as claimed in claim 1, wherein the acquisition module comprises an acquisition unit and a segmentation unit; the acquisition unit is configured to acquire image data of the scene to be generated and transmit the acquired image data to the segmentation unit, and the segmentation unit is configured to receive the image data acquired by the acquisition unit, process it to obtain single-frame picture data, and transmit the single-frame picture data to the three-dimensional model generation module.
3. The system for generating a special film Pre-vis based on three-dimensional scanning as claimed in claim 2, wherein the image data of the scene to be generated captured by the acquisition unit includes video data and picture data.
4. The system for generating a special film Pre-vis based on three-dimensional scanning as claimed in claim 2, wherein the three-dimensional model generation module comprises a coarse point cloud generation unit, a dense point cloud generation unit, a three-dimensional model generation unit, a quadrilateral face model generation unit and a map baking unit; the coarse point cloud generation unit is configured to receive the single-frame picture data transmitted by the segmentation unit, process the picture data to generate coarse point cloud data, and transmit the generated coarse point cloud data to the dense point cloud generation unit; the dense point cloud generation unit is configured to receive the coarse point cloud data, process it to obtain dense point cloud data, and transmit the dense point cloud data to the three-dimensional model generation unit; the three-dimensional model generation unit is configured to receive the dense point cloud data, process it to obtain first three-dimensional model data, and transmit the first three-dimensional model data to the quadrilateral face model generation unit; the quadrilateral face model generation unit is configured to receive the first three-dimensional model data, process it to obtain quadrilateral face model data, and transmit the quadrilateral face model data to the map baking unit; and the map baking unit is configured to receive the quadrilateral face model data, process it to obtain second three-dimensional model data of the scene to be generated and baked map data, and transmit the second three-dimensional model data and the baked map data to the output module.
5. The system for generating a special film Pre-vis based on three-dimensional scanning as claimed in claim 1, wherein the output module comprises a combination unit, a three-dimensional shot generation unit and an export unit; the combination unit is configured to receive the second three-dimensional model data and the baked map data transmitted by the map baking unit, process them to obtain three-dimensional scan asset data, and transmit the three-dimensional scan asset data to the three-dimensional shot generation unit; the three-dimensional shot generation unit is configured to receive the three-dimensional scan asset data, process it to obtain complete scene data of the scene to be generated and generate shot data, and transmit the shot data to the export unit; and the export unit is configured to receive the shot data generated by the three-dimensional shot generation unit and process it to obtain a preview image file.
6. A method for generating a special film Pre-vis based on three-dimensional scanning, applied to the system for generating a special film Pre-vis based on three-dimensional scanning as claimed in claim 1, characterized by comprising the following steps:
S1, data acquisition: acquiring, by the acquisition unit, image data of the scene to be generated and transmitting the acquired image data to the segmentation unit; receiving, by the segmentation unit, the image data acquired by the acquisition unit, processing it to obtain single-frame picture data, and transmitting the single-frame picture data to the coarse point cloud generation unit;
S2, three-dimensional model generation: receiving, by the coarse point cloud generation unit, the single-frame picture data transmitted by the segmentation unit, processing the picture data to generate coarse point cloud data, and transmitting it to the dense point cloud generation unit; processing, by the dense point cloud generation unit, the coarse point cloud data to obtain dense point cloud data and transmitting it to the three-dimensional model generation unit; processing, by the three-dimensional model generation unit, the dense point cloud data to obtain first three-dimensional model data and transmitting it to the quadrilateral face model generation unit; processing, by the quadrilateral face model generation unit, the first three-dimensional model data to obtain quadrilateral face model data and transmitting it to the map baking unit; and processing, by the map baking unit, the quadrilateral face model data to obtain second three-dimensional model data of the scene to be generated and baked map data, and transmitting both to the combination unit;
and S3, preview image file output: receiving, by the combination unit, the second three-dimensional model data and the baked map data transmitted by the map baking unit, processing them to obtain three-dimensional scan asset data, and transmitting the asset data to the three-dimensional shot generation unit; processing, by the three-dimensional shot generation unit, the three-dimensional scan asset data to obtain complete scene data of the scene to be generated and generating shot data, and transmitting the shot data to the export unit; and receiving, by the export unit, the shot data and processing it to obtain the preview image file.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010722088.3A CN111986309B (en) | 2020-07-24 | 2020-07-24 | System and method for generating special film Pre-vis based on three-dimensional scanning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010722088.3A CN111986309B (en) | 2020-07-24 | 2020-07-24 | System and method for generating special film Pre-vis based on three-dimensional scanning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111986309A true CN111986309A (en) | 2020-11-24 |
CN111986309B CN111986309B (en) | 2023-11-28 |
Family
ID=73439358
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010722088.3A Active CN111986309B (en) | 2020-07-24 | 2020-07-24 | System and method for generating special film Pre-vis based on three-dimensional scanning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111986309B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112396677A (en) * | 2020-11-25 | 2021-02-23 | 武汉艺画开天文化传播有限公司 | Animation production method, electronic device, and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140267627A1 (en) * | 2013-03-15 | 2014-09-18 | State Farm Mutual Automobile Insurance Company | Methods and systems for capturing the condition of a physical structure |
CN106056666A (en) * | 2016-05-27 | 2016-10-26 | 美屋三六五(天津)科技有限公司 | Three-dimensional model processing method and three-dimensional model processing system |
CN106780680A (en) * | 2016-11-28 | 2017-05-31 | 幻想现实(北京)科技有限公司 | Three-dimensional animation generation method, terminal and system based on augmented reality |
CN107194671A (en) * | 2017-05-31 | 2017-09-22 | 首汇焦点(北京)科技有限公司 | A kind of movie and television play makes whole process aided management system |
CN109945845A (en) * | 2019-02-02 | 2019-06-28 | 南京林业大学 | A kind of mapping of private garden spatial digitalized and three-dimensional visualization method |
CN111241615A (en) * | 2019-12-31 | 2020-06-05 | 国网山西省电力公司晋中供电公司 | Highly realistic multi-source fusion three-dimensional modeling method for transformer substation |
- 2020-07-24: application CN202010722088.3A filed; later granted as CN111986309B (status: active)
Also Published As
Publication number | Publication date |
---|---|
CN111986309B (en) | 2023-11-28 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |