CN111986309B - System and method for generating special-effects film Pre-vis based on three-dimensional scanning - Google Patents

System and method for generating special-effects film Pre-vis based on three-dimensional scanning

Info

Publication number
CN111986309B
CN111986309B
Authority
CN
China
Prior art keywords
data
unit
point cloud
dimensional
dimensional model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010722088.3A
Other languages
Chinese (zh)
Other versions
CN111986309A (en)
Inventor
周安斌
邹方超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Jindong Digital Creative Co ltd
Original Assignee
Shandong Jindong Digital Creative Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Jindong Digital Creative Co ltd filed Critical Shandong Jindong Digital Creative Co ltd
Priority to CN202010722088.3A priority Critical patent/CN111986309B/en
Publication of CN111986309A publication Critical patent/CN111986309A/en
Application granted granted Critical
Publication of CN111986309B publication Critical patent/CN111986309B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/08 Indexing scheme for image data processing or generation, in general, involving all processing steps from image acquisition to 3D model generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

A system and a method for generating special-effects film Pre-vis based on three-dimensional scanning relate to the technical field of generating models by photo scanning. An acquisition module collects real image data of the scene to be generated, a three-dimensional model generation module builds the three-dimensional model, and an output module exports it. Because the scene is reconstructed from real image data, it is more accurate; the production cycle and difficulty of traditional production are reduced, production personnel no longer need to painstakingly model complex scenes by hand, and a large amount of production cost is saved. The scanned model also provides an accurate reference for subsequent post-production of the project: the three-dimensional scanning asset data need only be produced again from the scanned model, avoiding the repeated rework that problems of scale, fidelity and the like might otherwise cause, and saving project cost. The leading director and executive director of a film can enter the shot-production stage of the project in advance with the scene already available, and need not readjust camera paths or tracks to match the actual production, greatly increasing production efficiency.

Description

System and method for generating special-effects film Pre-vis based on three-dimensional scanning
Technical Field
The application relates to the technical field of generating models by photo scanning, and in particular to a system and a method for generating special-effects film Pre-vis based on three-dimensional scanning.
Background
Today, with the rapid development of technology, more and more films use Pre-vis (visual preview) technology: by making a virtual scene model corresponding to the real scene, every stage of film production can directly see what the finished film will look like, saving production time.
Disclosure of Invention
The embodiments of the application provide a system and a method for generating special-effects film Pre-vis based on three-dimensional scanning, in which a scene required for film and television production is reproduced from a truly generated three-dimensional model and scanned textures, and the film is then previewed and its early-stage visualization completed, shortening the production cycle.
A system for generating special-effects film Pre-vis based on three-dimensional scanning comprises an acquisition module, a three-dimensional model generation module and an output module;
the acquisition module is used for acquiring image data of a scene to be generated, processing the image data and transmitting the processed image data to the three-dimensional model generation module;
the three-dimensional model generation module is used for receiving the image data transmitted by the acquisition module, and processing the image data to obtain three-dimensional model data and map data;
and the output module is used for receiving the three-dimensional model data and the map data transmitted by the three-dimensional model generation module, and processing the three-dimensional model data and the map data to generate a preview image file.
Further, the acquisition module comprises an acquisition unit and a segmentation unit, wherein the acquisition unit is used for acquiring image data of a scene to be generated and transmitting the acquired image data to the segmentation unit, and the segmentation unit is used for receiving the image data acquired by the acquisition unit, processing the image data to obtain single-frame image data and transmitting the single-frame image data to the three-dimensional model generation module.
Further, the acquisition unit acquires image data of a scene to be generated, including video data and picture data.
Further, the three-dimensional model generating module comprises a coarse point cloud generating unit, a dense point cloud generating unit, a three-dimensional model generating unit, a four-sided (quad-face) model generating unit and a map baking unit. The coarse point cloud generating unit receives the single-frame picture data transmitted by the segmentation unit, processes the picture data to generate coarse point cloud data and transmits it to the dense point cloud generating unit; the dense point cloud generating unit processes the coarse point cloud data to obtain dense point cloud data and transmits it to the three-dimensional model generating unit; the three-dimensional model generating unit processes the dense point cloud data to obtain first three-dimensional model data and transmits it to the four-sided model generating unit; the four-sided model generating unit processes the first three-dimensional model data to obtain four-sided model data and transmits it to the map baking unit; and the map baking unit processes the four-sided model data to obtain second three-dimensional model data of the scene to be generated together with baked map data, and transmits both to the output module.
Further, the output module comprises a combining unit, a three-dimensional lens generating unit and an export unit. The combining unit receives the second three-dimensional model data and the baked map data transmitted by the map baking unit, processes them to obtain three-dimensional scanning asset data and transmits the asset data to the three-dimensional lens generating unit; the three-dimensional lens generating unit processes the asset data to obtain complete scene data of the scene to be generated, generates lens (shot) data and transmits the lens data to the export unit; and the export unit receives the lens data generated by the three-dimensional lens generating unit and processes them to obtain a preview image file.
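The module dataflow set out above (acquisition module, then three-dimensional model generation module, then output module) can be sketched as a minimal Python skeleton. All names and data types below are illustrative stand-ins for the patent's units, and the processing bodies are stubs that only model the flow of data, not real photogrammetry:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    """Single-frame picture data produced by the segmentation unit."""
    pixels: List[int]

@dataclass
class ModelData:
    """Second three-dimensional model data plus its baked map data."""
    vertices: int
    map_baked: bool

def acquisition_module(raw_frames: List[List[int]]) -> List[Frame]:
    # Acquire image data of the scene and split it into single frames.
    return [Frame(pixels=f) for f in raw_frames]

def model_generation_module(frames: List[Frame]) -> ModelData:
    # Stub for: coarse cloud -> dense cloud -> model -> quad model -> baked map.
    return ModelData(vertices=len(frames) * 100, map_baked=True)

def output_module(model: ModelData) -> str:
    # Combine model and map data and export a preview image file (name only).
    return f"previs_{model.vertices}v.mov"

preview = output_module(model_generation_module(acquisition_module([[0], [1], [2]])))
```

Each function corresponds to one module in fig. 1; a real implementation would replace the stub bodies with photogrammetry and rendering steps.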
In a second aspect, an embodiment of the present application provides a method for generating special-effects film Pre-vis based on three-dimensional scanning, comprising the following steps:
s1, data acquisition, namely acquiring image data of a scene to be generated, transmitting the acquired image data to a segmentation unit, receiving the image data acquired by the acquisition unit by the segmentation unit, processing the image data to obtain single-frame image data, and transmitting the single-frame image data to a coarse point cloud generation unit;
s2, generating a three-dimensional model, wherein a coarse point cloud generating unit receives single-frame picture data transmitted by a dividing unit, processes the picture data to generate coarse point cloud data, transmits the generated coarse point cloud data to a dense point cloud generating unit, the dense point cloud generating unit receives the coarse point cloud data generated by the coarse point cloud generating unit and processes the coarse point cloud data to obtain dense point cloud data, the generated dense point cloud data is transmitted to a three-dimensional model generating unit, the three-dimensional model generating unit receives the dense point cloud data transmitted by the dense point cloud generating unit, processes the dense point cloud data to obtain first three-dimensional model data, the first three-dimensional model data is transmitted to a four-sided model generating unit, the four-sided model generating unit receives the first three-dimensional model data generated by the three-dimensional model generating unit, processes the three-dimensional model data to obtain four-sided model data, the four-sided model data is transmitted to a paste map baking unit, the paste map baking unit receives the four-sided model data generated by the four-sided model generating unit, processes the second three-dimensional model data of a scene to be generated, and the paste map data to be transmitted to the three-dimensional model data of the paste unit;
s3, outputting the preview image file, wherein the combining unit receives the second three-dimensional model data and the baked map data transmitted by the map baking unit, processes the second three-dimensional model data and the baked map data to obtain three-dimensional scanning asset data, transmits the three-dimensional scanning asset data to the three-dimensional lens generating unit, and the three-dimensional lens generating unit receives the three-dimensional scanning asset data transmitted by the combining unit, processes the three-dimensional scanning asset data to obtain complete scene data of a complete scene to be generated and generates lens data, transmits the lens data to the deriving unit, and the deriving unit receives the lens data generated by the three-dimensional lens generating unit and processes the lens data to obtain the preview image file.
The technical solution provided by the embodiments of the application has at least the following beneficial effects:
(1) A scene generated from real image data of the scene to be generated is more accurate; the production cycle and difficulty of traditional production are reduced, production personnel no longer need to painstakingly model complex scenes by hand, and a large amount of production cost is saved.
(2) An accurate reference is provided for subsequent post-production of the project: the three-dimensional scanning asset data need only be produced again from the scanned model, so the repeated rework that problems of scale, fidelity and the like might otherwise cause is avoided, saving project cost.
(3) The director and executive director of the film can enter the shot-production stage of the project in advance with the scene already available, and need not readjust camera paths or tracks to match the actual production, so production efficiency is greatly improved.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
The technical scheme of the application is further described in detail through the drawings and the embodiments.
Drawings
The accompanying drawings are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate the application and together with the embodiments of the application, serve to explain the application. In the drawings:
fig. 1 is a schematic diagram of the structure of a system for generating special-effects film Pre-vis based on three-dimensional scanning according to an embodiment of the present application;
fig. 2 is a flowchart of a method for generating special-effects film Pre-vis based on three-dimensional scanning according to an embodiment of the present application.
Reference numerals:
1-an acquisition module; 101-an acquisition unit; 102-a segmentation unit; 2-a three-dimensional model generation module; 201-a coarse point cloud generation unit; 202-a dense point cloud generation unit; 203-a three-dimensional model generation unit; 204-a four-sided model generation unit; 205-a map baking unit; 3-an output module; 301-a combining unit; 302-a three-dimensional lens generation unit; 303-an export unit.
Detailed Description of the Embodiments
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Example 1
As shown in fig. 1, an embodiment of the present application provides a system for generating special-effects film Pre-vis based on three-dimensional scanning, comprising an acquisition module 1, a three-dimensional model generation module 2 and an output module 3;
an acquisition module 1, configured to acquire image data of a scene to be generated, process the image data, and transmit the processed image data to a three-dimensional model generation module 2, where the acquisition module 1 includes an acquisition unit 101 and a segmentation unit 102, the acquisition unit 101 is configured to acquire the image data of the scene to be generated, and transmit the acquired image data to the segmentation unit 102, and the segmentation unit 102 is configured to receive the image data acquired by the acquisition unit 101, process the image data to obtain single-frame picture data, and transmit the single-frame picture data to the three-dimensional model generation module 2, where the acquisition unit 101 acquires the image data of the scene to be generated including video data and picture data;
specifically, the acquisition unit 101, which includes an unmanned aerial vehicle, acquires the image data of the scene to be generated. The acquired image data fall into two types, video data and picture data: directly acquired picture data require no further processing, while video data must be imported into the segmentation unit 102 and split to obtain single-frame picture data.
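The split of video data into single frames can be planned as below. This is a hedged sketch: the function name is hypothetical and it only computes which frame indices to keep and what to call the resulting JPG files; a real implementation would decode the video with a tool such as OpenCV or ffmpeg:

```python
def plan_frame_extraction(total_frames: int, target: int = 400):
    """Pick roughly `target` evenly spaced frames from a decoded video and
    assign each a JPG file name, matching the 300-500 pictures used later."""
    step = max(1, total_frames // target)            # sampling stride
    indices = list(range(0, total_frames, step))[:target]
    return [(i, f"frame_{n:04d}.jpg") for n, i in enumerate(indices)]

# e.g. a 5-minute clip at 30 fps decoded to 9000 frames
plan = plan_frame_extraction(total_frames=9000, target=400)
```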
A three-dimensional model generating module 2 is configured to receive the image data transmitted by the acquisition module 1 and process them to obtain three-dimensional model data and map data. The three-dimensional model generating module 2 comprises a coarse point cloud generating unit 201, a dense point cloud generating unit 202, a three-dimensional model generating unit 203, a four-sided model generating unit 204 and a map baking unit 205. The coarse point cloud generating unit 201 receives the single-frame picture data transmitted by the segmentation unit 102, processes the picture data to generate coarse point cloud data and transmits it to the dense point cloud generating unit 202; the dense point cloud generating unit 202 processes the coarse point cloud data to obtain dense point cloud data and transmits it to the three-dimensional model generating unit 203; the three-dimensional model generating unit 203 processes the dense point cloud data to obtain first three-dimensional model data and transmits it to the four-sided model generating unit 204; the four-sided model generating unit 204 processes the first three-dimensional model data to obtain four-sided model data and transmits it to the map baking unit 205; and the map baking unit 205 processes the four-sided model data to obtain second three-dimensional model data of the scene to be generated together with baked map data, and transmits both to the output module 3;
specifically, after the picture data acquired by the acquisition module 1 are imported into the three-dimensional model generation module 2, the coarse point cloud generation unit 201 generates coarse point cloud data of the target model through a photo-alignment function and inputs them into the dense point cloud generation unit 202. The dense point cloud generation unit 202 sorts the coarse point cloud data, removes the unnecessary portions and retains the complete coarse point cloud portion from which the dense point cloud is to be generated, then generates dense point cloud data from the sorted coarse point cloud and inputs them into the three-dimensional model generation unit 203. The three-dimensional model generation unit 203 likewise sorts the dense point cloud data, deletes the useless portions, retains the dense point cloud data from which the model can be generated, generates the first three-dimensional model data from the dense point cloud and inputs them into the four-sided model generation unit 204. The four-sided model generation unit 204 processes the first three-dimensional model data into four-sided model data, which the map baking unit 205 then processes into second three-dimensional model data; the map baking unit 205 bakes the texture information of the acquired pictures onto the second three-dimensional model, and the resulting second three-dimensional model data and baked map data are transmitted to the output module 3.
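The "sorting" of the point cloud described above, removing unnecessary portions before further processing, is typically an outlier-removal step. The pure-Python sketch below shows one common scheme (statistical distance-to-neighbours filtering); the neighbour count and threshold are illustrative assumptions, not values from the patent:

```python
import math

def tidy_point_cloud(points, k=3, std_ratio=1.0):
    """Drop points whose mean distance to their k nearest neighbours is
    more than `std_ratio` standard deviations above the cloud average."""
    mean_knn = []
    for p in points:
        ds = sorted(math.dist(p, q) for q in points if q is not p)
        mean_knn.append(sum(ds[:k]) / k)
    mu = sum(mean_knn) / len(mean_knn)
    sigma = math.sqrt(sum((d - mu) ** 2 for d in mean_knn) / len(mean_knn))
    return [p for p, d in zip(points, mean_knn) if d <= mu + std_ratio * sigma]

# Four clustered points and one stray reconstruction artefact:
cloud = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0), (50, 50, 50)]
cleaned = tidy_point_cloud(cloud)
```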
An output module 3, configured to receive the three-dimensional model data and the map data transmitted by the three-dimensional model generating module 2, process the three-dimensional model data and the map data to generate a preview image file, where the output module 3 includes a combining unit 301, a three-dimensional lens generating unit 302, and an export unit 303, where the combining unit 301 is configured to receive the second three-dimensional model data and the baked map data transmitted by the map baking unit 205, process the second three-dimensional model data and the baked map data to obtain three-dimensional scan asset data, transmit the three-dimensional scan asset data to the three-dimensional lens generating unit 302, and the three-dimensional lens generating unit 302 is configured to receive the three-dimensional scan asset data transmitted by the combining unit 301, process to obtain complete scene data of a complete scene to be generated and generate lens data, transmit the lens data to the export unit 303, and the export unit 303 is configured to receive the lens data generated by the three-dimensional lens generating unit 302, and process the lens data to obtain the preview image file;
specifically, the combining unit 301 scales the second three-dimensional model data generated by the three-dimensional model generating module 2, together with the baked map data, to a consistent proportion, then applies the baked texture map to the imported second three-dimensional model data to form three-dimensional scanning asset data. These steps are repeated until all scanned second three-dimensional model data and baked map data in one scene have been input into the combining unit 301, where they are sorted and scaled to form complete three-dimensional scanning asset scene data. The complete scene data are imported into the three-dimensional lens generating unit 302, which processes them to obtain lens data and transmits the lens data to the export unit 303; the export unit 303 receives the lens data generated by the three-dimensional lens generating unit 302 and processes them to obtain the preview image file.
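The combining step, scaling every scanned model to a common proportion and pairing it with its baked texture map, might look like the following sketch; the asset tuple layout and the scale factor are illustrative assumptions:

```python
def assemble_scene(assets, scale=0.01):
    """Bring all scanned models of one scene into common proportions and
    attach each baked map, yielding three-dimensional scanning asset data."""
    scene = []
    for name, vertices, map_file in assets:
        scaled = [(x * scale, y * scale, z * scale) for x, y, z in vertices]
        scene.append({"name": name, "vertices": scaled, "map": map_file})
    return scene

scene = assemble_scene([("rock", [(100, 0, 0), (0, 100, 0)], "rock_baked.jpg")])
```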
In this way, the scene required for film production is reproduced from the truly generated three-dimensional model and the scanned textures, and the early visual preview of the film is then completed with a matched preview camera, shortening the production cycle. Because the scene is generated from real image data it is more accurate; the cycle and difficulty of traditional production are reduced, production personnel no longer need to painstakingly model complex scenes by hand, and a large amount of production cost is saved. The scanned model provides an accurate reference for subsequent post-production, avoiding the repeated rework that problems of scale, fidelity and the like might otherwise cause, and saving project cost. The director and executive director of the film can enter the shot-production stage of the project a step in advance with the scene already available, without having to readjust camera paths or tracks to match the actual production, which greatly increases production efficiency.
Example two
The embodiment of the application also discloses a method for generating special-effects film Pre-vis based on three-dimensional scanning, as shown in fig. 2, comprising the following steps:
s1, data acquisition, namely acquiring image data of a scene to be generated, transmitting the acquired image data to a segmentation unit 102, receiving the image data acquired by the acquisition unit 101 by the segmentation unit 102, processing the image data to obtain single-frame image data, and transmitting the single-frame image data to a coarse point cloud generation unit 201;
specifically, the acquisition unit 101 includes an unmanned aerial vehicle, and the acquired image data fall into two types. In the first, the object from which the three-dimensional model is to be generated is taken as the target point, and the unmanned aerial vehicle flies around it on site taking photographs in three layers: a downward-looking pass, a level pass and an upward-looking pass. This finally yields 300-500 photographs, in JPG format, taken around the object, and these directly acquired picture data require no further processing. In the second, the unmanned aerial vehicle flies the same path on site but records video instead of still photographs; the resulting video data must be imported into the segmentation unit 102 and split into single-frame pictures, which are stored in JPG format, again finally yielding 300-500 pictures.
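The three-layer flight plan described above (a downward-looking ring, a level ring and an upward-looking ring around the target) can be sketched as follows; the radius, tilt angles and per-ring counts are illustrative assumptions chosen so the total lands in the stated 300-500 range:

```python
import math

def plan_orbit_capture(radius=30.0, photos_total=360):
    """Generate drone waypoints on three rings around a target at the
    origin, splitting photos_total shots evenly between the rings."""
    rings = [("look_down", 60.0), ("level", 0.0), ("look_up", -30.0)]
    per_ring = photos_total // len(rings)
    shots = []
    for label, pitch_deg in rings:
        for i in range(per_ring):
            azimuth = 2 * math.pi * i / per_ring
            shots.append({
                "ring": label,
                "x": radius * math.cos(azimuth),
                "y": radius * math.sin(azimuth),
                "pitch_deg": pitch_deg,  # camera tilt toward the target
            })
    return shots

shots = plan_orbit_capture()
```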
S2, three-dimensional model generation: the coarse point cloud generating unit 201 receives the single-frame picture data transmitted by the segmentation unit 102, processes the picture data to generate coarse point cloud data and transmits it to the dense point cloud generating unit 202; the dense point cloud generating unit 202 processes the coarse point cloud data to obtain dense point cloud data and transmits it to the three-dimensional model generating unit 203; the three-dimensional model generating unit 203 processes the dense point cloud data to obtain first three-dimensional model data and transmits it to the four-sided model generating unit 204; the four-sided model generating unit 204 processes the first three-dimensional model data to obtain four-sided model data and transmits it to the map baking unit 205; and the map baking unit 205 processes the four-sided model data to obtain second three-dimensional model data of the scene to be generated together with baked map data, and transmits both to the combining unit 301;
specifically, the three-dimensional model generating module 2 converts plane information into three-dimensional information. The picture data acquired by the acquisition module 1 are imported into the three-dimensional model generating module 2; the coarse point cloud generating unit 201 generates coarse point cloud data of the target model through a photo-alignment function and inputs them into the dense point cloud generating unit 202. The dense point cloud generating unit 202 sorts the coarse point cloud data, deletes the unnecessary portions and retains the complete coarse point cloud portion from which the dense point cloud is to be generated, then generates dense point cloud data from the sorted coarse point cloud and inputs them into the three-dimensional model generating unit 203. The three-dimensional model generating unit 203 sorts the dense point cloud data, deletes the useless portions, retains the dense point cloud data from which the model can be generated, generates the first three-dimensional model data from the dense point cloud and inputs them into the four-sided model generating unit 204. The four-sided model generating unit 204 processes the first three-dimensional model data to obtain four-sided model data and inputs them into the map baking unit 205; the map baking unit 205 processes the four-sided model data to obtain second three-dimensional model data, bakes the texture information of the pictures acquired by the acquisition module 1 onto the second three-dimensional model, and outputs the second three-dimensional model data together with the baked map data.
S3, preview image file output: the combining unit 301 receives the second three-dimensional model data and the baked map data transmitted by the map baking unit 205, processes them to obtain three-dimensional scanning asset data and transmits the asset data to the three-dimensional lens generating unit 302; the three-dimensional lens generating unit 302 processes the asset data to obtain complete scene data of the scene to be generated, generates lens data and transmits it to the export unit 303; and the export unit 303 receives the lens data generated by the three-dimensional lens generating unit 302 and processes them to obtain the preview image file.
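The export unit's final step, turning camera (lens) data into a reviewable file, is sketched below as linear interpolation of a camera between two waypoints serialised to JSON; the record layout and the choice of JSON are assumptions for illustration, not the patent's file format:

```python
import json

def export_shot(camera_path, frames=24):
    """Interpolate the camera linearly from the first to the last waypoint
    and serialise one record per frame as stand-in preview shot data."""
    (x0, y0, z0), (x1, y1, z1) = camera_path[0], camera_path[-1]
    records = []
    for f in range(frames):
        t = f / (frames - 1)
        records.append({"frame": f,
                        "cam": [x0 + t * (x1 - x0),
                                y0 + t * (y1 - y0),
                                z0 + t * (z1 - z0)]})
    return json.dumps(records)

shot_json = export_shot([(0.0, 0.0, 5.0), (10.0, 0.0, 5.0)], frames=5)
```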
In this way, the scene generated from real image data of the scene to be generated is more accurate; the production cycle and difficulty of traditional production are reduced, production personnel no longer need to painstakingly model complex scenes by hand, and a large amount of production cost is saved. An accurate reference is provided for subsequent post-production of the project: as long as the three-dimensional scanning asset data are produced again from the scanned model, the repeated rework that problems of scale, fidelity and the like might otherwise cause can be avoided, saving project cost. The director and executive director of the film can enter the shot-production stage of the project in advance with the scene already available, and need not adjust camera paths or tracks to match the actual production, so production efficiency is greatly improved.
It should be understood that the specific order or hierarchy of steps in the processes disclosed are examples of exemplary approaches. Based on design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged without departing from the scope of the present disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.
In the foregoing detailed description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the subject matter require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate preferred embodiment of this application.
Those of skill in the art would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. Alternatively, the processor and the storage medium may reside as discrete components in a user terminal.
For a software implementation, the techniques described in this disclosure may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. These software codes may be stored in memory units and executed by processors. The memory unit may be implemented within the processor or external to the processor, in which case it can be communicatively coupled to the processor via various means as is known in the art.
The foregoing description includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art may recognize that many further combinations and permutations of various embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the scope of the appended claims. Furthermore, as used in the specification or claims, the term "including" is intended to be inclusive in a manner similar to the term "comprising," as interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification or the claims is intended to mean a non-exclusive "or".

Claims (4)

1. A system for generating a special film Pre-vis based on three-dimensional scanning, comprising: an acquisition module, a three-dimensional model generation module and an output module;
the acquisition module is used for acquiring image data of a scene to be generated, processing the image data and transmitting the processed image data to the three-dimensional model generation module;
the three-dimensional model generation module is used for receiving the image data transmitted by the acquisition module, and processing the image data to obtain three-dimensional model data and mapping data;
the three-dimensional model generation module comprises a coarse point cloud generation unit, a dense point cloud generation unit, a three-dimensional model generation unit, a four-sided model generation unit and a map baking unit; the coarse point cloud generation unit is used for receiving the single-frame picture data transmitted by the segmentation unit, processing the picture data to generate coarse point cloud data, and transmitting the generated coarse point cloud data to the dense point cloud generation unit; the dense point cloud generation unit is used for receiving the coarse point cloud data generated by the coarse point cloud generation unit, processing the coarse point cloud data to obtain dense point cloud data, and transmitting the generated dense point cloud data to the three-dimensional model generation unit; the three-dimensional model generation unit is used for receiving the dense point cloud data, processing the dense point cloud data to obtain first three-dimensional model data, and transmitting the first three-dimensional model data to the four-sided model generation unit; the four-sided model generation unit is used for receiving the first three-dimensional model data, processing the first three-dimensional model data to obtain four-sided model data, and transmitting the four-sided model data to the map baking unit; the map baking unit is used for receiving the four-sided model data, and processing the four-sided model data to obtain second three-dimensional model data of the scene to be generated and baked map data;
the output module is used for receiving the three-dimensional model data and the map data transmitted by the three-dimensional model generation module, and processing the three-dimensional model data and the map data to generate a preview image file;
the output module comprises a combination unit, a three-dimensional lens generation unit and a derivation unit; the combination unit is used for receiving the second three-dimensional model data and the baked map data transmitted by the map baking unit, processing the second three-dimensional model data and the baked map data to obtain three-dimensional scanning asset data, and transmitting the three-dimensional scanning asset data to the three-dimensional lens generation unit; the three-dimensional lens generation unit is used for receiving the three-dimensional scanning asset data transmitted by the combination unit, processing the three-dimensional scanning asset data to obtain complete scene data of the scene to be generated and to generate lens data, and transmitting the lens data to the derivation unit; the derivation unit is used for receiving the lens data generated by the three-dimensional lens generation unit, and processing the lens data to obtain a preview image file.
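As a rough illustration only, the data flow of claim 1's three-dimensional model generation module (coarse point cloud → dense point cloud → first three-dimensional model → four-sided model → baked maps) can be sketched as a chain of small Python functions. Every function name, data layout, and numeric placeholder below is hypothetical; the patent does not specify an implementation, and a real pipeline would use photogrammetry software rather than these toy stand-ins.

```python
import numpy as np

np.random.seed(0)

def generate_coarse_point_cloud(frames):
    # Coarse point cloud unit (toy version): one sparse 3D point per
    # bright pixel, using the frame index as a stand-in depth coordinate.
    pts = []
    for depth, frame in enumerate(frames):
        xy = np.argwhere(frame > frame.mean())[:10]            # (row, col) pairs
        pts.append(np.hstack([xy, np.full((len(xy), 1), depth)]))
    return np.vstack(pts).astype(float)

def densify(coarse):
    # Dense point cloud unit: insert midpoints between consecutive points.
    mids = (coarse[:-1] + coarse[1:]) / 2.0
    return np.vstack([coarse, mids])

def build_mesh(dense):
    # Three-dimensional model unit: triangles over consecutive point triples.
    faces = [(i, i + 1, i + 2) for i in range(len(dense) - 2)]
    return {"vertices": dense, "faces": faces}

def retopologize(mesh):
    # Four-sided model unit: replace the triangles with quads over strips
    # of four consecutive vertices (a crude stand-in for retopology).
    n = len(mesh["vertices"])
    quads = [(i, i + 1, i + 3, i + 2) for i in range(0, n - 3, 2)]
    return {"vertices": mesh["vertices"], "faces": quads}

def bake_maps(quad_model):
    # Map baking unit: yields the "second" model data plus a baked texture
    # placeholder (a real baker would project detail onto the quad model).
    texture = np.zeros((16, 16, 3))
    return quad_model, texture

frames = [np.random.rand(8, 8) for _ in range(3)]     # stand-in single-frame data
coarse = generate_coarse_point_cloud(frames)
model, baked = bake_maps(retopologize(build_mesh(densify(coarse))))
print(coarse.shape[1], len(model["faces"]) > 0, baked.shape)
```

The point of the sketch is only the hand-off between units: each unit consumes exactly what the previous one emits, mirroring the transmit/receive pairs recited in the claim.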
2. The system for generating special film Pre-vis based on three-dimensional scanning of claim 1, wherein the acquisition module comprises an acquisition unit and a segmentation unit, the acquisition unit is used for acquiring image data of a scene to be generated and transmitting the acquired image data to the segmentation unit, the segmentation unit is used for receiving the image data acquired by the acquisition unit and processing the image data to obtain single-frame picture data, and the single-frame picture data is transmitted to the three-dimensional model generation module.
3. The system for generating a special film Pre-vis based on three-dimensional scanning as claimed in claim 2, wherein the image data of the scene to be generated acquired by the acquisition unit includes video data and picture data.
4. A method for generating a special film Pre-vis based on three-dimensional scanning, applied to the system for generating the special film Pre-vis based on three-dimensional scanning as claimed in claim 1, comprising the following steps:
s1, data acquisition, namely acquiring image data of a scene to be generated, transmitting the acquired image data to a segmentation unit, receiving the image data acquired by the acquisition unit by the segmentation unit, processing the image data to obtain single-frame image data, and transmitting the single-frame image data to a coarse point cloud generation unit;
s2, generating a three-dimensional model, wherein a coarse point cloud generating unit receives single-frame picture data transmitted by a dividing unit, processes the picture data to generate coarse point cloud data, transmits the generated coarse point cloud data to a dense point cloud generating unit, the dense point cloud generating unit receives the coarse point cloud data generated by the coarse point cloud generating unit and processes the coarse point cloud data to obtain dense point cloud data, the generated dense point cloud data is transmitted to a three-dimensional model generating unit, the three-dimensional model generating unit receives the dense point cloud data transmitted by the dense point cloud generating unit, processes the dense point cloud data to obtain first three-dimensional model data, the first three-dimensional model data is transmitted to a four-sided model generating unit, the four-sided model generating unit receives the first three-dimensional model data generated by the three-dimensional model generating unit, processes the three-dimensional model data to obtain four-sided model data, the four-sided model data is transmitted to a paste map baking unit, the paste map baking unit receives the four-sided model data generated by the four-sided model generating unit, processes the second three-dimensional model data of a scene to be generated, and the paste map data to be transmitted to the three-dimensional model data of the paste unit;
s3, outputting the preview image file, wherein the combining unit receives the second three-dimensional model data and the baked map data transmitted by the map baking unit, processes the second three-dimensional model data and the baked map data to obtain three-dimensional scanning asset data, transmits the three-dimensional scanning asset data to the three-dimensional lens generating unit, and the three-dimensional lens generating unit receives the three-dimensional scanning asset data transmitted by the combining unit, processes the three-dimensional scanning asset data to obtain complete scene data of a complete scene to be generated and generates lens data, transmits the lens data to the deriving unit, and the deriving unit receives the lens data generated by the three-dimensional lens generating unit and processes the lens data to obtain the preview image file.
CN202010722088.3A 2020-07-24 2020-07-24 System and method for generating special film Pre-vis based on three-dimensional scanning Active CN111986309B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010722088.3A CN111986309B (en) 2020-07-24 2020-07-24 System and method for generating special film Pre-vis based on three-dimensional scanning


Publications (2)

Publication Number Publication Date
CN111986309A CN111986309A (en) 2020-11-24
CN111986309B (en) 2023-11-28

Family

ID=73439358

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010722088.3A Active CN111986309B (en) 2020-07-24 2020-07-24 System and method for generating special film Pre-vis based on three-dimensional scanning

Country Status (1)

Country Link
CN (1) CN111986309B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112396677B (en) * 2020-11-25 2023-01-13 武汉艺画开天文化传播有限公司 Animation production method, electronic device, and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
CN106056666A (en) * 2016-05-27 2016-10-26 美屋三六五(天津)科技有限公司 Three-dimensional model processing method and three-dimensional model processing system
CN106780680A (en) * 2016-11-28 2017-05-31 幻想现实(北京)科技有限公司 Three-dimensional animation generation method, terminal and system based on augmented reality
CN107194671A (en) * 2017-05-31 2017-09-22 首汇焦点(北京)科技有限公司 A kind of movie and television play makes whole process aided management system
CN109945845A (en) * 2019-02-02 2019-06-28 南京林业大学 A kind of mapping of private garden spatial digitalized and three-dimensional visualization method
CN111241615A (en) * 2019-12-31 2020-06-05 国网山西省电力公司晋中供电公司 Highly realistic multi-source fusion three-dimensional modeling method for transformer substation

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8872818B2 (en) * 2013-03-15 2014-10-28 State Farm Mutual Automobile Insurance Company Methods and systems for capturing the condition of a physical structure


Also Published As

Publication number Publication date
CN111986309A (en) 2020-11-24

Similar Documents

Publication Publication Date Title
CN112311965B (en) Virtual shooting method, device, system and storage medium
CN101297545B (en) Imaging device and image processing device
JP5319415B2 (en) Image processing apparatus and image processing method
US20110234841A1 (en) Storage and Transmission of Pictures Including Multiple Frames
US11095871B2 (en) System that generates virtual viewpoint image, method and storage medium
CN110517209B (en) Data processing method, device, system and computer readable storage medium
CN111710049B (en) Method and device for determining ambient illumination in AR scene
CN110895822A (en) Method of operating a depth data processing system
JP2014071850A (en) Image processing apparatus, terminal device, image processing method, and program
WO2023155353A1 (en) Depth image acquisition method and apparatus, and depth system, terminal and storage medium
CN111353965B (en) Image restoration method, device, terminal and storage medium
CN111986309B (en) System and method for generating special film Pre-vis based on three-dimensional scanning
CN107707816A (en) A kind of image pickup method, device, terminal and storage medium
CN212519183U (en) Virtual shooting system for camera robot
WO2024001309A1 (en) Method and apparatus for generating and producing template for infrared thermal image analysis report
CN114898068B (en) Three-dimensional modeling method, device, equipment and storage medium
CN111489323A (en) Double-light-field image fusion method, device and equipment and readable storage medium
US20210400192A1 (en) Image processing apparatus, image processing method, and storage medium
CN114266694A (en) Image processing method, apparatus and computer storage medium
JP7378963B2 (en) Image processing device, image processing method, and computer program
JP2020150517A (en) Image processing apparatus, image processing method, computer program and storage medium
TWI823491B (en) Optimization method of a depth estimation model, device, electronic equipment and storage media
CN104584529A (en) Image processing device, image capture device, and program
JP7395259B2 (en) Image processing device, image processing method, computer program and storage medium
WO2020199141A1 (en) Overexposure recovery processing method, device and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant