CN116843819B - Green screen infinite extension method based on Unreal Engine - Google Patents
Green screen infinite extension method based on Unreal Engine
Info
- Publication number
- CN116843819B CN116843819B CN202310834032.0A CN202310834032A CN116843819B CN 116843819 B CN116843819 B CN 116843819B CN 202310834032 A CN202310834032 A CN 202310834032A CN 116843819 B CN116843819 B CN 116843819B
- Authority
- CN
- China
- Prior art keywords
- green
- virtual
- cloth
- curtain
- green cloth
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
- G06T17/205—Re-meshing
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2021—Shape modification
Abstract
The invention discloses a method for infinitely extending a green screen based on Unreal Engine, comprising the following steps: a camera input source is connected to Unreal Engine, and green-screen matting is performed on the input source through the engine's compositing module; a cube mesh is generated procedurally, and a virtual green screen space is constructed according to the camera view angle; several individual mesh sections are generated to form a virtual green cloth area within the virtual green screen space; scene capture components are created to render the virtual green screen space and the virtual green cloth area separately, outputting the corresponding scene-capture render target textures; the output render target textures are overlaid with the input source picture, and the input source picture is cropped according to the virtual green cloth area, achieving the effect that the real green screen space extends in real time in the virtual scene according to the camera view angle.
Description
Technical Field
The invention relates to the fields of virtual production, virtual press conferences, virtual live streaming and the like, and in particular to a method for infinitely extending a green screen based on Unreal Engine.
Background
In virtual live-streaming applications such as e-commerce and product launches, live-streaming organizations and platforms vary widely. Many are constrained by the limited space of a real green screen: because the green cloth covers only a small area, the camera inevitably captures non-green regions outside the screen as it moves, exposing the set on camera. This also restricts the performers' range of motion and shortens the usable distance between the camera and the live subject, degrading the live effect. A method that prevents non-green regions outside the screen from being shot despite the space limitation, while keeping cost and budget under control, has therefore become an urgent need in the field of virtual live streaming.
Accordingly, those skilled in the art have sought to develop a method for infinitely extending a green screen based on Unreal Engine 5 (UE5), to solve at least the problem in the related art that, under the limitation of a real green screen space with a small green cloth range, non-green regions outside the screen are captured while the camera moves. The live effect is then no longer constrained by the distance and scene between the camera and the live subject, and cost and budget are greatly reduced because the physical green screen space need not be enlarged.
Disclosure of Invention
In view of the above drawbacks of the prior art, the present invention aims to solve the problem in the related art that, because the real green screen space is limited and the green cloth range is small, non-green regions outside the screen are captured during camera movement, exposing the set on camera.
To this end, the invention provides a method for infinitely extending a green screen based on Unreal Engine, comprising the following steps:
step S1: connecting a camera input source to Unreal Engine, and performing green-screen matting on the input source through the engine's compositing module;
step S2: procedurally generating a cube mesh, and constructing a virtual green screen space according to the camera view angle;
step S3: generating several individual mesh sections to form a virtual green cloth area within the virtual green screen space;
step S4: creating scene capture components, rendering the virtual green screen space and the virtual green cloth area separately, and outputting the corresponding scene-capture render target textures;
step S5: overlaying the output render target textures with the input source picture, and cropping the input source picture according to the virtual green cloth area, so that the green screen extends in real time according to the view angle.
Further, step S1 further includes:
connecting the green screen input source picture to Unreal Engine via SDI or an external capture card, performing green-screen matting on the input source through the engine's compositing module, assigning the engine material of the matted input source picture to a matting plane mesh, and placing it in the virtual scene.
Further, step S2 further includes:
constructing a cube mesh of an initial size through a procedural generation component, the cube mesh comprising a front wall, a floor, a left wall and a right wall; panning the camera left and right, and continuously adjusting the width, height and depth of the procedural cube mesh according to the green screen range shown in the input source picture, so that a virtual green screen space matching the size of the real green screen space is generated in the virtual scene; the virtual green screen space also supports transparency adjustment so that its spatial relationships can be shown clearly.
Further, step S3 further includes:
reusing all vertex, normal vector, UV coordinate and tangent information of the virtual green screen space, continuing to procedurally generate individual mesh sections, and constructing four sections: front green cloth, floor green cloth, left wall green cloth and right wall green cloth.
Further, the four sections are attached to the virtual green screen space as sub-objects, forming the virtual green cloth area corresponding to the real green cloth size; the camera is panned repeatedly to adjust the sizes of the front, floor, left wall and right wall green cloth according to the real green cloth range, the sections are updated in real time, and the four sections of the virtual green cloth area are filled with green matching the real green cloth.
Further, the method of adjusting the green cloth size by panning the camera is as follows:
pan the physical camera to the left while adjusting the left edge of the front green cloth to coincide with the left edge of the green cloth in the real green screen space; if the left edge of the front green cloth exceeds the left extent of the front wall, adjust the front wall first, then continue adjusting the left edge of the front green cloth;
pan the physical camera to the right while adjusting the right edge of the front green cloth to coincide with the right edge of the green cloth in the real green screen space; likewise, if the right edge of the front green cloth exceeds the right extent of the front wall, adjust the front wall first, then continue adjusting the right edge of the front green cloth;
similarly, determine the final spatial relationship among the floor, left wall and right wall green cloth according to their depths in the real green screen space.
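The "adjust the wall first, then the cloth edge" rule above can be sketched as a small helper. The function name and coordinate convention (x decreases to the left) are hypothetical, chosen only to illustrate the ordering of the two adjustments.

```python
def align_left_edge(wall_left, measured_cloth_left):
    # Sketch of the adjustment rule (hypothetical names; x decreases leftward).
    # If the measured real cloth edge lies outside the front wall's left
    # extent, extend the wall first, then align the green cloth edge to it.
    if measured_cloth_left < wall_left:
        wall_left = measured_cloth_left   # adjust the front wall first
    cloth_left = measured_cloth_left      # then align the green cloth edge
    return cloth_left, wall_left
```

With a wall at x = -150, a measured cloth edge at -200 forces the wall out to -200 before the cloth edge follows, while a measured edge at -120 moves only the cloth.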
Further, step S4 further includes:
after the virtual green screen space and the corresponding virtual green cloth area have been constructed in the virtual scene, creating two Unreal Engine scene capture components that render only the virtual green screen space and only the virtual green cloth area respectively, and outputting the corresponding scene-capture render target textures in real time.
Further, step S5 further includes:
overlaying the two render target textures output in real time with the matted input source picture, the layering being: the input source picture over the front green cloth area, and the virtual green cloth area over the virtual green screen space.
Further, step S5 further includes:
based on the virtual green cloth area constructed in the virtual scene, cropping the matted input source picture in real time according to the virtual green cloth area, i.e. keeping only the part of the input source picture inside the virtual green cloth area and cutting away the rest.
Further, the normal vectors, UV coordinates and tangents all correspond to vertices; triangle faces make up all surfaces of the model and are connected using the array indices of the vertices, and each triangle face can be wound either clockwise or counterclockwise.
In practical applications, the real green screen space is limited and the green cloth range is small, so the camera captures non-green regions outside the screen as it moves, exposing the set. By extending the green screen infinitely in the virtual scene, the non-green regions outside the range are cropped into the virtual scene in real time according to the view angle, so that the small green cloth picture and the virtual scene join seamlessly.
The conception, specific structure and technical effects of the present invention are further described below with reference to the accompanying drawings, so that its objects, features and effects can be fully understood.
Drawings
FIG. 1 is a flow chart of the Unreal Engine-based green screen extension method in accordance with a preferred embodiment of the present invention;
FIG. 2 is a diagram of the spatial relationships when building the virtual green screen space in accordance with a preferred embodiment of the present invention;
FIG. 3 is a schematic diagram of triangle faces in accordance with a preferred embodiment of the present invention;
FIG. 4 is a diagram of the relationships when building the virtual green cloth area in accordance with a preferred embodiment of the present invention;
FIG. 5 is a diagram showing an example of overlaying the virtual green screen space and the virtual green cloth area in accordance with a preferred embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention are described below with reference to the accompanying drawings, so that the technical content is clearer and easier to understand. The present invention may be embodied in many different forms, and its scope of protection is not limited to the embodiments described herein.
In the drawings, structurally identical elements are denoted by the same reference numerals, and components with similar structure or function are denoted by similar reference numerals. The size and thickness of each component shown in the drawings are arbitrary; the present invention does not limit them, and thicknesses are exaggerated in places for clarity of illustration.
As shown in fig. 1, the present application relates to a method for infinitely extending a green screen based on Unreal Engine, applicable to virtual production, virtual press conferences, virtual live streaming and the like, and the method includes: the green screen input source picture is fed into Unreal Engine 5 (UE5), and green-screen matting is performed on the input source through the UE5 compositing module (Composure); a procedural mesh (ProceduralMesh) generated by code creates a virtual space in the scene matching the size of the real green screen space, named the virtual green screen space (GreenRoomBox), which supports transparency adjustment so that spatial relationships are shown clearly; the physical camera view is panned, and the width, height and depth of the virtual green screen space (GreenRoomBox) are adjusted to the real green screen space range. Based on the generated virtual green screen space (GreenRoomBox), all vertex (Vertices), normal (Normals), UV coordinate and tangent (Tangents) information is reused, and individual procedural mesh sections (ProceduralMeshSection) are generated to construct four sections (front, floor, left wall and right wall green cloth), which are attached as sub-objects to the virtual green screen space (GreenRoomBox) as the virtual green cloth range matching the real green cloth size, named the virtual green cloth area (GreenArea). The camera is panned repeatedly to adjust the section sizes according to the real green cloth range, and the UE5 mesh sections (MeshSection) are updated in real time.
Then, two UE5 scene capture components (SceneCapture) are created to render only the virtual green screen space (GreenRoomBox) and only the virtual green cloth area (GreenArea) respectively, outputting a scene-capture render target (RenderTarget) for each; the render targets are overlaid with the input source picture, and the input source picture is cropped in real time by the virtual green cloth area (GreenArea), finally achieving the effect that the real green screen space extends infinitely in the virtual scene according to the camera view angle.
To make the technical problems to be solved, the technical solutions and the advantages clearer, a detailed description is given below with reference to the accompanying drawings and specific embodiments.
This embodiment provides a method for infinitely extending a green screen based on Unreal Engine; the detailed implementation process is as follows:
the method comprises the steps that a green screen input source picture is accessed to a Unreal engine through a BMD acquisition card interface (SDI) or an external acquisition card, green screen image matting is carried out on an input source through a UE5 synthesis module (Unrealkomplex), and Material assets (materials) of the input source picture UE5 subjected to image matting processing are endowed to a default position on a surface mesh plane (PlanMesh) of an image matting surface piece UE and are placed in a virtual scene;
A procedural cube mesh (ProceduralMeshBox) of an initial size (300 x 300 cm) is constructed through the UE5 procedural generation component; the initial position of the box is based on the position of the input source picture's PlaneMesh in the scene, and the box comprises four parts: a front wall, a floor, a left wall and a right wall.
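As a rough illustration of what such a box amounts to geometrically, the sketch below builds the four quads as plain vertex lists. It is a hypothetical stand-in for UE5's ProceduralMeshComponent, with an assumed coordinate frame (camera at the origin looking along +y, z up).

```python
def green_room_box(width, height, depth):
    # Hypothetical stand-in for the ProceduralMeshBox: the four quads
    # (front wall, floor, left wall, right wall) as (x, y, z) vertex lists.
    w = width / 2.0
    return {
        "front": [(-w, depth, 0), (w, depth, 0), (w, depth, height), (-w, depth, height)],
        "floor": [(-w, 0, 0), (w, 0, 0), (w, depth, 0), (-w, depth, 0)],
        "left":  [(-w, 0, 0), (-w, depth, 0), (-w, depth, height), (-w, 0, height)],
        "right": [(w, 0, 0), (w, depth, 0), (w, depth, height), (w, 0, height)],
    }

box = green_room_box(300, 300, 300)
```

Resizing the box while panning the camera, as described below, then reduces to recomputing these vertices with new width, height and depth values.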
The physical camera's view is panned to the left while the left region of the front wall is adjusted to coincide with the left wall edge in the real green screen space; the view is panned to the right while the right region of the front wall is adjusted to coincide with the right wall edge; the camera's pitch is then swept to adjust the floor depth and the corresponding regions of the left and right walls.
As shown in fig. 2, through these adjustments a virtual green screen space (GreenRoomBox) matching the size of the real green screen space is constructed in the virtual scene; at the same time, transparency adjustment of the virtual green screen space (GreenRoomBox) must be supported so that spatial relationships are shown clearly;
Based on the procedural cube mesh (ProceduralMeshBox), all vertex, normal vector, UV coordinate and tangent information of the virtual green screen space (GreenRoomBox) is reused, and individual procedural mesh sections (ProceduralMeshSection) are generated to construct four sections serving as the real green cloth ranges in the green screen space: a front green cloth area, a floor green cloth area, a left wall green cloth area and a right wall green cloth area;
The normal vectors, UV coordinates and tangents all correspond to vertices, and triangle faces (Triangles) make up all surfaces of the model. For example, each rectangular section plane has four vertices forming two triangle faces, which is enough for the section to display normally. Triangles are connected using the array indices of the vertices; each triangle face can be wound clockwise or counterclockwise, the difference being the orientation of the face.
As shown in fig. 3, in this embodiment, according to the section orientation, the triangle faces (Triangles) of the model are given by the index arrays (0, 2, 1) and (1, 2, 3); every three array indices connect one triangle, so the two triangles are 0-2-1 and 1-2-3.
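The indexing scheme above can be sketched in a few lines. The helper name is illustrative; the index arrays are the ones from the description, and reversing them demonstrates the clockwise/counterclockwise distinction.

```python
def quad_triangles(flip=False):
    # One rectangular section has four vertices (indices 0-3); every three
    # array indices connect one triangle, giving the two faces 0-2-1 and
    # 1-2-3 described above. Reversing the winding flips the face orientation.
    tris = [(0, 2, 1), (1, 2, 3)]
    return [t[::-1] for t in tris] if flip else tris
```

In UE5 these index arrays would be the Triangles argument of a mesh-section call, with the flipped winding producing a face visible from the opposite side.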
As shown in fig. 4, the four constructed sections are attached as sub-objects to the virtual green screen space (GreenRoomBox), forming the virtual green cloth area (GreenArea) matching the real green cloth size. The physical camera is panned to the left while the left edge of the front green cloth is adjusted to coincide with the left edge of the green cloth in the real green screen space; if the left edge of the front green cloth exceeds the left extent of the front wall, the front wall is adjusted first and then the left edge of the front green cloth;
the physical camera is panned to the right while the right edge of the front green cloth is adjusted to coincide with the right edge of the green cloth in the real green screen space; likewise, if the right edge of the front green cloth exceeds the right extent of the front wall, the front wall is adjusted first and then the right edge of the front green cloth;
similarly, the final spatial relationship among the floor, left wall and right wall green cloth is determined according to their depths in the real green screen space;
the sizes of the front, floor, left wall and right wall green cloth areas are adjusted, the UE5 mesh sections (MeshSection) are updated in real time, and the four sections of the virtual green cloth area (GreenArea) are filled with green matching the real green cloth. No green cloth area may exceed the range of the virtual green screen space (GreenRoomBox);
After the virtual green screen space (GreenRoomBox) and the corresponding virtual green cloth area (GreenArea) have been constructed in the virtual scene, a UE5 scene capture component (SceneCapture) is created for the virtual green screen space (GreenRoomBox) and another for the virtual green cloth area (GreenArea); each renders only its own target, and the corresponding scene-capture render target (RenderTarget) is output in real time;
As shown in fig. 5, the virtual green screen space (GreenRoomBox) render target (RenderTarget) and the virtual green cloth area (GreenArea) render target (RenderTarget), output in real time, are overlaid with the matted input source picture. The layering is: the input source picture over the front green cloth area, then the virtual green cloth area (GreenArea) over the virtual green screen space (GreenRoomBox); the input source picture lies within the virtual green cloth area (GreenArea), and the virtual green cloth area (GreenArea) lies within the virtual green screen space (GreenRoomBox);
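The back-to-front layering can be sketched as standard alpha-over compositing on a single pixel. This is an illustrative model of the overlay order, not the engine's compositor; the layer tuples and function name are assumptions.

```python
def composite(layers):
    # Layers are (color, alpha) pairs ordered back to front, mirroring the
    # layering above: GreenRoomBox, then GreenArea, then the matted input.
    color = layers[0][0]
    for c, a in layers[1:]:
        color = tuple(ci * a + co * (1 - a) for ci, co in zip(c, color))
    return color

# One pixel: blue room box, opaque green area, input keyed out (alpha 0),
# so the green area shows through where the performer is absent.
pixel = composite([((0, 0, 1), 1.0), ((0, 1, 0), 1.0), ((1, 0, 0), 0.0)])
```

Where the matted input is opaque (alpha 1), the performer would instead cover the green area, which is exactly the intended stacking.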
Based on the virtual green cloth area (GreenArea) constructed in the virtual scene, the matted input source picture is cropped in real time according to the virtual green cloth area (GreenArea), i.e. only the part of the input source picture inside the virtual green cloth area (GreenArea) is kept and the rest is cut away, finally achieving the effect that the real green screen space extends in real time in the virtual scene according to the camera view angle.
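The real-time cropping step amounts to masking the matted picture's alpha by the GreenArea region. The sketch below shows this on small alpha grids; the function name and grid representation are illustrative, not the engine's API.

```python
def crop_to_area(frame_alpha, area_mask):
    # Keep the matted input only where it falls inside the GreenArea mask;
    # everything outside the virtual green cloth area is cut (alpha set to 0).
    return [[a * m for a, m in zip(fa_row, m_row)]
            for fa_row, m_row in zip(frame_alpha, area_mask)]

# 2x2 example: top-right pixel is outside the area and gets cut.
cropped = crop_to_area([[1, 1], [1, 0]], [[1, 0], [1, 1]])
```

After this masking, nothing from outside the physical green cloth can leak into the composite, which is what lets the small real screen join the virtual scene seamlessly.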
The preferred embodiments of the present invention have been described in detail above. It should be understood that a person of ordinary skill in the art can make numerous modifications and variations according to the concept of the invention without creative effort. Therefore, all technical solutions that a person skilled in the art can obtain from the prior art by logical analysis, reasoning or limited experiments based on the inventive concept shall fall within the scope of protection defined by the claims.
Claims (10)
1. A method for infinitely extending a green screen based on Unreal Engine, characterized by comprising the following steps:
step S1: connecting a camera input source to Unreal Engine, and performing green-screen matting on the input source through the engine's compositing module;
step S2: procedurally generating a cube mesh, and constructing a virtual green screen space according to the camera view angle;
step S3: generating four mesh sections to form a virtual green cloth area within the virtual green screen space;
step S4: creating scene capture components, rendering the virtual green screen space and the virtual green cloth area separately, and outputting the corresponding scene-capture render target textures;
step S5: overlaying the output render target textures with the input source picture, and cropping the input source picture according to the virtual green cloth area, so that the green screen extends in real time according to the view angle.
2. The Unreal Engine-based green screen infinite extension method according to claim 1, wherein step S1 further comprises:
connecting the green screen input source picture to Unreal Engine via SDI or an external capture card, performing green-screen matting on the input source through the engine's compositing module, assigning the engine material of the matted input source picture to a matting plane mesh, and placing it in the virtual scene.
3. The Unreal Engine-based green screen infinite extension method according to claim 1, wherein step S2 further comprises:
constructing a cube mesh of an initial size through a procedural generation component, the cube mesh comprising a front wall, a floor, a left wall and a right wall; panning the camera left and right, and continuously adjusting the width, height and depth of the procedural cube mesh according to the green screen range shown in the input source picture, so that a virtual green screen space matching the size of the real green screen space is generated in the virtual scene, the virtual green screen space supporting transparency adjustment so that its spatial relationships can be shown clearly.
4. The Unreal Engine-based green screen infinite extension method according to claim 1, wherein step S3 further comprises:
reusing all vertex, normal vector, UV coordinate and tangent information of the virtual green screen space, continuing to procedurally generate individual mesh sections, and constructing four sections: front green cloth, floor green cloth, left wall green cloth and right wall green cloth.
5. The Unreal Engine-based green screen infinite extension method according to claim 4, wherein the four sections are attached to the virtual green screen space as sub-objects, forming the virtual green cloth area corresponding to a real green cloth size; the camera is panned repeatedly to adjust the sizes of the front, floor, left wall and right wall green cloth according to the real green cloth range, the sections are updated in real time, and the four sections of the virtual green cloth area are filled with green matching the real green cloth.
6. The Unreal Engine-based green screen infinite extension method according to claim 5, wherein the method of adjusting the green cloth size by panning the camera is:
panning the physical camera to the left while adjusting the left edge of the front green cloth to coincide with the left edge of the green cloth in the real green screen space, and, if the left edge of the front green cloth exceeds the left extent of the front wall, adjusting the front wall first and then continuing to adjust the left edge of the front green cloth;
panning the physical camera to the right while adjusting the right edge of the front green cloth to coincide with the right edge of the green cloth in the real green screen space, and likewise, if the right edge of the front green cloth exceeds the right extent of the front wall, adjusting the front wall first and then continuing to adjust the right edge of the front green cloth;
similarly, determining the final spatial relationship among the floor, left wall and right wall green cloth according to their depths in the real green screen space.
7. The Unreal Engine-based green screen infinite extension method according to claim 1, wherein step S4 further includes:
after the virtual green screen space and the corresponding virtual green cloth area have been constructed in the virtual scene, creating two Unreal Engine scene capture components that render only the virtual green screen space and only the virtual green cloth area respectively, and outputting the corresponding scene-capture render target textures in real time.
8. The Unreal Engine-based green screen infinite extension method according to claim 1, wherein step S5 further comprises:
overlaying the two render target textures output in real time with the matted input source picture, the layering being: the input source picture over the front green cloth area, and the virtual green cloth area over the virtual green screen space.
9. The Unreal Engine-based green screen infinite extension method according to claim 1, wherein step S5 further comprises:
based on the virtual green cloth area constructed in the virtual scene, cropping the matted input source picture in real time according to the virtual green cloth area, i.e. keeping only the part of the input source picture inside the virtual green cloth area and cutting away the rest.
10. The Unreal Engine-based green screen infinite extension method according to claim 4, wherein the normal vectors, UV coordinates and tangents all correspond to vertices; triangle faces make up all surfaces of the model and are connected using the array indices of the vertices, and each triangle face can be wound either clockwise or counterclockwise.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310834032.0A CN116843819B (en) | 2023-07-10 | 2023-07-10 | Green curtain infinite extension method based on illusion engine |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116843819A CN116843819A (en) | 2023-10-03 |
CN116843819B true CN116843819B (en) | 2024-02-02 |
Family
ID=88174035
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310834032.0A Active CN116843819B (en) | 2023-07-10 | 2023-07-10 | Green curtain infinite extension method based on illusion engine |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116843819B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015123775A1 (en) * | 2014-02-18 | 2015-08-27 | Sulon Technologies Inc. | Systems and methods for incorporating a real image stream in a virtual image stream |
KR101922968B1 (en) * | 2017-07-12 | 2018-11-28 | 주식회사 볼트홀 | Live streaming method for virtual reality contents and system thereof |
CN114003331A (en) * | 2021-11-10 | 2022-02-01 | 浙江博采传媒有限公司 | LED circular screen virtual reality synthesis method and device, storage medium and electronic equipment |
CN114663633A (en) * | 2022-03-24 | 2022-06-24 | 航天宏图信息技术股份有限公司 | AR virtual live broadcast method and system |
CN115035147A (en) * | 2022-06-29 | 2022-09-09 | 卡莱特云科技股份有限公司 | Matting method, device and system based on virtual shooting and image fusion method |
CN115866160A (en) * | 2022-11-10 | 2023-03-28 | 北京电影学院 | Low-cost movie virtualization production system and method |
CN115909496A (en) * | 2022-11-29 | 2023-04-04 | 深圳市龙华区龙为小学 | Two-dimensional hand posture estimation method and system based on multi-scale feature fusion network |
Non-Patent Citations (2)
Title |
---|
Production of Selected Special-Effects Shots in "Ultraviolet"; Xu Xin; TV Subtitling (Special Effects and Animation), No. 5; 36-38 *
Practical Research on an On-Site Inspection Workflow for Green-Screen Footage Based on Waveform-Instrument Reading; Mao Ying; Yang Zheng; Journal of Shenzhen Polytechnic, No. 3; 28-31 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||