CN116843819A - Green-screen infinite extension method based on Unreal Engine - Google Patents

Green-screen infinite extension method based on Unreal Engine

Info

Publication number
CN116843819A
CN116843819A (application CN202310834032.0A)
Authority
CN
China
Prior art keywords
green
virtual
cloth
curtain
green cloth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310834032.0A
Other languages
Chinese (zh)
Other versions
CN116843819B (en)
Inventor
洪煦
费凯敏
吴锐
Current Assignee
Shanghai Suihuan Intelligent Technology Co ltd
Original Assignee
Shanghai Suihuan Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Suihuan Intelligent Technology Co ltd
Priority to CN202310834032.0A
Publication of CN116843819A
Application granted
Publication of CN116843819B
Active legal status
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/04Texture mapping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T17/205Re-meshing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2021Shape modification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a green-screen infinite extension method based on Unreal Engine, comprising the following steps: connecting the camera input source to Unreal Engine, and keying the green screen out of the input source through the engine's compositing module; procedurally generating a cube mesh, and constructing a virtual green-screen space according to the camera view angle; generating several individual mesh sections to form a virtual green-cloth area within the virtual green-screen space; creating scene-capture components, rendering the virtual green-screen space and the virtual green-cloth area separately, and outputting the corresponding scene-capture render-target textures; and overlaying the output render-target textures with the input-source picture and cropping the input-source picture to the virtual green-cloth area, thereby achieving the effect that the physical green-screen space can be extended in real time in the virtual scene according to the camera view angle.

Description

Green-screen infinite extension method based on Unreal Engine
Technical Field
The application relates to the fields of virtual production, virtual launch events, virtual live streaming and the like, and in particular to a green-screen infinite extension method based on Unreal Engine.
Background
In virtual live-streaming applications such as e-commerce and product launches, streaming organizations and platforms vary greatly. Many are constrained by the limited green-screen space: the green cloth covers only a small range, so non-green areas outside the green screen are captured while the camera moves, exposing the set. This also reduces the performers' range of motion during the virtual broadcast, shortens the usable distance between the camera and the subject, limits the achievable shots, and degrades the broadcast. Solving the problem of non-green areas being captured because of space limits, while keeping cost and budget well under control, has become an urgent need in the virtual live-streaming field.
Therefore, those skilled in the art are dedicated to developing a green-screen infinite extension method based on Unreal Engine (UE5), to solve at least the problem in the related art that, under the limitation of the physical green-screen space, the green-screen range is small and non-green areas outside it are captured while the camera moves. The broadcast is then no longer constrained by the distance and framing between the camera and the subject, and cost and budget are saved considerably because the physical green-screen space need not be enlarged.
Disclosure of Invention
In view of the above drawbacks of the prior art, the present application aims to solve the set-exposure problem in the related art: because the physical green-screen space is limited and the green-cloth range is small, non-green areas outside the green screen are captured while the camera moves.
To this end, the application provides a green-screen infinite extension method based on Unreal Engine, comprising the following steps:
Step S1: connecting the camera input source to Unreal Engine, and keying the green screen out of the input source through the engine's compositing module;
Step S2: procedurally generating a cube mesh, and constructing a virtual green-screen space according to the camera view angle;
Step S3: generating several individual mesh sections to form a virtual green-cloth area within the virtual green-screen space;
Step S4: creating scene-capture components, rendering the virtual green-screen space and the virtual green-cloth area separately, and outputting the corresponding scene-capture render-target textures;
Step S5: overlaying the output render-target textures with the input-source picture, and cropping the input-source picture to the virtual green-cloth area, so that the green screen extends in real time according to the view angle.
Further, step S1 further includes:
the green-screen input-source picture is fed into Unreal Engine over SDI or an external capture card, the green screen is keyed out of the input source through the engine's compositing module, and the keyed input-source picture is assigned as an Unreal material to a keying plane mesh placed in the virtual scene.
Further, step S2 further includes:
a cube mesh of initial size is constructed with a procedural-mesh component, the cube mesh comprising a front wall, a floor, a left wall and a right wall; the camera is panned left and right, and the width, height and depth of the procedural cube mesh are continuously adjusted against the green-screen range shown in the input-source picture, so that a virtual green-screen space matching the size of the physical green-screen space is generated in the virtual scene; transparency adjustment is supported at the same time so that the spatial relationship of the virtual green-screen space is displayed clearly.
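The four-plane layout just described can be sketched outside the engine. The following plain-Python model is illustrative only, with assumed names and an assumed coordinate convention, not the engine's procedural-mesh API:

```python
# Sketch of the virtual green-screen space: four planes of an open box whose
# width/height/depth the operator tunes while panning the camera.
# Assumed convention: x = right, y = up, z = depth away from the front wall,
# with the front wall at z = 0 and the camera looking at it from z > 0.

def green_room_box(width, height, depth):
    """Return the four corner points of the front wall, floor and side walls."""
    w, h, d = width, height, depth
    return {
        "front": [(0, 0, 0), (w, 0, 0), (w, h, 0), (0, h, 0)],
        "floor": [(0, 0, 0), (w, 0, 0), (w, 0, d), (0, 0, d)],
        "left":  [(0, 0, 0), (0, h, 0), (0, h, d), (0, 0, d)],
        "right": [(w, 0, 0), (w, h, 0), (w, h, d), (w, 0, d)],
    }

# e.g. a 6 m wide, 3 m high, 4 m deep space (centimetres, like Unreal units)
box = green_room_box(width=600, height=300, depth=400)
```

Adjusting the width, height or depth regenerates all four planes at once, mirroring how the procedural mesh is rebuilt whenever the operator resizes the space.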
Further, step S3 further includes:
all vertex positions, normal vectors, UV coordinates and tangent information of the virtual green-screen space are reused, and individual mesh sections are generated procedurally to construct four sections, namely a front green cloth, a floor green cloth, a left-wall green cloth and a right-wall green cloth.
Further, the four sections are attached to the virtual green-screen space as child objects, forming the virtual green-cloth area matching the physical green-cloth size; the camera is panned over the physical green-cloth range to adjust the sizes of the front, floor, left-wall and right-wall green cloths, the sections are updated in real time, and the four sections of the virtual green-cloth area are filled green to match the physical green cloth.
Further, the method for adjusting the green-cloth sizes by panning the camera is:
the physical camera is panned to the left while the left edge of the front green cloth is adjusted to coincide with the left edge of the green cloth in the physical green-screen space; if the left edge of the front green cloth would exceed the left extent of the front wall, the front wall is enlarged first and the left edge of the front green cloth is then adjusted further;
the physical camera is panned to the right while the right edge of the front green cloth is adjusted to coincide with the right edge of the green cloth in the physical green-screen space; likewise, if the right edge of the front green cloth would exceed the right extent of the front wall, the front wall is enlarged first and the right edge of the front green cloth is then adjusted further;
similarly, the final spatial relationship among the floor, left-wall and right-wall green cloths is determined according to the depths of the left-wall, right-wall and floor green cloths in the physical green-screen space.
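The rule above (a cloth edge may never pass its wall edge, so the wall is enlarged first when necessary) can be stated compactly. A sketch for the left edge only; the right edge and the depth adjustments are symmetrical, and all names are assumptions:

```python
def adjust_left_edge(wall_left, cloth_target):
    """Move the front cloth's left edge to cloth_target (smaller = further
    left). If the target lies beyond the wall's left extent, the front wall
    is enlarged first, so the cloth never sticks out of the wall."""
    wall_left = min(wall_left, cloth_target)  # enlarge the wall if needed
    cloth_left = cloth_target
    return wall_left, cloth_left
```

With the wall's left edge at 0, requesting a cloth edge at -20 first pulls the wall out to -20; requesting one at 5 leaves the wall untouched.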
Further, step S4 further includes:
after the virtual green-screen space and the corresponding virtual green-cloth area have been constructed in the virtual scene, two scene-capture components based on Unreal Engine are created that render the virtual green-screen space and the virtual green-cloth area respectively, and their scene-capture render-target textures are output in real time.
Further, step S5 further includes:
the two scene-capture render-target textures output in real time are overlaid with the keyed input-source picture; the overlay relation is that the input-source picture is overlaid on the front green-cloth area, and the virtual green-cloth area is in turn overlaid on the virtual green-screen space.
Further, step S5 further includes:
based on the virtual green-cloth area constructed in the virtual scene, the keyed input-source picture is cropped in real time to the virtual green-cloth area, that is, only the part of the input-source picture inside the virtual green-cloth area is kept, and the rest is cropped away.
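The cropping step can be illustrated on the CPU with nested lists standing in for the render targets; the embodiment performs the equivalent operation inside the engine, and the names here are illustrative:

```python
def crop_to_area(frame, area_mask):
    """Keep only the pixels of the keyed input-source frame that the virtual
    green-cloth area covers; clear everything else to fully transparent.

    frame: rows of RGBA tuples; area_mask: rows of booleans, True where the
    virtual green-cloth area covers the pixel.
    """
    transparent = (0, 0, 0, 0)
    return [
        [px if inside else transparent for px, inside in zip(row, mask_row)]
        for row, mask_row in zip(frame, area_mask)
    ]
```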
Further, the normal vectors, UV coordinates and tangents all correspond to vertices; triangles form all the faces of the model, the triangles are connected by the array indices of the vertices, and each triangle can be wound clockwise or counter-clockwise.
The application solves the set-exposure problem in which, because the physical green-screen space is limited and the green-cloth range is small, non-green areas outside the green screen are captured while the camera pans. By extending the green screen infinitely in the virtual scene and cropping the picture into the virtual scene in real time according to the view angle, the narrow green-cloth picture and the virtual scene are joined seamlessly.
The conception, specific structure and technical effects of the present application are further described below with reference to the accompanying drawings, so that its objects, features and effects can be fully understood.
Drawings
FIG. 1 is a schematic flow diagram of green-screen extension based on Unreal Engine in accordance with a preferred embodiment of the present application;
FIG. 2 is a diagram of constructing the spatial relationship of the virtual green-screen space in accordance with a preferred embodiment of the present application;
FIG. 3 is a schematic diagram of triangle faces in accordance with a preferred embodiment of the present application;
FIG. 4 is a diagram of constructing the virtual green-cloth area in accordance with a preferred embodiment of the present application;
FIG. 5 is an example diagram of overlaying the virtual green-screen space and the virtual green-cloth area in accordance with a preferred embodiment of the present application.
Detailed Description
The following description of the preferred embodiments of the present application refers to the accompanying drawings, so that its technical content becomes clearer and easier to understand. The present application may be embodied in many different forms, and its scope of protection is not limited to the embodiments described herein.
In the drawings, structurally identical elements share the same reference numerals, and components with similar structure or function share similar reference numerals. The size and thickness of each component in the drawings are shown arbitrarily, and the present application does not limit them; for clarity of illustration, the thickness of components is exaggerated in places.
As shown in fig. 1, the application relates to a green-screen infinite extension method based on Unreal Engine, applicable to fields such as virtual production, virtual launch events and virtual live streaming. The method comprises the following steps: the green-screen input-source picture is fed into Unreal Engine (UE5), and the green screen is keyed out of the input source through the UE5 compositing module (Composure); through a procedurally generated mesh (ProceduralMesh), a virtual space matching the size of the physical green-screen space is generated in the virtual scene and named the virtual green-screen space (GreenRoomBox), with transparency adjustment supported so that the spatial relationship is displayed more clearly; the physical camera view is panned, and the width, height and depth of the GreenRoomBox are adjusted against the physical green-screen range. Based on the generated GreenRoomBox, all vertex information (Vertices), normal vectors (Normals), UV coordinates and tangents (Tangents) are reused, and individual sections (ProceduralMeshSection) are generated procedurally to construct four section areas, namely the front, floor, left-wall and right-wall green cloths; these are added as child objects to the GreenRoomBox, forming the virtual green-cloth range matching the physical green-cloth size within the green-screen space, and named the virtual green-cloth area (GreenArea). The camera is panned continuously over the physical green-cloth range to adjust the section sizes, and the UE5 mesh sections (MeshSection) are updated in real time.
Then, two UE5 scene-capture components (SceneCapture) are created that render only the virtual green-screen space (GreenRoomBox) and only the virtual green-cloth area (GreenArea) respectively, each outputting a scene-capture render-target texture (RenderTarget). The two render targets are overlaid with the input-source picture, and the input-source picture is cropped in real time by the GreenArea, finally achieving the effect that the physical green-screen space extends infinitely in the virtual scene according to the camera view angle.
To make the technical problems to be solved, the technical solutions and the advantages clearer, a detailed description is given below with reference to the accompanying drawings and specific embodiments.
This embodiment provides a green-screen infinite extension method based on Unreal Engine; its detailed implementation is as follows:
the method comprises the steps that a green screen input source picture is accessed to a Unreal engine through a BMD acquisition card interface (SDI) or an external acquisition card, green screen image matting is carried out on an input source through a UE5 synthesis module (Unrealkomplex), and Material assets (materials) of the input source picture UE5 subjected to image matting processing are endowed to a default position on a surface mesh plane (PlanMesh) of an image matting surface piece UE and are placed in a virtual scene;
a programmable generating unit (ProceduralMeshBox) of an initial size (300 x 300 cm) is constructed by a programmable generating UE5, and the initial position of the Box is based on the position of the input source picture PlaneMesh in the scene, and the Box comprises four parts of a front wall, a ground, a left wall and a right wall.
Swinging the visual angle of the physical camera leftwards, and simultaneously adjusting the left area of the front wall to coincide with the left wall edge in the actual green curtain space; swinging the visual angle of the physical camera rightward, and simultaneously adjusting the size of the right area of the front wall to coincide with the edge of the right wall in the actual green curtain space; and continuing to swing the pitching angle of the physical camera, and adjusting the size of the ground depth area and the area range corresponding to the left wall and the right wall.
As shown in fig. 2, through these adjustments a virtual green-screen space (GreenRoomBox) matching the size of the physical green-screen space is constructed in the virtual scene; transparency adjustment of the GreenRoomBox must also be supported so that the spatial relationship is displayed more clearly;
based on the procedural cube mesh (ProceduralMeshBox), all vertex information, normal vectors, UV coordinates and tangents of the virtual green-screen space (GreenRoomBox) are reused, and individual sections (ProceduralMeshSection) are generated procedurally to construct four sections as the physical green-cloth ranges within the green-screen space: a front green-cloth area, a floor green-cloth area, a left-wall green-cloth area and a right-wall green-cloth area;
the normal vector, the UV map coordinates and the tangential direction are all corresponding to the vertexes, trianges are all surfaces of the model, for example, each cross Section rectangular plane is provided with four vertexes, two triangular surfaces are connected by the four vertexes, a Section can be normally displayed, trianges are connected by using array subscript of the vertexes, each triangular surface can be connected clockwise and anticlockwise, and the difference is the orientation of the surfaces.
As shown in fig. 3, in this embodiment, according to the section orientation, the triangle faces (Triangles) of the model use the index arrays 0, 2, 1 and 0, 3, 1; connecting one triangle from every three array indices, the two triangles are (0, 2, 1) and (0, 3, 1).
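The relation between a section's four vertices, its two index triples and the winding direction can be checked with a short sketch. The index layout below, with the diagonal corners indexed 0 and 1, is an assumption chosen so that the triples (0, 2, 1) and (0, 3, 1) tile the rectangle; it is an illustration, not the engine's vertex order:

```python
def make_section(width, height):
    """Vertices and triangle index triples for one rectangular section.

    Assumed index layout (0 and 1 on the diagonal):
        0 --- 2
        |     |
        3 --- 1
    """
    vertices = [
        (0.0, 0.0),        # 0: top-left
        (width, height),   # 1: bottom-right
        (width, 0.0),      # 2: top-right
        (0.0, height),     # 3: bottom-left
    ]
    # The two triangles share the 0-1 diagonal; reversing a triple reverses
    # its winding and therefore which side of the face is visible.
    triangles = [(0, 2, 1), (0, 3, 1)]
    return vertices, triangles

def signed_area(vertices, tri):
    """Signed area of one triangle; the sign encodes its winding direction."""
    (ax, ay), (bx, by), (cx, cy) = (vertices[i] for i in tri)
    return 0.5 * ((bx - ax) * (cy - ay) - (cx - ax) * (by - ay))
```

The two triangles cover the whole rectangle; with this layout their signed areas have opposite signs, so in a real mesh one of the triples would be reversed to make both faces point the same way.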
As shown in fig. 4, the four constructed sections are attached as child objects to the virtual green-screen space (GreenRoomBox), forming the virtual green-cloth area (GreenArea) matching the physical green-cloth size. The physical camera is panned to the left while the left edge of the front green cloth is adjusted to coincide with the left edge of the green cloth in the physical green-screen space; if the left edge of the front green cloth would exceed the left extent of the front wall, the front wall is enlarged first and the left edge of the front green cloth is then adjusted further;
the physical camera is panned to the right while the right edge of the front green cloth is adjusted to coincide with the right edge of the green cloth in the physical green-screen space; likewise, if the right edge of the front green cloth would exceed the right extent of the front wall, the front wall is enlarged first and the right edge of the front green cloth is then adjusted further;
similarly, the final spatial relationship among the floor, left-wall and right-wall green cloths is determined according to the depths of the left-wall, right-wall and floor green cloths in the physical green-screen space;
the sizes of the front, floor, left-wall and right-wall green-cloth areas are adjusted, the UE5 mesh sections (MeshSection) are updated in real time, and the four sections of the virtual green-cloth area (GreenArea) are filled green to match the physical green cloth, wherein no green-cloth area may exceed the extent of the virtual green-screen space (GreenRoomBox);
after the virtual green-screen space (GreenRoomBox) and the corresponding virtual green-cloth area (GreenArea) have been constructed in the virtual scene, a UE5 scene-capture component (SceneCapture) is created for the GreenRoomBox and another for the GreenArea, each rendering only its corresponding object and outputting the corresponding scene-capture render-target texture (RenderTarget) in real time;
as shown in fig. 5, a virtual green curtain space (greenroomex) scene capture rendering texture map (RenderTarget) and a virtual green area (GreenArea) scene capture rendering texture map (RenderTarget) output in real time are performed with an input source picture subjected to matting processing. The superposition relation is that the input source picture is superposed with the front green cloth area, and then the virtual green cloth area (GreenArea) is superposed with the virtual green curtain space (GreenRoomBox); the input source picture is positioned in the virtual green cloth area (GreenArea) area, and the virtual green cloth area (GreenArea) is positioned in the virtual green curtain space (GreenRoomBox) area;
based on the virtual green-cloth area (GreenArea) constructed in the virtual scene, the keyed input-source picture is cropped in real time to the GreenArea, that is, only the part of the input-source picture inside the GreenArea is kept and the rest is cropped away, finally achieving the effect that the physical green-screen space can be extended in real time in the virtual scene according to the camera view angle.
The foregoing describes the preferred embodiments of the present application in detail. It should be understood that one of ordinary skill in the art can make numerous modifications and variations according to the concept of the application without creative effort. Therefore, all technical solutions that a person skilled in the art can obtain from the prior art through logical analysis, reasoning or limited experiments according to the concept of the application shall fall within the scope of protection defined by the claims.

Claims (10)

1. A green-screen infinite extension method based on Unreal Engine, characterized by comprising the following steps:
Step S1: connecting the camera input source to Unreal Engine, and keying the green screen out of the input source through the engine's compositing module;
Step S2: procedurally generating a cube mesh, and constructing a virtual green-screen space according to the camera view angle;
Step S3: generating several individual mesh sections to form a virtual green-cloth area within the virtual green-screen space;
Step S4: creating scene-capture components, rendering the virtual green-screen space and the virtual green-cloth area separately, and outputting the corresponding scene-capture render-target textures;
Step S5: overlaying the output render-target textures with the input-source picture, and cropping the input-source picture to the virtual green-cloth area, so that the green screen extends in real time according to the view angle.
2. The Unreal Engine-based green-screen infinite extension method according to claim 1, wherein step S1 further comprises:
feeding the green-screen input-source picture into Unreal Engine over SDI or an external capture card, keying the green screen out of the input source through the engine's compositing module, and assigning the keyed input-source picture as an Unreal material to a keying plane mesh placed in the virtual scene.
3. The Unreal Engine-based green-screen infinite extension method according to claim 1, wherein step S2 further comprises:
constructing a cube mesh of initial size with a procedural-mesh component, the cube mesh comprising a front wall, a floor, a left wall and a right wall; panning the camera left and right while continuously adjusting the width, height and depth of the procedural cube mesh against the green-screen range shown in the input-source picture, so that a virtual green-screen space matching the size of the physical green-screen space is generated in the virtual scene, transparency adjustment being supported so that the spatial relationship of the virtual green-screen space is displayed clearly.
4. The Unreal Engine-based green-screen infinite extension method according to claim 1, wherein step S3 further comprises:
reusing all vertex positions, normal vectors, UV coordinates and tangent information of the virtual green-screen space, and procedurally generating individual mesh sections to construct four sections, namely a front green cloth, a floor green cloth, a left-wall green cloth and a right-wall green cloth.
5. The Unreal Engine-based green-screen infinite extension method according to claim 4, wherein the four sections are attached to the virtual green-screen space as child objects, forming the virtual green-cloth area matching the physical green-cloth size; and the camera is panned over the physical green-cloth range to adjust the sizes of the front, floor, left-wall and right-wall green cloths, the sections are updated in real time, and the four sections of the virtual green-cloth area are filled green to match the physical green cloth.
6. The Unreal Engine-based green-screen infinite extension method according to claim 5, wherein the method of adjusting the green-cloth sizes by panning the camera is:
panning the physical camera to the left while adjusting the left edge of the front green cloth to coincide with the left edge of the green cloth in the physical green-screen space, and, if the left edge of the front green cloth would exceed the left extent of the front wall, first enlarging the front wall and then continuing to adjust the left edge of the front green cloth;
panning the physical camera to the right while adjusting the right edge of the front green cloth to coincide with the right edge of the green cloth in the physical green-screen space; likewise, if the right edge of the front green cloth would exceed the right extent of the front wall, first enlarging the front wall and then continuing to adjust the right edge of the front green cloth;
and similarly, determining the final spatial relationship among the floor, left-wall and right-wall green cloths according to the depths of the left-wall, right-wall and floor green cloths in the physical green-screen space.
7. The Unreal Engine-based green-screen infinite extension method according to claim 1, wherein step S4 further comprises:
after the virtual green-screen space and the corresponding virtual green-cloth area have been constructed in the virtual scene, creating two Unreal Engine-based scene-capture components that render the virtual green-screen space and the virtual green-cloth area respectively and output their scene-capture render-target textures in real time.
8. The Unreal Engine-based green-screen infinite extension method according to claim 1, wherein step S5 further comprises:
overlaying the two render-target textures output in real time with the keyed input-source picture, the overlay relation being that the input-source picture is overlaid on the front green-cloth area, and the virtual green-cloth area is in turn overlaid on the virtual green-screen space.
9. The Unreal Engine-based green-screen infinite extension method according to claim 1, wherein step S5 further comprises:
cropping the keyed input-source picture in real time to the virtual green-cloth area constructed in the virtual scene, that is, keeping only the part of the input-source picture that lies within the virtual green-cloth area and cropping away the rest.
10. The Unreal Engine-based green-screen infinite extension method according to claim 4, wherein the normal vectors, UV coordinates and tangents all correspond to vertices, triangles form all the faces of the model, the triangles are connected by the array indices of the vertices, and each triangle can be wound clockwise or counter-clockwise.
CN202310834032.0A 2023-07-10 2023-07-10 Green-screen infinite extension method based on Unreal Engine Active CN116843819B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310834032.0A CN116843819B (en) 2023-07-10 2023-07-10 Green curtain infinite extension method based on illusion engine

Publications (2)

Publication Number Publication Date
CN116843819A true CN116843819A (en) 2023-10-03
CN116843819B CN116843819B (en) 2024-02-02

Family

ID=88174035

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310834032.0A Active CN116843819B (en) 2023-07-10 2023-07-10 Green curtain infinite extension method based on illusion engine

Country Status (1)

Country Link
CN (1) CN116843819B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015123775A1 (en) * 2014-02-18 2015-08-27 Sulon Technologies Inc. Systems and methods for incorporating a real image stream in a virtual image stream
KR101922968B1 (en) * 2017-07-12 2018-11-28 주식회사 볼트홀 Live streaming method for virtual reality contents and system thereof
CN114003331A (en) * 2021-11-10 2022-02-01 浙江博采传媒有限公司 LED circular screen virtual reality synthesis method and device, storage medium and electronic equipment
CN114663633A (en) * 2022-03-24 2022-06-24 航天宏图信息技术股份有限公司 AR virtual live broadcast method and system
CN115035147A (en) * 2022-06-29 2022-09-09 卡莱特云科技股份有限公司 Matting method, device and system based on virtual shooting and image fusion method
CN115866160A (en) * 2022-11-10 2023-03-28 北京电影学院 Low-cost movie virtualization production system and method
CN115909496A (en) * 2022-11-29 2023-04-04 深圳市龙华区龙为小学 Two-dimensional hand posture estimation method and system based on multi-scale feature fusion network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XU XIN: "Production of Selected Special-Effects Shots in 'Ultraviolet'", TV Subtitles (Special Effects and Animation), no. 5, pages 36-38 *
MAO YING; YANG ZHENG: "Practical Research on an On-Site Inspection Working Mode for Green-Screen Footage Based on Oscilloscope Readings", Journal of Shenzhen Polytechnic, no. 03, pages 28-31 *

Also Published As

Publication number Publication date
CN116843819B (en) 2024-02-02

Similar Documents

Publication Publication Date Title
CN103426163B (en) System and method for rendering affected pixels
US9288476B2 (en) System and method for real-time depth modification of stereo images of a virtual reality environment
US5694533A (en) 3-Dimensional model composed against textured midground image and perspective enhancing hemispherically mapped backdrop image for visual realism
CN107993276B (en) Panoramic image generation method and device
KR20070086037A (en) Method for inter-scene transitions
Oliveira Image-based modeling and rendering techniques: A survey
CN102520970A (en) Dimensional user interface generating method and device
KR20200043458A (en) Methods for creating and modifying images of 3D scenes
WO2023108161A2 (en) Camera lens feed to drive defocus
CN106780759A (en) Method, device and the VR systems of scene stereoscopic full views figure are built based on picture
CN105184738A (en) Three-dimensional virtual display device and method
CN111857625B (en) Method for correcting special-shaped curved surface and fusing edges
CN104751506B (en) A kind of Cluster Rendering method and apparatus for realizing three-dimensional graphics images
US9454845B2 (en) Shadow contouring process for integrating 2D shadow characters into 3D scenes
CN103632390A (en) Method for realizing naked eye 3D (three dimensional) animation real-time making by using D3D (Direct three dimensional) technology
CN101668126B (en) Method for realizing unlimited blue-box function used in virtual studio systems
CN115103134A (en) LED virtual shooting cutting synthesis method
JP6682984B2 (en) Free-viewpoint video display device
CN116843819B (en) Green curtain infinite extension method based on illusion engine
JP2014164003A (en) Virtual indoor space display device
CN102438108B (en) Film processing method
KR101348929B1 (en) A multiview image generation method using control of layer-based depth image
CN105954969A (en) 3D engine applied to phantom imaging and implementation method thereof
JP3853477B2 (en) Simple display device for 3D terrain model with many objects arranged on its surface and its simple display method
Jacquemin et al. Shadow casting for soft and engaging immersion in augmented virtuality artworks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant