WO2023223759A1 - Information processing device, information processing method, and photographing system - Google Patents
Information processing device, information processing method, and photographing system
- Publication number: WO2023223759A1
- Application: PCT/JP2023/015648
- Authority: WO (WIPO PCT)
- Prior art keywords: video, camera, image, captured, display
Classifications
- H04N 21/2187 — Live feed (under H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]; H04N 21/20 Servers specifically adapted for the distribution of content; H04N 21/21 Server components or server architectures; H04N 21/218 Source of audio or video content)
- H04N 23/60 — Control of cameras or camera modules (under H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof)
- H04N 23/63 — Control of cameras or camera modules by using electronic viewfinders
- H04N 5/265 — Mixing (under H04N 5/00 Details of television systems; H04N 5/222 Studio circuitry, devices and equipment; H04N 5/262 Studio circuits, e.g. for mixing, switching-over, special effects)
Definitions
- the present technology relates to an information processing device, an information processing method, and a photographing system, and particularly relates to processing of monitor video during photographing.
- Patent Document 1 discloses a system technology for photographing a performer performing in front of a background image.
- the appearance of the background should differ depending on the position of the camera relative to the display and the shooting direction. If the background image is simply displayed, the background does not change even when the camera position or shooting direction changes, resulting in an unnatural image. Therefore, by changing the background image (at least the portion within the camera's angle of view on the display) according to the camera position and shooting direction so that it matches the actual appearance, it becomes possible to shoot images equivalent to those shot on location.
- An information processing device according to the present technology handles videos of an object captured by a camera in front of a display that displays, in a time-sharing manner, a specific video and corresponding videos, each corresponding video being generated corresponding to one of a plurality of cameras.
- For a first captured video that includes the object and the specific video, the information processing device performs processing to replace the specific video with the corresponding video generated corresponding to the camera that captured the first captured video.
- FIG. 1 is an explanatory diagram of a photographing system according to an embodiment of the present technology.
- FIG. 2 is an explanatory diagram of a background image according to the camera position in the photographing system of the embodiment.
- FIG. 3 is an explanatory diagram of a background image according to the camera position in the photographing system of the embodiment.
- An explanatory diagram of the video content production process according to the embodiment.
- A block diagram of the photographing system according to the embodiment.
- A flowchart of background image generation in the photographing system of the embodiment.
- A block diagram of a photographing system using multiple cameras according to the embodiment.
- A block diagram of an information processing device according to the embodiment.
- An explanatory diagram of overlapping inner frustums for each camera.
- A block diagram of a configuration example including a switcher.
- A flowchart of switching control of the switcher.
- An explanatory diagram of an example of processing that affects switching delay time.
- A block diagram of a configuration example of the first embodiment.
- An explanatory diagram of a photographed video and a black-background photographed video in the embodiment.
- A flowchart of background video rendering processing according to the first embodiment.
- An explanatory diagram of the configuration of a video processing section according to the first embodiment.
- A flowchart of processing by the video processing section according to the first embodiment.
- A block diagram of a configuration example of the second embodiment.
- An explanatory diagram of the configuration of a video processing section according to the second embodiment.
- A flowchart of processing by the video processing section of the second embodiment.
- A block diagram of a configuration example of the third embodiment.
- A block diagram of a configuration example of the fourth embodiment.
- A flowchart of processing by the video processing section of the fourth embodiment.
- A block diagram of a configuration example of the fifth embodiment.
- An explanatory diagram of the configuration of a video processing section according to the fifth embodiment.
- A flowchart of processing by the video processing section of the fifth embodiment.
- In the present disclosure, "video" or "image" includes both still images and moving images.
- "Video" does not refer only to what is displayed on a display; it may also refer comprehensively to video data that is not being displayed.
- For example, the background video before being displayed on the display, the video shot by the camera, and the background video and shot video switched by the switcher are video data rather than actually displayed video, but for convenience they are written as "background video", "photographed video", and so on.
- In the following, a plurality of cameras are distinguished by adding "a", "b", "c", etc. to the reference numerals, as in "camera 502a" and "camera 502b", but when referred to collectively they are written as "camera 502".
- Similarly, the circuit parts, functions, signals, etc. corresponding to each camera are distinguished by adding "a", "b", "c", etc. to their reference numerals or letters; when referred to collectively, the "a", "b", "c" are omitted.
- FIG. 1 schematically shows a photographing system 500.
- This photographing system 500 is a system for photographing as a virtual production, and the figure shows some of the equipment arranged in a photographing studio.
- a performance area 501 is provided where performers 510 perform acting and other performances.
- a large-sized display device is arranged at least on the back surface of the performance area 501, as well as on the left and right sides and the top surface.
- the device type of the display device is not limited, the figure shows an example in which an LED wall 505 is used as an example of a large display device.
- One LED wall 505 forms a large panel by connecting and arranging a plurality of LED panels 506 vertically and horizontally.
- the size of the LED wall 505 here is not particularly limited, but may be any size necessary or sufficient to display the background when photographing the performer 510.
- a necessary number of lights 580 are arranged at necessary positions above or to the sides of the performance area 501 to illuminate the performance area 501.
- a camera 502 for shooting movies and other video content is arranged near the performance area 501.
- the position of the camera 502 can be moved by a cameraman 512, and the shooting direction, angle of view, etc. can be controlled.
- the camera 502 may move or change the angle of view automatically or autonomously.
- the camera 502 may be mounted on a pan head or a moving object.
- images of the performer 510 in the performance area 501 and images displayed on the LED wall 505 are photographed together. For example, by displaying a landscape as the background video vB on the LED wall 505, it is possible to capture a video that is similar to when the performer 510 is actually performing in the scene.
- An output monitor 503 is placed near the performance area 501. On this output monitor 503, the video being photographed by the camera 502 is displayed in real time as a monitor video vM. This allows the director and staff involved in producing video content to check the video being shot.
- the photographing system 500 that photographs the performance of the performer 510 with the LED wall 505 in the background in the photographing studio has various advantages compared to green-back photography.
- Post-production after shooting is also more efficient than in the case of green screen shooting. This is because chroma key compositing can sometimes be made unnecessary, and color correction and reflection compositing can sometimes be avoided. Even when chroma key compositing is required during shooting, no additional background screen is needed, which also improves efficiency.
- the background video vB will be explained with reference to FIGS. 2 and 3. Even if the background video vB is displayed on the LED wall 505 and photographed together with the performer 510, if the background video vB is simply displayed, the background of the photographed video will be unnatural. This is because the background, which is actually three-dimensional and has depth, is used as the two-dimensional background image vB.
- the camera 502 can photograph performers 510 in the performance area 501 from various directions, and can also perform zoom operations.
- the performer 510 does not stand still in one place either. The actual appearance of the background behind the performer 510 should change depending on the position, shooting direction, angle of view, etc. of the camera 502, but such a change cannot be obtained with the background image vB as a flat image. Therefore, the background image vB is changed so that the background looks the same as it actually would, including parallax.
- FIG. 2 shows the camera 502 photographing the performer 510 from a position on the left side of the diagram
- FIG. 3 shows the camera 502 photographing the performer 510 from a position on the right side of the diagram.
- a shooting area video vBC is shown within a background video vB.
- the portion of the background video vB excluding the shooting area video vBC is called an "outer frustum”
- the shooting area video vBC is called an "inner frustum”.
- the background video vB described here refers to the entire video displayed as a background, including the shooting area video vBC (inner frustum).
- the range of this photographing area video vBC corresponds to the range actually photographed by the camera 502 within the display surface of the LED wall 505.
- the shooting area video vBC is a video transformed according to the position, shooting direction, angle of view, etc. of the camera 502 so as to represent the scene that would actually be seen with the position of the camera 502 taken as the viewpoint.
- To generate the shooting area video vBC, 3D background data, which is a 3D (three dimensions) model, is prepared, and the 3D background data is rendered sequentially in real time based on the viewpoint position of the camera 502.
- the range of the photographing area video vBC is made slightly wider than the range actually photographed by the camera 502 at that time. This prevents the outer frustum image from appearing in the shot due to rendering delay when the photographed range changes slightly through panning, tilting, or zooming of the camera 502, and also avoids the influence of diffracted light from the outer frustum image.
- the image of the shooting area image vBC rendered in real time in this way is combined with the image of the outer frustum.
- the outer frustum image used in the background image vB is rendered in advance based on the 3D background data, and the shooting area image vBC rendered in real time is incorporated into part of it. In this way, the entire background image vB is generated.
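As a rough illustration (not taken from this publication), the per-frame composition described above — inserting the real-time-rendered inner frustum into the pre-rendered outer frustum — could be sketched as follows, assuming a simple axis-aligned placement and representing images as nested lists:

```python
def composite_frustums(outer, inner, top, left):
    """Insert the real-time-rendered inner frustum (shooting area video vBC)
    into a copy of the pre-rendered outer frustum image."""
    frame = [row[:] for row in outer]                  # copy the outer frustum
    for r, inner_row in enumerate(inner):
        frame[top + r][left:left + len(inner_row)] = inner_row
    return frame

# Toy example: 4x6 outer frustum of 0s, 2x2 inner frustum of 1s placed at (1, 2)
outer = [[0] * 6 for _ in range(4)]
inner = [[1, 1], [1, 1]]
vB = composite_frustums(outer, inner, top=1, left=2)
# vB now contains the inner frustum region; outer itself is unchanged
```

In the real system the placement rectangle would be derived from the camera pose and angle of view supplied by the camera tracker, and the images would be full video frames rather than toy grids.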
- a monitor video vM including a performer 510 and a background is displayed on the output monitor 503, and this is the photographed video.
- the background in this monitor video vM is the shooting area video vBC.
- the background included in the shot video is a real-time rendered video.
- in this way, the photographing system 500 of the embodiment does not merely display the background image vB two-dimensionally; it changes the background image vB, including the shooting area image vBC, in real time so that images similar to those obtained when actually filming on location can be captured.
- the process of producing video content as a virtual production that is shot using the shooting system 500 will be explained.
- the video content production process is roughly divided into three stages. They are asset creation ST1, production ST2, and post-production ST3.
- Asset creation ST1 is a process of creating 3D background data for displaying background video vB.
- the background video vB is generated by performing rendering in real time using 3D background data during shooting.
- 3D background data as a 3D model is created in advance.
- Examples of methods for producing 3D background data include full CG (Full Computer Graphics), point cloud scanning, and photogrammetry.
- Full CG is a method of creating 3D models using computer graphics. Although this method requires the most man-hours and time among the three methods, it is suitable for use when it is desired to use an unrealistic video or a video that is difficult to photograph as the background video vB.
- Point cloud data scanning is a method of generating a 3D model from point cloud data: distances are measured from a certain position using, for example, LiDAR, a 360-degree image is taken from the same position with a camera, and the colors from the camera image are applied to the points measured by LiDAR. Compared to full CG, 3D models can be created in a shorter time, and it is easier to create high-definition 3D models than with photogrammetry.
- Photogrammetry is a technique that analyzes parallax information from two-dimensional images obtained by photographing an object from multiple viewpoints to obtain its dimensions and shapes. It allows 3D models to be created in a short time. Note that point cloud information acquired by LiDAR may also be used when generating 3D data by photogrammetry.
- a 3D model that becomes 3D background data is created using, for example, these methods.
- these methods may be used in combination.
- For example, a part of a 3D model created using point cloud data scanning or photogrammetry may be created using CG and then combined with the rest.
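The parallax analysis underlying photogrammetry can be summarized by the classic stereo relation, depth = focal length × baseline / disparity. The following sketch is only a generic illustration of that relation (the values and function name are hypothetical, not from this publication):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic stereo relation: depth (m) = focal length (px) * baseline (m) / disparity (px)."""
    return focal_px * baseline_m / disparity_px

# A point seen with 50 px disparity by two cameras 0.2 m apart (focal length 1000 px)
d = depth_from_disparity(focal_px=1000.0, baseline_m=0.2, disparity_px=50.0)
# d == 4.0 (metres)
```

Real photogrammetry pipelines estimate camera poses and match many features across views, but each reconstructed point ultimately rests on this triangulation relationship.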
- Production ST2 is a step in which shooting is performed in a shooting studio as shown in FIG.
- the elemental technologies in this case include real-time rendering, background display, camera tracking, and lighting control.
- Real-time rendering is the rendering process for obtaining the shooting area video vBC at each point in time (each frame of the background video vB), as explained with reference to FIGS. 2 and 3: the 3D background data created in asset creation ST1 is rendered from a viewpoint corresponding to the position of the camera 502 at each point in time.
- Camera tracking is performed to obtain photographic information by the camera 502, and tracks positional information, photographing direction, angle of view, etc. of the camera 502 at each point in time.
- By providing photographic information including these to the rendering engine in correspondence with each frame, real-time rendering can be performed according to the viewpoint position of the camera 502 and the like.
- the shooting information is information that is linked or associated with the video as metadata. It is assumed that the photographing information includes position information of the camera 502 at each frame timing, camera orientation, angle of view, focal length, F value (aperture value), shutter speed, lens information, and the like.
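The per-frame shooting information listed above could be modeled as a small record attached to each frame as metadata. This is a minimal sketch with illustrative field names (the names and values are assumptions, not defined by this publication):

```python
from dataclasses import dataclass, asdict

@dataclass
class ShootingInfo:
    """Hypothetical container for the shooting information linked to one frame."""
    frame: int            # frame timing the information belongs to
    position: tuple       # camera position (x, y, z) relative to a reference
    orientation: tuple    # camera orientation (pan, tilt, roll)
    angle_of_view: float  # degrees
    focal_length: float   # millimeters
    f_number: float       # aperture value
    shutter_speed: float  # seconds
    lens_info: str

info = ShootingInfo(frame=0, position=(1.2, 0.0, 3.5), orientation=(10.0, -2.0, 0.0),
                    angle_of_view=45.0, focal_length=35.0, f_number=2.8,
                    shutter_speed=0.02, lens_info="35mm prime")
metadata = asdict(info)   # serialize for association with the video frame
```

A record like this would be produced by the camera tracker and camera for every frame and consumed by the rendering engine when choosing the rendering viewpoint.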
- Illumination control refers to controlling the state of illumination in the photographing system 500, specifically controlling the light amount, emitted color, illumination direction, etc. of the light 580. For example, lighting control is performed according to the time and location settings of the scene to be photographed.
- Post-production ST3 indicates various processes performed after shooting. For example, video correction, video adjustment, clip editing, video effects, etc. are performed.
- Image correction may include color gamut conversion and color matching between cameras and materials.
- Image adjustment may include color adjustment, brightness adjustment, contrast adjustment, etc.
- Clip editing may include clip cutting, order adjustment, time length adjustment, and the like.
- CG video or special effect video may be synthesized.
- FIG. 5 is a block diagram showing the configuration of the imaging system 500 outlined in FIGS. 1, 2, and 3.
- the photographing system 500 shown in FIG. 5 includes the LED wall 505 formed by the plurality of LED panels 506, the camera 502, the output monitor 503, and the light 580 described above.
- the photographing system 500 further includes a rendering engine 520, an asset server 530, a sync generator 540, an operation monitor 550, a camera tracker 560, an LED processor 570, a lighting controller 581, and a display controller 590, as shown in FIG.
- Each of the LED processors 570 is provided corresponding to one or more LED panels 506, and drives the corresponding one or more LED panels 506 to display images.
- the sync generator 540 generates a synchronization signal for synchronizing the frame timing of the video displayed by the LED panels 506 with the frame timing of the images captured by the camera 502, and supplies it to each LED processor 570 and the camera 502. The output of the sync generator 540 may also be supplied to the rendering engine 520.
- the camera tracker 560 generates shooting information for the camera 502 at each frame timing and supplies it to the rendering engine 520. For example, the camera tracker 560 detects, as part of the shooting information, the position of the camera 502 relative to the LED wall 505 or a predetermined reference position, and the shooting direction of the camera 502, and supplies these to the rendering engine 520. As a specific detection method, reflectors may be placed randomly on the ceiling, and the position detected from the reflection of infrared light emitted from a camera tracker 560 attached to the camera 502.
- As another detection method, the self-position of the camera 502 may be estimated using gyro information mounted on the pan head or main body of the camera 502, or by image recognition of the video captured by the camera 502.
- the camera 502 may supply the rendering engine 520 with the angle of view, focal length, F value, shutter speed, lens information, etc. as photographic information.
- the asset server 530 is a server that stores the 3D model created in the asset creation ST1, that is, the 3D background data, in a recording medium, and can read out the 3D model as needed. That is, it functions as a DB (data base) of 3D background data.
- the rendering engine 520 performs processing to generate a background video vB to be displayed on the LED wall 505.
- the rendering engine 520 reads the necessary 3D background data from the asset server 530.
- the rendering engine 520 generates an image of the outer frustum to be used as the background image vB by rendering the 3D background data as viewed from a prespecified spatial coordinate.
- the rendering engine 520 uses the shooting information supplied from the camera tracker 560 and the camera 502 to specify the viewpoint position with respect to the 3D background data, and renders the shooting area video vBC (inner frustum).
- the rendering engine 520 synthesizes the captured area video vBC rendered for each frame with the outer frustum generated in advance to generate a background video vB as one frame of video data. The rendering engine 520 then transmits the generated one frame of video data to the display controller 590.
- the display controller 590 generates a divided video signal nD by dividing one frame of video data into the video parts to be displayed on each LED panel 506, and transmits the divided video signal nD to each LED panel 506. At this time, the display controller 590 may perform calibration for individual differences between display units, such as color development and manufacturing errors. Note that the display controller 590 may be omitted and the rendering engine 520 may perform these processes; that is, the rendering engine 520 may generate the divided video signal nD, perform calibration, and transmit the divided video signal nD to each LED panel 506.
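As a simplified illustration (not from this publication), generating the divided video signal nD amounts to tiling one frame across the grid of LED panels. The sketch below assumes the frame divides evenly into a rows × cols panel arrangement and represents the frame as nested lists:

```python
def divide_frame(frame, rows, cols):
    """Divide one frame of the background video vB into per-panel tiles,
    one tile per LED panel, in row-major order."""
    h, w = len(frame), len(frame[0])
    ph, pw = h // rows, w // cols          # tile height and width
    return [
        [row[c * pw:(c + 1) * pw] for row in frame[r * ph:(r + 1) * ph]]
        for r in range(rows) for c in range(cols)
    ]

# Toy 4x6 frame split across a 2x3 arrangement of LED panels
frame = [[r * 6 + c for c in range(6)] for r in range(4)]
tiles = divide_frame(frame, rows=2, cols=3)   # 6 tiles of 2x2 pixels each
```

Each tile would then be transmitted to the LED processor driving the corresponding panel; per-panel calibration would be applied to the tiles before transmission.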
- the entire background video vB is displayed on the LED wall 505 by each LED processor 570 driving the LED panel 506 based on the received divided video signal nD.
- the background video vB includes a shooting area video vBC rendered according to the position of the camera 502 at that time.
- the camera 502 can thus photograph the performance of the performer 510 including the background video vB displayed on the LED wall 505.
- the video obtained by shooting with the camera 502 is recorded on a recording medium inside the camera 502 or in an external recording device (not shown), and is also supplied in real time to the output monitor 503 and displayed as a monitor video vM.
- the operation monitor 550 displays an operation image vOP for controlling the rendering engine 520.
- the engineer 511 can perform necessary settings and operations regarding rendering of the background video vB while viewing the operation image vOP.
- the lighting controller 581 controls the emission intensity, emission color, irradiation direction, etc. of the light 580.
- the lighting controller 581 may control the light 580 asynchronously with the rendering engine 520, or may control the light 580 in synchronization with photographing information and rendering processing. Therefore, the lighting controller 581 may control the light emission based on instructions from the rendering engine 520 or a master controller (not shown).
- FIG. 6 shows a processing example of the rendering engine 520 in the photographing system 500 having such a configuration.
- the rendering engine 520 reads the 3D background data to be used this time from the asset server 530 in step S10, and develops it in an internal work area. Then, an image to be used as an outer frustum is generated.
- the rendering engine 520 repeats the processing from step S30 to step S60 at each frame timing of the background image vB until it determines in step S20 that the display of the background image vB based on the read 3D background data is finished.
- In step S30, the rendering engine 520 acquires shooting information from the camera tracker 560 and the camera 502, confirming the position and state of the camera 502 to be reflected in the current frame.
- In step S40, the rendering engine 520 performs rendering based on the shooting information. That is, the viewpoint position for the 3D background data is specified based on the position, shooting direction, and angle of view of the camera 502 to be reflected in the current frame, and rendering is performed. At this time, image processing reflecting the focal length, F-number, shutter speed, lens information, etc. can also be performed. Through this rendering, video data for the shooting area video vBC is obtained.
- In step S50, the rendering engine 520 performs processing to combine the outer frustum, which is the overall background image, with the image reflecting the viewpoint position of the camera 502, that is, the shooting area image vBC.
- This process combines the image generated by reflecting the viewpoint of the camera 502 with the image of the entire background rendered from a certain reference viewpoint.
- In this way, one frame of the background video vB to be displayed on the LED wall 505, that is, the background video vB including the shooting area video vBC, is generated.
- The process of step S60 is performed by the rendering engine 520 or the display controller 590.
- the rendering engine 520 or the display controller 590 generates a divided video signal nD, which is obtained by dividing one frame of the background video vB into videos to be displayed on the individual LED panels 506. Calibration may also be performed. Then, each divided video signal nD is transmitted to each LED processor 570.
- the background video vB including the shooting area video vBC captured by the camera 502 is displayed on the LED wall 505 at each frame timing.
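The control flow of steps S10 through S60 can be sketched as a simple per-frame loop. Everything below is a stand-in: the helper callables are hypothetical placeholders for the asset server, camera tracker, renderer, compositor, and display path, used only to show the ordering of the steps:

```python
def render_loop(load_3d, get_shooting_info, render_inner, composite, transmit, frames):
    """Sketch of the rendering engine's loop (FIG. 6): S10 load/develop data and
    pre-render the outer frustum, then per frame: S30 acquire shooting info,
    S40 render the inner frustum, S50 composite, S60 divide and transmit."""
    data = load_3d()                       # S10: read 3D background data
    outer = data["outer"]                  # S10: pre-rendered outer frustum
    sent = []
    for f in range(frames):                # S20: loop until display ends
        info = get_shooting_info(f)        # S30: camera position and state
        vbc = render_inner(data, info)     # S40: render shooting area video vBC
        vb = composite(outer, vbc)         # S50: merge into full background vB
        sent.append(transmit(vb))          # S60: divide and send to LED processors
    return sent

# Toy stand-ins just to exercise the control flow
out = render_loop(
    load_3d=lambda: {"outer": "OUTER"},
    get_shooting_info=lambda f: {"frame": f},
    render_inner=lambda d, i: f"vBC{i['frame']}",
    composite=lambda o, v: f"{o}+{v}",
    transmit=lambda vb: vb,
    frames=3,
)
# out == ["OUTER+vBC0", "OUTER+vBC1", "OUTER+vBC2"]
```

In the actual system each iteration is locked to the frame timing from the sync generator, so rendering, composition, and panel transmission complete within one frame period.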
- FIG. 7 shows an example of a configuration in which a plurality of cameras 502a and 502b are used.
- the cameras 502a and 502b are configured to be able to take pictures in the performance area 501 independently.
- each camera 502a, 502b and each LED processor 570 are maintained in synchronization by a sync generator 540.
- Output monitors 503a and 503b are provided corresponding to the cameras 502a and 502b, and are configured to display images taken by the corresponding cameras 502a and 502b as monitor images vMa and vMb.
- camera trackers 560a and 560b are provided corresponding to the cameras 502a and 502b, and detect the positions and shooting directions of the corresponding cameras 502a and 502b, respectively.
- the shooting information from the camera 502a and camera tracker 560a and the shooting information from the camera 502b and camera tracker 560b are sent to the rendering engine 520.
- the rendering engine 520 can perform rendering to obtain the background video vB of each frame using the shooting information from either the camera 502a side or the camera 502b side.
- Although FIG. 7 shows an example in which two cameras 502a and 502b are used, it is also possible to shoot using three or more cameras 502.
- When multiple cameras 502 are used, the concern is that the shooting area videos vBC corresponding to the respective cameras 502 interfere with each other.
- In the example of FIG. 7, the shooting area video vBC corresponding to the camera 502a is shown; when using the image from the camera 502b, a shooting area video vBC corresponding to the camera 502b is required instead.
- If the shooting area videos vBC corresponding to the cameras 502a and 502b are both simply displayed, they interfere with each other. Therefore, it is necessary to devise a method for displaying the shooting area videos vBC.
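One way to avoid such interference, consistent with the time-sharing display mentioned in the summary, is a round-robin scheme in which each display frame carries the inner frustum for exactly one camera and each camera exposes only on its own frames. This is a sketch of that idea under the stated assumption; the publication's actual scheme may differ:

```python
def camera_for_frame(frame_index, num_cameras):
    """Round-robin time sharing: display frame n carries the inner frustum
    rendered for camera (n mod num_cameras)."""
    return frame_index % num_cameras

# With two cameras, even display frames carry camera 502a's vBC and odd
# frames carry camera 502b's; each camera captures only its own frames.
schedule = [camera_for_frame(n, 2) for n in range(6)]
# schedule == [0, 1, 0, 1, 0, 1]
```

A scheme like this halves the effective frame rate seen by each of two cameras, which is why the display and camera timing must be coordinated by the sync generator.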
- the information processing device 70 is a device capable of information processing, especially video processing, such as a computer device.
- the information processing device 70 is assumed to be a personal computer, a workstation, a mobile terminal device such as a smartphone or a tablet, a video editing device, or the like.
- the information processing device 70 may be a computer device configured as a server device or an arithmetic device in cloud computing.
- the information processing device 70 can function as a 3D model production device that produces a 3D model in asset creation ST1.
- the information processing device 70 can also function as a rendering engine 520 and an asset server 530 that constitute the photographing system 500 used in the production ST2.
- the information processing device 70 can also function as a video editing device that performs various video processing in post-production ST3.
- the information processing device 70 is also used as a switcher 600, which will be described later with reference to FIG. 10 and the like.
- the switcher 600 has the functions of the information processing device 70 to perform control and calculation.
- the switcher 600 includes the video processing unit 18, but the video processing unit 18 is configured by the information processing device 70.
- the video processing unit 24 (24a, 24b, 24c) in FIGS. 18, 21, and 24 is a device that performs rendering like the rendering engine 520; this video processing unit 24 can also be configured by the information processing device 70.
- the CPU 71 of the information processing device 70 executes various processes according to a program.
- the RAM 73 also appropriately stores data necessary for the CPU 71 to execute various processes.
- the video processing unit 85 is configured as a processor that performs various video processing.
- the processor is capable of performing one or more of video processing such as 3D model generation processing, rendering, DB processing, video editing processing, video analysis/detection processing, and video extraction and composition.
- This video processing unit 85 can be realized by, for example, a CPU separate from the CPU 71, a GPU (Graphics Processing Unit), a GPGPU (General-purpose computing on graphics processing units), an AI (artificial intelligence) processor, or the like. Note that the video processing section 85 may be provided as a function within the CPU 71.
- the CPU 71, ROM 72, RAM 73, nonvolatile memory section 74, and video processing section 85 are interconnected via a bus 83.
- An input/output interface 75 is also connected to this bus 83.
- the input/output interface 75 is connected to an input section 76 consisting of an operator or an operating device.
- various operators and operating devices such as a keyboard, a mouse, a key, a dial, a touch panel, a touch pad, and a remote controller are assumed.
- a user's operation is detected by the input unit 76, and a signal corresponding to the input operation is interpreted by the CPU 71.
- a microphone is also assumed as the input section 76. Voices uttered by the user can also be input as operation information.
- a display section 77 consisting of an LCD (Liquid Crystal Display) or an organic EL (electro-luminescence) panel, and an audio output section 78 consisting of a speaker etc. are connected to the input/output interface 75, either integrally or separately.
- the display unit 77 is a display unit that performs various displays, and is configured by, for example, a display device provided in the casing of the information processing device 70, a separate display device connected to the information processing device 70, or the like.
- the display unit 77 displays various images, operation menus, icons, messages, etc., i.e., a GUI (Graphical User Interface), on the display screen based on instructions from the CPU 71.
- a storage section 79, comprised of an HDD (Hard Disk Drive), solid-state memory, or the like, and a communication section 80 may also be connected to the input/output interface 75.
- the storage unit 79 can store various data and programs.
- a DB can also be configured in the storage unit 79.
- the storage unit 79 can be used to construct a DB that stores a group of 3D background data.
- the communication unit 80 performs communication processing via a transmission path such as the Internet, and communicates with various devices such as an external DB, editing device, and information processing device through wired/wireless communication, bus communication, and the like.
- the communication unit 80 can access a DB as the asset server 530 and receive shooting information from the camera 502 or the camera tracker 560.
- in the information processing device 70 used in post-production ST3, it is also possible to access the DB as the asset server 530 through the communication unit 80.
- a drive 81 is also connected to the input/output interface 75 as required, and a removable recording medium 82 such as a magnetic disk, optical disk, magneto-optical disk, or semiconductor memory is appropriately loaded.
- the drive 81 can read video data, various computer programs, etc. from the removable recording medium 82.
- the read data is stored in the storage section 79, and the video and audio included in the data are output on the display section 77 and the audio output section 78. Computer programs and the like read from the removable recording medium 82 are installed in the storage unit 79 as necessary.
- software for the processing of this embodiment can be installed via network communication by the communication unit 80 or the removable recording medium 82.
- the software may be stored in advance in the ROM 72, storage unit 79, or the like.
- the portion of the background video vB captured by the camera 502 is not the entire background video vB displayed on the LED wall 505, but the range of the shooting area video vBC (hereinafter also referred to as "inner frustum vBC").
- the video content of the inner frustum vBC is rendered frame by frame by the rendering engine 520 according to the position and shooting direction of the camera 502, and is displayed on the LED wall 505 while being incorporated into the entire background video vB. Therefore, the range and contents of the inner frustum vBC in the background video vB differ depending on the camera position for each frame, as explained in FIGS. 2 and 3. Further, the display corresponds to each camera 502. In other words, the inner frustum vBC is generated corresponding to each camera 502, and is an example of the corresponding video in the present disclosure.
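This per-frame compositing of a camera-specific inner frustum into the overall background can be pictured with the following sketch. It is only an illustration under stated assumptions: small 2D grids of labels stand in for real frame buffers, and the names `compose_background`, `outer`, and `inner` are invented here; in practice the paste region would be derived each frame from the camera position and shooting direction.

```python
# Hypothetical sketch: small 2D grids of labels stand in for frame buffers.
def compose_background(outer, inner, top, left):
    """Return a copy of the overall background with the camera-specific
    inner frustum vBC pasted in at (top, left)."""
    frame = [row[:] for row in outer]  # copy so the outer background is reusable
    for r, row in enumerate(inner):
        for c, px in enumerate(row):
            frame[top + r][left + c] = px
    return frame

# The paste region moves with the camera pose, so vB is recomposed every frame.
outer = [["outer"] * 6 for _ in range(4)]   # overall background video
inner = [["vBC"] * 3 for _ in range(2)]     # per-camera shooting-area video
vB = compose_background(outer, inner, top=1, left=2)
```

Because the outer background is copied rather than overwritten, the same base background can be recomposed independently for each camera 502.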
- inner frustums vBCa and vBCb corresponding to the respective cameras 502a and 502b are displayed on the LED wall 505.
- the inner frustums vBCa and vBCb may overlap as shown in FIG. 9A.
- the images vC shot by the cameras 502a and 502b will each include an appropriate background, but if the inner frustums overlap as shown in FIG. 9, the background images will no longer be correct.
- a plurality of systems of background images vB including the respective inner frustum vBC are selectively displayed on the LED wall 505.
- the switching timing of the background video vB and the switching timing of the main video are controlled so that a video vC in which one camera 502 has captured the background video vB for another camera 502 is not output as the main video.
- a configuration example for this purpose is shown in FIG. 10, which is based on the configuration shown in FIG. 7 with the addition of a switcher 600 for switching the captured video vC and an output device 610.
- Components explained in FIG. 5 or FIG. 7 are designated by the same reference numerals, and redundant explanation will be avoided.
- camera signal processing units 515a and 515b are shown that perform signal processing of the captured video signals. Although omitted in FIGS. 5 and 7, the camera signal processing units 515a and 515b may be formed by a processor within the cameras 502a and 502b, or may be provided as units separate from the cameras 502a and 502b.
- the video signal photographed by the camera 502a is subjected to development processing, resizing processing, etc. by the camera signal processing unit 515a, and is inputted to the switcher 600 as a photographed video vCa.
- the video signal photographed by the camera 502b is subjected to development processing, resizing processing, etc. by the camera signal processing section 515b, and is inputted to the switcher 600 as a photographed video vCb.
- the photographing information IFa and IFb including the camera position, photographing direction, etc. from the camera trackers 560a and 560b are supplied to the rendering engine 520, respectively.
- the angle of view, focal length, F value, shutter speed, lens information, etc. of the cameras 502a and 502b are also included in the photographing information IFa and IFb, and are supplied to the rendering engine 520, respectively.
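The shooting information IF can be pictured as a small record of these values. The following dataclass is only an illustrative sketch; the class and field names are hypothetical, not taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class ShootingInfo:
    """Illustrative stand-in for the shooting information IF sent from a
    camera tracker 560 to the rendering engine 520; names are hypothetical."""
    position: tuple       # camera position (x, y, z)
    direction: tuple      # shooting direction (pan, tilt, roll)
    angle_of_view: float  # degrees
    focal_length: float   # mm
    f_number: float
    shutter_speed: float  # seconds
    lens_info: str = ""

# One per camera: e.g. IFa for camera 502a.
ifa = ShootingInfo((1.0, 0.5, 2.0), (10.0, -5.0, 0.0), 45.0, 35.0, 2.8, 1 / 120)
```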
- the rendering engine 520 is configured by one or more information processing devices 70.
- the rendering engine 520 is configured to have a plurality of rendering functions as the rendering units 21 and 22, and is configured to be able to simultaneously perform at least rendering corresponding to each of the cameras 502a and 502b.
- the rendering unit 21 renders the inner frustum vBCa corresponding to the camera 502a based on the photographing information IFa regarding the camera 502a, incorporates it into the overall background, and outputs a background image vBa that matches the camera 502a. Furthermore, the rendering unit 22 renders the inner frustum vBCb corresponding to the camera 502b based on the photographing information IFb regarding the camera 502b, incorporates it into the overall background, and outputs a background image vBb that matches the camera 502b.
- the switcher 600 is provided with a switch unit 11, which inputs the background images vBa and vBb, and selectively outputs the selected images.
- the background video vBa or vBb selected by this switch unit 11 becomes the background video vB supplied to the LED wall 505.
- the background video vB is processed by the display controller 590 and distributed to the plurality of LED processors 570, which drive the LED panels 506 (not shown in FIG. 10). As a result, the background video vB is displayed on the LED wall 505.
- the cameras 502a and 502b photograph the background video vB of the LED wall 505 and the performer 510. As described above, the images vCa and vCb captured by the cameras 502a and 502b are input to the switcher 600.
- the switcher 600 is provided with a switch unit 12 that inputs captured images vCa and vCb.
- the switch unit 12 selectively selects one of the photographed videos vCa and vCb, and outputs the selected one as the main video vCm.
- the main video vCm is supplied to the output device 610.
- the output device 610 may be a recording device that records the main video vCm on a recording medium, or may be a video transmitting device that broadcasts and transmits the main video vCm. Further, the output device 610 may be a web server or the like that distributes the main video vCm.
- the switcher 600 has the switch section 11 that selects the background video vB, the switch section 12 that selects the main video vCm as described above, and a switcher controller (hereinafter referred to as "SW controller") 10 as a control section that controls these.
- the SW controller 10 can be configured by an information processing device 70 as shown in FIG.
- the SW controller 10 may have a configuration that includes at least the CPU 71, ROM 72, RAM 73, nonvolatile memory section 74, and input/output interface 75 shown in FIG.
- This SW controller 10 performs switching control of the switch sections 11 and 12 in response to the occurrence of the switching trigger KP.
- the SW controller 10 generates a control signal C1 to instruct the switch unit 11 to switch. Further, the SW controller 10 instructs the switch unit 12 to switch using a control signal C2.
- the switching trigger KP is a trigger generated by, for example, an operator's switching operation of the main line video vCm (switching operation of the camera 502 for the main line video vCm).
- the operator can use the switcher panel 611 to perform various operations including switching the main video vCm.
- the switching trigger KP is not limited to the operator's operation, and may be generated automatically.
- the switching trigger KP is generated by automatic switching control according to a predetermined sequence.
- the switching trigger KP is generated by AI control that acts as an operator.
- FIG. 10 shows a multiviewer 612.
- the multi-viewer 612 is a monitor device that divides and displays monitor images vM (vMa, vMb) of images vC captured by each camera 502, for example, within one screen.
- the operator of the switcher 600 and the like can check the content of images captured by each camera 502 using the multi-viewer 612.
- FIG. 11 shows switching control of the switch units 11 and 12 by the SW controller 10.
- when the switching trigger KP occurs, the SW controller 10 proceeds from step S101 to step S102 and performs switching control of the switch unit 11 using the control signal C1. That is, the switch unit 11 is switched immediately in response to the occurrence of the switching trigger KP.
- the SW controller 10 waits for a predetermined time set as the switching delay time Tdl in step S103. After the time as the switching delay time Tdl has elapsed, the SW controller 10 performs switching control of the switch unit 12 using the control signal C2 in step S104. That is, when the SW controller 10 causes the switch unit 11 to select the background video vBa in step S102, the SW controller 10 causes the switch unit 12 to select the photographed video vCa in step S104. Similarly, when the switch unit 11 is caused to select the background video vBb in step S102, the switch unit 12 is caused to select the photographed video vCb in step S104.
- the SW controller 10 first switches the switch section 11 in response to the switching trigger KP, and after the switching delay time Tdl passes, switches the switch section 12.
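A minimal sketch of this two-stage control follows. It is hypothetical: time is represented as plain numbers rather than a real clock, and the function name `handle_trigger` and event labels are invented; only the ordering (C1 at once, C2 after Tdl) reflects the control described above.

```python
def handle_trigger(now, target_cam, tdl):
    """On a switching trigger KP, emit control events: C1 switches the
    background switch unit 11 immediately; C2 switches the main-line switch
    unit 12 only after the switching delay time Tdl has elapsed."""
    return [
        (now, "C1", f"vB{target_cam}"),        # step S102: background video
        (now + tdl, "C2", f"vC{target_cam}"),  # steps S103-S104: main video
    ]

# Switching to camera 502b with a 50 ms delay between the two switch units.
events = handle_trigger(now=0.0, target_cam="b", tdl=0.05)
```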
- the switching delay time Tdl will be explained. For example, assume that the selection state is switched from camera 502a to camera 502b. In this case, the switching delay time Tdl is the period from the time when the background video vBb for the camera 502b is selected by the switch unit 11 until the captured video vCb obtained by photographing the background video vBb by the camera 502b is input to the switch unit 12. This is the time equivalent to a time lag.
- the settings of the operation mode on the LED panel 506 side include the frame rate and the various signal processing performed by the display controller 590 and the LED processor 570; depending on these LED-side operation modes, there will be a delay time until the display changes.
- as for signal processing, the delay time from the switching of the switch unit 11 until the display changes varies depending on, for example, processing that resizes the video to match the LED wall 505 (LED panel 506) side.
- the setting of the camera's operating mode for photographing is also related to time lag.
- the operation mode related to camera photography is the operation mode of the camera 502 and the camera signal processing unit 515.
- the operation mode related to photographing by the camera will also be referred to as the "camera-side operation mode.”
- a delay occurs depending on the frame rate, shutter speed, readout area from the image sensor, processing contents of the camera signal processing unit 515, etc. as settings of the camera 502.
- FIG. 12 shows an example of the readout range from the image sensor 30. Assume that the solid line represents the entire pixel area of the image sensor 30.
- the photoelectric conversion signal from the image sensor 30 may be read out from the entire pixel area as indicated by the solid line, or may be read out in various ways depending on the readout mode, such as the range shown by the dotted line, the range shown by the broken line, or the range shown by the dashed-dotted line.
- the delay time differs depending on the difference in these read ranges. There is also a delay due to signal processing and resizing processing for the captured video vC.
- These camera-side operation modes cause a time lag between when the background video vBb displayed on the LED wall 505 is photographed (exposed) by the camera 502b and when the photographed video vCb is input to the switch unit 12.
- the switching delay time Tdl is set according to these LED-side operating modes and camera-side operating modes.
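The setting of Tdl from the two operating modes can be sketched as a lookup, as below. This is only an illustration: the mode names and delay values are invented for the example, not taken from the disclosure.

```python
# Illustrative delay tables in seconds; mode names and values are hypothetical.
LED_SIDE_DELAY = {"direct": 0.017, "resize_to_wall": 0.025}
CAMERA_SIDE_DELAY = {"full_readout": 0.020, "cropped_readout": 0.012}

def switching_delay(led_mode, camera_mode):
    """Tdl spans the lag from selecting a new background video vB until the
    captured video showing that background reaches the switch unit 12, so it
    accumulates the LED-side and camera-side operating-mode delays."""
    return LED_SIDE_DELAY[led_mode] + CAMERA_SIDE_DELAY[camera_mode]

tdl = switching_delay("resize_to_wall", "full_readout")
```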
- after the switch unit 11 switches to the background video vBb for the camera 502b, the switch unit 12 is switched at the timing when the captured video vCb, obtained by the camera 502b photographing the background video vBb, is input to the switch unit 12. In this way, the main video vCm becomes the video vCb captured by the camera 502b.
- if the switching timing of the switch unit 12 is too early, the video vCb captured by the camera 502b while the background video vBa is still displayed becomes the main video vCm. This means that a video with an incorrect background is included in the main video vCm.
- therefore, an appropriate switching delay time Tdl is set and the switching timing is controlled as shown in FIG. 11.
- since the background videos for all of the cameras are rendered in parallel, the switch unit 11 can immediately output the switching-destination background video vB. In other words, there is no rendering delay.
- by switching the switch unit 12 at the timing when the captured video showing the switched background video vB is input to the switch unit 12, the main video vCm does not become a video with an incorrect background.
- although FIG. 10 shows an example of shooting with two cameras 502a and 502b, when shooting with three or more cameras 502, the process shown in FIG. 11 can likewise be used to obtain a main video vCm with a correct background.
- on the other hand, the monitor video vM may become incorrect.
- the camera 502a is selected as the main video vCm.
- the LED wall 505 displays a background image vB including the inner frustum vBCa as a corresponding image of the camera 502a.
- the inner frustum vBCb is not displayed as the corresponding image of the camera 502b. Therefore, the monitor video vM of the camera 502b that is viewed by a cameraman, an operator of the switcher 600, etc. is a video with an incorrect background.
- the inner frustum vBCa of the camera 502a is reflected in the captured image vCb of the camera 502b.
- the image becomes a background image that does not reflect the parallax according to the position of the camera 502b.
- the cameraman operates the camera 502 while monitoring the image captured by the camera using the viewfinder, monitor panel, etc. of the camera 502 he is operating.
- even when a camera 502 is not selected for the main video vCm, its cameraman often performs zoom operations to adjust the angle of view or changes the position and shooting direction. At this time, it is inconvenient for the cameraman that the correct background cannot be seen in the monitor image.
- an operator of the switcher 600 performs a switching operation while viewing the main line video vCm as well as the video of other cameras 502 using the multi-viewer 612, for example. At this time, it is inconvenient that the images of the cameras 502 other than the camera 502 currently selected as the main video vCm are not the correct background.
- therefore, the monitor videos vM of the cameras other than the one selected for the main video are also made to have the correct background.
- FIG. 13 shows an example of the system configuration of the first embodiment. Note that the same parts as in FIGS. 5, 7, and 10 are given the same reference numerals to avoid redundant explanation.
- FIG. 13 shows a case where photography is performed using three cameras 502a, 502b, and 502c.
- images vCa, vCb, and vCc captured by each camera 502 are input to the switch section 12 of the switcher 600.
- the rendering units 21, 22, and 23 generate corresponding background images vBa, vBb, and vBc, respectively.
- the configuration is such that the background images vBa, vBb, and vBc are selected by the switch unit 11.
- when the SW controller 10 switches the switch unit 11 by, for example, the process shown in FIG. 11, it causes the switch unit 12 to switch after the switching delay time Tdl has elapsed.
- in step S104, the SW controller 10 controls the switch unit 12 to switch to the video vC captured by the camera 502 corresponding to the background video vB selected by the switch unit 11.
- switch units 11 and 12 are configured in one switcher 600 here, but the switcher having the switch unit 11 and the switcher having the switch unit 12 may be separate devices. Regardless of the configuration, it suffices that the SW controller 10, provided within one of the switchers or as a separate device, can perform the control shown in FIG. 11 and control the switching timing of the switch units 11 and 12.
- a video processing section 18 is provided within the switcher 600. Note that in this figure, as an example, the video processing section 18 is included in the switcher 600, but the video processing section 18 may be configured as a separate device from the switcher 600.
- Inner frustum vBCa, vBCb, and vBCc which are the corresponding images of the cameras 502a, 502b, and 502c, are input to the video processing unit 18 from the rendering units 21, 22, and 23.
- the video processing unit 18 generates monitor videos vM (vMa, vMb, vMc) according to the input of these inner frustums vBCa, vBCb, and vBCc, and outputs them to the multiviewer 612 and the camera signal processing units 515a, 515b, and 515c.
- the display of the background video vB and the operation of the camera 502 are performed as follows.
- the output of the background images vBa, vBb, and vBc by the rendering units 21, 22, and 23 has a frame rate of, for example, 60 frames per second (60 fps).
- on the LED wall 505 side, the input background video vB (60 fps) and a black video, also prepared at 60 fps, are alternately displayed at 120 fps.
- a black image is an image where the entire screen is black.
- the LED processor 570 repeats displaying the background video vB input from the switcher 600 on the LED panel 506 for one frame period at 120 fps and then displaying an internally generated black video on the LED panel 506 for one frame period at 120 fps.
- the background video vB and the black video BK are alternately displayed at a period T1 of one frame at 120 fps.
- Each camera 502 synchronizes and captures images on the LED wall 505 side at 120 fps.
- frames including the background video vB and objects such as the performer 510 and frames including the black video BK and objects such as the performer 510 are alternately photographed.
- the camera signal processing unit 515 (515a, 515b, 515c) separates the 120 fps frames into odd frames and even frames and outputs them in two systems.
- One system is a captured video vC shown in FIG. 14, in which each frame is continuous at a cycle T2 at 60 fps. This is a video in which the inner frustum vBCa, the performer 510, etc. in the background video vB are shown.
- the other system is a black background photographed video vCbk, in which each frame is continuous at a cycle T2 at 60 fps. This is an image in which the black image BK and the performer 510 and the like are shown.
- the camera signal processing unit 515a outputs the photographed video vCa and the black background photographed video vCbka.
- the camera signal processing unit 515b outputs a captured video vCb and a black background captured video vCbkb
- the camera signal processing unit 515c outputs a captured video vCc and a black background captured video vCbkc.
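The two-system separation described above amounts to deinterleaving the alternating 120 fps camera output into two 60 fps streams. The following sketch illustrates this under the assumption that frames are simple labels; the function name `separate_streams` is invented for the example.

```python
def separate_streams(frames_120fps):
    """Split the alternating 120 fps camera output (vB frame, black frame,
    vB frame, ...) into two 60 fps streams: the captured video vC and the
    black-background captured video vCbk."""
    vC = frames_120fps[0::2]    # frames exposed while the background vB shows
    vCbk = frames_120fps[1::2]  # frames exposed while the black video shows
    return vC, vCbk

frames = ["vB+obj", "BK+obj"] * 3   # six 120 fps frames (period T1 alternation)
vC, vCbk = separate_streams(frames)
```

Each output stream then advances at the 60 fps cycle T2, matching the description of FIG. 14.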
- the photographed videos vCa, vCb, and vCc are input to the switch unit 12, and one is selected as the main video vCm.
- the captured video vC selected as the main video vCm is a video with the correct background, but the other captured videos vC have incorrect backgrounds (inner frustum vBC), so they are not used as the monitor videos vM.
- the black background photographed videos vCbka, vCbkb, and vCbkc are input to the video processing unit 18.
- the video processing unit 18 uses the black background photographed videos vCbka, vCbkb, vCbkc and the inner frustum vBCa, vBCb, vBCc from the rendering units 21, 22, 23 as described above to generate monitor videos vMa, vMb, vMc. generate.
- Each of the rendering units 21, 22, and 23 performs the process shown in FIG. 15 to output the inner frustums vBCa, vBCb, and vBCc.
- the processing example in FIG. 15 is obtained by adding step S70 to the processing described in FIG. 6.
- the processing from step S10 to step S60 is the same as that in FIG.
- in step S70, the rendering units 21, 22, and 23 output the inner frustums vBC (vBCa, vBCb, vBCc). That is, in this step S70, the inner frustum vBC generated in step S40 is output without being combined with the entire background image.
- the video processing unit 18 has a functional configuration as shown in FIG.
- the video processing unit 18 is provided with functions as video processing units 18a, 18b, and 18c in parallel in correspondence with the three cameras 502a, 502b, and 502c.
- Each video processing section 18a, 18b, 18c has a processing function as an object extraction section 41 and a composition section 42.
- the object extracting unit 41 uses luminance keying to remove the black image from the input black background photographed image vCbka and extracts only the object image.
- the synthesizing unit 42 synthesizes the extracted object image and the input inner frustum vBCa. In other words, processing is performed to replace the black video portion of the black background photographed video vCbka with the inner frustum vBCa.
- the output of the combining unit 42 becomes the monitor video vMa for the camera 502a.
- the video processing unit 18b receives a black background photographed video vCbkb taken by the camera 502b and an inner frustum vBCb which is a corresponding video of the camera 502b.
- the object extracting unit 41 uses luminance keying to remove the black image from the input black background photographed image vCbkb and extracts only the object image.
- the synthesizing unit 42 synthesizes the extracted object image and the input inner frustum vBCb, and replaces the black image portion in the black background photographed image vCbkb with the inner frustum vBCb. This output becomes the monitor video vMb for the camera 502b.
- the video processing unit 18c receives a black background photographed video vCbkc taken by the camera 502c and an inner frustum vBCc which is a corresponding video of the camera 502c.
- the object extracting unit 41 uses luminance keying to remove the black image from the input black background photographed image vCbkc and extracts only the object image.
- the synthesizing unit 42 synthesizes the extracted object image and the input inner frustum vBCc, and replaces the black image portion in the black background photographed image vCbkc with the inner frustum vBCc. This output becomes the monitor video vMc for the camera 502c.
- FIG. 17 shows a flowchart of the above-described processing by the video processing units 18a, 18b, and 18c.
- the video processing units 18a, 18b, and 18c repeat the processing from step S201 to step S205 at each frame timing until it is determined in step S200 that the monitor output is finished.
- in step S201, the video processing unit 18 (18a, 18b, 18c) inputs the black background photographed video vCbk (vCbka, vCbkb, vCbkc).
- in step S202, the video processing unit 18 inputs the inner frustum vBC (vBCa, vBCb, vBCc).
- in step S203, the video processing unit 18 extracts the object video from the black background photographed video vCbk.
- in step S204, the video processing unit 18 combines the object video and the inner frustum vBC.
- in step S205, the video processing unit 18 outputs the video whose background has been replaced by the compositing process as a monitor video vM (vMa, vMb, vMc).
- monitor videos vMa, vMb, and vMc are supplied to multiviewer 612 and displayed respectively.
- the backgrounds of objects such as the performer 510 are replaced with inner frustum vBC corresponding to the camera 502 that took the images. Therefore, an operator or the like can monitor the correct background image for each camera 502a, 502b, 502c.
- monitor images vMa, vMb, and vMc are supplied to camera signal processing units 515a, 515b, and 515c, respectively, and are supplied from the camera signal processing units 515a, 515b, and 515c to cameras 502a, 502b, and 502c, and displayed on a viewfinder or the like.
- each cameraman operating the cameras 502a, 502b, and 502c can thus see the scene being shot with his or her own camera not with the incorrect background that was actually photographed, but with the correct background according to that camera's position and shooting direction.
- the background video vB and the black video are alternately displayed on the LED wall 505, and the black video is used for the video processing of extracting the object video in the object extraction unit 41.
- that is, the black video is a specific video for separating object images within one frame.
- Such specific video is not limited to black video.
- any material that can be used for image separation using chroma key may be used, such as a green image that is entirely green or a blue image that is entirely blue.
- compositing a green video using chroma key improves the video quality, but if a green or blue video is displayed at a high frame rate in a time-division manner, there is a risk that its flickering will increase the visual or psychological burden on the performer 510.
- with a black video, it is considered that the burden on the performer 510 can be reduced.
- with luminance keying, the black hair of the performer 510 may also be removed, but since what the video processing unit 18 generates is only the monitor video vM, this is not a serious problem.
- for these reasons, it is preferable that the specific video be a black video.
- FIG. 18 shows a configuration example of the second embodiment.
- the difference from the previous example of FIG. 13 is that the video processing sections 24 (24a, 24b, 24c) are provided separately from the switcher 600, and each of the video processing sections 24a, 24b, and 24c has a rendering function.
- black background photographed videos vCbka, vCbkb, and vCbkc output from camera signal processing units 515a, 515b, and 515c are input to video processing units 24a, 24b, and 24c via switcher 600, respectively.
- the black background photographed videos vCbka, vCbkb, and vCbkc may be input directly to the video processing units 24a, 24b, and 24c without going through the switcher 600.
- the photographing information IFa, IFb, and IFc supplied from the camera trackers 560a, 560b, and 560c to the rendering engine 520 are also supplied to the video processing units 24a, 24b, and 24c, respectively.
- FIG. 19 shows the functional configuration of the video processing units 24a, 24b, and 24c.
- Video processing units 24a, 24b, and 24c corresponding to the three cameras 502a, 502b, and 502c have processing functions as an object extraction unit 41, a composition unit 42, and a rendering unit 43, respectively.
- the object extraction section 41 and the synthesis section 42 have the functions described in FIG. 16.
- the rendering unit 43, like the rendering units 21, 22, and 23 in the rendering engine 520, has a rendering function that generates an image of the inner frustum vBC. However, the rendering unit 43 differs from the rendering units 21, 22, and 23 of the rendering engine 520 in that it only needs to generate the inner frustum vBC, and does not need to synthesize the inner frustum vBC with the surrounding background image.
- the rendering unit 43 of the video processing unit 24a inputs the photographing information IFa regarding the camera 502a, and generates an inner frustum vBCa, which is a corresponding video of the camera 502a, as a background video at a viewpoint position based on the photographing information IFa. Then, in the video processing section 24a, the composition section 42 combines the image of the object extracted by the object extraction section 41 and the inner frustum vBCa. That is, the black video portion in the black background photographed video vCbka is replaced with the inner frustum vBCa. The output of the combining unit 42 becomes the monitor video vMa for the camera 502a.
- the rendering unit 43 of the video processing unit 24b inputs the photographing information IFb regarding the camera 502b, and generates an inner frustum vBCb, which is a corresponding video of the camera 502b, as a background video at a viewpoint position based on the photographing information IFb. Then, in the video processing unit 24b, the video of the object extracted by the object extraction unit 41 and the inner frustum vBCb are combined by the combining unit 42, and the black video portion in the black background photographed video vCbkb is replaced with the inner frustum vBCb. The output of the combining unit 42 becomes the monitor video vMb for the camera 502b.
- the rendering unit 43 of the video processing unit 24c inputs the photographing information IFc regarding the camera 502c, and generates an inner frustum vBCc, which is a corresponding video of the camera 502c, as a background video at a viewpoint position based on the photographing information IFc. Then, in the video processing unit 24c, a combining unit 42 combines the object video extracted by the object extraction unit 41 and the inner frustum vBCc, and replaces the black video portion in the black background photographed video vCbkc with the inner frustum vBCc. The output of the combining unit 42 becomes the monitor video vMc for the camera 502c.
- the processing of the video processing units 24a, 24b, and 24c described above is shown in the flowchart of FIG. 20.
- the video processing units 24a, 24b, and 24c repeat the processing from step S301 to step S306 at each frame timing until it is determined in step S300 that the monitor output is finished.
- In step S301, the video processing unit 24 (24a, 24b, 24c) inputs the shooting information IF (IFa, IFb, IFc).
- In step S302, the video processing unit 24 generates the inner frustum vBC (vBCa, vBCb, vBCc) based on the shooting information IF.
- In step S303, the video processing unit 24 inputs the black background photographed video vCbk (vCbka, vCbkb, vCbkc).
- In step S304, the video processing unit 24 extracts the object video from the black background photographed video vCbk.
- In step S305, the video processing unit 24 combines the object video with the inner frustum vBC generated by the rendering unit 43.
- In step S306, the video processing unit 24 outputs the video whose background has been replaced by the compositing process as the monitor video vM (vMa, vMb, vMc).
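The per-frame loop of steps S300 to S306 can be sketched as follows. This is a minimal sketch in Python/NumPy, assuming a simple luminance threshold stands in for the object-extraction step S304; the disclosure does not limit the extraction to this method, and `threshold` is a hypothetical tuning value.

```python
import numpy as np

def extract_object_mask(vCbk_frame, threshold=16):
    """Step S304: treat pixels brighter than the near-black background
    as object pixels (a hypothetical thresholding stand-in)."""
    return vCbk_frame.max(axis=-1) > threshold

def compose_monitor_frame(vCbk_frame, inner_frustum_vBC):
    """Steps S305-S306: key the object over the inner frustum vBC,
    i.e. replace the black video portion with the background video."""
    mask = extract_object_mask(vCbk_frame)
    vM = inner_frustum_vBC.copy()
    vM[mask] = vCbk_frame[mask]
    return vM
```

Running this once per frame for each camera, with the inner frustum rendered from that camera's shooting information, yields the monitor video vM described above.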
- The monitor videos vMa, vMb, and vMc are supplied to the multiviewer 612 and displayed respectively. In addition, the monitor videos vMa, vMb, and vMc are supplied to the camera signal processing units 515a, 515b, and 515c, respectively, and from there to the cameras 502a, 502b, and 502c, where they are displayed on a viewfinder or the like. Note that the monitor videos vMa, vMb, and vMc may also be supplied to the camera signal processing units 515a, 515b, and 515c via the switcher 600.
- In this way, the operator and the cameraman of each camera 502 can monitor videos with the correct background.
- FIG. 21 shows a configuration example of the third embodiment. This is a configuration in which the rendering engine 520 renders only the background video vB of the camera 502 selected as the main video vCm among the plurality of cameras 502.
- The shooting information IFa, IFb, and IFc from the camera trackers 560a, 560b, and 560c is supplied to the rendering engine 520 one set at a time via the selector 28.
- the selector 28 selects one of the photographing information IFa, IFb, and IFc according to the control signal C3 from the SW controller 10 of the switcher 600, and supplies it to the rendering engine 520.
- the rendering engine 520 generates an inner frustum vBC based on the input imaging information IF, generates a background video vB including the inner frustum vBC, and outputs it to the display controller 590.
- the switcher 600 in this example includes a switch section 12 and an SW controller 10.
- The SW controller 10 controls the switching of the selector 28 in accordance with the selection of the main line video vCm by the switch section 12. That is, the SW controller 10 causes the selector 28 to select the shooting information IFa during the period in which the switch section 12 selects the captured video vCa of the camera 502a as the main video vCm. Similarly, it causes the selector 28 to select the shooting information IFb during the period when the captured video vCb of the camera 502b is selected as the main video vCm, and the shooting information IFc during the period when the captured video vCc of the camera 502c is selected as the main video vCm.
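The linkage between the switch section 12 and the selector 28 via the control signal C3 can be sketched as below. The class and method names are illustrative, not taken from the disclosure.

```python
class Selector28:
    """Selector 28: forwards exactly one set of shooting information IF."""
    def __init__(self, info_by_camera):
        self.info_by_camera = info_by_camera  # e.g. {"502a": IFa, ...}
        self.selected = None

    def select(self, camera_id):
        self.selected = camera_id

    def output(self):
        """Shooting information IF supplied to the rendering engine 520."""
        return self.info_by_camera[self.selected]


class SwController10:
    """SW controller 10: tracks the main-line selection of switch section 12
    and steers the selector accordingly (control signal C3)."""
    def __init__(self, selector):
        self.selector = selector

    def on_main_video_selected(self, camera_id):
        self.selector.select(camera_id)  # issue control signal C3
```

Whenever the switch section changes the main-line camera, the rendering engine thus immediately starts receiving that camera's shooting information.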
- In this way, the main line video vCm has the correct background no matter which camera 502 is selected.
- Since the rendering engine 520 only needs to generate the inner frustum vBC corresponding to one camera 502, its processing load is lower than in, for example, the configuration of FIG. 18.
- the photographing information IFa, IFb, and IFc from the camera trackers 560a, 560b, and 560c are also supplied to the video processing units 24a, 24b, and 24c.
- the functional configuration and processing of the video processing units 24a, 24b, and 24c are the same as those in FIGS. 19 and 20.
- monitor images vMa, vMb, and vMc corresponding to the three cameras 502a, 502b, and 502c are generated and supplied to the multiviewer 612.
- monitor images vMa, vMb, and vMc are supplied to cameras 502a, 502b, and 502c via camera signal processing units 515a, 515b, and 515c, and displayed on a viewfinder or the like.
- the operator and the cameraman of each camera 502 can monitor images of the correct background.
- FIG. 22 shows a configuration example of the fourth embodiment.
- the configuration example in FIG. 22 is close to the configuration example in FIG. 13, but the processing of the rendering units 21, 22, and 23 in the rendering engine 520 is different.
- the rendering units 21, 22, and 23 receive photographic information IFa, IFb, and IFc, respectively, and parameters PMTa, PMTb, and PMTc from the camera signal processing units 515a, 515b, and 515c, respectively.
- the parameters PMT (PMTa, PMTb, PMTc) referred to here are parameters related to camera images, and in particular parameters that affect the brightness and color of the images. Specific examples include values such as exposure and white balance that are operated by a cameraman. Alternatively, it may be an adjustment of color tone, a value of a video effect, or the like.
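As one concrete (hypothetical) interpretation of applying a parameter PMT, an exposure value in EV stops and per-channel white-balance gains could be applied to a linear-light RGB image as follows; the function name and value ranges are assumptions for illustration.

```python
import numpy as np

def apply_pmt(image, exposure_ev=0.0, wb_gains=(1.0, 1.0, 1.0)):
    """Apply exposure (in EV stops) and white-balance gains to a
    linear-light RGB image in [0, 1]; a simplified stand-in for the
    camera-side signal processing the parameters PMT describe."""
    gain = 2.0 ** exposure_ev          # one EV stop doubles the signal
    out = image.astype(np.float32) * gain * np.asarray(wb_gains, np.float32)
    return np.clip(out, 0.0, 1.0)
```

Applying the same operation to the rendered inner frustum makes its brightness and color track the cameraman's adjustments.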
- If the monitor videos vMa, vMb, and vMc are obtained simply by combining the inner frustums vBCa, vBCb, and vBCc in the video processing unit 18, those inner frustums are videos in which the exposure adjustment and white balance adjustment of the cameras 502a, 502b, and 502c are not reflected. Therefore, even if the cameraman adjusts the exposure or white balance on the camera 502, the adjustments are not reflected in the background portions of the monitor videos vMa, vMb, and vMc. For the same reason, the brightness and color tone of the background portion and the object portion of the monitor videos vMa, vMb, and vMc may differ from each other. These factors make it inconvenient for the cameraman to adjust exposure and white balance.
- In this embodiment, the parameters PMTa, PMTb, and PMTc are therefore supplied to the rendering units 21, 22, and 23 in order to reflect the cameraman's operations in the monitor video vM.
- the rendering units 21, 22, and 23 then perform processing as shown in FIG.
- the processing example in FIG. 23 is obtained by adding steps S71 and S72 to the processing described in FIG. 6.
- the processing from step S10 to step S60 is the same as that in FIG.
- In step S71, the rendering units 21, 22, and 23 perform signal processing, using the parameters PMTa, PMTb, and PMTc, on the inner frustums vBC (vBCa, vBCb, vBCc) generated in step S40; in other words, processing that produces a video with the exposure and white balance adjusted. Then, in step S72, the rendering units 21, 22, and 23 output the inner frustum vBC' resulting from the signal processing of step S71.
- In this way, in addition to the background videos vBa, vBb, and vBc, the rendering units 21, 22, and 23 respectively output inner frustums vBCa', vBCb', and vBCc' that have been subjected to video signal processing reflecting the parameters PMTa, PMTb, and PMTc.
- Note that the background videos vBa, vBb, and vBc supplied to the LED wall 505 are not subjected to video processing reflecting the parameters PMTa, PMTb, and PMTc. This is because the exposure and white balance adjustments are applied afterwards, when the camera 502 photographs the wall.
- the video processing unit 18 performs the processing described in FIG. 17 using the functional configuration described in FIG. 16.
- However, what the video processing unit 18 receives as input are the inner frustums vBCa', vBCb', and vBCc' that have been subjected to video processing reflecting the parameters PMTa, PMTb, and PMTc. Therefore, the generated monitor videos vMa, vMb, and vMc have backgrounds that reflect the exposure and white balance adjusted by the cameraman, and as a result they become more appropriate videos.
- FIG. 22 shows a remote controller 615, which outputs an operation signal RC to, for example, camera signal processing units 515a, 515b, and 515c.
- a cameraman, operator, or the like can perform exposure adjustment, white balance adjustment, color tone adjustment, video effects, etc. by operating the camera 502 itself or by using the remote controller 615. In such a case, it is preferable that the processing of the rendering engine 520 is performed according to the parameter PMT corresponding to the operation.
- FIG. 24 shows a configuration example of the fifth embodiment. This is similar to the configuration example in FIG. 21, but is an example in which the parameters PMTa, PMTb, and PMTc are reflected in the video processing units 24a, 24b, and 24c.
- parameters PMTa, PMTb, and PMTc from camera signal processing units 515a, 515b, and 515c are input to video processing units 24a, 24b, and 24c, respectively.
- FIG. 25 shows the functional configuration of the video processing units 24a, 24b, and 24c.
- Video processing units 24a, 24b, and 24c corresponding to the three cameras 502a, 502b, and 502c have processing functions as an object extraction unit 41, a synthesis unit 42, a rendering unit 43, and a signal processing unit 44, respectively.
- the object extraction section 41, composition section 42, and rendering section 43 have the functions described in FIG. 19.
- the signal processing unit 44 performs signal processing on the inner frustum vBC generated by the rendering unit 43 according to the parameter PMT.
- the rendering unit 43 of the video processing unit 24a inputs the shooting information IFa regarding the camera 502a, and generates an inner frustum vBCa, which is a corresponding image of the camera 502a, as a background image at a viewpoint position based on the shooting information IFa.
- the inner frustum vBCa is subjected to signal processing according to the parameter PMTa in the signal processing section 44, and the inner frustum vBCa' is supplied to the synthesis section 42. Then, the image of the object extracted by the object extracting section 41 and the inner frustum vBCa' are combined by the combining section 42 and output as a monitor video vMa for the camera 502a.
- the rendering unit 43 of the video processing unit 24b inputs the shooting information IFb regarding the camera 502b, and generates an inner frustum vBCb, which is a corresponding image of the camera 502b, as a background image at a viewpoint position based on the shooting information IFb.
- This inner frustum vBCb is subjected to signal processing according to the parameter PMTb in the signal processing unit 44, and the resulting inner frustum vBCb' is supplied to the combining unit 42. Then, the video of the object extracted by the object extraction unit 41 and the inner frustum vBCb' are combined by the combining unit 42 and output as the monitor video vMb for the camera 502b.
- the rendering unit 43 of the video processing unit 24c inputs the photographing information IFc regarding the camera 502c, and generates an inner frustum vBCc, which is a corresponding video of the camera 502c, as a background video at a viewpoint position based on the photographing information IFc.
- This inner frustum vBCc is subjected to signal processing according to the parameter PMTc in the signal processing section 44, and the inner frustum vBCc' is supplied to the synthesis section 42. Then, the image of the object extracted by the object extracting section 41 and the inner frustum vBCc' are combined by the combining section 42 and output as a monitor video vMc for the camera 502c.
- FIG. 26 is a flowchart showing the processing of the video processing units 24a, 24b, and 24c described above.
- the video processing units 24a, 24b, and 24c repeat the processes from step S301A to step S306 at each frame timing until it is determined in step S300 that the monitor output is finished.
- In step S301A, the video processing unit 24 (24a, 24b, 24c) inputs the shooting information IF (IFa, IFb, IFc) and the parameters PMT (PMTa, PMTb, PMTc).
- In step S302, the video processing unit 24 generates the inner frustum vBC (vBCa, vBCb, vBCc) based on the shooting information IF.
- In step S310, the video processing unit 24 performs signal processing on the inner frustum vBC according to the parameters PMT.
- Steps after step S305 are the same as those in FIG. 20.
- However, the videos combined with the object video are the inner frustums vBCa', vBCb', and vBCc' after the signal processing.
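Putting the steps together, one frame of the processing in FIG. 26 (S301A, S302, S310, then the compositing of S304/S305) might look like this sketch. `render_vBC` and `apply_pmt` are placeholder callables standing in for the rendering unit 43 and the signal processing unit 44, and the black-key threshold is a hypothetical stand-in for object extraction.

```python
import numpy as np

def monitor_frame(vCbk_frame, shooting_info, pmt, render_vBC, apply_pmt):
    """One frame of FIG. 26: render vBC from IF (S302), apply PMT (S310),
    then replace the black background with vBC' (S304-S305)."""
    vBC = render_vBC(shooting_info)          # rendering unit 43
    vBC_p = apply_pmt(vBC, pmt)              # signal processing unit 44
    mask = vCbk_frame.max(axis=-1) > 16      # object extraction (black key)
    vM = vBC_p.copy()
    vM[mask] = vCbk_frame[mask]
    return vM                                # monitor video frame (S306)
```

Because the parameter processing is applied only to the rendered inner frustum, the object portion keeps the camera's own rendition while the background tracks the same adjustments.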
- As a result, the operator and the cameraman of each camera 502 can monitor a video that has the correct background and that reflects the exposure adjustment and white balance adjustment of the camera 502.
- the video processing section 18 (18a, 18b, 18c) and the video processing section 24 (24a, 24b, 24c) have been described as examples of the information processing device 70.
- These information processing devices 70 receive, from among the videos in which a camera 502 captures the display video of the LED wall 505, which time-divisionally displays an inner frustum vBC (corresponding video) generated corresponding to one of the cameras 502 and a black video (specific video), together with an object such as the performer 510, a black background photographed video vCbk (first photographed video) that includes the object and the specific video.
- They then perform video processing to replace the black video included in the input black background photographed video vCbk with the inner frustum vBC generated corresponding to the camera 502 that captured that video.
- When a background video vB including an inner frustum vBC, the corresponding video of one camera, is photographed by multiple cameras 502, the photographed video vC of any camera 502 that does not correspond to that inner frustum vBC does not have the correct background. Therefore, the black background photographed video vCbk is input, and its black video portion is replaced with the inner frustum vBC corresponding to each camera 502. This makes it possible to obtain a correct background video for every camera 502.
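The point that each camera's black-background frame is keyed over the inner frustum rendered for that same camera can be sketched as a per-camera mapping; `replace_black` is a placeholder for the compositing operation described above.

```python
def monitor_videos(vCbk_by_camera, vBC_by_camera, replace_black):
    """For every camera 502, replace the black portion of its own black
    background photographed video vCbk with its own inner frustum vBC,
    so no camera ever receives another camera's background."""
    return {cam: replace_black(vCbk_by_camera[cam], vBC_by_camera[cam])
            for cam in vCbk_by_camera}
```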
- Note that, for the camera 502 whose captured video vC is selected as the main video vCm, the main video vCm may be used as the monitor video vM instead of the monitor video vM generated by the video processing unit 18 or the video processing unit 24. In other words, the monitor video vM may be generated by the video processing unit 18 or the video processing unit 24 only for the cameras 502 whose captured video vC is not the main video vCm.
- In the embodiment, it is assumed that the captured video vC (second captured video), which includes the object and the inner frustum vBC generated corresponding to one camera, is a video that can be selected as the main video vCm constituting the content.
- That is, the main line video vCm is a video selected by the switcher 600 from among the videos in which each camera 502 captured the inner frustum vBC displayed on the LED wall 505. In other words, the video originally intended in virtual production is output as the main video vCm.
- the video generated by the video processing of the video processing section 18 or the video processing section 24 is output as the monitor video vM used for monitor display.
- In this way, the monitor video vM of each camera 502 becomes a video that includes the correct background and object for that camera 502.
- In the embodiment, the monitor video vM generated by video processing by the video processing unit 18 or the video processing unit 24 is output as a video used for monitor display on the camera 502.
- Since the monitor video vM (vMa, vMb, vMc) is displayed on the viewfinder or the like of each camera 502, the cameramen of all the cameras 502, although they are shooting the LED wall 505, see a video that contains the correct background and objects for their own camera 502. The cameraman can therefore appropriately adjust the angle of view and move the shooting position and shooting direction while checking the correct background state. This is particularly useful when a cameraman adjusts the angle of view or the camera angle while the captured video vC of the camera he or she operates is not selected as the main video vCm.
- In the embodiment, the monitor video vM generated by the video processing of the video processing unit 18 or the video processing unit 24 is also output as a video used for monitor display in the multiviewer 612, which displays multiple videos on one screen.
- The monitor videos vM (vMa, vMb, vMc), in which the background black video has been replaced with the inner frustum vBC, are supplied to the multiviewer 612 and displayed. This allows the operator who operates the switcher 600 to view the captured video of each camera 502 with an appropriate background, and thus to perform switching operations while accurately judging the state of each captured video vC.
- In the embodiment, the video processing unit 18 performs processing to replace, for each frame of the black background photographed video vCbk, the black video portion with the input inner frustum vBC (see FIGS. 13 to 17, 22, and 23). For example, as in the examples described with reference to FIGS. 13 to 17, the video processing unit 18 inputs the inner frustum vBC supplied from the rendering engine 520 and combines it with the object video extracted from the black background photographed video vCbk. This makes it possible to generate a video in which the black background is replaced with the inner frustum vBC.
- an inner frustum vBC for each camera 502 is generated by the rendering engine 520, and a plurality of generated inner frustum vBCs are selectively displayed on the LED wall 505.
- the video processing unit 18 replaces the black video portion of each frame of the black background photographed video vCbk with the inner frustum vBC (or vBC') supplied from the rendering engine 520. (See FIGS. 13 to 17, 22, and 23).
- the rendering units 21, 22, and 23 of the rendering engine 520 generate background images vBa, vBb, and vBc each including the inner frustum vBC corresponding to the camera.
- the video processing unit 18 inputs the inner frustum vBC supplied from the rendering engine 520 and combines it with the object video extracted from the black background photographed video vCbk.
- Since the rendering units 21, 22, and 23 always generate the background videos vBa, vBb, and vBc corresponding to the respective cameras, the background video vB corresponding to the newly selected camera can be displayed on the LED wall 505 immediately when switching by the switcher 600 occurs. This is because generation of the background video vB for another camera does not need to be started in response to a switching instruction from the switcher 600.
- the inner frustum vBC generated for each camera by the rendering units 21, 22, and 23 is not wasted and is utilized for monitoring even during a period when it is not selected by the switcher 600.
- In the embodiment, the video processing unit 24 performs a process of generating an inner frustum vBC corresponding to the camera 502 that captured the black background photographed video vCbk, and a process of replacing, for each frame of the black background photographed video vCbk, the black video portion with the generated inner frustum vBC (see FIGS. 18 to 20, 21, and 24 to 26).
- That is, the video processing unit 24 generates the inner frustum vBC of the corresponding camera 502 in the rendering unit 43 and combines it with the object video extracted from the black background photographed video vCbk.
- the video processing unit 24 only needs to be able to render the inner frustum vBC in the rendering unit 43 for generating the monitor video vM.
- Since the video generated by the rendering unit 43 is not used as the main video vCm, it does not need the video quality required of the main video vCm; the quality required for the monitor video vM is sufficient. Therefore, the rendering unit 43 only needs to perform video processing with a relatively light load and is not required to have as much processing power as the rendering units 21, 22, and 23 of the rendering engine 520. Accordingly, the video processing unit 24 including the rendering unit 43 can be built from a relatively inexpensive information processing device.
- In the embodiment, a system configuration is also assumed in which the rendering engine 520 generates only the inner frustum vBC for the one camera 502 selected from among the plurality of cameras 502 performing shooting, and the background video vB including the generated inner frustum vBC is displayed on the LED wall 505. In that case, the video processing unit 24 performs a process of generating an inner frustum vBC corresponding to the camera 502 that captured the black background photographed video vCbk, and replaces, for each frame of the black background photographed video vCbk, the black video portion with the generated inner frustum vBC (or vBC') (see FIGS. 21 and 24 to 26).
- the rendering engine 520 generates the background video vB for the camera 502 selected by the switch unit 12 of the switcher 600.
- the processing load on the rendering engine 520 for generating the high-definition background video vB to be included in the main video vCm is reduced. This is because it is not necessary to generate background images vBa, vBb, and vBc corresponding to the plurality of cameras 502, as in the example of FIG. 18, for example.
- Even with such a system configuration, by providing the video processing unit 24 including the rendering unit 43, it is possible to generate a monitor video vM with the correct background for each camera 502.
- In the embodiment, the video processing unit 18 or the video processing unit 24 performs processing to replace the black video portion of each frame of the black background photographed video vCbk with an inner frustum vBC (vBCa', vBCb', vBCc') to which signal processing according to the parameters PMT of the camera 502 that captured that video has been applied (see FIGS. 22 to 26).
- an inner frustum vBC for each camera 502 is generated by a rendering engine 520, and a plurality of generated inner frustum vBCs are selectively displayed on an LED wall 505.
- This is an example in which the video processing unit 18 replaces the black video portion of each frame of the black background photographed video vCbk with an inner frustum vBC (vBCa', vBCb', vBCc') that has been subjected to signal processing according to the parameters PMT by the rendering engine 520 (see FIGS. 22 and 23).
- By subjecting the video data of the inner frustum vBC to video signal processing according to parameters PMT such as exposure and white balance, a correct monitor display whose background corresponds to the adjustment operations of the camera 502 can be realized.
- It is also assumed that the video processing unit 24 generates an inner frustum vBC corresponding to the camera 502 that captured the black background photographed video vCbk, performs signal processing on it according to the parameter PMT, and then replaces the black video portion of the black background photographed video vCbk with the result (see FIGS. 24 to 26). In this case as well, since signal processing according to the parameter PMT is performed on the inner frustum vBC, the monitor display reflects the adjustment operations of the camera 502.
- In the embodiment, the display video of the LED wall 505 is a video in which the black video, which is the specific video, and the inner frustum vBC (the background video vB including the inner frustum vBC), which is the corresponding video, are displayed alternately frame by frame.
- Each camera 502 then shoots each frame in synchronization with the display, and outputs in parallel the black background photographed video vCbk, a sequence of frames including the black video, and the photographed video vC, a sequence of frames including the inner frustum vBC.
- In such a photographing system, the video processing unit 18 or the video processing unit 24 may input the black background photographed video vCbk and replace the black video portion of each of its frames with the inner frustum vBC corresponding to the camera 502.
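The parallel output described here amounts to demultiplexing the alternating frame sequence; a minimal sketch, assuming the specific (black) frame comes first in each display pair:

```python
def demux_time_division(frames):
    """Split an alternating frame sequence into the black-background
    stream vCbk (even positions) and the inner-frustum stream vC
    (odd positions)."""
    vCbk = frames[0::2]
    vC = frames[1::2]
    return vCbk, vC
```

The vCbk stream then feeds the monitor-video processing, while the vC stream remains available for selection as the main line video.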
- the specific video in the embodiment is a video used for video separation processing.
- By using, as the specific video, a video such as a black video, a green video, or a blue video that can be separated by chroma keying or the like, the video processing for generating the monitor video vM becomes easy.
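For a green or blue specific video, the separation step would be an ordinary chroma key. A simplified distance-to-key-colour sketch (the `tolerance` value is a hypothetical tuning parameter, and real keyers typically work in a chroma space rather than raw RGB):

```python
import numpy as np

def chroma_key_mask(frame, key_rgb, tolerance=60):
    """Foreground (object) mask: pixels whose colour is far enough from
    the key colour (e.g. pure green) are kept as object pixels."""
    diff = frame.astype(np.int32) - np.asarray(key_rgb, dtype=np.int32)
    return np.abs(diff).sum(axis=-1) > tolerance
```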
- In the embodiment, the specific video used in the video separation process is a black video, and the black background photographed video vCbk is a video in which an object is photographed against the black video background.
- the corresponding video in the embodiment is a background video generated according to the position or shooting direction of the camera 502.
- the corresponding video is the inner frustum vBC rendered based on the shooting information including the position of the camera 502 with respect to the LED wall 505 and the shooting direction.
- As a result, the monitor video vM of each camera becomes a video in which the inner frustum vBC is appropriate for that camera.
- The program of the embodiment is a program that causes a processor such as a CPU or a DSP, or a device including such a processor, to execute the processes shown in FIGS. 17, 20, and 26 described above.
- That is, the program of the embodiment causes the information processing device 70, such as the video processing units 18 and 24, to perform video processing on a black background photographed video vCbk (first photographed video) that includes the object and the specific video, from among the videos in which a camera 502 captures, together with an object such as the performer 510, the display video of the LED wall 505, which time-divisionally displays the inner frustum vBC generated corresponding to one camera 502 among the plurality of cameras 502 and a specific video such as a black video.
- This video processing replaces the specific video included in the black background photographed video vCbk with the inner frustum vBC generated corresponding to the camera 502 that captured the black background photographed video vCbk.
- the information processing device 70 that executes the above-described video processing can be realized by various computer devices.
- Such a program can be recorded in advance in an HDD as a recording medium built into equipment such as a computer device, or in a ROM in a microcomputer having a CPU.
- Alternatively, such a program can be stored (recorded) temporarily or permanently in a removable recording medium such as a flexible disc, a CD-ROM (Compact Disc Read Only Memory), an MO (Magneto Optical) disc, a DVD (Digital Versatile Disc), a Blu-ray Disc (registered trademark), a magnetic disk, a semiconductor memory, or a memory card.
- a removable recording medium can be provided as so-called package software.
- In addition to being installed into a personal computer or the like from a removable recording medium, such a program can also be downloaded from a download site via a network such as a LAN (Local Area Network) or the Internet.
- Such a program is suitable for widely providing the information processing device 70 of the embodiment.
- For example, by downloading the program to a personal computer, a communication device, a mobile terminal device such as a smartphone or tablet, a mobile phone, a game device, a video device, a PDA (Personal Digital Assistant), or the like, these devices can function as the information processing device 70 of the present disclosure.
- (1) An information processing device comprising a video processing unit that, for a first captured video including the object and the specific video, performs video processing to replace the specific video with a corresponding video generated corresponding to the camera that captured the first captured video.
- (2) The information processing device according to (1) above, wherein a second captured video, which includes the object and the corresponding video generated corresponding to the one camera, from among the videos in which a camera captures the display video of the display and the object, can be selected as a main video.
- The information processing device according to any one of (1) to (6) above, wherein corresponding videos for the respective cameras are generated by a rendering engine, the plurality of generated corresponding videos are selectively displayed on the display, and in the video processing, a process is performed to replace the specific video portion of each frame of the first captured video with a corresponding video supplied from the rendering engine.
- The information processing device according to any one of (1) to (5) above, wherein the video processing includes a process of generating a corresponding video corresponding to the camera that captured the first captured video, and a process of replacing, for each frame of the first captured video, the specific video portion with the generated corresponding video.
- The information processing device according to any one of (1) to (9) above, wherein the video processing includes, for each frame of the first captured video, a process of replacing the specific video portion with a corresponding video that has been subjected to signal processing according to parameters regarding the video of the camera that captured the first captured video.
- (11) The information processing device according to any one of (1), (2), (3), (4), (5), (6), (7), and (10) above, wherein corresponding videos for the respective cameras are generated by a rendering engine, the plurality of generated corresponding videos are selectively displayed on the display, and in the video processing, the specific video portion of each frame of the first captured video is replaced with a corresponding video that has been subjected, by the rendering engine, to signal processing according to parameters regarding the video of the camera that captured the first captured video.
- The information processing device according to (2) above, wherein the display video of the display is a video in which the specific video and the corresponding video are displayed alternately frame by frame, in a photographing system in which each camera outputs in parallel the first captured video, in which frames including the specific video are consecutive, and the second captured video, in which frames including the corresponding video are consecutive; and wherein the first captured video is input and the video processing is performed.
- The information processing device according to any one of (1) to (13) above, wherein the specific video is a black video, and the first captured video is a video in which an object is photographed against the black video background.
- the information processing device according to any one of (1) to (15), wherein the corresponding video is a background video generated according to the position or shooting direction of the camera.
- An information processing method comprising performing, by an information processing device, video processing on a first captured video including the object and the specific video, to replace the specific video with a corresponding video generated corresponding to the camera that captured the first captured video.
- A photographing system comprising a video processing unit that, for a first photographed video including the object and the specific video, from among the videos in which a camera captures the display video of the display and an object separate from the display, performs video processing to replace the specific video with a corresponding video generated corresponding to the camera that captured the first photographed video.
- Reference Signs List: 10 switcher controller (SW controller); 11 switch section; 12 switch section; 18, 18a, 18b, 18c video processing section; 21, 22, 23 rendering section; 24, 24a, 24b, 24c video processing section; 28 selector; 41 object extraction section; 42 combining section; 43 rendering section; 44 signal processing section; 70 information processing device; 71 CPU; 85 video processing unit; 502 camera; 505 LED wall; 510 performer; 515a, 515b camera signal processing section; 520 rendering engine; 600 switcher; 612 multiviewer; vB background video; vBC shooting area video (inner frustum); vC, vCa, vCb, vCc photographed video; vCbk black background photographed video; vCm main line video; vM, vMa, vMb, vMc monitor video
Description
In recent years, in place of green-screen shooting, photographing systems have also been developed in which, in a studio equipped with a large display, a background video is displayed on the display and a performer acts in front of it, so that the performer and the background can be captured together. These are known as so-called virtual production, in-camera VFX, or LED (Light Emitting Diode) wall virtual production.
Patent Document 1 below discloses a technique for a system that photographs a performer acting in front of a background video.
For example, when a corresponding video such as a background is displayed on a display and shooting is performed with multiple cameras, a specific video is displayed, and a timing is provided at which the specific video and the object are captured together. For the first captured video obtained at that timing, processing is performed to replace the specific video portion with the corresponding video for each camera.
<1. Imaging system and content production>
<2. Configuration of information processing device>
<3. Imaging system using a plurality of cameras and a switcher>
<4. First embodiment>
<5. Second embodiment>
<6. Third embodiment>
<7. Fourth embodiment>
<8. Fifth embodiment>
<9. Summary and modifications>
For example, in the embodiments, the background video before it is displayed on the display, the video captured by a camera, and the background video or captured video switched by the switcher are video data rather than actually displayed video, but for convenience they are referred to as the "background video", "captured video", and so on.
An imaging system and video content production to which the technology of the present disclosure can be applied will now be described.
Fig. 1 schematically shows an imaging system 500. This imaging system 500 performs shooting for virtual production, and the figure shows part of the equipment arranged in a shooting studio.
The portion of the background video vB excluding the shooting-area video vBC is called the "outer frustum", and the shooting-area video vBC is called the "inner frustum".
The background video vB described here refers to the entire video displayed as the background, including the shooting-area video vBC (inner frustum).
In practice, the range of the shooting-area video vBC is made slightly wider than the range captured by the camera 502 at that point in time. This prevents the outer-frustum video from appearing in the frame because of rendering delay when the captured range changes slightly through panning, tilting, or zooming of the camera 502, and also avoids the influence of diffracted light from the outer-frustum video.
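The padding described above can be illustrated with a small sketch. The helper name, coordinate convention (LED-wall pixel coordinates), and the margin ratio are all hypothetical assumptions for illustration; the disclosure does not specify how the extra range is computed.

```python
def inner_frustum_rect(cam_x, cam_y, cam_w, cam_h, margin=0.1):
    """Expand the rectangle currently captured by the camera (in LED-wall
    pixel coordinates) by a safety margin on every side, so that rendering
    delay during pan/tilt/zoom does not expose the outer-frustum video.

    Returns (x, y, width, height) of the padded inner-frustum region.
    The 10% margin is an illustrative default, not a value from the patent.
    """
    pad_w = cam_w * margin
    pad_h = cam_h * margin
    return (cam_x - pad_w, cam_y - pad_h, cam_w + 2 * pad_w, cam_h + 2 * pad_h)
```

For example, a 200x100 captured region at (100, 50) would be rendered as a 240x120 inner frustum under this assumed 10% margin.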
The shooting-area video vBC rendered in real time in this way is composited with the outer-frustum video. The outer-frustum video used in the background video vB is rendered in advance on the basis of 3D background data, and the video rendered in real time as the shooting-area video vBC is embedded into part of that outer-frustum video, thereby generating the whole background video vB.
In generating 3D data by photogrammetry, point-cloud information acquired with LiDAR may also be used.
The shooting information is assumed to include, for each frame timing, the position information of the camera 502, the camera orientation, the angle of view, the focal length, the F-number (aperture value), the shutter speed, lens information, and so on.
As video adjustment, color adjustment, luminance adjustment, contrast adjustment, and the like may be performed.
As clip editing, cutting of clips, reordering, adjustment of duration, and the like may be performed.
As video effects, compositing of CG video, special-effect video, and the like may be performed.
Fig. 5 is a block diagram showing the configuration of the imaging system 500 outlined in Figs. 1, 2, and 3.
As a specific detection method used by the camera tracker 560, there is a method of arranging reflectors at random on the ceiling and detecting the position from the reflections of infrared light emitted toward them from the camera tracker 560 mounted on the camera 502. Other detection methods include estimating the self-position of the camera 502 from gyro information mounted on the camera platform or the body of the camera 502, or by image recognition of the video captured by the camera 502.
As per-frame processing, the rendering engine 520 uses the shooting information supplied from the camera tracker 560 and the camera 502 to specify the viewpoint position and the like with respect to the 3D background data, and renders the shooting-area video vBC (inner frustum).
The display controller 590 may be omitted and these processes performed by the rendering engine 520. That is, the rendering engine 520 may generate the divided video signals nD, perform calibration, and transmit the divided video signals nD to the respective LED panels 506.
A video to be used as the outer frustum is then generated.
However, when a plurality of cameras 502 are used, there is the circumstance that the shooting-area videos vBC corresponding to the respective cameras 502 interfere with each other. For example, in the example of Fig. 7 using two cameras 502a and 502b, the shooting-area video vBC corresponding to the camera 502a is shown, but when the video of the camera 502b is used, the shooting-area video vBC corresponding to the camera 502b is also required. If the shooting-area videos vBC corresponding to the cameras 502a and 502b were each simply displayed, they would interfere with each other. Some contrivance is therefore required regarding the display of the shooting-area videos vBC.
Next, a configuration example of the information processing device 70 that can be used in asset creation ST1, production ST2, and post-production ST3 will be described with reference to Fig. 8.
The information processing device 70 is a device capable of information processing, particularly video processing, such as computer equipment. Specifically, a personal computer, a workstation, a portable terminal device such as a smartphone or tablet, a video editing device, or the like is assumed as the information processing device 70. The information processing device 70 may also be a computer device configured as a server device or a computing device in cloud computing.
The information processing device 70 can also function as the rendering engine 520 or the asset server 530 constituting the imaging system 500 used in production ST2.
The information processing device 70 can also function as a video editing device that performs various kinds of video processing in post-production ST3.
In particular, in the examples of Figs. 13 and 22, the video processing unit 18 is provided within the switcher 600, and this video processing unit 18 is constituted by the information processing device 70.
The video processing unit 85 may also be provided as a function within the CPU 71.
A user operation is detected by the input unit 76, and a signal corresponding to the input operation is interpreted by the CPU 71.
A microphone is also assumed as the input unit 76. Voice uttered by the user can also be input as operation information.
The display unit 77 performs various kinds of display, and is constituted by, for example, a display device provided in the housing of the information processing device 70 or a separate display device connected to the information processing device 70.
On the basis of instructions from the CPU 71, the display unit 77 displays various images, operation menus, icons, messages, and the like on the display screen, that is, performs display as a GUI (Graphical User Interface).
For example, when the information processing device 70 functions as the asset server 530, a DB storing a group of 3D background data can be constructed using the storage unit 79.
For example, when the information processing device 70 functions as the rendering engine 520 or as the video processing unit 24 described later, it can access the DB of the asset server 530 and receive shooting information from the camera 502 and the camera tracker 560 via the communication unit 80.
In the case of the information processing device 70 used for post-production ST3 as well, it is possible to access the DB of the asset server 530 and so on via the communication unit 80.
The drive 81 can read video data, various computer programs, and the like from the removable recording medium 82. The read data is stored in the storage unit 79, and video and audio contained in the data are output by the display unit 77 and the audio output unit 78. Computer programs and the like read from the removable recording medium 82 are installed in the storage unit 79 as necessary.
In the embodiments, a case is assumed in which shooting is performed using a plurality of cameras 502 (502a, 502b, ...) as described, for example, with reference to Fig. 7.
A plurality of streams of captured video vC, obtained by shooting the LED wall 505, the performer 510, and so on with the plurality of cameras 502, are alternatively switched by a switcher 600 described later and output as a so-called main-line video. Of course, apart from the main-line video, each captured video vC can also be recorded or transmitted individually.
The video content of the inner frustum vBC is rendered frame by frame by the rendering engine 520 according to the position and shooting direction of the camera 502, embedded within the whole background video vB, and displayed on the LED wall 505. Accordingly, the range and content of the inner frustum vBC within the background video vB differ according to the camera position and the like for each frame, as described with reference to Figs. 2 and 3. The display also corresponds to each individual camera 502.
That is, the inner frustum vBC is generated corresponding to each camera 502 and is an example of the corresponding video referred to in the present disclosure.
When switching the main-line video, the switching timing of the background video vB and the switching timing of the main-line video are controlled so that a state in which the captured video vC of one camera 502 contains the background video vB of another camera 502 is not output as the main-line video.
Fig. 10 is based on the configuration of Fig. 7, with a switcher 600 that switches the captured videos vC and an output device 610 added. Components described with reference to Fig. 5 or Fig. 7 are given the same reference numerals, and duplicate description is avoided.
The video signal captured by the camera 502a is subjected to development processing, resize processing, and the like in the camera signal processing unit 515a and input to the switcher 600 as the captured video vCa. The video signal captured by the camera 502b is subjected to development processing, resize processing, and the like in the camera signal processing unit 515b and input to the switcher 600 as the captured video vCb.
On the basis of the shooting information IFb about the camera 502b, the rendering unit 22 renders the inner frustum vBCb corresponding to the camera 502b, embeds it into the whole background, and outputs the background video vBb matched to the camera 502b.
As described above, the captured videos vCa and vCb from the cameras 502a and 502b are input to the switcher 600. The switcher 600 is provided with a switch unit 12 to which the captured videos vCa and vCb are input. The switch unit 12 alternatively selects one of the captured videos vCa and vCb and outputs the selected one as the main-line video vCm.
Here, the output device 610 may be a recording device that records the main-line video vCm on a recording medium, or a video transmission device that broadcasts or transmits the main-line video vCm. The output device 610 may also be a Web server or the like that distributes the main-line video vCm.
The SW controller 10 can be constituted by an information processing device 70 as in Fig. 8. The SW controller 10 need only have a configuration including at least the CPU 71, ROM 72, RAM 73, nonvolatile memory unit 74, and input/output interface 75 of Fig. 8.
For example, an operator can perform various operations, including switching of the main-line video vCm, using the switcher panel 611.
An operator or the like of the switcher 600 can check the content captured by each camera 502 by means of the multiviewer 612.
When the switching trigger KP for the main-line video vCm described above occurs, the SW controller 10 proceeds from step S101 to step S102 and controls switching of the switch unit 11 by the control signal C1. That is, the switch unit 11 is switched immediately in response to the occurrence of the switching trigger KP.
When the time corresponding to the switching delay time Tdl has elapsed, the SW controller 10 controls switching of the switch unit 12 by the control signal C2 in step S104. That is, when the SW controller 10 has caused the switch unit 11 to select the background video vBa in step S102, it causes the switch unit 12 to select the captured video vCa in step S104. Similarly, when it has caused the switch unit 11 to select the background video vBb in step S102, it causes the switch unit 12 to select the captured video vCb in step S104.
For example, assume a case where the selected state is switched from the camera 502a to the camera 502b.
In this case, the switching delay time Tdl corresponds to the time lag from the point when, for example, the background video vBb for the camera 502b is selected at the switch unit 11 until the captured video vCb, obtained by the camera 502b shooting the background video vBb, is input to the switch unit 12.
For example, Fig. 12 shows examples of readout ranges from the image sensor 30. Assume that the solid line indicates the entire pixel area of the image sensor 30. The readout of photoelectric conversion signals from the image sensor 30 may cover the entire pixel area indicated by the solid line, or, depending on the readout mode, various other ranges such as the dotted-line range, the dashed-line range, or the dash-dotted-line range. The delay time differs depending on these differences in readout range. There are also delays due to signal processing on the captured video vC and due to resize processing.
By switching the switch unit 12 at the timing when the background video vB after the switch is input to the switch unit 12, the main-line video vCm never becomes a video with an incorrect background.
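The two-stage switching of steps S101 to S104 can be sketched as follows. The class and method names, and the use of a one-shot timer to realize the delay Tdl, are assumptions made for illustration; the patent describes only the control-flow, not an implementation.

```python
import threading

class SwitcherController:
    """Sketch of the two-stage switch: the background switch (switch unit
    11) happens immediately on the trigger KP, while the camera-feed
    switch (switch unit 12) is deferred by the delay Tdl, so the
    main-line video never shows a background rendered for the previously
    selected camera."""

    def __init__(self, tdl_seconds):
        self.tdl = tdl_seconds      # switching delay time Tdl
        self.background_sel = 'a'   # state of switch unit 11 (signal C1)
        self.mainline_sel = 'a'     # state of switch unit 12 (signal C2)

    def on_trigger(self, target_camera):
        # Step S102: switch the background video at once.
        self.background_sel = target_camera
        # Steps S103-S104: after Tdl elapses, switch the main-line feed.
        timer = threading.Timer(self.tdl, self._switch_mainline,
                                args=(target_camera,))
        timer.start()
        return timer  # returned so a caller can wait on it if needed

    def _switch_mainline(self, target_camera):
        self.mainline_sel = target_camera
```

During the interval Tdl the two switch units intentionally disagree: the new background is already on the LED wall, but the main line still carries the previously selected camera until its feed shows the new background.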
For example, assume that the camera 502a is selected for the main-line video vCm. In that case, as shown in Fig. 9B, the background video vB including the inner frustum vBCa as the corresponding video for the camera 502a is displayed on the LED wall 505.
Fig. 13 shows a system configuration example of the first embodiment. Parts identical to those in Figs. 5, 7, and 10 are given the same reference numerals, and duplicate description is avoided.
In accordance with shooting by the three cameras 502a, 502b, and 502c, the rendering engine 520 generates the respectively corresponding background videos vBa, vBb, and vBc by means of the rendering units 21, 22, and 23. The background videos vBa, vBb, and vBc are then selected by the switch unit 11.
In this case as well, the SW controller 10, for example by the processing of Fig. 11, causes the switch unit 12 to be switched only after the switching delay time Tdl has elapsed following switching of the switch unit 11.
In this way, in step S104, the SW controller 10 performs control causing the switch unit 12 to switch to the captured video vC of the camera 502 corresponding to the background video vB selected after the switch at the switch unit 11.
The video processing unit 18 generates the monitor videos vM (vMa, vMb, vMc) in response to the input of these inner frustums vBCa, vBCb, and vBCc, and outputs them to the multiviewer 612 and to the camera signal processing units 515a, 515b, and 515c.
Meanwhile, on the LED wall 505, the input background video vB (60 fps) and a black video, likewise at 60 fps, are displayed alternately at 120 fps. The black video is a video in which the entire screen is black.
For example, the LED processor 570 repeatedly causes the LED panels 506 to display the background video vB input from the switcher 600 for one frame period at 120 fps, followed by an internally generated black video for one frame period, likewise at 120 fps.
As a result, on the LED wall 505, the background video vB and the black video BK are displayed alternately with a one-frame period T1 at 120 fps, as shown in Fig. 14.
Consequently, frames containing the background video vB together with an object such as the performer 510, and frames containing the black video BK together with that object, are captured alternately. In this case, the camera signal processing unit 515 (515a, 515b, 515c) separates these 120 fps frames into odd and even frames and outputs them as two streams.
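The odd/even frame separation performed by the camera signal processing unit 515 can be sketched as a simple de-interleave of the 120 fps capture into two 60 fps streams. Which parity carries the background frames, and the function name, are assumptions for this sketch.

```python
def split_streams(frames_120fps, background_first=True):
    """De-interleave a 120 fps capture into two 60 fps streams: one whose
    frames contain the background video vB (usable as the second captured
    video) and one whose frames contain the black video (the
    black-background captured video vCbk).  `frames_120fps` is any
    sequence of frames in capture order."""
    even = frames_120fps[0::2]
    odd = frames_120fps[1::2]
    return (even, odd) if background_first else (odd, even)
```

Each output stream runs at half the capture rate, matching the 60 fps background video fed to the LED wall.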
In this case, the captured video vC selected as the main-line video vCm has the correct background, but the other captured videos vC have an incorrect background (inner frustum vBC) and are therefore not used as monitor videos vM.
The processing example of Fig. 15 is the processing described with reference to Fig. 6 with step S70 added. The processing from step S10 to step S60 is the same as in Fig. 6.
That is, in step S70, the inner frustum vBC generated in step S40 is output without being composited with the whole background video.
Only the inner frustums vBCa, vBCb, and vBCc are input to the video processing unit 18 because the background video within the angle of view captured by a camera 502 is only the inner-frustum vBC portion.
Corresponding to the three cameras 502a, 502b, and 502c, the video processing unit 18 is provided in parallel with functions as video processing units 18a, 18b, and 18c.
The compositing unit 42 composites the extracted object video with the input inner frustum vBCa. That is, it performs processing to replace the black-video portion of the black-background captured video vCbka with the inner frustum vBCa.
The output of the compositing unit 42 then becomes the monitor video vMa for the camera 502a.
For the input black-background captured video vCbkb, the object extraction unit 41 removes the black video using a luminance key and extracts only the object video.
The compositing unit 42 composites the extracted object video with the input inner frustum vBCb, replacing the black-video portion of the black-background captured video vCbkb with the inner frustum vBCb. This output becomes the monitor video vMb for the camera 502b.
For the input black-background captured video vCbkc, the object extraction unit 41 removes the black video using a luminance key and extracts only the object video.
The compositing unit 42 composites the extracted object video with the input inner frustum vBCc, replacing the black-video portion of the black-background captured video vCbkc with the inner frustum vBCc. This output becomes the monitor video vMc for the camera 502c.
The video processing units 18a, 18b, and 18c repeat the processing of steps S201 to S205 at each frame timing until it is determined in step S200 that monitor output has ended.
In step S201, the video processing unit 18 (18a, 18b, 18c) inputs the black-background captured video vCbk (vCbka, vCbkb, vCbkc).
In step S202, the video processing unit 18 inputs the inner frustum vBC (vBCa, vBCb, vBCc).
In step S203, the video processing unit 18 extracts the object video from the black-background captured video vCbk.
In step S204, the video processing unit 18 composites the object video with the inner frustum vBC.
In step S205, the video processing unit 18 outputs the video whose background has been replaced by the compositing processing as the monitor video vM (vMa, vMb, vMc).
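Steps S203 and S204 together amount to a luminance-key composite. The following is a minimal per-pixel sketch; the key threshold and the frame representation (lists of rows of (r, g, b) tuples) are illustrative assumptions, and a production keyer would be considerably more elaborate (edge softness, spill handling, and so on).

```python
def luminance_key_composite(frame, inner_frustum, threshold=16):
    """Replace near-black pixels of a black-background captured frame
    (vCbk) with the corresponding inner-frustum pixel; brighter pixels
    are kept as the object.  Both inputs are same-sized lists of rows of
    (r, g, b) tuples with 0-255 channel values."""
    out = []
    for y, row in enumerate(frame):
        out_row = []
        for x, (r, g, b) in enumerate(row):
            # Rec. 709 luma as the keying quantity
            luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
            out_row.append(inner_frustum[y][x] if luma < threshold else (r, g, b))
        out.append(out_row)
    return out
```

Running this per frame over the vCbk stream, with the matching per-camera inner frustum, yields the monitor video vM described in step S205.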
In these monitor videos vMa, vMb, and vMc, the background of the object such as the performer 510 has been replaced with the inner frustum vBC corresponding to the camera 502 that captured it. The operator and others can therefore monitor video with the correct background for each of the cameras 502a, 502b, and 502c.
Each camera operator operating the cameras 502a, 502b, and 502c can view the video being shot with his or her own camera not in the actually captured state with an incorrect background, but with the correct background according to that camera's position and shooting direction.
Compositing a green video by chroma key would give higher video quality, but displaying a green or blue video in a time-division manner at a high frame rate may place a large visual or psychological burden on the performer 510 because of the flicker. With a black video, on the other hand, such a burden on the performer 510 is considered to be smaller.
Fig. 18 shows a configuration example of the second embodiment.
The differences from the earlier example of Fig. 13 are that the video processing units 24 (24a, 24b, 24c) are provided separately from the switcher 600, and that each of the video processing units 24a, 24b, and 24c has a rendering function.
The black-background captured videos vCbka, vCbkb, and vCbkc may instead be input directly to the video processing units 24a, 24b, and 24c without passing through the switcher 600.
The video processing units 24a, 24b, and 24c, corresponding to the three cameras 502a, 502b, and 502c, each have processing functions as an object extraction unit 41, a compositing unit 42, and a rendering unit 43.
Like the rendering units 21, 22, and 23 in the rendering engine 520, the rendering unit 43 has a rendering function for generating the inner-frustum vBC video. However, the rendering unit 43 differs from the rendering units 21, 22, and 23 of the rendering engine 520 in that it only needs to generate the inner frustum vBC and does not need to composite it with the surrounding background video.
In the video processing unit 24a, the object video extracted by the object extraction unit 41 and the inner frustum vBCa are composited by the compositing unit 42. That is, the black-video portion of the black-background captured video vCbka is replaced with the inner frustum vBCa. The output of the compositing unit 42 then becomes the monitor video vMa for the camera 502a.
In the video processing unit 24b, the object video extracted by the object extraction unit 41 and the inner frustum vBCb are composited by the compositing unit 42, replacing the black-video portion of the black-background captured video vCbkb with the inner frustum vBCb. The output of the compositing unit 42 becomes the monitor video vMb for the camera 502b.
In the video processing unit 24c, the object video extracted by the object extraction unit 41 and the inner frustum vBCc are composited by the compositing unit 42, replacing the black-video portion of the black-background captured video vCbkc with the inner frustum vBCc. The output of the compositing unit 42 becomes the monitor video vMc for the camera 502c.
The video processing units 24a, 24b, and 24c repeat the processing of steps S301 to S306 at each frame timing until it is determined in step S300 that monitor output has ended.
In step S301, the video processing unit 24 (24a, 24b, 24c) inputs the shooting information IF (IFa, IFb, IFc).
In step S302, the video processing unit 24 generates the inner frustum vBC (vBCa, vBCb, vBCc) on the basis of the shooting information IF.
In step S303, the video processing unit 24 inputs the black-background captured video vCbk (vCbka, vCbkb, vCbkc).
In step S304, the video processing unit 24 extracts the object video from the black-background captured video vCbk.
In step S305, the video processing unit 24 composites the object video with the inner frustum vBC generated by the rendering unit 43.
In step S306, the video processing unit 24 outputs the video whose background has been replaced by the compositing processing as the monitor video vM (vMa, vMb, vMc).
The monitor videos vMa, vMb, and vMc are also supplied to the camera signal processing units 515a, 515b, and 515c, respectively, and from the camera signal processing units 515a, 515b, and 515c to the cameras 502a, 502b, and 502c for display on the viewfinder or the like. The monitor videos vMa, vMb, and vMc may instead be supplied to the camera signal processing units 515a, 515b, and 515c via the switcher 600.
Fig. 21 shows a configuration example of the third embodiment.
In this configuration, the rendering engine 520 renders only the background video vB for the camera 502 selected for the main-line video vCm among the plurality of cameras 502.
The selector 28 selects one of the shooting information IFa, IFb, and IFc according to the control signal C3 from the SW controller 10 of the switcher 600, and supplies it to the rendering engine 520.
The rendering engine 520 generates the inner frustum vBC on the basis of the input shooting information IF, generates the background video vB including that inner frustum vBC, and outputs it to the display controller 590.
The SW controller 10 controls switching of the selector 28 according to the selection of the main-line video vCm at the switch unit 12. That is, the SW controller 10 causes the selector 28 to select the shooting information IFa during the period in which the captured video vCa of the camera 502a is selected as the main-line video vCm at the switch unit 12. Similarly, the SW controller 10 causes the selector 28 to select the shooting information IFb while the captured video vCb of the camera 502b is selected as the main-line video vCm, and the shooting information IFc while the captured video vCc of the camera 502c is selected.
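The routing performed by the selector 28 can be sketched as follows. The class name and the callable-based interface for the per-camera shooting-information sources are assumptions for illustration.

```python
class Selector28:
    """Sketch of the selector 28 driven by control signal C3: only the
    shooting information IF of the camera whose captured video is the
    current main-line video vCm is forwarded to the rendering engine,
    which therefore renders a single background video vB at a time."""

    def __init__(self, shooting_info_sources):
        # e.g. {'a': get_ifa, 'b': get_ifb, 'c': get_ifc}, each callable
        # returning the latest per-frame shooting information of its camera
        self.sources = shooting_info_sources
        self.selected = None

    def on_control_signal(self, camera_id):
        # C3 from the SW controller, issued when switch unit 12 changes
        self.selected = camera_id

    def output(self):
        # Shooting information supplied to the rendering engine 520
        return self.sources[self.selected]()
```

Because only one stream of shooting information reaches the rendering engine, the engine's load stays that of rendering a single camera's background regardless of how many cameras are shooting.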
In this way, the monitor videos vMa, vMb, and vMc corresponding to the three cameras 502a, 502b, and 502c are generated and supplied to the multiviewer 612. The monitor videos vMa, vMb, and vMc are also supplied to the cameras 502a, 502b, and 502c via the camera signal processing units 515a, 515b, and 515c and displayed on the viewfinders and the like.
Fig. 22 shows a configuration example of the fourth embodiment. The configuration example of Fig. 22 is close to that of Fig. 13, but the processing of the rendering units 21, 22, and 23 in the rendering engine 520 differs.
The parameters PMT (PMTa, PMTb, PMTc) referred to here are parameters relating to the camera's video, in particular parameters affecting the luminance and color of the video. Specific examples are values such as exposure and white balance operated by the camera operator. They may also be values for color-tone adjustment, video effects, and the like.
Then, in step S72, the rendering units 21, 22, and 23 output the inner frustum vBC' resulting from the signal processing in step S71.
Note that the background videos vBa, vBb, and vBc supplied to the LED wall 505 are not subjected to video processing reflecting the parameters PMTa, PMTb, and PMTc. This is because the exposure, white balance, and so on adjusted at the camera 502 are reflected afterward, at the time of shooting.
Accordingly, the generated monitor videos vMa, vMb, and vMc have backgrounds reflecting the exposure and white balance adjusted by the camera operator, and thus become more appropriate videos.
In such a case, it is preferable that the processing of the rendering engine 520 be performed with the parameters PMT corresponding to that operation.
Fig. 24 shows a configuration example of the fifth embodiment. This is close to the configuration example of Fig. 21, but is an example in which the parameters PMTa, PMTb, and PMTc are reflected in the video processing units 24a, 24b, and 24c.
The video processing units 24a, 24b, and 24c, corresponding to the three cameras 502a, 502b, and 502c, each have processing functions as an object extraction unit 41, a compositing unit 42, a rendering unit 43, and a signal processing unit 44.
The signal processing unit 44 applies signal processing according to the parameter PMT to the inner frustum vBC generated by the rendering unit 43.
The video processing units 24a, 24b, and 24c repeat the processing of steps S301A to S306 at each frame timing until it is determined in step S300 that monitor output has ended.
In step S301A, the video processing unit 24 (24a, 24b, 24c) inputs the shooting information IF (IFa, IFb, IFc) and the parameters PMT (PMTa, PMTb, PMTc).
In step S302, the video processing unit 24 generates the inner frustum vBC (vBCa, vBCb, vBCc) on the basis of the shooting information IF.
In step S310, the video processing unit 24 applies signal processing according to the parameter PMT to the inner frustum vBC.
Step S305 and the subsequent steps are the same as in Fig. 20, except that the targets of the compositing processing are the inner frustums vBCa', vBCb', and vBCc'.
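Step S310 can be sketched as follows, reducing the parameters PMT to an exposure gain and per-channel white-balance gains. This parameter model, and the function name, are illustrative assumptions only; a real implementation would reflect whatever camera parameters the system actually reports.

```python
def apply_camera_params(inner_frustum, exposure_gain=1.0,
                        wb_gains=(1.0, 1.0, 1.0)):
    """Apply signal processing reflecting camera parameters PMT to the
    rendered inner frustum, producing vBC'.  The frame is a list of rows
    of (r, g, b) tuples; channel values are clipped back to 0-255."""
    def clip(v):
        return max(0, min(255, round(v)))
    out = []
    for row in inner_frustum:
        out.append([tuple(clip(c * g * exposure_gain)
                          for c, g in zip(px, wb_gains)) for px in row])
    return out
```

Compositing the resulting vBC' with the object video (step S305 onward) then yields a monitor background that tracks the camera operator's exposure and white-balance adjustments.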
The above embodiments provide the following effects.
In the embodiments, the video processing units 18 (18a, 18b, 18c) and the video processing units 24 (24a, 24b, 24c) have been described as examples of the information processing device 70.
These information processing devices 70 input, among videos obtained by capturing with a camera 502 an object together with the display video of the LED wall 505, which displays in a time-division manner the inner frustum vBC (corresponding video) generated corresponding to one camera 502 among the plurality of cameras 502 and the black video (specific video), the black-background captured video vCbk (first captured video). They then perform video processing to replace the black video contained in the input black-background captured video vCbk with the inner frustum vBC generated corresponding to the camera 502 that captured that video.
When a background video vB including an inner frustum vBC as the corresponding video for one camera is captured with a plurality of cameras 502, the captured videos vC of the cameras 502 not corresponding to that inner frustum vBC do not have the correct background. The black-background captured video vCbk is therefore input, and its black-video portion is replaced with the inner frustum vBC corresponding to each camera 502. This makes it possible to obtain video with the correct background for every camera 502.
The main-line video vCm is the video selected by the switcher 600 from among the videos obtained by the respective cameras 502 capturing the inner frustum vBC displayed on the LED wall 505. That is, the intended virtual-production video is output as the main-line video vCm.
For example, by replacing the black background video with the inner frustum vBC to obtain the monitor videos vM (vMa, vMb, vMc), the monitor video vM of each camera 502 becomes a video containing the correct background and the object for that camera 502.
By configuring the system so that video in which the black background has been replaced with the inner frustum vBC is fed back to the cameras 502 as the monitor videos vM (vMa, vMb, vMc), every camera operator sees, regardless of what the LED wall 505 is displaying, a video containing the correct background and the object for his or her camera 502.
The camera operator can therefore appropriately perform angle-of-view adjustment and movement of the shooting position and direction while checking the correct background state. This is particularly suitable when a camera operator adjusts the angle of view or the camera angle while the captured video vC of the camera 502 he or she operates is not selected as the main-line video vCm.
The monitor videos vM (vMa, vMb, vMc) in which the black background has been replaced with the inner frustum vBC are supplied to and displayed on the multiviewer 612. This allows the operator or the like operating the switcher 600 to view the captured video of each camera 502 with the appropriate background, and thus to perform switching operations while appropriately judging the state of each captured video vC.
For example, as in the examples described with reference to Figs. 13 to 17, the video processing unit 18 inputs the inner frustum vBC supplied from the rendering engine 520 and composites it with the object video extracted from the black-background captured video vCbk. A video in which the black background has been replaced with the inner frustum vBC can thereby be generated.
For example, as in the examples described with reference to Figs. 13 to 17, the rendering units 21, 22, and 23 of the rendering engine 520 generate the background videos vBa, vBb, and vBc, each including the inner frustum vBC corresponding to a camera. These are selected by the switcher 600 and displayed on the LED wall 505.
In this configuration, the video processing unit 18 inputs the inner frustum vBC supplied from the rendering engine 520 and composites it with the object video extracted from the black-background captured video vCbk.
In such a system configuration, the background videos vBa, vBb, and vBc corresponding to the respective cameras are generated at all times and then switched by the switcher 600, so that when switching by the switcher 600 occurs, the new background video vB can be displayed on the LED wall 505 immediately. This is because generation of another camera's background video vB does not have to be started in response to the switching instruction of the switcher 600. The delay for generating the background video vB at the time of switching by the switcher 600 can therefore be eliminated, enabling smooth switching.
In addition, the inner frustums vBC generated for the respective cameras by the rendering units 21, 22, and 23 are not wasted even during the periods when they are not selected by the switcher 600, since they are utilized for monitoring.
For example, as in the examples described with reference to Figs. 18 to 20, each video processing unit 24 generates the inner frustum vBC of its corresponding camera 502 by means of the rendering unit 43 and composites it with the object video extracted from the black-background captured video vCbk. A video in which the black background has been replaced with the inner frustum vBC can thereby be generated.
In this case, the rendering unit 43 of the video processing unit 24 only needs to be able to render the inner frustum vBC for generating the monitor video vM. That is, since this video is not used as the main-line video vCm, the video quality required of the main-line video vCm is not demanded; the video quality needed for the monitor video vM suffices. The rendering unit 43 therefore only needs to perform relatively light video processing and does not require as much processing capacity as the rendering units 21, 22, and 23 of the rendering engine 520. The video processing unit 24 including the rendering unit 43 can accordingly be constituted by a relatively inexpensive information processing device.
For example, as in the example of Fig. 21, the rendering engine 520 generates the background video vB for the camera 502 selected by the switch unit 12 of the switcher 600.
With such a configuration, the processing load on the rendering engine 520 for generating the high-definition background video vB to be included in the main-line video vCm is reduced, because it does not need to generate the background videos vBa, vBb, and vBc corresponding to the plurality of cameras 502 as in the example of Fig. 18, for instance.
Even in such a system configuration, providing the video processing unit 24 including the rendering unit 43 makes it possible to generate a monitor video vM with the correct background for each camera 502.
As a result, when the exposure, white balance, and the like on the camera 502 side are changed by the camera operator's operation, automatic adjustment, or the like, the change is also reflected in the monitor video vM, so that correct monitor display according to the current parameters can be realized.
By having the rendering units 21, 22, and 23 of the rendering engine 520 apply video signal processing according to the parameters PMT, such as exposure and white balance, to the video data of the inner frustum vBC, correct monitor display with a background matching the adjustment operations and the like of the camera 502 can be realized.
In the video processing unit 24, signal processing according to the parameter PMT is applied to the inner frustum vBC. By compositing the inner frustum vBC after this processing (vBCa', vBCb', vBCc') with the object video, correct monitor display with a background matching the adjustment operations and the like of the camera 502, that is, according to the parameters PMT, can be realized.
By using as the specific video a video that permits video separation by chroma keying or the like, such as a black video, green video, or blue video, the video processing for generating the monitor video vM is facilitated.
By using a black video as the specific video, the influence on the performer 510 of the high-frequency periodic video change of the LED wall 505 can be made smaller than with a blue video or a green video.
That is, the program of the embodiment causes an information processing device 70, such as the video processing units 18 and 24, to input, among videos obtained by capturing with a camera 502 an object such as the performer 510 together with the display video of the LED wall 505, which displays in a time-division manner the inner frustum vBC generated corresponding to one camera 502 among the plurality of cameras 502 and a specific video such as a black video, the black-background captured video vCbk (first captured video) containing the object and the specific video, and to execute video processing on it. That video processing replaces the specific video contained in the black-background captured video vCbk with the inner frustum vBC generated corresponding to the camera 502 that captured that black-background captured video vCbk.
Such a program can be installed on a personal computer or the like from a removable recording medium, or can be downloaded from a download site via a network such as a LAN (Local Area Network) or the Internet.
(1)
An information processing device comprising:
a video processing unit that, for a first captured video containing the object and the specific video, among videos obtained by capturing with a camera an object separate from a display together with the display video of the display, the display displaying in a time-division manner a specific video and a corresponding video generated corresponding to one camera among a plurality of cameras, performs video processing to replace the specific video with a corresponding video generated corresponding to the camera that captured the first captured video.
(2)
The information processing device according to (1) above, wherein
a second captured video containing the object and the corresponding video generated corresponding to the one camera, among videos obtained by a camera capturing the display video of the display and the object, is a video selectable as a main-line video.
(3)
The information processing device according to (1) or (2) above, wherein
the video generated by the video processing is output as a video used for monitor display.
(4)
The information processing device according to any one of (1) to (3) above, wherein
the video generated by the video processing is output as a video used for monitor display on a camera.
(5)
The information processing device according to any one of (1) to (4) above, wherein
the video generated by the video processing is output as a video used for monitor display in which a plurality of videos are displayed in divided areas of one screen.
(6)
The information processing device according to any one of (1) to (5) above, wherein,
in the video processing, for each frame of the first captured video, the specific-video portion is replaced with an input corresponding video.
(7)
The information processing device according to any one of (1) to (6) above, wherein
a corresponding video for each camera is generated by a rendering engine and the plurality of generated corresponding videos are selectively displayed on the display, and
in the video processing, the specific-video portion of each frame of the first captured video is replaced with a corresponding video supplied from the rendering engine.
(8)
The information processing device according to any one of (1) to (5) above, wherein the video processing performs:
processing of generating a corresponding video corresponding to the camera that captured the first captured video; and
processing of replacing, for each frame of the first captured video, the specific-video portion with the generated corresponding video.
(9)
The information processing device according to any one of (1) to (5) above, wherein,
in a system configuration in which a rendering engine generates only the corresponding video for a camera alternatively selected from among the plurality of cameras performing shooting and the generated corresponding video is displayed on the display, the video processing performs:
processing of generating a corresponding video corresponding to the camera that captured the first captured video; and
processing of replacing, for each frame of the first captured video, the specific-video portion with the generated corresponding video.
(10)
The information processing device according to any one of (1) to (9) above, wherein,
in the video processing, for each frame of the first captured video, the specific-video portion is replaced with a corresponding video to which signal processing according to a parameter relating to the video of the camera that captured the first captured video has been applied.
(11)
The information processing device according to any one of (1) to (7) and (10) above, wherein
a corresponding video for each camera is generated by a rendering engine and the plurality of generated corresponding videos are selectively displayed on the display, and
in the video processing, the specific-video portion of each frame of the first captured video is replaced with a corresponding video to which the rendering engine has applied signal processing according to a parameter relating to the video of the camera that captured the first captured video.
(12)
The information processing device according to any one of (1) to (5) and (8) to (10) above, wherein, in the video processing,
a corresponding video corresponding to the camera that captured the first captured video is generated, signal processing according to a parameter relating to the video of the camera that captured the first captured video is applied to the generated corresponding video, and the specific-video portion of the first captured video is then replaced with the corresponding video.
(13)
The information processing device according to (2) above, wherein
the display video of the display is a video in which the specific video and the corresponding video are displayed alternately frame by frame, and,
in an imaging system in which each camera outputs in parallel the first captured video, in which frames containing the specific video are consecutive, and the second captured video, in which frames containing the corresponding video are consecutive,
the first captured video is input and the video processing is performed.
(14)
The information processing device according to any one of (1) to (13) above, wherein
the specific video is a video used for video separation processing.
(15)
The information processing device according to any one of (1) to (13) above, wherein
the specific video is a black video, and
the first captured video is a video in which the object is captured against a black-video background.
(16)
The information processing device according to any one of (1) to (15) above, wherein
the corresponding video is a background video generated according to the position or shooting direction of the camera.
(17)
An information processing method comprising:
performing video processing of replacing, for a first captured video containing the object and the specific video, among videos obtained by capturing with a camera an object separate from a display together with the display video of the display, the display displaying in a time-division manner a specific video and a corresponding video generated corresponding to one camera among a plurality of cameras, the specific video with a corresponding video generated corresponding to the camera that captured the first captured video.
(18)
An imaging system comprising:
a display;
a plurality of cameras that capture the video displayed on the display; and
an information processing device that processes the videos captured by the plurality of cameras,
wherein the display displays, in a time-division manner, a corresponding video generated corresponding to one camera among the plurality of cameras and a specific video, and
the information processing device includes a video processing unit that, for a first captured video containing the object and the specific video, among videos obtained by a camera capturing the display video of the display and an object separate from the display, performs video processing to replace the specific video with a corresponding video generated corresponding to the camera that captured the first captured video.
10 SW controller (switcher controller)
11 Switch unit
12 Switch unit
18, 18a, 18b, 18c Video processing unit
21, 22, 23 Rendering unit
24, 24a, 24b, 24c Video processing unit
28 Selector
41 Object extraction unit
42 Compositing unit
43 Rendering unit
44 Signal processing unit
70 Information processing device
71 CPU
85 Video processing unit
502 Camera
505 LED wall
510 Performer
515a, 515b Camera signal processing unit
520 Rendering engine
600 Switcher
612 Multiviewer
vB Background video
vBC Shooting-area video (inner frustum)
vC, vCa, vCb, vCc Captured video
vCbk Black-background captured video
vCm Main-line video
vM, vMa, vMb, vMc Monitor video
Claims (18)
- An information processing device comprising a video processing unit that, for a first captured video containing the object and the specific video, among videos obtained by capturing with a camera an object separate from a display together with the display video of the display, the display displaying in a time-division manner a specific video and a corresponding video generated corresponding to one camera among a plurality of cameras, performs video processing to replace the specific video with a corresponding video generated corresponding to the camera that captured the first captured video.
- The information processing device according to claim 1, wherein a second captured video containing the object and the corresponding video generated corresponding to the one camera, among videos obtained by a camera capturing the display video of the display and the object, is a video selectable as a main-line video.
- The information processing device according to claim 1, wherein the video generated by the video processing is output as a video used for monitor display.
- The information processing device according to claim 1, wherein the video generated by the video processing is output as a video used for monitor display on a camera.
- The information processing device according to claim 1, wherein the video generated by the video processing is output as a video used for monitor display in which a plurality of videos are displayed in divided areas of one screen.
- The information processing device according to claim 1, wherein, in the video processing, for each frame of the first captured video, the specific-video portion is replaced with an input corresponding video.
- The information processing device according to claim 1, wherein a corresponding video for each camera is generated by a rendering engine and the plurality of generated corresponding videos are selectively displayed on the display, and in the video processing, the specific-video portion of each frame of the first captured video is replaced with a corresponding video supplied from the rendering engine.
- The information processing device according to claim 1, wherein the video processing performs: processing of generating a corresponding video corresponding to the camera that captured the first captured video; and processing of replacing, for each frame of the first captured video, the specific-video portion with the generated corresponding video.
- The information processing device according to claim 1, wherein, in a system configuration in which a rendering engine generates only the corresponding video for a camera alternatively selected from among the plurality of cameras performing shooting and the generated corresponding video is displayed on the display, the video processing performs: processing of generating a corresponding video corresponding to the camera that captured the first captured video; and processing of replacing, for each frame of the first captured video, the specific-video portion with the generated corresponding video.
- The information processing device according to claim 1, wherein, in the video processing, for each frame of the first captured video, the specific-video portion is replaced with a corresponding video to which signal processing according to a parameter relating to the video of the camera that captured the first captured video has been applied.
- The information processing device according to claim 1, wherein a corresponding video for each camera is generated by a rendering engine and the plurality of generated corresponding videos are selectively displayed on the display, and in the video processing, the specific-video portion of each frame of the first captured video is replaced with a corresponding video to which the rendering engine has applied signal processing according to a parameter relating to the video of the camera that captured the first captured video.
- The information processing device according to claim 1, wherein, in the video processing, a corresponding video corresponding to the camera that captured the first captured video is generated, signal processing according to a parameter relating to the video of the camera that captured the first captured video is applied to the generated corresponding video, and the specific-video portion of the first captured video is then replaced with the corresponding video.
- The information processing device according to claim 2, wherein the display video of the display is a video in which the specific video and the corresponding video are displayed alternately frame by frame, and, in an imaging system in which each camera outputs in parallel the first captured video, in which frames containing the specific video are consecutive, and the second captured video, in which frames containing the corresponding video are consecutive, the first captured video is input and the video processing is performed.
- The information processing device according to claim 1, wherein the specific video is a video used for video separation processing.
- The information processing device according to claim 1, wherein the specific video is a black video, and the first captured video is a video in which the object is captured against a black-video background.
- The information processing device according to claim 1, wherein the corresponding video is a background video generated according to the position or shooting direction of the camera.
- An information processing method comprising performing video processing of replacing, for a first captured video containing the object and the specific video, among videos obtained by capturing with a camera an object separate from a display together with the display video of the display, the display displaying in a time-division manner a specific video and a corresponding video generated corresponding to one camera among a plurality of cameras, the specific video with a corresponding video generated corresponding to the camera that captured the first captured video.
- An imaging system comprising: a display; a plurality of cameras that capture the video displayed on the display; and an information processing device that processes the videos captured by the plurality of cameras, wherein the display displays, in a time-division manner, a corresponding video generated corresponding to one camera among the plurality of cameras and a specific video, and the information processing device includes a video processing unit that, for a first captured video containing the object and the specific video, among videos obtained by a camera capturing the display video of the display and an object separate from the display, performs video processing to replace the specific video with a corresponding video generated corresponding to the camera that captured the first captured video.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202380040142.8A CN119183661A (zh) | 2022-05-20 | 2023-04-19 | 信息处理设备、信息处理方法和成像系统 |
JP2024521627A JPWO2023223759A1 (ja) | 2022-05-20 | 2023-04-19 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022083041 | 2022-05-20 | ||
JP2022-083041 | 2022-05-20 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023223759A1 true WO2023223759A1 (ja) | 2023-11-23 |
Family
ID=88834973
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2023/015648 WO2023223759A1 (ja) | 2022-05-20 | 2023-04-19 | 情報処理装置、情報処理方法、撮影システム |
Country Status (3)
Country | Link |
---|---|
JP (1) | JPWO2023223759A1 (ja) |
CN (1) | CN119183661A (ja) |
WO (1) | WO2023223759A1 (ja) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001086371A * | 1999-09-17 | 2001-03-30 | Fuji Photo Film Co Ltd | Method and apparatus for producing video images |
JP2002199252A * | 2000-12-26 | 2002-07-12 | Tokyo Hoso:Kk | Display device for a studio set and studio set using the same |
JP2008092120A * | 2006-09-29 | 2008-04-17 | Fujifilm Corp | Image compositing system |
JP2008097191A * | 2006-10-10 | 2008-04-24 | Fujifilm Corp | Image compositing system |
JP2010213124A * | 2009-03-11 | 2010-09-24 | Nippon Telegr & Teleph Corp <Ntt> | Image capturing and display method, image capturing and display device, program, and recording medium |
US20200145644A1 (en) | 2018-11-06 | 2020-05-07 | Lucasfilm Entertainment Company Ltd. LLC | Immersive content production system with multiple targets |
Also Published As
Publication number | Publication date |
---|---|
JPWO2023223759A1 (ja) | 2023-11-23 |
CN119183661A (zh) | 2024-12-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2884337B1 (en) | Realistic scene illumination reproduction | |
US10356377B2 (en) | Control and display system with synchronous direct view video array and incident key lighting | |
KR102371031B1 (ko) | 버추얼 프로덕션의 영상 촬영을 위한 장치, 시스템, 방법 및 프로그램 | |
JP2012147379A (ja) | 撮像装置及び撮像装置の制御方法 | |
WO2023007817A1 (ja) | 情報処理装置、映像処理方法、プログラム | |
CN115966143A (zh) | 背景显示设备 | |
US20230077552A1 (en) | Video Game Engine Assisted Virtual Studio Production Process | |
WO2023223759A1 (ja) | 情報処理装置、情報処理方法、撮影システム | |
WO2024004584A1 (ja) | 情報処理装置、情報処理方法、プログラム | |
CN118285093A (zh) | 信息处理设备和信息处理方法 | |
WO2024048295A1 (ja) | 情報処理装置、情報処理方法、プログラム | |
WO2023223758A1 (ja) | スイッチャー装置、制御方法、撮影システム | |
CN116895217A (zh) | 背景显示设备 | |
WO2023238646A1 (ja) | 情報処理装置、情報処理方法、プログラム、情報処理システム | |
WO2024075525A1 (ja) | 情報処理装置およびプログラム | |
WO2023176269A1 (ja) | 情報処理装置、情報処理方法、プログラム | |
JP2024098589A (ja) | 撮像装置、プログラム | |
WO2023047645A1 (ja) | 情報処理装置、映像処理方法、プログラム | |
WO2024042893A1 (ja) | 情報処理装置、情報処理方法、およびプログラム | |
WO2024070618A1 (ja) | 撮像装置、プログラム、撮像システム | |
WO2023090038A1 (ja) | 情報処理装置、映像処理方法、プログラム | |
WO2023047643A1 (ja) | 情報処理装置、映像処理方法、プログラム | |
WO2023189580A1 (ja) | 画像処理装置及び画像処理システム | |
WO2023196845A2 (en) | System and method for providing dynamic backgrounds in live-action videography | |
WO2023196850A2 (en) | System and method for providing dynamic backgrounds in live-action videography |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 23807371 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2024521627 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2023807371 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2023807371 Country of ref document: EP Effective date: 20241220 |