GB2614698A - Controlling adaptive backdrops - Google Patents
Controlling adaptive backdrops
- Publication number
- GB2614698A (application GB2116457.9A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- display
- image
- location
- input image
- controller
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3179—Video signal processing therefor
- H04N9/3185—Geometric adjustment, e.g. keystone or convergence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G3/00—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
- G09G3/001—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
- G09G3/003—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background to produce spatial visual effects
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
- G09G5/37—Details of the operation on graphic patterns
- G09G5/377—Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/2224—Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3191—Testing thereof
- H04N9/3194—Testing thereof including sensor feedback
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S5/00—Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
- G01S5/16—Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using electromagnetic waves other than radio waves
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2370/00—Aspects of data communication
- G09G2370/04—Exchange of auxiliary data, i.e. other than image data, between monitor and graphics controller
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2370/00—Aspects of data communication
- G09G2370/20—Details of the management of multiple sources of image data
Abstract
A display controller 10 for a backdrop display 2 is configured to: receive an input image of a scene viewed from a first location; receive a second location offset from the first location; transform the input image in response to the offset to form a transformed image; and transmit the transformed image to a display device for display. The first and second locations may be the locations of a mobile camera 3 within a film studio. The display device may be a display wall or a screen and projector. The transformation may comprise selecting a subset of the image, which may have a centre offset from the centre of the image. The backdrop display may provide a backdrop within a video studio. A system comprising the display controller and a rendering engine 6 is also disclosed.
Description
Intellectual Property Office Application No. GB2116457.9 RTM Date: 11 May 2023. The following terms are registered trade marks and should be read as such wherever they occur in this document: Unreal Engine (page 3). Intellectual Property Office is an operating name of the Patent Office. www.gov.uk/ipo
CONTROLLING ADAPTIVE BACKDROPS
This invention relates to controlling images displayed by adaptive backdrops such as LED walls or projection screens.
When video is being captured, for example for movie filming or current affairs broadcasts, it is becoming increasingly common to display a changing backdrop behind a subject. The subject may be an actor or a presenter. The backdrop may represent a set or a location where the subject is to be depicted. The backdrop can be displayed using a display wall (e.g. formed of multiple LED display panels) or using a projection screen. One advantage of this approach is that it avoids the need for the subject to be in the said location, and allows the location to be a fantasy or historic location. In comparison to the green screen method, in which the backdrop is inserted in video post-processing, it reduces the need for such post-processing and makes it easier to use the technique for live events.
It is desirable for the image displayed on the backdrop to appear realistic from the point of view of the camera that is being used to capture the video. One known way to accomplish this is to store a three-dimensional model of the scene that is to be the backdrop, to sense the location of the camera relative to the wall or screen on which the backdrop will be displayed and then to form an image of the scene as it would appear on the wall/screen from the location of the camera. The image is then passed to the wall for display, or to a projector for projection onto the screen.
One problem with this approach is that it takes some time for the image to be formed once the location of the camera is known. When the camera is moving, this time delay causes a lag in the change of the image once the camera has moved to a new location. This can be perceived by a viewer and can make the backdrop seem less convincing.
It would be desirable to have an improved mechanism for forming adaptive backdrops so that camera motion may be better accommodated.
According to the present invention there is provided apparatus and methods as set out in the accompanying claims.
The display controller may comprise one or more processors configured to execute code stored in non-transient form to execute the steps it is configured to perform. There may be provided a data carrier storing in non-transient form such code.
The rendering engine may comprise one or more processors configured to execute code stored in non-transient form to execute the steps it is configured to perform. There may be provided a data carrier storing in non-transient form such code.
The present invention will now be described by way of example with reference to the accompanying drawings.
In the drawings: Figure 1 shows a system for capturing video of a subject against a backdrop.
Figure 2 shows an oversized image and a subset of that image of a size appropriate for display on a display wall.
Figure 1 shows a system that could be implemented to capture video of a subject 1 against a background displayed on a display wall 2. The subject could, for example, be one or more actors or presenters or an inanimate object. The display wall is a structure extending in two dimensions that can display a desired image. It could be a matrix of individual display units, for example LED display panels.
A camera 3 is provided for capturing the video. The camera is located so that from the point of view of the camera the display wall 2 appears behind the subject 1. The camera is mounted, e.g. on a wheeled tripod 4, dolly or articulated boom, so that it can be moved when video is being captured.
A positioning system is provided so that the location of the camera can be estimated. Any suitable positioning system can be used. In one example, a sensor 5 mounted on the camera senses the position of markers or transmitters 6 in the studio, and thereby estimates the location of the camera. The positioning system may, for example, be as described in EP 2 962 284. The position of the camera may be estimated at the camera or remotely from the camera.
In the example of figure 1, the sensor 5 estimates the position of the camera and transmits that position to a rendering unit 6. The rendering unit 6 comprises a processor 7 and has access to a memory 8 storing in non-transient form code executable by the processor to cause the rendering unit to operate as described herein. The rendering unit also has access to a scene memory 9. The scene memory stores data defining a scene in such a way that from that data the scene can be portrayed from multiple locations. Conveniently, the scene may be defined by data that defines the three-dimensional structure and appearance of objects in the scene. Such data allows it to be determined how those objects would appear relative to each other from different locations. The rendering unit may implement a rendering engine such as Unreal Engine. The rendering engine receives the camera's position and has previously been provided with information defining the location of the display wall 2. With that information the rendering engine can determine what image needs to be displayed on the wall so that the backdrop will present the scene accurately from the point of view of the camera. This may, for example, be done by tracing rays from the camera position through the wall to the scene as it is defined in an imaginary space behind the wall, and thereby determining the colour, brightness etc. of each element (e.g. pixel) that can be displayed on the wall.
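By way of illustration only, the per-pixel ray construction described above might be sketched as follows, assuming a planar wall spanned by two edge vectors; the function and parameter names are illustrative and are not taken from the patent:

```python
import numpy as np

def wall_pixel_rays(camera_pos, wall_origin, wall_u, wall_v, res_x, res_y):
    """Return, for every displayable wall pixel, its position on the wall
    plane and the unit ray from the camera through it. Illustrative only:
    a planar wall spanned by edge vectors wall_u (width) and wall_v
    (height) is assumed, and all names are ours, not the patent's."""
    camera_pos = np.asarray(camera_pos, dtype=float)
    wall_origin = np.asarray(wall_origin, dtype=float)
    wall_u, wall_v = np.asarray(wall_u, float), np.asarray(wall_v, float)
    # Normalised pixel-centre coordinates across the wall surface.
    s = (np.arange(res_x) + 0.5) / res_x
    t = (np.arange(res_y) + 0.5) / res_y
    S, T = np.meshgrid(s, t)                              # (res_y, res_x)
    pixels = wall_origin + S[..., None] * wall_u + T[..., None] * wall_v
    # The renderer continues these rays into the imaginary scene behind
    # the wall to decide each pixel's colour and brightness.
    dirs = pixels - camera_pos
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    return pixels, dirs
```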
Once that image has been determined, it is passed to a display controller 10 local to the wall. The display controller controls the wall to display the desired image. If instead of a display wall a projection screen is used, the display controller may be local to a projector which projects the image onto the screen.
The camera 3 captures video data which is stored in a video data store 11. From there it can be edited and/or post-processed and broadcast if required.
As indicated above, it would be desirable for the system to be able to respond quickly to changes in the position of the camera 3.
One way in which this may be done will now be described. The rendering unit 6 is configured to form an image that extends beyond the edges of the wall 2 at the scale at which the image is to be displayed. This may be done by tracing rays from the camera position through points that lie beyond the edges of the wall so as to determine what colour, brightness etc. would be represented in the scene at those locations. This is illustrated in figure 2. Boundary 20 indicates the margin of the image that is generated. Boundary 21 indicates how much of that image can be displayed on the wall at the desired scale. It will be seen that there are regions within boundary 20 but not within boundary 21. Depending on the complexity of the scene and the processing power of the rendering unit, it may take some perceptible time to generate the image. Successive images may be generated several times per second, for example 30 or 60 times per second. The frequency may be selected depending on the frame rate of the camera, the frame rate of the wall and the expected speed of motion of the camera.
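As a rough back-of-envelope illustration of how the extra margin might be sized (this model is ours, not the patent's): a camera translation dx moves the correct on-wall image by roughly dx·d/(z+d), where z is the camera-to-wall distance and d the depicted depth behind the wall, so the margin needs to cover the worst-case travel during one render-and-transmit cycle.

```python
import math

def overscan_margin_px(max_camera_speed_mps, worst_case_latency_s,
                       camera_to_wall_m, scene_depth_m,
                       wall_width_m, wall_width_px):
    """Worst-case sideways camera travel during one render/transmit
    cycle, converted to the on-wall image shift it causes and then to
    pixels. All parameters are hypothetical inputs of our model."""
    max_travel_m = max_camera_speed_mps * worst_case_latency_s
    z, d = camera_to_wall_m, scene_depth_m
    shift_m = max_travel_m * d / (z + d)      # window-parallax factor
    return math.ceil(shift_m * wall_width_px / wall_width_m)
```

For example, with a camera moving at 2 m/s, 50 ms of worst-case latency, a camera 5 m from a 10 m wide, 3840-pixel wall, and scene content 20 m behind the wall, the margin works out at ceil(0.1 × 0.8 × 384) = 31 pixels per side.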
When the oversized image has been formed by the rendering engine it is transmitted to the display controller 10. The display controller also receives the position of the camera. Due to lag in generating the oversized image and transmitting it to the display controller, the position of the camera may have changed since the oversized image was generated. In dependence on the latest position of the camera that it has received, the display controller selects a subset of the oversized image and optionally applies a geometric transformation so as to form an adapted image. The display controller then causes the wall to display the adapted image. Framing the oversized image and applying any geometric transformation are quicker operations than forming the oversized image itself. In this way, the display controller can cause the image that is displayed to be adapted promptly to the position of the camera. This can help to reduce any perception that the background to the subject 1 is unnatural.
The display controller 10 has a processor 12 and a memory 13. The memory 13 stores in non-transient form code executable by the processor 12 so that the display controller can perform its functions.
The adaptation of the oversized image will now be described.
In one example, the rendering engine provides the oversized image to the display controller together with an indication of the camera position for which the oversized image was formed. The display controller can then apply an algorithm to frame (i.e. select a subset of) and optionally apply a geometric transformation to the oversized image (or the selected subset of it). For example, if the camera has traversed horizontally by a certain amount, then the display controller may select a subset of the oversized image that is correspondingly offset horizontally from the centre of the oversized image. Similarly, if the camera has traversed vertically by a certain amount, then the display controller may select a subset of the oversized image that is correspondingly offset vertically from the centre. The resulting image may be an imperfect rendering of the scene from the current point of view of the camera, but if the camera is moving at a reasonable speed compared to the frame rate of the display system, any errors can be expected to be minor.
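A minimal sketch of this framing step follows; the sign conventions, the use of a single representative depth, and all names are our assumptions rather than the patent's:

```python
import numpy as np

def frame_subset(oversized, margin_x, margin_y, delta_cam_m,
                 camera_to_wall_m, scene_depth_m, px_per_m):
    """Select the display-sized window of the oversized image for the
    camera's latest position. `oversized` carries margin_x / margin_y
    extra pixels on each side; delta_cam_m is the (dx, dy) camera
    translation, in metres, since the image was rendered."""
    z, d = camera_to_wall_m, scene_depth_m
    # On-wall content shifts with the camera, scaled by d / (z + d).
    shift_px = np.asarray(delta_cam_m) * (d / (z + d)) * px_per_m
    # Move the crop window opposite to the content shift, clamped so the
    # window never leaves the rendered margin.
    ox = int(np.clip(-shift_px[0], -margin_x, margin_x))
    oy = int(np.clip(-shift_px[1], -margin_y, margin_y))
    h, w = oversized.shape[:2]
    x0, y0 = margin_x + ox, margin_y + oy
    return oversized[y0:y0 + h - 2 * margin_y, x0:x0 + w - 2 * margin_x]
```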
In the discussion above, the rendering engine and the display controller receive the position of the camera. They may also receive the direction of the camera (e.g. the direction of the central axis of the camera's lens). They may also receive information about the properties of the camera's lens. They may also receive information about the state of the camera's lens, for example its zoom and/or aperture setting. Any one or more of these factors may be employed by the rendering engine to form the image and/or by the display controller to adapt that image. Information about the camera other than its position that was used to generate the image may be transmitted by the rendering engine to the display controller to permit the display controller to adapt for changes in the camera's set-up since the image was formed.
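The camera state described in this paragraph could be bundled, for example, as a simple per-frame record like the following; the field names are illustrative, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class CameraState:
    """Per-frame camera metadata shared between the positioning system,
    rendering engine and display controller (hypothetical fields)."""
    position: tuple          # (x, y, z) in studio coordinates, metres
    direction: tuple         # unit vector along the lens's central axis
    focal_length_mm: float   # lens state: zoom
    aperture_f: float        # lens state: aperture (f-number)
    timestamp_s: float       # when this state was sampled
```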
The display controller may have information about the location of the screen so that it can establish the location of the camera relative to the screen. That information may also be used to affect the adaptation of the oversized image. For example, when the camera has undergone translation, the amount by which the centre of the adapted image is shifted from the centre of the oversized image may vary depending on the distance of the camera from the wall.
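As a worked derivation of why the shift depends on the camera's distance from the wall (our window-parallax model; the patent does not spell out a formula): place the camera at lateral position $x_c$, a distance $z$ in front of the wall, and a depicted point at lateral position $x_o$, a depth $d$ behind the wall. The ray from camera to point crosses the wall at

$$x_{\text{wall}} = \frac{d}{z+d}\,x_c + \frac{z}{z+d}\,x_o, \qquad\text{so}\qquad \frac{\partial x_{\text{wall}}}{\partial x_c} = \frac{d}{z+d}.$$

A camera translation $\Delta x$ therefore shifts the correct on-wall image by $\Delta x\,d/(z+d)$: the shift grows with the depicted depth and shrinks as the camera moves further from the wall.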
In one example, the camera may have rotated between the image being formed and it being processed by the display controller. In that situation the display controller may apply a suitable trapezoidal or other geometric transformation to the image or part of it to form the adapted image for display.
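If the oversized image is rendered in a camera-aligned perspective projection (an assumption on our part; the patent leaves the projection unspecified), the correction for a pure rotation is the standard rotation homography, sketched here with OpenCV:

```python
import cv2
import numpy as np

def correct_for_rotation(image, K, R):
    """Reproject the rendered image for a camera that has rotated since
    render time. Assumes the image was rendered in a camera-aligned
    perspective projection with intrinsic matrix K; R is the camera
    rotation since render. For a pure rotation the two views are related
    by the homography H = K @ R @ inv(K), which produces the trapezoidal
    distortion the text describes."""
    H = K @ R @ np.linalg.inv(K)
    h, w = image.shape[:2]
    return cv2.warpPerspective(image, H.astype(np.float64), (w, h),
                               flags=cv2.INTER_LINEAR)
```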
In some situations the image as formed by the rendering engine may be stretched by the display controller to form the image for display. In that case the display controller may employ an algorithm that interpolates between pixels of the oversized image to form the adapted image.
It has been found that the present system is especially valuable for accommodating translation of the camera relative to the wall or screen.
The rendering engine may provide the display controller with information about the depth of objects depicted at respective locations in the image. This may, for example, be provided as a depth map in which, for each pixel or block of pixels in the image, a value is specified indicating the depth, or the aggregate or average depth, of the locations depicted in that pixel or block. The display controller may apply a transformation to parts of the image in dependence on the indicated depth information. For example, a greater shift responsive to camera motion may be applied for pixels or blocks at a greater depth. Interpolation and/or culling of pixels and/or inpainting algorithms can be applied as appropriate to maintain a desired pixel density in the adapted image.
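A simplified, horizontal-only sketch of this depth-weighted shift follows (a nearest-neighbour backward warp of our own devising; a real implementation would also interpolate, cull and inpaint as the paragraph notes):

```python
import numpy as np

def depth_weighted_shift(oversized, depth_map, delta_cam_x_m,
                         camera_to_wall_m, px_per_m):
    """Backward-warp each pixel horizontally by a depth-dependent
    parallax amount: content depicted at greater depth (larger values
    in depth_map, metres behind the wall plane) receives a greater
    shift for the same camera translation delta_cam_x_m."""
    h, w = depth_map.shape
    z = camera_to_wall_m
    shift = delta_cam_x_m * depth_map / (z + depth_map) * px_per_m
    xs = np.arange(w)[None, :] - shift          # source column per pixel
    xs = np.clip(np.rint(xs).astype(int), 0, w - 1)
    rows = np.arange(h)[:, None]
    return oversized[rows, xs]                  # (h, w, channels) gather
```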
The display controller could be local to the display wall or projector or remote from it. It could be integrated into the wall or to one or more display panels. It could be integrated with the rendering engine.
The camera position sensor could signal the camera's location directly to both the display controller and the rendering engine as illustrated in figure 1. Alternatively it could signal the camera's location to one of those entities, which could then forward it to the other. Other information about the state of the camera, such as its direction and the state of its lens, could be signalled in the same way.
The applicant hereby discloses in isolation each individual feature described herein and any combination of two or more such features, to the extent that such features or combinations are capable of being carried out based on the present specification as a whole in the light of the common general knowledge of a person skilled in the art, irrespective of whether such features or combinations of features solve any problems disclosed herein, and without limitation to the scope of the claims. The applicant indicates that aspects of the present invention may consist of any such individual feature or combination of features. In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the invention.
The phrase "configured to" or "arranged to" followed by a term defining a condition or function is used herein to indicate that the object of the phrase is in a state in which it has that condition, or is able to perform that function, without that object being modified or further configured.
Claims (11)
- CLAIMS
- 1. A display controller for a backdrop display, the controller being configured to: receive an input image representing a scene as viewed from a first location; receive a second location offset from the first location; transform the input image in response to an offset between the first location and the second location to form a transformed image; and transmit the transformed image to a display device for display.
- 2. A controller for a backdrop display as claimed in claim 1, wherein the first and second locations are the locations of a mobile camera.
- 3. A controller as claimed in claim 1 or 2, wherein the transformation of the input image comprises selecting a subset of the input image to form the transformed image.
- 4. A controller as claimed in claim 3, wherein the transformation of the input image comprises selecting a subset of the input image whose centre is offset from that of the input image to form the transformed image.
- 5. A controller as claimed in any preceding claim, wherein the transformation of the input image comprises applying a geometric transformation to the input image to form the transformed image.
- 6. A controller as claimed in any preceding claim, the controller being configured to receive the first location in conjunction with the input image.
- 7. A system for controlling a backdrop display, the system comprising: a display controller as claimed in any preceding claim; and a rendering engine configured to form the input image from a three-dimensional model of the scene and transmit the input image to the display controller.
- 8. A system as claimed in claim 7, comprising the display device and wherein the display device is (i) a screen and a projector for displaying the image on the screen and/or (ii) a display wall.
- 9. A system as claimed in claim 8, wherein the display controller is local to the display device.
- 10. A system as claimed in claim 8 or 9, wherein the display device is arranged as a backdrop in a video studio.
- 11. A method for controlling a backdrop display, the method comprising: receiving an input image representing a scene as viewed from a first location; receiving a second location offset from the first location; transforming the input image in response to an offset between the first location and the second location to form a transformed image; and transmitting the transformed image to a video studio backdrop display device for display.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB2116457.9A GB2614698A (en) | 2021-11-15 | 2021-11-15 | Controlling adaptive backdrops |
PCT/GB2022/052892 WO2023084250A1 (en) | 2021-11-15 | 2022-11-15 | Controlling adaptive backdrops |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB2116457.9A GB2614698A (en) | 2021-11-15 | 2021-11-15 | Controlling adaptive backdrops |
Publications (2)
Publication Number | Publication Date |
---|---|
- GB202116457D0 (en) | 2021-12-29 |
GB2614698A true GB2614698A (en) | 2023-07-19 |
Family
ID=79163528
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB2116457.9A Withdrawn GB2614698A (en) | 2021-11-15 | 2021-11-15 | Controlling adaptive backdrops |
Country Status (2)
Country | Link |
---|---|
GB (1) | GB2614698A (en) |
WO (1) | WO2023084250A1 (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2372169A (en) * | 1999-09-22 | 2002-08-14 | Canadian Space Agency | Method and system for time/motion compensation for head mounted displays |
US7312766B1 (en) * | 2000-09-22 | 2007-12-25 | Canadian Space Agency | Method and system for time/motion compensation for head mounted displays |
JP2020173726A (en) * | 2019-04-12 | 2020-10-22 | 日本放送協会 | Virtual viewpoint conversion device and program |
US20210019898A1 (en) * | 2018-05-07 | 2021-01-21 | Apple Inc. | Scene camera retargeting |
GB2612418A (en) * | 2021-08-27 | 2023-05-03 | Mo Sys Engineering Ltd | Rendering image content |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB201303712D0 (en) | 2013-03-01 | 2013-04-17 | Geissler Michael P A | Optical navigation & positioning system |
US20150379772A1 (en) * | 2014-06-30 | 2015-12-31 | Samsung Display Co., Ltd. | Tracking accelerator for virtual and augmented reality displays |
WO2020097212A1 (en) * | 2018-11-06 | 2020-05-14 | Lucasfilm Entertainment Company Ltd. | Immersive content production system |
CN112040092B (en) * | 2020-09-08 | 2021-05-07 | 杭州时光坐标影视传媒股份有限公司 | Real-time virtual scene LED shooting system and method |
- 2021-11-15: GB application GB2116457.9A — GB2614698A (en), status: withdrawn
- 2022-11-15: WO application PCT/GB2022/052892 — WO2023084250A1 (en), status: unknown
Also Published As
Publication number | Publication date |
---|---|
GB202116457D0 (en) | 2021-12-29 |
WO2023084250A1 (en) | 2023-05-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9591237B2 (en) | Automated generation of panning shots | |
US8208048B2 (en) | Method for high dynamic range imaging | |
US10748243B2 (en) | Image distortion transformation method and apparatus | |
US20220215568A1 (en) | Depth Determination for Images Captured with a Moving Camera and Representing Moving Features | |
CN111557016A (en) | Motion blur simulation | |
US20170019605A1 (en) | Multiple View and Multiple Object Processing in Wide-Angle Video Camera | |
US20060078162A1 (en) | System and method for stabilized single moving camera object tracking | |
KR101538947B1 (en) | The apparatus and method of hemispheric freeviewpoint image service technology | |
CN102148965A (en) | Video monitoring system for multi-target tracking close-up shooting | |
JP2009124685A (en) | Method and system for combining videos for display in real-time | |
EP1843581A2 (en) | Video processing and display | |
KR20150108774A (en) | Method for processing a video sequence, corresponding device, computer program and non-transitory computer-readable medium | |
KR20140090775A (en) | Correction method of distortion image obtained by using fisheye lens and image display system implementing thereof | |
KR101465112B1 (en) | camera system | |
KR101725024B1 (en) | System for real time making of 360 degree VR video base on lookup table and Method for using the same | |
JP2012019399A (en) | Stereoscopic image correction device, stereoscopic image correction method, and stereoscopic image correction system | |
KR101916419B1 (en) | Apparatus and method for generating multi-view image from wide angle camera | |
CN114219895A (en) | Three-dimensional visual image construction method and device | |
WO2018105097A1 (en) | Image synthesis device, image synthesis method, and image synthesis program | |
GB2614698A (en) | Controlling adaptive backdrops | |
CN116664999A (en) | Aviation video and live-action three-dimensional scene fusion method and fusion display method | |
CN105208286A (en) | Photographing method and device for simulating low-speed shutter | |
US20140327745A1 (en) | Rectified Stereoscopic 3D Panoramic Picture | |
WO2022109897A1 (en) | Time-lapse photography method and device, and time-lapse video generation method and device | |
JP3128467B2 (en) | How to convert 2D video to 3D video |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |