WO2023084250A1 - Controlling adaptive backdrops - Google Patents

Controlling adaptive backdrops

Info

Publication number
WO2023084250A1
Authority
WO
WIPO (PCT)
Prior art keywords
display, image, location, input image, controller
Prior art date
2021-11-15
Application number
PCT/GB2022/052892
Other languages
French (fr)
Inventor
Michael Geissler
Maciej CYBINSKI
Original Assignee
Mo-Sys Engineering Limited
Priority date
2021-11-15
Filing date
2022-11-15
Publication date
2023-05-19
Application filed by Mo-Sys Engineering Limited filed Critical Mo-Sys Engineering Limited
Publication of WO2023084250A1 publication Critical patent/WO2023084250A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/12 Picture reproducers
    • H04N9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179 Video signal processing therefor
    • H04N9/3185 Geometric adjustment, e.g. keystone or convergence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/001 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background
    • G09G3/003 Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes using specific devices not provided for in groups G09G3/02 - G09G3/36, e.g. using an intermediate record carrier such as a film slide; Projection systems; Display of non-alphanumerical information, solely or in combination with alphanumerical information, e.g. digital display on projected diapositive as background to produce spatial visual effects
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/37 Details of the operation on graphic patterns
    • G09G5/377 Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/2224 Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/12 Picture reproducers
    • H04N9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3191 Testing thereof
    • H04N9/3194 Testing thereof including sensor feedback
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/16 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations using electromagnetic waves other than radio waves
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/80 Geometric correction
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00 Aspects of data communication
    • G09G2370/04 Exchange of auxiliary data, i.e. other than image data, between monitor and graphics controller
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00 Aspects of data communication
    • G09G2370/20 Details of the management of multiple sources of image data

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Hardware Design (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • Architecture (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Controls And Circuits For Display Device (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A display controller for a backdrop display, the controller being configured to: receive an input image representing a scene as viewed from a first location; receive a second location offset from the first location; transform the input image in response to an offset between the first location and the second location to form a transformed image; and transmit the transformed image to a display device for display.

Description

CONTROLLING ADAPTIVE BACKDROPS
This invention relates to controlling images displayed by adaptive backdrops such as LED walls or projection screens.
When video is being captured, for example for movie filming or current affairs broadcasts, it is becoming increasingly common to display a changing backdrop behind a subject. The subject may be an actor or a presenter. The backdrop may represent a set or a location where the subject is to be depicted. The backdrop can be displayed using a display wall (e.g. formed of multiple LED display panels) or using a projection screen. One advantage of this approach is that it avoids the need for the subject to be at the location in question, and allows the location to be a fantasy or historic location. In comparison to the green screen method, in which the backdrop is inserted in video post-processing, it reduces the need for such post-processing and makes it easier to use the technique for live events.
It is desirable for the image displayed on the backdrop to appear realistic from the point of view of the camera that is being used to capture the video. One known way to accomplish this is to store a three-dimensional model of the scene that is to be the backdrop, to sense the location of the camera relative to the wall or screen on which the backdrop will be displayed and then to form an image of the scene as it would appear on the wall/screen from the location of the camera. The image is then passed to the wall for display, or to a projector for projection onto the screen.
One problem with this approach is that it takes some time for the image to be formed once the location of the camera is known. When the camera is moving, this time delay causes a lag in the change of the image once the camera has moved to a new location. This can be perceived by a viewer and can make the backdrop seem less convincing. It would be desirable to have an improved mechanism for forming adaptive backdrops so that camera motion may be better accommodated.
According to the present invention there is provided apparatus and methods as set out in the accompanying claims.
The display controller may comprise one or more processors configured to execute code stored in non-transient form to execute the steps it is configured to perform. There may be provided a data carrier storing in non-transient form such code.
The rendering engine may comprise one or more processors configured to execute code stored in non-transient form to execute the steps it is configured to perform. There may be provided a data carrier storing in non-transient form such code.
The present invention will now be described by way of example with reference to the accompanying drawing.
In the drawing:
Figure 1 shows a system for capturing video of a subject against a backdrop.
Figure 2 shows an oversized image and a subset of that image of a size appropriate for display on a display wall.
Figure 1 shows a system that could be implemented to capture video of a subject 1 against a background displayed on a display wall 2. The subject could, for example, be one or more actors or presenters or an inanimate object. The display wall is a structure extending in two dimensions that can display a desired image. It could be a matrix of individual display units, for example LED display panels. A camera 3 is provided for capturing the video. The camera is located so that from the point of view of the camera the display wall 2 appears behind the subject 1. The camera is mounted, e.g. on a wheeled tripod 4, dolly or articulated boom, so that it can be moved when video is being captured.
A positioning system is provided so that the location of the camera can be estimated. Any suitable positioning system can be used. In one example, a sensor 5 mounted on the camera senses the position of markers or transmitters 6 in the studio, and thereby estimates the location of the camera. The positioning system may, for example, be as described in EP 2 962 284. The position of the camera may be estimated at the camera or remotely from the camera.
In the example of figure 1, the sensor 5 estimates the position of the camera and transmits that position to a rendering unit 6. The rendering unit 6 comprises a processor 7 and has access to a memory 8 storing in non-transient form code executable by the processor to cause the rendering engine to operate as described herein. The rendering engine also has access to a scene memory 9. The scene memory stores data defining a scene in such a way that from that data the scene can be portrayed from multiple locations. Conveniently, the scene may be defined by data that defines the three-dimensional structure and appearance of objects in the scene. Such data allows it to be determined how those objects would appear relative to each other from different locations. The rendering unit may implement a rendering engine such as Unreal Engine. The rendering engine receives the camera’s position. The rendering engine has previously been provided with information defining the location of the display wall 2. With that information the rendering engine can determine what image needs to be displayed on the wall so that the backdrop will present the scene accurately from the point of view of the camera. This may, for example, be done by tracing rays from the camera position through the wall to the scene as it is defined in an imaginary space behind the wall, and thereby determining the colour, brightness, etc. of each element (e.g. pixel) that can be displayed on the wall. Once that image has been determined it is passed to a display controller 10 local to the wall. The display controller controls the wall to display the desired image. If instead of a display wall a projection screen is used, the display controller may be local to a projector which projects the image onto the screen.
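By way of illustration only, the ray-tracing determination described above might be sketched as follows. The wall is assumed planar and spanned by two vectors, and sample_scene is a hypothetical stand-in for evaluating the stored scene model along a ray; none of these names come from the specification.

```python
import numpy as np

def render_wall_image(cam_pos, wall_origin, wall_u, wall_v, res_u, res_v,
                      sample_scene):
    """Trace a ray from the camera through each wall pixel into the scene.

    cam_pos: (3,) camera position in studio coordinates.
    wall_origin: (3,) lower-left corner of the wall.
    wall_u, wall_v: (3,) vectors spanning the wall's width and height.
    sample_scene: callable(point, direction) -> RGB triple; a hypothetical
    stand-in for evaluating the stored scene model along a ray.
    """
    cam_pos = np.asarray(cam_pos, dtype=float)
    wall_origin = np.asarray(wall_origin, dtype=float)
    wall_u = np.asarray(wall_u, dtype=float)
    wall_v = np.asarray(wall_v, dtype=float)
    image = np.zeros((res_v, res_u, 3), dtype=np.float32)
    for j in range(res_v):
        for i in range(res_u):
            # Centre of wall pixel (i, j) in studio coordinates.
            pixel = (wall_origin
                     + wall_u * ((i + 0.5) / res_u)
                     + wall_v * ((j + 0.5) / res_v))
            direction = pixel - cam_pos
            direction /= np.linalg.norm(direction)
            # The ray continues through the wall into the imaginary space
            # behind it, where the scene is defined.
            image[j, i] = sample_scene(pixel, direction)
    return image
```

A production rendering engine such as Unreal Engine would typically achieve the same geometry with an off-axis projection rather than a per-pixel loop, but the result is equivalent.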
The camera 3 captures video data which is stored in a video data store 11. From there it can be edited and/or post-processed and broadcast if required.
As indicated above, it would be desirable for the system to be able to respond quickly to changes in the position of the camera 3.
One way in which this may be done will now be described. The rendering unit 6 is configured to form an image that extends beyond the edges of the wall 2 at the scale at which the image is to be displayed. This may be done by tracing rays from the camera position through points that lie beyond the edges of the wall so as to determine what colour, brightness, etc. would be represented in the scene at those locations. This is illustrated in figure 2. Boundary 20 indicates the margin of the image that is generated. Boundary 21 indicates how much of that image can be displayed on the wall at the desired scale. It will be seen that there are regions within boundary 20 but not within boundary 21. Depending on the complexity of the scene and the processing power of the rendering unit, it may take a perceptible time to generate the image. Successive images may be generated several times per second, for example 30 or 60 times per second. The frequency may be selected depending on the frame rate of the camera, the frame rate of the wall and the expected speed of motion of the camera.
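To give a sense of scale for the margin between boundaries 20 and 21, a back-of-envelope sizing follows. Every figure in it is an assumption for illustration; the specification gives no numbers beyond the example render rates.

```python
import math

# Rough sizing of the oversized-image margin (boundary 20 vs boundary 21).
# All figures below are assumptions for illustration, not from the patent.
max_camera_speed = 1.5        # m/s: fastest expected lateral camera move
render_interval = 1.0 / 30.0  # s: one render cycle at 30 images per second
transport_delay = 0.010       # s: assumed transmission lag to the controller

pixel_pitch = 0.0026          # m: 2.6 mm LED pixel pitch (assumed)

# Content at infinite depth shifts across the wall by the camera's own
# lateral displacement, which bounds the shift for any scene content.
worst_case_shift = max_camera_speed * (render_interval + transport_delay)
margin_pixels = math.ceil(worst_case_shift / pixel_pitch)
print(margin_pixels)  # -> 25 pixels of margin per side with these figures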
When the oversized image has been formed by the rendering engine it is transmitted to the display controller 10. The display controller also receives the position of the camera. Due to lag in generating the oversized image and transmitting it to the display controller, the position of the camera may have changed since the oversized image was generated. In dependence on the latest position of the camera that it has received, the display controller selects a subset of the oversized image and optionally applies a geometric transformation so as to form an adapted image. The display controller then causes the wall to display the adapted image. Framing the oversized image and applying any geometric transformation are quicker operations than forming the oversized image. In this way, the display controller can cause the image that is displayed to be adapted promptly to the position of the camera. This can help to reduce any perception that the background to the subject 1 is unnatural.
The display controller 10 has a processor 12 and a memory 13. The memory 13 stores in non-transient form code executable by the processor 12 so that the display controller can perform its functions.
The adaptation of the oversized image will now be described.
In one example, the rendering engine provides the oversized image to the display controller together with an indication of the camera position for which the oversized image was formed. The display controller can then apply an algorithm to frame (i.e. select a subset of) and optionally apply a geometric transformation to the oversized image (or the selected subset of it). For example, if the camera has traversed horizontally by a certain amount, then the display controller may select a subset of the oversized image that is correspondingly offset horizontally from the centre of the oversized image. Or if the camera has traversed vertically by a certain amount, then the display controller may select a subset of the oversized image that is correspondingly offset vertically from its centre. The resulting image may be an imperfect rendering of the scene from the current point of view of the camera, but if the camera is moving at a reasonable speed compared to the frame rate of the display system, any errors can be expected to be minor.
In the discussion above, the rendering engine and the display controller receive the position of the camera. They may also receive the direction of the camera (e.g. the direction of the central axis of the camera’s lens). They may also receive information about the properties of the camera’s lens. They may also receive information about the state of the camera’s lens, for example its zoom and/or aperture setting. Any one or more of these factors may be employed by the rendering engine to form the image and/or by the display controller to adapt that image. Information about the camera other than its position that was used to generate the image may be transmitted by the rendering engine to the display controller to permit the display controller to adapt for changes in the camera’s set-up since the image was formed.
The display controller may have information about the location of the screen so that it can establish the location of the camera relative to the screen. That information may also be used to affect the adaptation of the oversized image. For example, when the camera has undergone translation, the amount by which the centre of the adapted image is shifted from the centre of the oversized image may vary depending on the distance of the camera from the wall.
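As an illustrative sketch, the framing operation of the preceding paragraphs can be combined with the distance-dependent shift just described. The single representative scene depth, the sign and coordinate conventions and all names here are assumptions for illustration rather than details given in the specification.

```python
import numpy as np

def select_crop(oversized, cam_at_render, cam_now, wall_distance,
                scene_depth, pixel_pitch, out_w, out_h):
    """Frame the oversized image for the camera's latest position.

    cam_at_render / cam_now: (x, y) lateral camera positions in metres,
    x horizontal and y vertical in the wall plane (assumed convention).
    wall_distance: camera-to-wall distance in metres.
    scene_depth: single representative depth of scene content behind the
    wall, in metres (an assumed simplification).
    """
    h, w = oversized.shape[:2]
    delta = np.asarray(cam_now, dtype=float) - np.asarray(cam_at_render, dtype=float)
    # A point at depth d behind a wall viewed from distance c moves across
    # the wall by delta * d / (c + d); the crop window moves the other way.
    scale = scene_depth / (wall_distance + scene_depth)
    shift = np.round(delta * scale / pixel_pitch).astype(int)
    cx = (w - out_w) // 2 - shift[0]  # camera moves right -> window moves left
    cy = (h - out_h) // 2 + shift[1]  # image rows grow downwards
    cx = int(np.clip(cx, 0, w - out_w))
    cy = int(np.clip(cy, 0, h - out_h))
    return oversized[cy:cy + out_h, cx:cx + out_w]
```

The factor d / (c + d) follows from similar triangles: a point at depth d behind the wall, viewed from a camera at distance c in front of it, traverses the wall by that fraction of the camera's own lateral movement, approaching the full movement for very distant content.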
In one example, the camera may have rotated between the image being formed and it being processed by the display controller. In that situation the display controller may apply a suitable trapezoidal or other geometric transformation to the image or part of it to form the adapted image for display.
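One way to realise such a trapezoidal correction is a four-point perspective warp. The sketch below uses OpenCV purely as an illustrative implementation choice (the source names no library), and omits the step of projecting the wall's corners for the old and new camera poses.

```python
import cv2
import numpy as np

def keystone_correct(image, src_corners, dst_corners):
    """Apply a four-point perspective (trapezoidal) warp.

    src_corners / dst_corners: 4x2 point lists ordered TL, TR, BR, BL, in
    pixel coordinates. In use they would come from projecting the wall's
    corners for the camera pose the image was rendered for and for the
    camera's latest pose (that projection step is not shown here).
    """
    h, w = image.shape[:2]
    m = cv2.getPerspectiveTransform(np.float32(src_corners),
                                    np.float32(dst_corners))
    # warpPerspective resamples with interpolation, which also covers the
    # pixel-interpolation step mentioned in the description.
    return cv2.warpPerspective(image, m, (w, h), flags=cv2.INTER_LINEAR)
```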
In some situations the image as formed by the rendering engine may be stretched by the display controller to form the image for display. Then, if desired, the display controller may employ an algorithm to interpolate between pixels of the oversized image to form the adapted image.
It has been found that the present system is especially valuable for accommodating translation of the camera relative to the wall or screen.
The rendering engine may provide the display controller with information about the depth of objects depicted at respective locations in the image. This may, for example, be provided as a depth map in which, for each pixel or block of pixels in the image, a value is specified indicating the depth, or the aggregate or average depth, of locations depicted in that pixel or block. The display controller may apply a transformation to parts of the image in dependence on the indicated depth information. For example, a greater shift responsive to camera motion may be applied for pixels or blocks at a greater depth. Interpolation and/or culling of pixels and/or inpainting algorithms can be applied as appropriate to maintain a desired pixel density in the adapted image.
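A minimal sketch of such a depth-dependent adaptation follows, for a purely horizontal camera move and under the same parallax model as above: the per-pixel shift grows with depth, colliding pixels are culled last-write-wins, and a crude nearest-neighbour fill stands in for inpainting. The function and parameter names are illustrative, not from the source.

```python
import numpy as np

def depth_dependent_shift(image, depth_map, delta_x, wall_distance, pixel_pitch):
    """Shift pixels by a depth-dependent parallax amount (horizontal move only).

    depth_map: (h, w) array of depths behind the wall in metres, as supplied
    by the rendering engine. delta_x: camera's horizontal move in metres.
    """
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    # Parallax on the wall: delta_x * d / (c + d), larger for deeper content.
    shift = np.round(delta_x * depth_map / (wall_distance + depth_map)
                     / pixel_pitch).astype(int)
    xs = np.arange(w)
    for y in range(h):
        tx = np.clip(xs + shift[y], 0, w - 1)
        # Forward warp; where two source pixels collide the later write wins,
        # which acts as a crude form of culling.
        out[y, tx] = image[y, xs]
        filled[y, tx] = True
        # Nearest-neighbour fill from the left as a stand-in for inpainting.
        for x in range(1, w):
            if not filled[y, x]:
                out[y, x] = out[y, x - 1]
    return out
```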
The display controller could be local to the display wall or projector or remote from it. It could be integrated into the wall or to one or more display panels. It could be integrated with the rendering engine.
The camera position sensor could signal the camera’s location directly to both the display controller and the rendering engine as illustrated in figure 1 . Alternatively it could signal the camera’s location to one of those entities, which could then forward it to the other. Other information about the state of the camera, such as its direction and the state of its lens, could be signalled in the same way.
The applicant hereby discloses in isolation each individual feature described herein and any combination of two or more such features, to the extent that such features or combinations are capable of being carried out based on the present specification as a whole in the light of the common general knowledge of a person skilled in the art, irrespective of whether such features or combinations of features solve any problems disclosed herein, and without limitation to the scope of the claims. The applicant indicates that aspects of the present invention may consist of any such individual feature or combination of features. In view of the foregoing description it will be evident to a person skilled in the art that various modifications may be made within the scope of the invention.
The phrase "configured to" or “arranged to” followed by a term defining a condition or function is used herein to indicate that the object of the phrase is in a state in which it has that condition, or is able to perform that function, without that object being modified or further configured.

Claims

1. A display controller for a backdrop display, the controller being configured to: receive an input image representing a scene as viewed from a first location; receive a second location offset from the first location; transform the input image in response to an offset between the first location and the second location to form a transformed image; and transmit the transformed image to a display device for display.
2. A controller for a backdrop display as claimed in claim 1, wherein the first and second locations are the locations of a mobile camera.
3. A controller as claimed in claim 1 or 2, wherein the transformation of the input image comprises selecting a subset of the input image to form the transformed image.
4. A controller as claimed in claim 3, wherein the transformation of the input image comprises selecting a subset of the input image whose centre is offset from that of the input image to form the transformed image.
5. A controller as claimed in any preceding claim, wherein the transformation of the input image comprises applying a geometric transformation to the input image to form the transformed image.
6. A controller as claimed in any preceding claim, the controller being configured to receive the first location in conjunction with the input image.
7. A system for controlling a backdrop display, the system comprising: a display controller as claimed in any preceding claim; and a rendering engine configured to form the input image from a three-dimensional model of the scene and transmit the input image to the display controller.
8. A system as claimed in claim 7, comprising the display device and wherein the display device is (i) a screen and a projector for displaying the image on the screen and/or (ii) a display wall.
9. A system as claimed in claim 8, wherein the display controller is local to the display device.
10. A system as claimed in claim 8 or 9, wherein the display device is arranged as a backdrop in a video studio.
11. A method for controlling a backdrop display, the method comprising: receiving an input image representing a scene as viewed from a first location; receiving a second location offset from the first location; transforming the input image in response to an offset between the first location and the second location to form a transformed image; and transmitting the transformed image to a video studio backdrop display device for display.
PCT/GB2022/052892 2021-11-15 2022-11-15 Controlling adaptive backdrops WO2023084250A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB2116457.9A GB2614698A (en) 2021-11-15 2021-11-15 Controlling adaptive backdrops
GB2116457.9 2021-11-15

Publications (1)

Publication Number Publication Date
WO2023084250A1 true WO2023084250A1 (en) 2023-05-19

Family

ID=79163528

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2022/052892 WO2023084250A1 (en) 2021-11-15 2022-11-15 Controlling adaptive backdrops

Country Status (2)

Country Link
GB (1) GB2614698A (en)
WO (1) WO2023084250A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150379772A1 (en) * 2014-06-30 2015-12-31 Samsung Display Co., Ltd. Tracking accelerator for virtual and augmented reality displays
EP2962284A2 (en) 2013-03-01 2016-01-06 Michael Paul Alexander Geissler Optical navigation&positioning system
US20200143592A1 (en) * 2018-11-06 2020-05-07 Lucasfilm Entertainment Company Ltd. LLC Immersive content production system
CN112040092A (en) * 2020-09-08 2020-12-04 杭州时光坐标影视传媒股份有限公司 Real-time virtual scene LED shooting system and method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003510864A (en) * 1999-09-22 2003-03-18 カナデアン スペース エージェンシー Method and system for time / motion compensation for a head mounted display
US7312766B1 (en) * 2000-09-22 2007-12-25 Canadian Space Agency Method and system for time/motion compensation for head mounted displays
US10832427B1 (en) * 2018-05-07 2020-11-10 Apple Inc. Scene camera retargeting
JP7352374B2 (en) * 2019-04-12 2023-09-28 日本放送協会 Virtual viewpoint conversion device and program
GB202112327D0 (en) * 2021-08-27 2021-10-13 Mo Sys Engineering Ltd Rendering image content

Also Published As

Publication number Publication date
GB2614698A (en) 2023-07-19
GB202116457D0 (en) 2021-12-29

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 22835464
Country of ref document: EP
Kind code of ref document: A1