GB2573328A - A method and apparatus for generating a composite image - Google Patents

A method and apparatus for generating a composite image

Info

Publication number
GB2573328A
GB2573328A
Authority
GB
United Kingdom
Prior art keywords
image
pixels
video stream
composite
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1807327.0A
Other versions
GB201807327D0 (en)
Inventor
Evison David
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to GB1807327.0A priority Critical patent/GB2573328A/en
Publication of GB201807327D0 publication Critical patent/GB201807327D0/en
Publication of GB2573328A publication Critical patent/GB2573328A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

A system for creating a composite image using content from a video stream comprises a computer-readable store in which a first image 20 is stored, a display (3, figure 1) and a user interface (6, figure 1). The system further comprises processing circuitry configured to display the first image to a user and, using a user input received via the interface, select a region of pixels 20A of the displayed first image, the selected region being smaller than the total number of pixels of the first image. Sequential video image frames of the video stream are mapped onto the pixels of the selected region to generate a sequence (30a, 30b - 30n) of composite image frames. The resultant composite image frames comprise the unselected pixels 20B of the first image and pixels of the video stream (30Aa - 30An) which replace the selected region of pixels of the first image. Each composite image frame comprises pixels from a different video image frame of the video stream.

Description

There is described a method and apparatus for creating a composite image using content from a video stream.
US2016/0198097 describes a process of automatically distinguishing between foreground objects and a background within an image and forming a composite image by combining the distinguished foreground objects with a second image.
According to a first aspect of the invention there is provided a system for creating a composite image using content from a video stream; the system comprising: a computer readable store in which a first image is stored; an electronic visual display; a user interface; and processing circuitry configured to: display the first image on the electronic visual display; receive a first input from a user via the user interface selecting a number of pixels of the displayed first image, the selection being smaller than the total number of pixels of the first image; map pixels of sequential video image frames of a video stream derived from an electronic sensor of a camera to pixels of the first image; and display on the electronic visual display, one after another, a succession of composite image frames comprised from the unselected pixels of the first image and, in place of the selected pixels of the first image, the pixels of the image frames of the video stream that are mapped to the selected pixels of the first image, each composite image frame comprising pixels of a different video image frame of the video stream.
The processing circuitry may be further configured to receive a second input from the user via the user interface to save an image and in response store data in a persistent computer readable memory, the data representing the composite image frame that was displayed substantially at the time the second input was received.
According to a second aspect there is provided a computer-readable storage medium, storing program instructions that when executed on one or more computers cause at least one of the one or more computers to: display a first image on a display of the at least one of the one or more computers; receive an input from a user via a user interface selecting a number of pixels of the displayed first image, the selection being smaller than the total number of pixels of the first image; map pixels of sequential video image frames of a video stream derived from an electronic sensor of a camera connected to the at least one of the one or more computers, to pixels of the first image; display on the display a succession of composite image frames comprised from the unselected pixels of the first image and, in place of the selected pixels of the first image, the pixels of the image frames of the live video stream that are mapped to the selected pixels of the first image; and receive an input from the user via the user interface to save an image and in response store data in a persistent store, the data representing the composite image frame that was displayed substantially at the time the input to save an image was received.
According to a further aspect of the invention there is provided a method of creating a composite image using content from a video stream; the method comprising: displaying on an electronic visual display a first image derived from image data held in a computer readable store; manually selecting, via a user interface, a number of pixels of the displayed first image, the selection being smaller than the total number of pixels of the first image; mapping pixels of sequential video image frames of a video stream derived from an electronic sensor of a camera to pixels of the first image; displaying on the electronic visual display, one after another, a succession of composite image frames comprised from the unselected pixels of the first image and, in place of the selected pixels of the first image, the pixels of the image frames of the video stream that are mapped to the selected pixels of the first image, each composite image frame comprising pixels of a different video image frame of the video stream; and, in response to a user input, generating a composite image from the combination of the visible portion of the foreground layer and the mapped portions of the background layer using the image frame derived from the live video stream that was held on the background layer at substantially the time of the user input.
The invention will be described by way of example with reference to the following Figures in which:
Figure 1 is a schematic of a computer system adapted to generate and display a composite image;
Figure 2A illustrates on the left pixels forming a first image of which a region has been selected by a user, and on the right video image frames mapped to the first image;
Figure 2B illustrates composite images comprised from the first image and video image frames; and
Figures 3A-3E are example screen shots illustrating steps in the generation of a composite image from a video stream.
With reference to Figure 1 there is described a system and method of generating and displaying a composite image from a first image and a second image, the second image being an image frame from a video stream.
The method is implemented by a computer program held in a computer readable memory and executed by processing circuitry comprising one or more processors.
For example, as shown in Fig 1, the computer program may be run on a computer device 1 such as a smart phone or tablet computer that includes processing circuitry 2 and RAM 2A; an electronic visual display 3 of conventional form, e.g. LCD or OLED; non-volatile (persistent) memory 4; input devices including a camera 5; and a user input interface 6 comprising a touch screen layered over the electronic visual display. The user input interface 6 may in addition or instead comprise a keyboard and/or mouse or other input device.
In response to a user input signal via the user input interface 6, an image data file 10 of a still image selected by the user is loaded from the non-volatile memory 4 into RAM 2A and the corresponding still image (e.g. a photograph) herein referred to as the first image 20, displayed to the user through the display 3.
The first image 20 may have been captured using the camera 5 and stored in non-volatile memory 4, or the image file 10 may have been downloaded to the computer device from an external source. Where the first image 20 is captured by the camera 5 contemporaneously with the running of the computer program, the image data from the camera's 5 electronic image sensor 5A may be processed by the processing circuitry 2 and the resulting image file 10 held in the RAM 2A as opposed to being loaded from the non-volatile memory 4.
The computer program is adapted to receive user inputs via the user input interface 6, e.g. gestures on the touch screen, to select pixels of the first image; the selection typically defines one or more regions 20A of contiguous pixels of the first image.
The computer program is adapted to receive image data from the electronic image sensor 5A of the camera 5 that defines video image frames 30 of a video stream. Each video image frame has the same pixel dimensions as the first image 20. The computer program maps pixels of each video image frame one-to-one with the pixels of the still image. A portion of each video image frame comprising those pixels that are mapped to the selected pixels of the first image is displayed in turn, in place of the selected pixels of the first image such that the user is presented with a sequence of composite video image frames.
To the user, the above provides an impression that the first image is on an editable foreground layer and selecting portions of the still image renders them transparent to reveal a portion of the frame of the video stream that lies on a background layer directly behind the first image.
To illustrate, Figure 2A is a representation of the first image 20 comprised from a 6x6 pixel array; it will be appreciated that in practice images are likely to be significantly larger. The user selects a number of contiguous pixels to form a selected region 20A, leaving a non-selected region 20B. Each received video image frame 30(a-n) is also comprised from a 6x6 array of pixels. The pixel array of each video image frame 30(a-n) is mapped to the pixel array of the first image 20, those pixels of each video image frame 30(a-n) mapped to the pixels in region 20A defining, for their respective frame, a selected region 30A; Fig 2A shows only selected region 30A(a) of frame 30(a). Referring to Fig 2B, the user is presented with multiple composite images in sequence, at a rate sufficient to provide the illusion of movement of a moving object depicted in region 30A. Each composite image comprises the non-selected region 20B of the first image 20 and, in place of the pixels within region 20A, the pixels of a region 30A(a-n) of a different video image frame 30.
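The per-frame substitution described above can be sketched with a boolean mask over the pixel array. This is a minimal illustration, not code from the patent; the 6x6 dimensions mirror Figure 2A, and the greyscale values are invented:

```python
import numpy as np

# First image 20: a 6x6 greyscale pixel array (values invented for illustration).
first_image = np.full((6, 6), 100, dtype=np.uint8)

# Selected region 20A: a boolean mask with the same shape as the first image.
# True marks the pixels the user selected; the rest is region 20B.
selected = np.zeros((6, 6), dtype=bool)
selected[2:4, 2:4] = True  # a block of contiguous pixels

def composite(first_image, selected, video_frame):
    """Replace the selected pixels of the first image with the
    one-to-one mapped pixels of a single video frame (region 30A)."""
    out = first_image.copy()
    out[selected] = video_frame[selected]
    return out

# Each video frame 30(a-n) has the same 6x6 dimensions as the first image.
video_frames = [np.full((6, 6), v, dtype=np.uint8) for v in (10, 20, 30)]

# The displayed succession of composite image frames.
composites = [composite(first_image, selected, f) for f in video_frames]
```

Each composite keeps the unselected pixels of the still image while the selected region shows a different frame of the stream, which is what produces the illusion of movement within the region.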
Where the video stream is a live video stream, the user may move the camera (which, when the camera is integrated with the computer device, may involve moving the computer device) relative to an object to be depicted on the display 3, in order to size, align or change the perspective of the object depicted within the region 30A of the video image frames that maps to the selected region 20A of the first image 20. In this way a user can control the arrangement of the object depicted in the video stream relative to another object depicted in the foreground layer.
The computer program may be adapted to receive user input whilst the composite video image frames are being displayed, to select further pixels of the first image or deselect pixels of the first image. To the user this provides the impression of revealing more or less of the video image frames.
In response to a detected input from the user to save a composite image, the computer program creates an image file 40 in a non-volatile memory 4 of the computer device, the image file (e.g. of JPEG format) comprising image data corresponding to the composite image frame shown on the display at substantially the time the input to save a composite image was detected.
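Saving "the frame displayed at substantially the time of the input" amounts to keeping a reference to the most recently displayed composite frame and snapshotting it when the save event arrives. A minimal sketch under that assumption (class and method names are invented, and the actual encoding and writing to non-volatile memory is omitted):

```python
class CompositeViewer:
    """Tracks the most recently displayed composite frame so that a
    save request captures what the user was looking at."""

    def __init__(self):
        self._current = None   # last composite frame shown on the display
        self.saved = []        # stands in for files written to memory 4

    def show(self, composite_frame):
        # Called once per displayed composite frame.
        self._current = composite_frame

    def on_save(self):
        # Snapshot the frame on screen at the time of the input; a real
        # implementation would encode it (e.g. as JPEG) and write it to
        # non-volatile storage instead of appending to a list.
        if self._current is not None:
            self.saved.append(self._current)

viewer = CompositeViewer()
for frame in ["frame_a", "frame_b", "frame_c"]:
    viewer.show(frame)
viewer.on_save()  # captures the frame currently displayed
```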
Rather than a single image, a sequence of composite video frames may be recorded to create a composite video file.
It is preferable that the image size, i.e. pixel dimensions, of the first image and each video frame are the same in order that the pixels of the first image can be mapped one-to-one with each video image frame. Nevertheless, it is conceivable that mapping may be possible where there is a disparity in the size of the first image and video frames. In such instances mapping may not be one-to-one. Alternatively, where the size of the first image and video frames differ, the first image may be cropped in order to match the size of the video frames or vice versa.
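Where the first image and video frames differ in size, cropping the larger to common dimensions restores a one-to-one mapping. One possible sketch of this, with the caveat that the patent does not specify which portion is retained (centre-cropping is an assumption here):

```python
import numpy as np

def center_crop(img, target_h, target_w):
    """Crop a 2D image array to the target dimensions, keeping the centre."""
    h, w = img.shape[:2]
    top = (h - target_h) // 2
    left = (w - target_w) // 2
    return img[top:top + target_h, left:left + target_w]

def match_sizes(first_image, video_frame):
    """Crop whichever of the two arrays is larger, per axis, so both
    share the same pixel dimensions and can be mapped one-to-one."""
    h = min(first_image.shape[0], video_frame.shape[0])
    w = min(first_image.shape[1], video_frame.shape[1])
    return center_crop(first_image, h, w), center_crop(video_frame, h, w)

still = np.zeros((8, 8), dtype=np.uint8)   # first image, 8x8
frame = np.zeros((6, 10), dtype=np.uint8)  # video frame, 6x10
a, b = match_sizes(still, frame)           # both cropped to 6x8
```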
Figures 3A-3E are screen shots illustrating steps for the generation of a composite image from a photograph image file (in this instance depicting a dog) and a frame from a live video stream from the camera of the computer device depicting a human face. A photograph image file 50 is selected by the user from the non-volatile memory of the computer device (Fig 3A) to be loaded into RAM 2A and made available to the computer program; the corresponding photo 51 is displayed on the display (Fig 3B). The user uses the touch screen to select a portion 51 of the photo (Fig 3C); the pixels of the selected portion (in this instance a region around the dog's muzzle) are highlighted to make the selected portion visible to the user. A portion 60 of each video image frame from the video stream that is mapped to the selected part of the photo is shown in place of the selected portion of the photo (Fig 3D). By moving the camera, the portion 60 of the video image frames that is visible changes (compare Figures 3D, 3E, 3F). Upon detecting a signal from the user via the user interface to save a composite image, an image file is created and saved to the non-volatile memory 4 that corresponds with the composite image frame shown on the display at substantially the time the signal was detected (Fig 3F). It is possible instead or in addition for the image file to be saved to a non-volatile memory that is remote from the computer device 1; e.g. where the image is to be uploaded directly to a social media account, the image file may be stored via the internet on a remote server associated with the social media site.
Rather than or in addition to saving the generated composite image(s) to the non-volatile store, the composite image may be transmitted to another device. For example, when used in conjunction with a video call software application, which may form part of the computer program or be separate, a video stream generated by a computer device of a first user for transmission to one or more other computer devices of other users participating in the video call may be composited with a still image as described above by the first user's computer device, and the resulting composite video frames transmitted to the one or more other computer devices of the other users in place of the original video stream. Alternatively, a video stream received by a user may be composited with a still image on the user's device and the composite image frames displayed to the user in place of the received video stream.
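For the video-call case, the compositing can be applied as a per-frame transform between the camera and the outgoing stream. A minimal sketch of that pipeline (names are invented; the mask and still-image handling follow the description above):

```python
import numpy as np

def composited_stream(frames, still, selected):
    """Yield composite frames for transmission: unselected pixels come
    from the still image, selected pixels from each successive frame."""
    for frame in frames:
        out = still.copy()
        out[selected] = frame[selected]
        yield out  # transmitted in place of the original frame

still = np.full((4, 4), 7, dtype=np.uint8)   # the still (foreground) image
mask = np.zeros((4, 4), dtype=bool)          # user-selected region
mask[1:3, 1:3] = True

# Stand-in for frames arriving from the camera sensor.
camera = (np.full((4, 4), v, dtype=np.uint8) for v in (1, 2))
sent = list(composited_stream(camera, still, mask))
```

Because the transform is a generator over incoming frames, the same function serves both directions mentioned in the text: wrapping the outgoing camera stream, or wrapping a received stream before display.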

Claims (7)

Claims
1. A system for creating a composite image using content from a video stream; the system comprising:
a computer readable store in which a first image is stored;
an electronic visual display;
a user interface;
processing circuitry configured to:
display the first image on the electronic visual display;
receive a first input from a user via the user interface selecting a number of pixels of the displayed first image, the selection being smaller than the total number of pixels of the first image; map pixels of sequential video image frames of a video stream derived from an electronic sensor of a camera to pixels of the first image; and display on the electronic visual display, one after another, a succession of composite image frames comprised from the unselected pixels of the first image and, in place of the selected pixels of the first image, the pixels of the image frames of the video stream that are mapped to the selected pixels of the first image, each composite image frame comprising pixels of a different video image frame of the video stream.
2. A system according to claim 1, the processing circuitry being further configured to receive a second input from the user via the user interface to save an image and in response store data representing the composite image frame that was displayed substantially at the time the second input was received.
3. A system according to claim 1 or 2 comprising the camera and the electronic sensor.
4. A computer-readable storage medium, storing program instructions that when executed on one or more computers cause at least one of the one or more computers to:
display a first image on a display of the at least one of the one or more computers;
receive an input from a user via a user interface selecting a number of pixels of the displayed first image, the selection being smaller than the total number of pixels of the first image;
map pixels of sequential video image frames of a video stream derived from an electronic sensor of a camera connected to the at least one of the one or more computers, to pixels of the first image; and display on the display a succession of composite image frames comprised from the unselected pixels of the first image and, in place of the selected pixels of the first image, the pixels of the image frames of the live video stream that are mapped to the selected pixels of the first image;
receive an input from the user via the user interface to save an image and in response store data representing the composite image frame that was displayed substantially at the time the input from the user via the user interface to save an image was received.
5. A method of creating a composite image using content from a video stream; the method comprising:
displaying on an electronic visual display a first image derived from image data held in a computer readable store;
manually selecting, via a user interface, a number of pixels of the displayed first image, the selection being smaller than the total number of pixels of the first image;
mapping pixels of sequential video image frames of a video stream derived from an electronic sensor of a camera to pixels of the first image; and displaying on the electronic visual display, one after another, a succession of composite image frames comprised from the unselected pixels of the first image and, in place of the selected pixels of the first image, the pixels of the image frames of the video stream that are mapped to the selected pixels of the first image, each composite image frame comprising pixels of a different video image frame of the video stream;
in response to a user input, generating a composite image from the combination of the visible portion of the foreground layer and the mapped portions of the background layer using the image frame derived from the live video stream that was held on the background layer at substantially the time of the user input.
6. A method according to claim 5 comprising generating a video stream from multiple composite images, the multiple composite images comprised from the combination of the visible portion of the foreground layer and the mapped portions of the background layer using sequential image frames derived from the live video stream following the user input.
7. A method according to claim 5 or 6 comprising moving the camera whilst image frames of the video stream are displayed on the electronic visual display in order to adjust the visual relationship between an object depicted in the video image frame and an object depicted in the visible portion of the first image.
GB1807327.0A 2018-05-03 2018-05-03 A method and apparatus for generating a composite image Withdrawn GB2573328A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1807327.0A GB2573328A (en) 2018-05-03 2018-05-03 A method and apparatus for generating a composite image


Publications (2)

Publication Number Publication Date
GB201807327D0 GB201807327D0 (en) 2018-06-20
GB2573328A true GB2573328A (en) 2019-11-06

Family

ID=62598279

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1807327.0A Withdrawn GB2573328A (en) 2018-05-03 2018-05-03 A method and apparatus for generating a composite image

Country Status (1)

Country Link
GB (1) GB2573328A (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10609332B1 (en) * 2018-12-21 2020-03-31 Microsoft Technology Licensing, Llc Video conferencing supporting a composite video stream

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6064399A (en) * 1998-04-03 2000-05-16 Mgi Software Corporation Method and system for panel alignment in panoramas
US20130235224A1 (en) * 2012-03-09 2013-09-12 Minwoo Park Video camera providing a composite video sequence
US20160198097A1 (en) * 2015-01-05 2016-07-07 GenMe, Inc. System and method for inserting objects into an image or sequence of images




Legal Events

Date Code Title Description
S20A Reinstatement of application (sect. 20a/Patents Act 1977)
Free format text: REQUEST FOR REINSTATEMENT ALLOWED; effective date: 20190711
Free format text: REQUEST FOR REINSTATEMENT FILED; effective date: 20190708
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)