WO2019106378A1 - Automated media capture system - Google Patents

Automated media capture system

Info

Publication number
WO2019106378A1
Authority
WO
WIPO (PCT)
Prior art keywords
image, capture, images, sequence, camera system
Application number
PCT/GB2018/053469
Other languages
French (fr)
Inventor
Jessica KYTE
Satinder PAHWA
Original Assignee
Centrica Hive Limited
Application filed by Centrica Hive Limited filed Critical Centrica Hive Limited
Publication of WO2019106378A1

Classifications

    • H04N 23/60 — Control of cameras or camera modules
    • H04N 23/61 — Control of cameras or camera modules based on recognised objects
    • H04N 23/611 — Control of cameras or camera modules based on recognised objects, where the recognised objects include parts of the human body
    • H04N 23/617 — Upgrading or updating of programs or applications for camera control
    • H04N 23/64 — Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N 23/667 — Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H04N 7/18 — Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 — CCTV systems for receiving images from a plurality of remote sources
    • H04N 7/188 — Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position
    • G06V 20/52 — Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/16 — Human faces, e.g. facial parts, sketches or expressions

Definitions

  • the present invention relates to systems and methods for automating media capture using camera devices.
  • the present invention seeks to alleviate these problems and provide improved approaches for automating camera operation.
  • a method of capturing image data using a camera system comprising: repeatedly performing an image capture process, wherein the image capture process is repeated periodically by the camera system at a predetermined capture interval and comprises, at each capture interval: acquiring a sequence of images by the camera system; and outputting the sequence of images from the camera system.
  • the sequence of images preferably comprises an image burst including a plurality of distinct images.
  • a predefined number of images are preferably acquired over the duration of the burst, and the images of the burst are preferably separated by a constant frame interval.
  • Each image sequence or burst is preferably separated by the constant capture interval.
  • the claimed approach thus enables a series of temporally separated image bursts to be acquired automatically.
  • the sequence of images may be acquired using a burst mode of the camera.
  • the sequence of images may be in the form of a video clip.
  • the method may comprise performing the image capture process in response to expiry of a timer configured to time the capture interval (the timer may then be reset to time a next capture interval before a next capture is triggered).
  • the sequence of images preferably extends over a time duration (e.g. burst length) that is shorter than the capture interval, preferably by a factor of at least 10, optionally by a factor of at least 100.
  • the sequence of images is preferably recorded at a predetermined frame rate (e.g. a burst rate for burst images or video frame rate for video clips), or with a predetermined frame interval.
  • the capture interval is preferably greater than the frame interval (as defined by the predetermined frame rate), preferably by at least a factor of 100, more preferably by at least a factor of 1000.
  • the method preferably comprises not outputting image data from the camera system outside the periodic image capture process or discarding image data output from the camera system outside the periodic image capture process (e.g. by storing only the results of the image capture process for subsequent access by the user or processing).
  • the method preferably comprises receiving configuration data from a user device specifying the periodic capture interval (this may be specified directly as a time interval or indirectly as a number of captures to be performed over a given period of time).
  • the method preferably comprises performing the repeated capture process in response to detecting one or more predetermined features of an image acquired by the camera system. This may involve initiating the repeated capture process and/or continuing the repeated capture process in dependence on detection of the feature(s). Detection of a feature of an image may include detection of the presence or absence of an image feature within the image.
  • the one or more features may comprise at least one of: presence or absence of a predetermined individual in the image; and a number of detected individuals in the image meeting a criterion, preferably by meeting or exceeding a threshold number. Individuals (or number of individuals present) may be identified through face detection.
  • the method comprises repeatedly performing the image capture process during a predetermined, preferably user-configured, capture time period.
  • the capture process is preferably performed only after the start of the capture time period and is continued only until the end of the capture time period.
  • the invention provides a method of capturing image data using a camera system, the method comprising: receiving configuration data from a user device specifying a capture time period; during the capture time period, monitoring for the occurrence of a predetermined trigger condition; and in response to the trigger condition, performing an image capture process, the image capture process comprising: recording a sequence of images by the camera system; and outputting the sequence of images from the camera system.
  • The, or each, sequence of images may comprise one of: an image burst including a plurality of distinct images; and a video clip.
  • the method may comprise not outputting image data from the camera system outside the triggered image capture process or discarding image data output from the camera system outside the triggered image capture process.
  • the trigger condition comprises detection of one or more predetermined features of the camera image, preferably wherein detection of a feature comprises detecting the presence or absence of a predetermined image feature within an image.
  • the trigger condition may comprise detecting the presence or absence of a predetermined individual in the image.
  • the trigger condition may comprise detection of a number of individuals in the camera image which meets a predetermined criterion, preferably by meeting or exceeding a predetermined threshold number (or alternatively falling below a threshold number). Face detection may be used to detect presence of individuals.
  • the trigger condition may alternatively or additionally be based on a capture schedule, and/or the trigger condition may comprise a periodic capture interval, preferably as set out in relation to the first aspect of the invention.
  • Multiple triggers of the same or different type (e.g. face detection and time-based triggers) may be used in combination to determine when to trigger image capture.
  • the method is preferably performed automatically under the control of at least one controller or processor of the camera system.
  • the method is preferably implemented by software running on a processor of the camera system.
  • the camera system is typically in the form of a digital camera but may include other local or remote components, with the controller/processor located within the camera or another local or remote component.
  • the capture time period is preferably defined by one or more of: a start time, a duration, and an end time (e.g. a specific future time window may be defined, or simply a time duration starting immediately).
  • the sequence of images is preferably acquired based on one or more of: a predetermined, preferably user-configured, sequence duration; a predetermined preferably user-configured, number of images of the sequence; and a predetermined, preferably user-configured, sequence frame rate (e.g. burst rate or video frame rate).
  • the outputting step preferably comprises storing the sequence of images (e.g. at the camera system or at a remote device).
  • the camera system is connected to a communications network, and outputting the sequence of images comprises transmitting the sequence of images to a remote location via the network for storage at the remote location.
  • the remote location optionally comprises an Internet-connected server or cloud service.
  • the method comprises, after the end of the capture time period or after a last image sequence has been captured, combining a plurality of images of the captured image sequences, or a plurality of the captured image sequences, into a single media file.
  • selected image sequences may be incorporated into the file, or selected individual images from within image sequences, or a combination, preferably in order of capture.
  • alternatively, all image sequences captured during a capture session (e.g. as defined by the capture time period) may be combined.
  • the method may comprise transmitting the image sequences to a server device over a communications network from the camera system; performing the combining at the server device; and outputting the single media file to a user device over a communications network.
  • the single media file may comprise a video file or a multi image file, for example a slide show.
  • the method may further comprise providing access to a user device (e.g. by way of a user device application or web service) for viewing or downloading the combined media file and/or one or more selected captured image sequences and/or one or more constituent images of the captured image sequences (from the server).
  • the method preferably comprises receiving configuration data at the camera system from a user device via a communications network, the configuration data specifying one or more of: a capture time period, a periodic capture interval, a sequence duration, a sequence image count, a sequence frame rate, and one or more trigger conditions for triggering the image capture process; and wherein the method preferably further comprises performing the image capture process in accordance with the received configuration data.
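The configuration data described above might be represented as a simple structure on the camera; the field names below are illustrative assumptions, not taken from the specification:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CaptureConfig:
    """Illustrative configuration a user device could send to the camera
    system; field names are assumptions, not from the specification."""
    capture_start: Optional[str] = None     # capture time period start
    capture_end: Optional[str] = None       # capture time period end
    capture_interval_s: float = 300.0       # periodic capture interval
    sequence_duration_s: float = 2.0        # burst/clip duration
    sequence_image_count: int = 10          # images per burst
    sequence_frame_rate: float = 5.0        # burst or video frame rate
    triggers: List[str] = field(default_factory=lambda: ["timer"])
```

A camera receiving such a structure would validate it and arm the corresponding trigger(s) for the configured time period.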
  • the camera system preferably includes an image sensor, a network interface, and a controller or processor for controlling transmission of image data from the camera system to a communications network connected to the network interface (e.g. for transmission to a media server).
  • the invention provides a camera system comprising means for performing any method as defined above or described below.
  • the invention further provides a tangible computer readable medium (or combination of multiple such media) comprising software code adapted, when executed by one or more processors, to perform any method as defined above or described below.
  • Figure 1 illustrates a system for automated media capture
  • Figure 2 illustrates an automated media capture process
  • Figure 3 illustrates processing performed by a media server
  • Figure 4 illustrates the hardware/software architecture of certain system components.
  • Embodiments of the invention provide a system enabling automatic media capture by one or more cameras on behalf of the user without requiring the user to trigger each individual image/video capture or hold the camera during media capture.
  • the camera can be placed or fixed on a shelf/wall/other mount and the media is captured automatically, without user intervention, based on a predefined capture schedule or other trigger conditions, allowing a set of representative images/videos of an event or location to be recorded automatically.
  • the system comprises one or more cameras 102 positioned at a user premises (e.g. a user’s home).
  • the cameras are connected (typically wirelessly) to an in-home local area network (LAN) 106.
  • Cameras may be conventional digital cameras with communications capabilities.
  • the cameras are designed specifically for communication with other networked devices and control via the network, and thus may include image and communications functionality, but may provide limited or no user controls on the device itself (e.g. no display screen for viewing images) and limited or no permanent on-device storage for storing image data. Instead, the cameras may be moveably placed or fixedly installed at required locations in the home or other premises to record images for transmission to other devices for storage, manipulation and viewing.
  • a user device 104 (e.g. a smartphone or tablet) is also connected to the network and communicates with the cameras and with a remote media server 110.
  • the media server provides a camera interface 112 for receiving image data from one or more cameras 102 (e.g. using a suitable API) and a user interface 114 for allowing interaction with the user device 104, for example to view captured media.
  • the media server includes, or is connected to, a media database 116 for storing image and video data received from cameras and processed media data generated by the media server.
  • User device 104 may communicate directly with camera(s) 102, for example to configure capture modes and receive image data directly.
  • the camera system provides automated media capture modes in which images or video are recorded in response to specific predefined triggers.
  • in one mode, the triggers comprise a time schedule (e.g. periodic capture at predefined time intervals), whilst in another mode, the triggers are based on feature detection, as will be described in more detail later.
  • in step 202, the user configures trigger options (for example setting the time interval between recording bursts) and recording options (e.g. type of media to be captured).
  • in step 204, the user configures a time window, for example as a specified duration starting from the time of configuration (i.e. from “now”) or as a future time window (e.g. “8pm-11pm on Friday 1st December”).
  • the camera begins to monitor (206) for the configured trigger condition(s). If during monitoring the end of the time window is reached (decision 208), then the automatic capture mode ends (step 210). If prior to the end of the time window the configured trigger is detected (decision 212) then media capture is initiated; otherwise, the process continues to monitor for the trigger at step 206.
  • in step 214, an image burst or video clip is recorded (in some cases individual images may also be recorded instead of a burst or video clip).
  • in step 216, the recorded images or video clip are uploaded to the media server 110, and the camera continues to monitor for the trigger condition at step 206, until the end of the defined time window. Processing at the media server is described further below.
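The monitoring loop of Figure 2 (steps 206-216) can be sketched as follows; the helper callables (`trigger_due`, `capture_burst`, `upload`) are hypothetical stand-ins for camera firmware functions, not names from the patent:

```python
import time

def run_capture_session(end_time, trigger_due, capture_burst, upload,
                        poll_s=0.5, clock=time.monotonic, sleep=time.sleep):
    """Monitor for the configured trigger until the end of the capture
    window (decisions 208/212), recording and uploading on each trigger."""
    while clock() < end_time:        # decision 208: end of window reached?
        if trigger_due(clock()):     # decision 212: trigger detected?
            media = capture_burst()  # step 214: record burst or clip
            upload(media)            # step 216: upload to media server
        else:
            sleep(poll_s)            # step 206: keep monitoring
    # step 210: automatic capture mode ends
```

Injecting `clock` and `sleep` keeps the loop testable without real delays.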
  • Whilst Figure 2 illustrates uploading of image data (burst images or video clip) immediately upon capture, media may alternatively be buffered at the camera and uploaded at a later point (e.g. at the end of the capture session).
  • Furthermore, whilst the trigger detection and image capture are described as occurring at the camera, image data could alternatively be streamed continuously to the server, with the server monitoring for the trigger condition and storing image data (e.g. bursts or video) only in response to the trigger condition being met, whilst discarding other image data.
  • in one capture mode, media capture is performed on a timed basis, with a capture schedule defining the trigger condition of Figure 2.
  • the user configures the duration of media capture for a capture session (this could be from a defined list of options e.g. 30 minutes, 1 hr, 2 hrs etc.), e.g. starting immediately or as a future time window. Additionally, the user configures the capture schedule and the type of media to be captured during the capture session.
  • the type of media to be captured during this time can include single images, bursts of images, video clips, or a combination of any of the above.
  • An image burst is a set of distinct images recorded at a defined frame rate / frame interval, e.g. using a burst mode provided by the camera.
  • the frame interval may be based on the camera frame rate, which may be defined by the manufacturer. For a camera supporting 30fps, a typical burst could mean capturing a photo every ten frames (corresponding to a frame rate of 3fps and a frame interval of one third of a second).
  • the frame interval between images may be configured by the user (e.g. by selecting from a set of burst rates supported by the camera).
  • for example, burst mode could be configured to record ten images at 0.2 second intervals when the trigger condition is detected.
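The burst timing examples above reduce to simple arithmetic; the following is an explanatory sketch, not code from the patent:

```python
def frame_interval_s(camera_fps, frames_per_shot):
    """Frame interval when one photo is taken every N camera frames,
    e.g. every 10th frame of a 30fps sensor gives a 3fps burst rate."""
    burst_rate = camera_fps / frames_per_shot
    return 1.0 / burst_rate

def burst_duration_s(image_count, interval_s):
    """Total span of a burst, from first frame to last."""
    return (image_count - 1) * interval_s

# 30fps camera, one photo every ten frames: interval of 1/3 second.
# Ten images at 0.2 s intervals span 1.8 s from first to last frame.
```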
  • alternatively, a video clip of a predetermined duration (e.g. 30 seconds) is recorded, where the duration may similarly be configured by the user.
  • an image burst is distinct from a video clip since an image burst typically has a lower frame rate (larger frame interval) and furthermore the images are encoded as distinct, separate images which can be individually stored, displayed, copied etc., using a suitable image codec (e.g. JPEG), whereas a video clip will typically be recorded using a suitable video codec (e.g. MPEG-based) in which individual frames are not necessarily recorded separately and distinctly (since video codecs typically use inter-frame compression).
  • typically, individual images of a burst are stored and transmitted as individual, separate image files, though a video format could alternatively be used to provide an encoding of an image burst.
  • Capturing image bursts instead of individual images may be advantageous to the user in that the likelihood of obtaining a good digital photograph of a scene is increased. Since media is captured automatically, without user intervention, a given photo may not be optimal (e.g. a person in the scene could be mid-blink with their eyes closed). An image burst allows a series of photos to be captured close together in time, providing the opportunity for a photo to be captured at just the right moment. The user can later select the best image from the burst and discard the others.
  • the capture schedule is defined as a capture interval, with capture performed repeatedly and periodically based on the defined capture interval.
  • the user may configure capture of an image burst or video clip every 10 minutes during the capture session.
  • expiry of the capture interval provides the trigger condition to trigger media capture (e.g. based on a timer).
  • after each capture, the timer is reset and capture stops until the next expiry of the specified time interval.
  • the capture interval is distinct from the frame interval at which individual images of an image burst are recorded - e.g. the user may configure capture of an image burst every 5 minutes (capture interval) with each image burst comprising 10 images at 2 second intervals (frame interval).
  • the frame interval between images of the burst and the total burst duration are typically small compared to the capture interval, allowing collection of a representative collection of images over an extended time period without the need to transmit and store large volumes of image data.
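The relationship between capture interval, frame interval and data volume can be illustrated with a small calculation (an explanatory sketch, not from the specification):

```python
def session_summary(session_s, capture_interval_s, images_per_burst,
                    burst_frame_interval_s):
    """Number of bursts/images in a session, and the fraction of each
    capture interval the camera actually spends capturing."""
    bursts = int(session_s // capture_interval_s)
    images = bursts * images_per_burst
    burst_span_s = (images_per_burst - 1) * burst_frame_interval_s
    duty_cycle = burst_span_s / capture_interval_s
    return bursts, images, duty_cycle

# A 2-hour session with a burst every 5 minutes, 10 images per burst at
# 2 s intervals: 24 bursts, 240 images; each 18 s burst occupies only
# 6% of its 300 s capture interval.
```

This is why periodic bursts give a representative record of an extended period without large volumes of image data.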
  • the media capture will continue without any intervention required by the user until the specified end time for the capture session.
  • the user may configure more complex capture schedules or patterns, for example involving multiple different media types.
  • the user can preferably configure the same or different schedules for each camera.
  • Configuration of the capture schedule (including capture intervals, frame intervals and frame count for burst mode, video duration for video clips, etc.) is performed via an application running on the user device 104 (Figure 1), e.g. a smartphone / tablet app.
  • the user device communicates the schedule and other configuration directly to the camera(s) 102.
  • configuration could occur via a remote web application (e.g. part of user interface 114 at media server 110).
  • the configuration may be performed by the user specifying the total length of the capture session (e.g. as a specific time window or duration) together with the required capture interval (e.g. every 5 minutes). Instead of specifying the capture interval, the user may configure a number of captures to be performed during the session, with the required capture interval being computed automatically (e.g. if the user specifies 10 burst captures over 2 hours, this equates to one burst being captured every 12 minutes).
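Deriving the capture interval from a requested number of captures is straightforward arithmetic; a minimal sketch:

```python
def capture_interval_from_count(session_minutes, num_captures):
    """Compute the periodic capture interval when the user specifies how
    many captures to perform over the session rather than an interval."""
    if num_captures < 1:
        raise ValueError("at least one capture required")
    return session_minutes / num_captures

# 10 burst captures over a 2-hour (120 minute) session:
# one burst every 12 minutes.
```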
  • media capture can be triggered by automatic detection of one or more features in the camera image. Capture may be triggered based on presence or absence of specific features.
  • the detected features are typically people or faces, though the same principle can be applied for other types of image features.
  • the trigger may comprise a specific person being present in the camera’s viewing frame (e.g. the “birthday girl”).
  • the trigger may comprise continued detection of a threshold number of individuals in the camera’s viewing frame for a predefined duration (e.g. at least x people in frame for at least y seconds), which could indicate a photo-worthy gathering of friends and family.
  • Known feature detection techniques (e.g. face detection) may be used, and detection may be performed at the camera or at a separate computing device (e.g. in LAN 106 or at media server 110) based on an image stream received from the camera.
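The “at least x people in frame for at least y seconds” trigger can be modelled as a small state machine over per-frame face counts (which a face detector, such as a Haar-cascade detector, would supply); this is an illustrative sketch, not the patented implementation:

```python
class GatheringTrigger:
    """Returns True once at least `min_faces` have been detected
    continuously for `dwell_s` seconds."""
    def __init__(self, min_faces, dwell_s):
        self.min_faces = min_faces
        self.dwell_s = dwell_s
        self._since = None  # time the threshold was first met, or None

    def update(self, face_count, now):
        """Feed one frame's face count; True when the trigger fires."""
        if face_count >= self.min_faces:
            if self._since is None:
                self._since = now   # gathering started; begin dwell timer
            return (now - self._since) >= self.dwell_s
        self._since = None          # gathering dispersed; reset the timer
        return False
```

For example, `GatheringTrigger(min_faces=3, dwell_s=5)` fires once three or more faces have been in frame for five consecutive seconds, and resets whenever the count drops below three.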
  • the capture parameters and triggers are configurable by the user using the user device application (e.g. to specify the specific individual or number of individuals to be detected).
  • the user additionally specifies the type of media to be recorded whenever the trigger condition is detected, and the start/end time and/or duration of the media capture session.
  • feature-triggered and schedule-based capture may be combined, for example to trigger capture based on a schedule in response to detection of particular features (e.g. trigger periodic image burst capture whilst a particular individual is detected using face detection).
  • A variety of example scenarios are possible in this capture mode.
  • the media captured during a capture session is uploaded to a cloud platform (in the form of media server 110 in the Figure 1 example) where it can be processed and made available to the user.
  • the media server 110 processes the media recorded for a session to generate a single compressed media playback file that stitches together all individual photos, image bursts and video clips into one video file. This is then made available to the user to view (e.g. via a web application as part of user interface 114). The user also has the option to download the full video file or access individual photo/video files to save/share certain moments from the media capture session.
  • in step 302, the server receives captured media from a camera device. This may, for example, be an individual image, an image burst consisting of a sequence of individual images, or a video clip.
  • the server stores the media in the media database (e.g. media DB 116 in Figure 1).
  • the server determines whether the capture session is complete. If not, the process continues at step 302 with the server waiting to receive additional media. Once the capture session is complete, the server processes the received media to create a single combined video presentation in step 308. The combined video presentation and the individual media files are then made available for viewing and/or download to the user in step 310 and/or may be automatically transmitted to a user device.
  • Whilst in this example the combined media is created as a single video file (e.g. using a standard video format such as MP4 or AVI), the media could be combined in other ways, e.g. as a segmented media file comprising multiple images/video clips, as an animation (e.g. GIF, Flash), as a slideshow / presentation format (e.g. PowerPoint), or as a web page embedding multiple media items.
  • only a selected subset of the media may be included (e.g. based on user configuration and/or automated image analysis, e.g. to exclude low quality media). This could include selected bursts and/or video clips in full and/or individual selected images taken from some or all of the bursts.
  • the end product can thus provide a complete record of the recorded event for the user in an easily viewable form, based on a combination of video and still image segments.
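One way a media combiner might stitch stills and clips into a single video is via an ffmpeg concat-demuxer playlist; the sketch below only generates the playlist text, and assumes ffmpeg is available on the server (an implementation choice not specified by the patent):

```python
def build_concat_playlist(media_items, still_duration_s=2.0):
    """Build an ffmpeg concat-demuxer playlist mixing still images and
    video clips in capture order. Stills are shown for a fixed duration;
    clips play at their native length.
    `media_items` is a list of (path, kind), kind 'image' or 'video'."""
    lines = []
    for path, kind in media_items:
        lines.append(f"file '{path}'")
        if kind == "image":
            lines.append(f"duration {still_duration_s}")
    return "\n".join(lines) + "\n"

# The server could then run something like:
#   ffmpeg -f concat -safe 0 -i playlist.txt session.mp4
```

Selection logic (e.g. excluding low-quality media, or picking the best image of each burst) would run before the playlist is built.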
  • Figure 4 illustrates the camera 102 and media server 110 in more detail.
  • the camera device 102 includes one or more processors 402 together with volatile / random access memory 404 for storing temporary data and software code being executed.
  • the volatile memory may buffer recorded image data and store configuration parameters, such as capture schedule/trigger information, burst length etc.
  • Persistent storage 406 persistently stores control software 412 for controlling operation of the camera.
  • a network interface 410 (e.g. a Wi-Fi transceiver) is provided for communication with other system components, including user device 104 and media server 110, over one or more networks (e.g. local or wide area networks, including the Internet).
  • the camera also includes an image sensor 408 for acquiring image data, and may include other conventional hardware and software components as known to those skilled in the art.
  • the media server 110 includes one or more processors 422 together with volatile / random access memory 424 for storing temporary data and software code being executed.
  • Persistent storage 426 (e.g. in the form of hard disk and/or optical storage media) persistently stores software for execution on the server, including camera interface module 112, user interface module 114 (e.g. in the form of a web application server), and media combiner module 432 for generating combined media files from multiple images, image bursts and video clips.
  • Persistent storage 426 may additionally store the media database 116 including source image data recorded by the camera(s) and generated combined media files.
  • a network interface 430 is provided for communication with the other system components.
  • the server may include other conventional hardware and software components as known to those skilled in the art (e.g. a server operating system).
  • User device 104 is preferably a conventional user device such as a smartphone, tablet or personal computer and comprises conventional hardware/software components as known to those skilled in the art along with an application 440 for configuring media capture by the camera and viewing or downloading captured media from the media server.
  • media database 116 could be stored at a separate database server.
  • server 110 may in practice be implemented by multiple separate server devices (e.g. by a server cluster). Some functions of the server could be performed at the user device or camera or vice versa.


Abstract

A method of capturing image data using a camera system is disclosed, involving repeatedly performing an image capture process. The image capture process is repeated periodically at a predetermined capture interval. At each capture interval, a sequence of images is acquired by the camera system (e.g. as an image burst or video clip), and the sequence of images is output from the camera system for storage at a remote device.

Description

Automated media capture system
The present invention relates to systems and methods for automating media capture using camera devices.
Conventional cameras require direct user interaction to trigger media capture. Some commercially available cameras provide limited timer delay functions, allowing the user to set a delay of, say, 10 seconds after pressing the shutter before a picture is taken, to allow time for the scene to be staged as needed, but this still requires direct control by the user over the capture process. Furthermore, camera users often wish to record a variety of images and video over the course of events such as family gatherings and parties, or capture ‘big moments’ such as a baby’s first steps. At such events, the user operates the camera manually or may set up the camera on a fixed timer for individual photo captures, and in this way a collection of individual photos or videos of the event can be acquired. However, this requires the user to be constantly attentive to the camera equipment and to make decisions on what media to capture and when, placing a significant burden on the user.
The present invention seeks to alleviate these problems and provide improved approaches for automating camera operation.
Accordingly, in a first aspect of the invention, there is provided a method of capturing image data using a camera system, comprising: repeatedly performing an image capture process, wherein the image capture process is repeated periodically by the camera system at a predetermined capture interval and comprises, at each capture interval: acquiring a sequence of images by the camera system; and outputting the sequence of images from the camera system.
The sequence of images preferably comprises an image burst including a plurality of distinct images. A predefined number of images are preferably acquired over the duration of the burst, and the images of the burst are preferably separated by a constant frame interval. Each image sequence or burst is preferably separated by the constant capture interval. The claimed approach thus enables a series of temporally separated image bursts to be acquired automatically. The sequence of images may be acquired using a burst mode of the camera.
Alternatively, the sequence of images may be in the form of a video clip. The method may comprise performing the image capture process in response to expiry of a timer configured to time the capture interval (the timer may then be reset to time a next capture interval before a next capture is triggered).
The sequence of images preferably extends over a time duration (e.g. burst length) that is shorter than the capture interval, preferably by a factor of at least 10, optionally by a factor of at least 100. In this way a series of short, widely spaced bursts can be used to obtain a representative set of image data covering a time period or event without creating excessive data volumes.
The sequence of images is preferably recorded at a predetermined frame rate (e.g. a burst rate for burst images or video frame rate for video clips), or with a predetermined frame interval. The capture interval is preferably greater than the frame interval (as defined by the predetermined frame rate), preferably by at least a factor of 100, more preferably by at least a factor of 1000.
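To make the preferred ratios concrete, the burst timing can be sketched with simple arithmetic (a minimal illustration; the figures of 10 images, 3 fps and a 5-minute capture interval are example values, not limits):

```python
def burst_duration(num_images: int, frame_rate_fps: float) -> float:
    """Duration of an image burst: n images span (n - 1) frame intervals."""
    return (num_images - 1) / frame_rate_fps

# Example: a 10-image burst at 3 fps inside a 5-minute capture interval.
duration = burst_duration(10, 3.0)           # 3.0 seconds of burst
capture_interval = 5 * 60                    # 300-second capture interval
assert capture_interval / duration >= 100    # meets the "factor of at least 100" preference
```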
The method preferably comprises not outputting image data from the camera system outside the periodic image capture process or discarding image data output from the camera system outside the periodic image capture process (e.g. by storing only the results of the image capture process for subsequent access by the user or processing).
The method preferably comprises receiving configuration data from a user device specifying the periodic capture interval (this may be specified directly as a time interval or indirectly as a number of captures to be performed over a given period of time).
The method preferably comprises performing the repeated capture process in response to detecting one or more predetermined features of an image acquired by the camera system. This may involve initiating the repeated capture process and/or continuing the repeated capture process in dependence on detection of the feature(s). Detection of a feature of an image may include detection of the presence or absence of an image feature within the image. The one or more features may comprise at least one of: presence or absence of a predetermined individual in the image; and a number of detected individuals in the image meeting a criterion, preferably by meeting or exceeding a threshold number. Individuals (or number of individuals present) may be identified through face detection.
Preferably, the method comprises repeatedly performing the image capture process during a predetermined, preferably user-configured, capture time period. Thus, the capture process is preferably performed only after the start of the capture time period and is continued only until the end of the capture time period.
In a further aspect of the invention (which may be combined with the above aspect), the invention provides a method of capturing image data using a camera system, the method comprising: receiving configuration data from a user device specifying a capture time period; during the capture time period, monitoring for the occurrence of a predetermined trigger condition; and in response to the trigger condition, performing an image capture process, the image capture process comprising: recording a sequence of images by the camera system; and outputting the sequence of images from the camera system.
The, or each, sequence of images may comprise one of: an image burst including a plurality of distinct images; and a video clip. The method may comprise not outputting image data from the camera system outside the triggered image capture process or discarding image data output from the camera system outside the triggered image capture process.
Preferably, the trigger condition comprises detection of one or more predetermined features of the camera image, preferably wherein detection of a feature comprises detecting the presence or absence of a predetermined image feature within an image. The trigger condition may comprise detecting the presence or absence of a predetermined individual in the image. Alternatively or additionally, the trigger condition may comprise detection of a number of individuals in the camera image which meets a predetermined criterion, preferably by meeting or exceeding a predetermined threshold number (or alternatively falling below a threshold number). Face detection may be used to detect presence of individuals.
The trigger condition may alternatively or additionally be based on a capture schedule, and/or the trigger condition may comprise a periodic capture interval, preferably as set out in relation to the first aspect of the invention. Multiple triggers of the same or different type (e.g. face detection and time-based triggers) may be used in combination to determine when to trigger image capture.
The following optional features apply to either of the aspects of the invention defined above.
The method is preferably performed automatically under the control of at least one controller or processor of the camera system. For example, the method is preferably implemented by software running on a processor of the camera system. The camera system is typically in the form of a digital camera but may include other local or remote components, with the controller/processor located within the camera or another local or remote component.
The capture time period is preferably defined by one or more of: a start time, a duration, and an end time (e.g. a specific future time window may be defined, or simply a time duration starting immediately).
The sequence of images is preferably acquired based on one or more of: a predetermined, preferably user-configured, sequence duration; a predetermined preferably user-configured, number of images of the sequence; and a predetermined, preferably user-configured, sequence frame rate (e.g. burst rate or video frame rate).
The outputting step preferably comprises storing the sequence of images (e.g. at the camera system or at a remote device).
Preferably, the camera system is connected to a communications network, and outputting the sequence of images comprises transmitting the sequence of images to a remote location via the network for storage at the remote location. The remote location optionally comprises an Internet-connected server or cloud service.
Preferably the method comprises, after the end of the capture time period or after a last image sequence has been captured, combining a plurality of images of the captured image sequences, or a plurality of the captured image sequences, into a single media file. For example, selected image sequences may be incorporated into the file, or selected individual images from within an image sequence, or a combination, preferably in order of capture. In one example, all image sequences captured during a capture session (e.g. defined by the capture time period) are combined in time order to form the single media file.
The method may comprise transmitting the image sequences to a server device over a communications network from the camera system; performing the combining at the server device; and outputting the single media file to a user device over a communications network. The single media file may comprise a video file or a multi image file, for example a slide show. The method may further comprise providing access to a user device (e.g. by way of a user device application or web service) for viewing or downloading the combined media file and/or one or more selected captured image sequences and/or one or more constituent images of the captured image sequences (from the server).
The method preferably comprises receiving configuration data at the camera system from a user device via a communications network, the configuration data specifying one or more of: a capture time period, a periodic capture interval, a sequence duration, a sequence image count, a sequence frame rate, and one or more trigger conditions for triggering the image capture process; and wherein the method preferably further comprises performing the image capture process in accordance with the received configuration data.
The camera system preferably includes an image sensor, a network interface, and a controller or processor for controlling transmission of image data from the camera system to a communications network connected to the network interface (e.g. for transmission to a media server).
In a further aspect, the invention provides a camera system comprising means for performing any method as defined above or described below.
The invention further provides a tangible computer readable medium (or combination of multiple such media) comprising software code adapted, when executed by one or more processors, to perform any method as defined above or described below.
Any feature in one aspect of the invention may be applied to other aspects of the invention, in any appropriate combination. In particular, method aspects may be applied to apparatus and computer program aspects, and vice versa. Furthermore, features implemented in hardware may generally be implemented in software, and vice versa. Any reference to software and hardware features herein should be construed accordingly.
Preferred features of the present invention will now be described, purely by way of example, with reference to the accompanying drawings, in which:-
Figure 1 illustrates a system for automated media capture;
Figure 2 illustrates an automated media capture process;
Figure 3 illustrates processing performed by a media server; and
Figure 4 illustrates the hardware/software architecture of certain system components.
Overview
Embodiments of the invention provide a system enabling automatic media capture by one or more cameras on behalf of the user without requiring the user to trigger each individual image/video capture or hold the camera during media capture. During media capture the camera can be placed or fixed on a shelf/wall/other mount and the media is captured automatically, without user intervention, based on a predefined capture schedule or other trigger conditions, allowing a set of representative images/videos of an event or location to be recorded automatically.
A system in accordance with embodiments of the invention is illustrated in Figure 1.
The system comprises one or more cameras 102 positioned at a user premises (e.g. a user’s home). The cameras are connected (typically wirelessly) to an in-home local area network (LAN) 106. Cameras may be conventional digital cameras with communications capabilities. However, in one embodiment, the cameras are designed specifically for communication with other networked devices and control via the network, and thus may include image and communications functionality, but may provide limited or no user controls on the device itself (e.g. no display screen for viewing images) and limited or no permanent on-device storage for storing image data. Instead, the cameras may be moveably placed or fixedly installed at required locations in the home or other premises to record images for transmission to other devices for storage, manipulation and viewing. A user device 104 (e.g. computer terminal, smartphone or the like) is also connected to the LAN. User devices and cameras can communicate via the LAN 106 with each other, and with a wide area network external to the home, typically including the Internet 108, through which they have access to a media server 110. The media server provides a camera interface 112 for receiving image data from one or more cameras 102 (e.g. using a suitable API) and a user interface 114 for allowing interaction with the user device 104, for example to view captured media. The media server includes, or is connected to, a media database 116 for storing image and video data received from cameras and processed media data generated by the media server.
User device 104 may communicate directly with camera(s) 102, for example to configure capture modes and receive image data directly.
The camera system provides automated media capture modes in which images or video are recorded in response to specific predefined triggers. In one mode, the triggers comprise a time schedule (e.g. periodic capture at predefined time intervals), whilst in another mode, the triggers are based on feature detection, as will be described in more detail later.
The operation of the automated media capture mode is illustrated in Figure 2. In step 202 the user configures trigger options (for example setting the time interval between recording bursts) and recording options (e.g. type of media to be captured). In step 204 the user configures a time window, for example as a specified duration starting from the time of configuration (i.e. from “now”) or as a future time window (e.g. “8pm-11pm on Friday 1st December”).
At the start of the time window (or immediately, if the user configures an immediate start to recording), the camera begins to monitor (206) for the configured trigger condition(s). If during monitoring the end of the time window is reached (decision 208), then the automatic capture mode ends (step 210). If prior to the end of the time window the configured trigger is detected (decision 212) then media capture is initiated; otherwise, the process continues to monitor for the trigger at step 206.
When media capture is activated, an image burst or video clip is recorded in step 214 (in some cases individual images may also be recorded instead of a burst or video clip). In step 216, the recorded images or the video clip are uploaded to the media server 110, and the camera continues to monitor for the trigger condition at step 206, until the end of the defined time window. Processing at the media server is described further below.
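The monitoring loop of Figure 2 might be sketched as follows. This is a minimal illustration only; `trigger_met`, `capture_burst` and `upload` are hypothetical callables standing in for decision 212 and steps 214 and 216:

```python
import time

def run_capture_session(end_time, trigger_met, capture_burst, upload):
    """Monitor for the configured trigger until the end of the time window."""
    while time.monotonic() < end_time:   # decision 208: end of window reached?
        if trigger_met():                # decision 212: trigger detected?
            media = capture_burst()      # step 214: record burst or video clip
            upload(media)                # step 216: upload to the media server
        time.sleep(0.05)                 # brief pause before re-checking
    # end of time window: automatic capture mode ends (step 210)
```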
Note that while Figure 2 illustrates uploading of image data (burst images or video clip) immediately upon capture, media may alternatively be buffered at the camera and uploaded at a later point (e.g. at the end of the capture session).
Furthermore, in the above example, the trigger detection and image capture is described as occurring at the camera. Alternatively, image data could be streamed continuously to the server, with the server monitoring for the trigger condition, and storing image data (e.g. bursts or video) only in response to the trigger condition being met, whilst discarding other image data.
Timed media capture
In one mode, media capture is performed on a timed basis, with a capture schedule defining the trigger condition of Figure 2.
In this approach, the user configures the duration of media capture for a capture session (this could be from a defined list of options, e.g. 30 minutes, 1 hr, 2 hrs etc.), e.g. starting immediately or as a future time window. Additionally, the user configures the capture schedule and the type of media to be captured during the capture session. The type of media to be captured during this time can include single images, bursts of images, video clips, or a combination of any of the above.
An image burst is a set of distinct images recorded at a defined frame rate / frame interval, e.g. using a burst mode provided by the camera. The frame interval may be based on the camera frame rate which may be defined by the manufacturer. For a camera supporting 30fps a typical burst could mean capturing a photo every ten frames (corresponding to a frame rate of 3fps and a frame interval of one third of a second). In some embodiments, rather than being fixed, the frame interval between images may be configured by the user (e.g. by selecting from a set of burst rates supported by the camera).
Additionally, the number of images recorded in a single burst may also be configurable by the user (or a default number can be used). As an example, burst mode could be configured to record ten images at 0.2 second intervals when the trigger condition is detected. For video, a video clip of a predetermined duration (e.g. 30 seconds) is recorded, where the duration may similarly be configured by the user.
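A burst of this kind — a fixed number of frames separated by a fixed frame interval — might be sketched as below (an illustrative sketch only; `sensor_read` is a hypothetical stand-in for reading one frame from the image sensor):

```python
import time

def capture_burst(sensor_read, num_images=10, frame_interval=0.2):
    """Record an image burst: num_images frames, frame_interval seconds apart."""
    frames = []
    for i in range(num_images):
        frames.append(sensor_read())
        if i < num_images - 1:           # no wait needed after the final frame
            time.sleep(frame_interval)
    return frames
```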
Note that an image burst is distinct from a video clip since an image burst typically has a lower frame rate (larger frame interval) and furthermore the images are encoded as distinct, separate images which can be individually stored, displayed, copied etc., using a suitable image codec (e.g. JPEG), whereas a video clip will typically be recorded using a suitable video codec (e.g. MPEG-based) in which individual frames are not necessarily recorded separately and distinctly (since video codecs typically use inter-frame compression). Thus individual images of a burst may be stored and transmitted as individual, separate, image files. However, in some cases, a video format could be used to provide an encoding of an image burst.
Use of image bursts instead of individual image capture may be advantageous to the user in that the likelihood of obtaining a good digital photograph of a scene is increased. With the described approach, media is being captured automatically, without user intervention, and this can result in a photo being captured which is not optimal - e.g. a person in the scene could be mid-blink with their eyes closed. An image burst allows a series of photos to be captured close together in time, providing the opportunity for a photo to be captured at just the right moment. The user can later select the best image from the burst and discard the others.
In one mode, the capture schedule is defined as a capture interval, with capture performed repeatedly and periodically based on the defined capture interval. For example, the user may configure capture of an image burst or video clip every 10 minutes during the capture session. Thus in this approach, expiry of the capture interval provides the trigger condition to trigger media capture (e.g. based on a timer). Once capture has been performed, the timer is reset and capture stops until the next expiry of the specified time interval. Note that the capture interval is distinct from the frame interval at which individual images of an image burst are recorded - e.g. the user may configure capture of an image burst every 5 minutes (capture interval) with each image burst comprising 10 images at 2 second intervals (frame interval).
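The distinction between the capture interval and the frame interval can be illustrated by computing the timestamps implied by the example above (5-minute capture interval, 10-image bursts at 2-second frame intervals; the 30-minute session length is an assumed example value):

```python
def burst_start_times(session_s: int, capture_interval_s: int) -> list:
    """Start time (in seconds) of each burst within the capture session."""
    return list(range(0, session_s, capture_interval_s))

def frame_times(burst_start: int, num_frames: int, frame_interval_s: int) -> list:
    """Timestamps of the individual frames within one burst."""
    return [burst_start + i * frame_interval_s for i in range(num_frames)]

starts = burst_start_times(30 * 60, 5 * 60)   # 6 bursts over a 30-minute session
frames = frame_times(starts[1], 10, 2)        # second burst: frames at 300, 302, ..., 318 s
```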
When performing periodic capture of image bursts (i.e. an image burst recorded for each capture interval), the frame interval between images of the burst and the total burst duration are typically small compared to the capture interval, allowing collection of a representative collection of images over an extended time period without the need to transmit and store large volumes of image data.
Once set up, the media capture will continue without any intervention required by the user until the specified end time for the capture session.
In some embodiments, the user may configure more complex capture schedules or patterns, for example involving multiple different media types.
For example, a user could define capture schedules such as:
• a burst of 5 photos every 1 minute;
• a burst of 3 photos every 2 minutes AND a 30 second video clip every 10 minutes;
• 3 single photos, 30 seconds apart, followed by a video of 20 seconds; followed by a burst of 5 photos; to be repeated (possibly at defined intervals) for the duration of the media capture session
• single photos, 1 minute apart, for 5 minutes, followed by 3 second videos, 30 seconds apart for 5 minutes; to be repeated (possibly at defined intervals) for the duration of the media capture session
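Such composite schedules could, for example, be represented as simple data and checked against the elapsed session time (a hypothetical representation for illustration only — no particular encoding is prescribed):

```python
# Each entry names a media type, its parameters, and the interval at which it repeats.
schedule = [
    {"media": "burst", "images": 3, "every_s": 120},        # burst of 3 photos every 2 minutes
    {"media": "video", "duration_s": 30, "every_s": 600},   # 30-second clip every 10 minutes
]

def due_entries(schedule, elapsed_s):
    """Schedule entries whose capture interval expires at elapsed_s seconds."""
    return [e for e in schedule if elapsed_s > 0 and elapsed_s % e["every_s"] == 0]
```

At the 10-minute mark both entries fall due, so a burst and a video clip would be captured together.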
Where there are multiple cameras, the user can preferably configure the same or different schedules for each camera.
Configuration of the capture schedule, including capture intervals, frame intervals and frame count for burst mode, video duration for video clips etc. is performed via an application running on the user device 104 (Figure 1), e.g. a smartphone / tablet app. The user device communicates the schedule and other configuration directly to the camera(s) 102. Alternatively, configuration could occur via a remote web application (e.g. part of user interface 114 at media server 110).
The configuration may be performed by the user specifying the total length of the capture session (e.g. as a specific time window or duration) together with the required capture interval (e.g. every 5 minutes). Instead of specifying the capture interval, the user may configure a number of captures to be performed during the session, with the required capture interval being computed automatically (e.g. if the user specifies 10 burst captures over 2 hours, this equates to one burst being captured every 12 minutes).
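The automatic computation mentioned above is straightforward division, sketched here for the example of 10 burst captures over a 2-hour session:

```python
def capture_interval_minutes(session_minutes: float, num_captures: int) -> float:
    """Derive the capture interval from a requested number of captures per session."""
    return session_minutes / num_captures

interval = capture_interval_minutes(120, 10)   # 10 bursts over 2 hours: one every 12 minutes
```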
Other triggers
Alternatively, media capture can be triggered by automatic detection of one or more features in the camera image. Capture may be triggered based on presence or absence of specific features. The detected features are typically people or faces, though the same principle can be applied for other types of image features.
In a specific example of this, the trigger may comprise a specific person being present in the camera’s viewing frame (e.g. the “birthday girl”). In another example, the trigger may comprise continued detection of a threshold number of individuals in the camera’s viewing frame for a predefined duration (e.g. at least x people in frame for at least y seconds), which could indicate a photo-worthy gathering of friends and family. Known feature detection (e.g. face detection) algorithms may be implemented at the camera to enable detection of such features (alternatively, feature detection could be performed by a separate computing device, e.g. in LAN 106 or at media server 110, based on an image stream received from the camera).
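The “at least x people in frame for at least y seconds” condition could be tracked as a small piece of state around whatever face detector is available (a sketch under assumptions: `count_faces` is a hypothetical callable returning the number of faces detected in the current frame, and the clock is injectable for testing):

```python
import time

def people_trigger(count_faces, threshold, hold_s, now=time.monotonic):
    """Fires once >= threshold faces have been seen continuously for hold_s seconds."""
    state = {"since": None}   # time at which the threshold was first met, or None
    def check():
        if count_faces() >= threshold:
            if state["since"] is None:
                state["since"] = now()
            return now() - state["since"] >= hold_s
        state["since"] = None  # condition broken: reset the hold timer
        return False
    return check
```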
As for the timed capture mode described above, the capture parameters and triggers are configurable by the user using the user device application (e.g. to specify the specific individual or number of individuals to be detected). As before, the user additionally specifies the type of media to be recorded whenever the trigger condition is detected, and the start/end time and/or duration of the media capture session.
Furthermore, feature-triggered and schedule-based capture may be combined, for example to trigger capture based on a schedule in response to detection of particular features (e.g. trigger periodic image burst capture whilst a particular individual is detected using face detection).
Example scenarios in this capture mode include:
  • Whenever a certain face is seen (e.g. birthday girl), capture a 10 second video + 5 second pre-capture; whenever the specified face has been absent for more than 2 minutes, start capturing single photos every 15 seconds until the face is seen again or the end time is reached
  • Whenever there are more than x people in the camera’s viewing frame, capture a burst of 5 photos every 30 seconds; when the number of people drops, start capturing single photos every 1 minute until the number of people reaches x again or the end time is reached
Server processing
As described above, the media captured during a capture session is uploaded to a cloud platform (in the form of media server 110 in the Figure 1 example) where it can be processed and made available to the user.
In one embodiment, the media server 110 processes the media recorded for a session to generate a single compressed media playback file that stitches together all individual photos, image bursts and video clips into one video file. This is then made available to the user to view (e.g. via a web application as part of user interface 114). The user also has the option to download the full video file or access individual photo/video files to save/share certain moments from the media capture session.
The processing at the media server is illustrated in Figure 3. In step 302, the server receives captured media from a camera device. This may, for example, be an individual image, an image burst consisting of a sequence of individual images, or a video clip. In step 304, the server stores the media in the media database (e.g. media DB 116 in Figure 1). In step 306, the server determines whether the capture session is complete. If not, the process continues at step 302 with the server waiting to receive additional media. Once the capture session is complete, the server processes the received media to create a single combined video presentation in step 308. The combined video presentation and the individual media files are then made available for viewing and/or download to the user in step 310 and/or may be automatically transmitted to a user device.
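The server-side flow of Figure 3 reduces to a small loop (a minimal sketch; `receive_media`, `session_complete`, `store` and `combine` are hypothetical stand-ins for the camera interface, session tracking, media database and media combiner of Figure 1):

```python
def process_session(receive_media, session_complete, store, combine):
    """Store incoming media (steps 302-304) until the session ends (306),
    then build the single combined presentation (step 308)."""
    received = []
    while not session_complete():     # step 306: capture session complete?
        item = receive_media()        # step 302: image, burst or video clip
        store(item)                   # step 304: save to the media database
        received.append(item)
    return combine(received)          # step 308: single combined presentation
```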
While in the above example the combined media is created as a single video file (e.g. using a standard video format such as MP4 or AVI), the media could be combined in other ways, e.g. as a segmented media file comprising multiple images/video clips, as an animation (e.g. GIF, Flash), as a slideshow / presentation format (e.g. Powerpoint), as a web page embedding multiple media items etc. Furthermore, instead of combining all received media for the session, only a selected subset of the media may be included (e.g. based on user configuration and/or automated image analysis, e.g. to exclude low quality media). This could include selected bursts and/or video clips in full and/or individual selected images taken from some or all of the bursts.
The end product can thus provide a complete record of the recorded event for the user in an easily viewable form, based on a combination of video and still image segments.
System architecture
Figure 4 illustrates the camera 102 and media server 110 in more detail.
The camera device 102 includes one or more processors 402 together with volatile / random access memory 404 for storing temporary data and software code being executed. For example, the volatile memory may buffer recorded image data and store configuration parameters, such as capture schedule/trigger information, burst length etc.
Persistent storage 406 (e.g. in the form of ROM/FLASH memory or the like) persistently stores control software 412 for controlling operation of the camera. A network interface 410 (e.g. a Wi-Fi transceiver) is provided for communication with other system components, including user device 104 and media server 110, over one or more networks (e.g. Local or Wide Area Networks, including the Internet).
The camera also includes an image sensor 408 for acquiring image data, and may include other conventional hardware and software components as known to those skilled in the art.
The media server 110 includes one or more processors 422 together with volatile / random access memory 424 for storing temporary data and software code being executed. Persistent storage 426 (e.g. in the form of hard disk and/or optical storage media) persistently stores software for execution on the server, including camera interface module 112, user interface module 114 (e.g. in the form of a web application server), and media combiner module 432 for generating combined media files from multiple images, image bursts and video clips. Persistent storage 426 may additionally store the media database 116 including source image data recorded by the camera(s) and generated combined media files. A network interface 430 is provided for communication with the other system components. The server may include other conventional hardware and software components as known to those skilled in the art (e.g. a server operating system).
User device 104 is preferably a conventional user device such as a smartphone, tablet or personal computer and comprises conventional hardware/software components as known to those skilled in the art along with an application 440 for configuring media capture by the camera and viewing or downloading captured media from the media server.
While a specific architecture is shown by way of example, any appropriate hardware/software architecture may be employed.
Furthermore, functional components indicated as separate may be combined and vice versa. For example, media database 116 could be stored at a separate database server. Furthermore, the functions of server 110 may in practice be implemented by multiple separate server devices (e.g. by a server cluster). Some functions of the server could be performed at the user device or camera or vice versa.
It will be understood that the present invention has been described above purely by way of example, and modification of detail can be made within the scope of the invention.

Claims

1. A method of capturing image data using a camera system, comprising:
repeatedly performing an image capture process, wherein the image capture process is repeated periodically by the camera system at a predetermined capture interval and comprises, at each capture interval:
acquiring a sequence of images by the camera system; and
outputting the sequence of images from the camera system.
2. A method according to claim 1, wherein the sequence of images comprises an image burst including a plurality of distinct images.
3. A method according to claim 1, wherein the sequence of images is in the form of a video clip.
4. A method according to any of the preceding claims, comprising performing the image capture process in response to expiry of a timer configured to time the capture interval.
5. A method according to any of the preceding claims, wherein the sequence of images extends over a time duration that is shorter than the capture interval, preferably by a factor of at least 10, optionally by a factor of at least 100.
6. A method according to any of the preceding claims, wherein the sequence of images is recorded at a predetermined frame rate.
7. A method according to claim 6, wherein the capture interval is greater than a frame interval defined by the predetermined frame rate, preferably by at least a factor of 100, more preferably by at least a factor of 1000.
8. A method according to any of the preceding claims, comprising not outputting image data from the camera system outside the periodic image capture process or discarding image data output from the camera system outside the periodic image capture process.
9. A method according to any of the preceding claims, comprising receiving configuration data from a user device specifying the periodic capture interval.
10. A method according to any of the preceding claims, comprising performing the repeated capture process in response to detecting one or more predetermined features of an image acquired by the camera system.
11. A method according to claim 10, wherein the one or more features comprise at least one of:
presence or absence of a predetermined individual in the image; and
a number of detected individuals in the image meeting a criterion, preferably by meeting or exceeding a threshold number.
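Purely as an illustrative sketch (not the claimed implementation), the trigger features of claims 10 and 11 could be evaluated as follows. The recognition step producing `detected_ids` is outside the sketch; the function name and parameters are assumptions.

```python
def should_trigger(detected_ids, known_id=None, require_present=True,
                   min_count=1):
    # Evaluate the image-analysis features of claim 11 for one frame.
    # `detected_ids` is the set of individual identifiers recognised in
    # the image by some upstream detector (hypothetical input).
    if known_id is not None:
        # Presence or absence of a predetermined individual.
        present = known_id in detected_ids
        if present != require_present:
            return False
    # Number of detected individuals meeting or exceeding a threshold.
    return len(detected_ids) >= min_count
```

The two features compose: a capture can be triggered, for example, only when a known individual is present and at least two people are in view.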
12. A method according to any of the preceding claims, comprising repeatedly performing the image capture process during a predetermined, preferably user-configured, capture time period.
13. A method of capturing image data using a camera system, comprising:
receiving configuration data from a user device specifying a capture time period;
during the capture time period, monitoring for the occurrence of a predetermined trigger condition; and
in response to the trigger condition, performing an image capture process, the image capture process comprising:
recording a sequence of images by the camera system; and
outputting the sequence of images from the camera system.
14. A method according to claim 13, wherein each sequence of images comprises one of: an image burst including a plurality of distinct images; and a video clip.
15. A method according to claim 13 or 14, comprising not outputting image data from the camera system outside the triggered image capture process or discarding image data output from the camera system outside the triggered image capture process.
16. A method according to any of claims 13 to 15, wherein the trigger condition comprises detection of one or more predetermined features of the camera image, preferably wherein detection of a feature comprises detecting the presence or absence of a predetermined image feature within the image.
17. A method according to any of claims 13 to 16, wherein the trigger condition comprises detecting the presence or absence of a predetermined individual in the image.
18. A method according to any of claims 13 to 17, wherein the trigger condition comprises detection of a number of individuals in the camera image which meets a predetermined criterion, preferably by meeting or exceeding a predetermined threshold number.
19. A method according to any of claims 13 to 18, wherein the trigger condition is based on a capture schedule, and/or wherein the trigger condition comprises a periodic capture interval, preferably as set out in any of claims 1 to 12.
20. A method according to any of claims 12 to 19, wherein the capture time period is defined by one or more of: a start time, a duration, and an end time.
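As a minimal sketch of claim 20 (illustrative only), a complete capture time period can be derived from any two of a start time, a duration, and an end time; the function name is an assumption.

```python
from datetime import datetime, timedelta

def resolve_window(start=None, duration=None, end=None):
    # Complete the capture time period of claim 20 from any two of:
    # start time, duration, end time. Returns a (start, end) pair.
    if start is not None and end is not None:
        return start, end
    if start is not None and duration is not None:
        return start, start + duration
    if end is not None and duration is not None:
        return end - duration, end
    raise ValueError("need at least two of: start, duration, end")
```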
21. A method according to any of the preceding claims, wherein the sequence of images is acquired based on one or more of: a predetermined, preferably user-configured, sequence duration; a predetermined, preferably user-configured, number of images of the sequence; and a predetermined, preferably user-configured, sequence frame rate.
22. A method according to any of the preceding claims, wherein the outputting step comprises storing the sequence of images.
23. A method according to any of the preceding claims, wherein the camera system is connected to a communications network, and wherein outputting the sequence of images comprises transmitting the sequence of images to a remote location via the network for storage at the remote location, optionally wherein the remote location comprises an Internet-connected server or cloud service.
24. A method according to any of the preceding claims, comprising, preferably after the end of the capture time period or after a last image sequence has been captured, combining a plurality of images of the captured image sequences, or a plurality of the captured image sequences, into a single media file.
25. A method according to claim 24, comprising transmitting the image sequences to a server device over a communications network from the camera system; performing the combining at the server device; and outputting the single media file to a user device over a communications network.
26. A method according to claim 24 or 25, wherein the single media file comprises a video file or a multi-image file, for example a slide show.
27. A method according to any of claims 24 to 26, comprising providing access to a user device for viewing or downloading the combined media file and/or one or more selected captured image sequences and/or one or more constituent images of the captured image sequences.
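The combining step of claims 24 to 26 could be sketched as follows (illustrative only; the encoding of the resulting video or slide-show file is outside the sketch). Each captured sequence is assumed to arrive as a (timestamp, frames) pair, since claim 25 has the sequences transmitted to a server, where they may arrive out of order.

```python
def combine_sequences(sequences):
    # Combine captured image sequences into one ordered frame list
    # (claim 24), ready to be encoded as a single video or slide show.
    # Sorting by capture timestamp keeps the combined media file in
    # chronological order even if sequences were uploaded out of order.
    combined = []
    for _, frames in sorted(sequences, key=lambda item: item[0]):
        combined.extend(frames)
    return combined
```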
28. A method according to any of the preceding claims, comprising receiving configuration data at the camera system from a user device via a communications network, the configuration data specifying one or more of: a capture time period, a periodic capture interval, a sequence duration, a sequence image count, a sequence frame rate, and one or more trigger conditions for triggering the image capture process; and wherein the method preferably further comprises performing the image capture process in accordance with the received configuration data.
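One possible shape for the configuration data of claim 28, sketched below with JSON as the (assumed) transport format; the field names and default values are illustrative only and are not defined by the claims.

```python
import json

def parse_config(payload):
    # Parse configuration data received at the camera system from a
    # user device (claim 28). Field names are illustrative assumptions.
    cfg = json.loads(payload)
    interval = float(cfg.get("capture_interval_s", 300))
    if interval <= 0:
        raise ValueError("capture interval must be positive")
    return {
        "capture_interval_s": interval,
        "sequence_image_count": int(cfg.get("sequence_image_count", 5)),
        "sequence_frame_rate": float(cfg.get("sequence_frame_rate", 10.0)),
        # Optional capture time period and trigger conditions pass through.
        "capture_period": cfg.get("capture_period"),
        "triggers": cfg.get("triggers", []),
    }
```

The image capture process would then be performed in accordance with the parsed values, as the final clause of claim 28 requires.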
29. A method according to any of the preceding claims, wherein the camera system includes an image sensor, a network interface, and a controller or processor for controlling transmission of image data from the camera system to a communications network connected to the network interface.
30. A camera system comprising means for performing a method as set out in any of the preceding claims.
31. One or more computer readable media comprising software code adapted, when executed by one or more processors, to perform a method as set out in any of claims 1 to 29.
PCT/GB2018/053469 2017-12-01 2018-11-30 Automated media capture system WO2019106378A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1720093.2A GB2570858B (en) 2017-12-01 2017-12-01 Automated media capture system
GB1720093.2 2017-12-01

Publications (1)

Publication Number Publication Date
WO2019106378A1 true WO2019106378A1 (en) 2019-06-06

Family

ID=60950210

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2018/053469 WO2019106378A1 (en) 2017-12-01 2018-11-30 Automated media capture system

Country Status (2)

Country Link
GB (1) GB2570858B (en)
WO (1) WO2019106378A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113246911A (en) * 2020-02-12 2021-08-13 本田技研工业株式会社 Vehicle control device, method, and recording medium having program for vehicle control recorded thereon

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2620974A (en) * 2022-07-28 2024-01-31 Tooth Care Project Ltd Event monitoring system and method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010087899A (en) * 2008-09-30 2010-04-15 Canon Inc Imaging apparatus, control method thereof, and program
WO2014076920A1 (en) * 2012-11-14 2014-05-22 Panasonic Corporation Video monitoring system
US20150009328A1 (en) * 2013-07-08 2015-01-08 Claas Selbstfahrende Erntemaschinen Gmbh Agricultural harvesting machine
WO2015179458A1 (en) * 2014-05-22 2015-11-26 Microsoft Technology Licensing, Llc Automatic insertion of video into a photo story
US20160093335A1 (en) * 2014-09-30 2016-03-31 Apple Inc. Time-Lapse Video Capture With Temporal Points Of Interest
US20170076571A1 (en) * 2015-09-14 2017-03-16 Logitech Europe S.A. Temporal video streaming and summaries

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6371511B2 (en) * 2013-10-22 2018-08-08 キヤノン株式会社 Network system and device management method


Also Published As

Publication number Publication date
GB2570858A (en) 2019-08-14
GB201720093D0 (en) 2018-01-17
GB2570858B (en) 2022-02-23
GB2570858A8 (en) 2019-09-04

Similar Documents

Publication Publication Date Title
US20200351466A1 (en) Low Power Framework for Controlling Image Sensor Mode in a Mobile Image Capture Device
US9588640B1 (en) User interface for video summaries
US9805567B2 (en) Temporal video streaming and summaries
US10299017B2 (en) Video searching for filtered and tagged motion
KR102249005B1 (en) Storage management of data streamed from a video source device
CN101621617B (en) Image sensing apparatus and storage medium
US9838641B1 (en) Low power framework for processing, compressing, and transmitting images at a mobile image capture device
US20170076156A1 (en) Automatically determining camera location and determining type of scene
US9703461B2 (en) Media content creation
US20150189171A1 (en) Method and electronic apparatus for sharing photographing setting values, and sharing system
AU2015100630A4 (en) System and methods for time lapse video acquisition and compression
US10032482B2 (en) Moving image generating apparatus, moving image generating method and storage medium
WO2016023358A1 (en) Method and apparatus for adjusting image quality of video according to network environment
WO2019106378A1 (en) Automated media capture system
EP3568974B1 (en) Systems and methods for recording and storing media content
US20150356356A1 (en) Apparatus and method of providing thumbnail image of moving picture
US9485420B2 (en) Video imaging using plural virtual image capture devices
CN107018442A (en) One kind video recording synchronized playback method and device
KR20170024866A (en) System for creating a event image
CN106851354B (en) Method and related device for synchronously playing recorded multimedia in different places
US11115619B2 (en) Adaptive storage between multiple cameras in a video recording system
US10313625B2 (en) Method, apparatus, and storage medium for video file processing
US20150350593A1 (en) Moving Image Data Playback Apparatus Which Controls Moving Image Data Playback, And Imaging Apparatus
TWI566591B (en) Method for cloud-based time-lapse imaging systems
EP4054184A1 (en) Event video sequences

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 18815799

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 18815799

Country of ref document: EP

Kind code of ref document: A1