GB2570498A - A method and user device for displaying video data, a method and apparatus for streaming video data and a video surveillance system


Info

Publication number
GB2570498A
GB2570498A GB1801424.1A GB201801424A
Authority
GB
United Kingdom
Prior art keywords
video data
interest
user device
area
segments
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1801424.1A
Other versions
GB201801424D0 (en)
Inventor
Jørgen Skovgaard Hans
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to GB1801424.1A priority Critical patent/GB2570498A/en
Publication of GB201801424D0 publication Critical patent/GB201801424D0/en
Publication of GB2570498A publication Critical patent/GB2570498A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/62 Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/17 Image acquisition using hand-held instruments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/167 Position within a video image, e.g. region of interest [ROI]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44008 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47 End-user applications
    • H04N21/472 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/4728 End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/8126 Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts
    • H04N21/8133 Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts specifically related to the content, e.g. biography of the actors in a movie, detailed information about an article seen in a video program
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845 Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456 Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85 Assembly of content; Generation of multimedia applications
    • H04N21/854 Content authoring
    • H04N21/8549 Creating video summaries, e.g. movie trailer
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/44 Event detection

Abstract

A method of viewing a video data stream on a display of a user device, such as a mobile device 160 with a touch-screen display, involves selecting, on the display, an area of interest 210 of a video image; and displaying on the display only segments of the video data stream in which an event occurs in the selected area of interest. The required segments may be streamed to the user device. The selection of the area of interest may be by a pinch gesture. Metadata indicating motion above a threshold level, or classification metadata, may be used to determine the required segments. The method may be used in a video surveillance system.

Description

A METHOD AND USER DEVICE FOR DISPLAYING VIDEO DATA, A METHOD AND APPARATUS FOR STREAMING VIDEO DATA AND A VIDEO SURVEILLANCE SYSTEM
Technical Field of the Invention
The present invention relates to a method and user device for displaying video data, and a method and apparatus for streaming video data to a user device. The present invention is particularly applicable in a video surveillance system.
Background of the invention
Browsing video in order to find an event is very difficult, particularly if there are large quantities of video data to be viewed, such as video surveillance data where there may be many hours of recorded data, much of which is of little or no interest.
Increasingly, video surveillance systems are being provided where a user can view video surveillance data on a mobile device such as a mobile phone or tablet. This has further limitations compared to a larger display: the limited viewing space typically means that a user can only meaningfully look at one video stream from one camera at a time, and there is no space on the display for a complicated user interface including features that might make searching easier, such as timelines and buttons to specify various objectives, without blocking the video that the user wishes to view.
Therefore, on a mobile user device, browsing for an event is typically achieved by simply letting the video run and watching until the user sees the relevant event happen. It is possible to play the video in fast-forward mode, such as x4 speed, to reduce the time required to view the full video in question. A smart user can of course plot search strategies and jump to specific time points to see if the event has occurred. However, this strategy will not work for recurring events (forensic tasks), such as establishing who was at the front door during the day.
Some video surveillance systems are set up such that cameras only record when there is motion in the scene (field of view), in order to reduce the amount of recorded data that has to be looked through. However, recording can still be triggered by motion events which are not of interest. For example, for outdoor cameras, wind moving a tree, or moving shadows cast on a sunny day, will cause motion in the picture which triggers recording. This means that a lot of video is still recorded, and event identification and retrieval on mobile phones is extremely slow and time consuming.
Summary of the Invention
According to a first aspect of the present invention there is provided a method of viewing a video data stream on a display of a user device according to claim 1.
The present invention allows a user to reduce the amount of video data which must be viewed by selecting an area of interest and only viewing video data in which events occur within the selected area.
The selection of the area of interest can be achieved easily and intuitively on the display, for example by a pinch type gesture on a touch screen.
According to a second aspect of the present invention there is provided a method of streaming video data to a user device according to claim 11.
According to the invention, the user selection can be received, for example at a recording server, which determines the segments of video data which are required and streams only the determined segments to the user device.
Another aspect of the invention relates to a computer program which, when executed by a programmable apparatus, causes the apparatus to perform the method defined above.
Another aspect of the invention relates to a user device according to claim 17.
Another aspect of the invention relates to an apparatus for streaming video to a user device according to claim 18.
At least parts of the methods according to the invention may be computer implemented. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a circuit, module or system. Furthermore, the present invention may take the form of a computer program product embodied in any tangible medium of expression having computer usable program code embodied in the medium.
Since the present invention can be implemented in software, the present invention can be embodied as computer readable code for provision to a programmable apparatus on any suitable carrier medium. A tangible carrier medium may comprise a storage medium such as a hard disk drive, a magnetic tape device or a solid state memory device and the like. A transient carrier medium may include a signal such as an electrical signal, an electronic signal, an optical signal, an acoustic signal, a magnetic signal or an electromagnetic signal, e.g. a microwave or RF signal.
Brief description of the drawings
Embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings in which:
Figure 1 illustrates an example of a video surveillance system;
Figure 2 is a view of a display of a user device;
Figure 3 is a flowchart illustrating a method of viewing a video data stream on a display of a user device; and
Figure 4 is a flowchart illustrating a method of streaming video data to a user device.
Detailed Description of the Invention
Figure 1 shows an example of a video surveillance system 100 in which embodiments of the invention can be implemented. The system 100 comprises a management server 130, a recording server 150 and a mobile server 140. Further servers may also be included, such as further recording servers, archive servers or analytics servers. A plurality of video surveillance cameras 110a, 110b, 110c send video data to the recording server 150. An operator client 120 is a fixed terminal which provides an interface via which an operator can view video data live from the cameras 110a, 110b, 110c, or recorded video data from the recording server 150.
The cameras 110a, 110b, 110c capture image data and send this to the recording server 150 as a plurality of video data streams.
The recording server 150 stores the video data streams captured by the video cameras 110a, 110b, 110c.
The mobile server 140 communicates with a user device 160 which is a mobile device such as a smartphone or tablet which has a touch screen display. The user device 160 can access the system from a browser using a web client. Via the user device 160 and the mobile server 140, a user can view recorded video data stored on the recording server 150. The user can also view a live feed via the user device 160, but the present invention is primarily concerned with viewing recorded video data.
Other servers may also be present in the system 100. For example, an archiving server (not illustrated) may be provided for archiving older data stored in the recording server 150 which does not need to be immediately accessible from the recording server 150, but which it is not desired to delete permanently. A fail-over recording server (not illustrated) may be provided in case a main recording server fails. An analytics server can also run analytics software for image analysis, for example motion or object detection, facial recognition, or event detection.
The operator client 120 and the mobile server 140 are configured to communicate via a first network/bus 121 with the management server 130 and the recording server 150. The recording server 150 communicates with the cameras 110a, 110b, 110c via a second network/bus 122.
Figure 2 shows a user device 160 having a display 200 which displays recorded video data streamed from the recording server 150 via the mobile server 140. In this case, the user device 160 is a mobile device such as a smartphone or tablet and the display 200 is a touch screen display.
In the present invention, a user can select, on the display, an area of interest 210 of a video image, and only time segments of the video data stream in which an event occurs in the selected area of interest are then displayed on the display.
Further, a user can select, from an interface, conditions for determining whether an event occurs, such as a threshold for motion detection or a type of object to be detected.
A more detailed explanation of the operation of the user device 160 is set out below, with reference to Figure 3, which is a flow diagram illustrating the method of operating the user device 160.
In step 300, a user selects, on the display 200, an area of interest 210 of the video image. The selection can be carried out by a pinch gesture, which is a known method of selecting an area on a touch screen display wherein the user uses thumb and forefinger to designate diagonally opposite corners of a rectangle, which the user can drag to a desired size and position. The selected area is then shown on the display 200 as a rectangle.
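Purely by way of illustration (this does not form part of the original disclosure), the area definition information resulting from such a pinch selection could be represented as a rectangle normalised to the video frame; the touch coordinates and frame size below are hypothetical values assumed for the sketch.

```python
from dataclasses import dataclass


@dataclass
class AreaOfInterest:
    """Rectangle normalised to the video frame (0.0 to 1.0 on each axis)."""
    left: float
    top: float
    right: float
    bottom: float


def area_from_pinch(p1, p2, frame_width, frame_height):
    """Build an area of interest from two diagonally opposite touch points.

    p1 and p2 are (x, y) pixel coordinates reported by the touch screen;
    frame_width and frame_height are the dimensions of the displayed video.
    """
    (x1, y1), (x2, y2) = p1, p2
    return AreaOfInterest(
        left=min(x1, x2) / frame_width,
        top=min(y1, y2) / frame_height,
        right=max(x1, x2) / frame_width,
        bottom=max(y1, y2) / frame_height,
    )


# Example: thumb at (120, 80) and forefinger at (480, 360) on a 640x480 view
roi = area_from_pinch((120, 80), (480, 360), 640, 480)
```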
In step 310, the user defines an event via an event selection menu, which may appear automatically on the display when an area has been selected, or may appear when the user touches the selected area. The event selection menu allows the user to select conditions for searching the video data; these conditions define events. For example, an event may be defined as motion detected in the selected area of interest. Furthermore, the sensitivity of the search to motion may be set by setting a threshold for motion detection, so that an event is defined as any motion having a magnitude greater than a threshold value. This may simply appear on the menu as selecting a motion sensitivity level (e.g. high, medium, low) or by means of a visual indication such as a sliding scale. Alternatively, an event may be defined as the detection of a particular type of object in the selected area, such as a person or a car.
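The description does not fix a concrete format for the event definition information; one possible representation, sketched here only as an assumption, is a small record holding either a motion threshold or an object type with a minimum classification probability.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class EventDefinition:
    """User-selected conditions that define an 'event' (illustrative only)."""
    motion_threshold: Optional[int] = None   # e.g. 200 on a 0-256 motion scale
    object_type: Optional[str] = None        # e.g. "person", "car"
    min_probability: float = 0.0             # minimum classification confidence


# "High" motion sensitivity within the selected area
high_motion = EventDefinition(motion_threshold=200)

# Any person detected with at least 50% confidence
person_event = EventDefinition(object_type="person", min_probability=0.5)
```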
In step 320, the user device 160 sends area definition information and event definition information to the recording server 150 via the mobile server 140. The area definition information defines the area of interest set by the user, and the event definition information defines the events that the user wishes to search for in the recorded video data.
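Continuing the illustrative structures above, step 320 could be realised by serialising the area definition information and the event definition information into a single search request; the JSON field names and camera identifier below are invented for illustration and are not defined in the patent.

```python
import json


def build_search_request(camera_id, area, event):
    """Serialise area and event definition info into one request body (illustrative)."""
    return json.dumps({
        "cameraId": camera_id,
        "areaOfInterest": {
            "left": area.left, "top": area.top,
            "right": area.right, "bottom": area.bottom,
        },
        "eventDefinition": {
            "motionThreshold": event.motion_threshold,
            "objectType": event.object_type,
            "minProbability": event.min_probability,
        },
    })


# Using the illustrative objects from the earlier sketches:
# payload = build_search_request("camera-110a", roi, person_event)
# The device would then send this to the mobile server, which forwards it to the
# recording server; the transport and endpoint are not specified in the patent.
```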
At step 330, the user device receives segments of video data from the recording server 150 via the mobile server 140, and displays the received segments of video data, in time order, on the display 200.
It is possible that the user device 160 displays only the area of interest from the received video segments, i.e. it shows a zoomed-in portion of the video. Alternatively, the area of interest is only used to define the area in which events should occur, and the video segments are displayed full size, i.e. the entire area of the video segments is displayed. In either case, the entire area of the video segments may be received at the user device 160, i.e. there is no cropping of the images sent from the recording server 150. Alternatively, the video data could be cropped at the recording server 150 for transmission to the user device 160, for example to reduce the amount of data that needs to be transmitted.
In the present invention, the recording server 150 receives, from the user device, the area definition information, which defines the area of interest of a video image from a video data stream recorded by one of the cameras. The recording server 150 also receives the event definition information, which defines an event of interest, and, in accordance with the area definition information and the event definition information, determines segments (i.e. time segments) of the video data stream in which an event occurs in the selected area of interest. The recording server 150 then streams only the determined segments to the user device.
The recording server 150 stores video data accompanied by annotations of the video, stored in a metadata stream that can be scanned very fast compared with decoding the video and running analytics to identify events. The metadata could be generated by analytics software at the cameras 110a, 110b, 110c, on a separate analytics server, or by another networked device such as a temperature or wind sensor or a door access control system.
For example, motion metadata can be generated at recording time by annotating the video with an 8x8 or 64x64 grid identifying whether there is motion in each grid cell and how much, on a scale from 0 to 256. Different motion levels can be defined, e.g. "high" could be 200 to 256. Furthermore, annotations about the existence and type of objects in the picture can be added to the grid. The annotations can include the object type (e.g. car, person) and a probability. Searches can thus be limited by object type and probability, e.g. object type "car" with at least 50% probability. Generation of annotations for video streams is a well-known research area and many different techniques are known.
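A minimal sketch of what one such per-frame metadata record might look like is given below, assuming an 8x8 grid, the 0 to 256 motion scale mentioned above, and a simple list of object annotations; the field names are illustrative and are not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import List

GRID_SIZE = 8  # the description mentions 8x8 or 64x64 grids


@dataclass
class ObjectAnnotation:
    """One detected object in a frame (illustrative)."""
    object_type: str      # e.g. "car", "person"
    probability: float    # classification confidence, 0.0-1.0
    cell_row: int         # grid cell in which the object was detected
    cell_col: int


@dataclass
class FrameMetadata:
    """Annotations stored alongside one video frame (illustrative)."""
    timestamp: float                              # seconds from the start of recording
    motion: List[List[int]] = field(
        default_factory=lambda: [[0] * GRID_SIZE for _ in range(GRID_SIZE)]
    )                                             # motion level 0-256 per grid cell
    objects: List[ObjectAnnotation] = field(default_factory=list)


# Example: strong motion in the top-left cell, with a person detected there at 80% confidence
frame = FrameMetadata(timestamp=12.4)
frame.motion[0][0] = 230
frame.objects.append(ObjectAnnotation("person", 0.8, 0, 0))
```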
The metadata can be generated in several ways. For example, analytics software can run on a separate analytics server to generate the metadata, which is stored with the video data on the recording server. Alternatively, some cameras include processors which can run analytics software that generates metadata at the camera; this metadata is then streamed to the recording server 150 with the video data and recorded alongside it.
A more detailed explanation of the selection of the video segments by the recording server 150 is set out below, with reference to Figure 4, which is a flow diagram illustrating the method of streaming video data to a user device 160.
At step 400, the recording server 150 receives the area definition information and the event definition information from the user device 160.
At step 410, the recording server 150 determines which segments of the stored video data contain events within the area 210 defined by the area definition information. Whether an event as defined by the event definition information occurs is determined by searching the metadata associated with the video data. The metadata can include motion metadata as described above, which can be searched to determine if motion having a magnitude above a threshold is present in the area of interest. The metadata can also include object classification metadata defining a type of object identified in the video data, which can be searched to determine if a specified object such as a person or a vehicle is present in the area of interest. In addition, the metadata can indicate a probability level associated with each object classification. The probability level can be expressed as a percentage, and levels can be predefined as ranges of probability, e.g. low, medium, high.
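Continuing the illustrative types from the earlier sketches, step 410 might be implemented along the following lines: the metadata stream is scanned frame by frame, and frames satisfying the event definition inside the area of interest are grouped into contiguous time segments. The grouping gap is an assumed parameter, not something the patent specifies.

```python
def cell_in_area(row, col, area, grid_size=GRID_SIZE):
    """Check whether a grid cell overlaps the (normalised) area of interest."""
    cell_left, cell_top = col / grid_size, row / grid_size
    cell_right, cell_bottom = (col + 1) / grid_size, (row + 1) / grid_size
    return not (cell_right <= area.left or cell_left >= area.right or
                cell_bottom <= area.top or cell_top >= area.bottom)


def frame_matches(meta, area, event):
    """Does this frame contain the defined event inside the area of interest?"""
    if event.motion_threshold is not None:
        for r in range(GRID_SIZE):
            for c in range(GRID_SIZE):
                if cell_in_area(r, c, area) and meta.motion[r][c] >= event.motion_threshold:
                    return True
    if event.object_type is not None:
        for obj in meta.objects:
            if (obj.object_type == event.object_type
                    and obj.probability >= event.min_probability
                    and cell_in_area(obj.cell_row, obj.cell_col, area)):
                return True
    return False


def determine_segments(metadata_stream, area, event, max_gap=2.0):
    """Group the timestamps of matching frames into (start, end) time segments."""
    segments = []
    for meta in metadata_stream:                      # assumed to be in time order
        if not frame_matches(meta, area, event):
            continue
        if segments and meta.timestamp - segments[-1][1] <= max_gap:
            segments[-1][1] = meta.timestamp          # extend the current segment
        else:
            segments.append([meta.timestamp, meta.timestamp])
    return [(start, end) for start, end in segments]
```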
Using the metadata to determine events, rather than analysing the video data itself, means that the selection of video segments containing events within the area of interest happens without opening the video stream on the recording server. The metadata can be searched very quickly compared with decoding the video and running analytics to identify events. This also allows for combination with normal fast-forward/reverse operation.
At step 420, the recording server 150 streams the selected segments of video data to the user device 160.
In the above-described embodiment, the selection of the video segments containing events is carried out at the recording server 150. In an alternative embodiment, the recording server 150 could stream all of the video data, together with the metadata, to the user device 160, and the selection of the video segments including events within the area of interest using the metadata could be carried out by the user device 160 itself. However, it is preferred that this is carried out at the recording server, to reduce the amount of data streamed and because the recording server 150 will have a greater processing capacity than the user device 160.
The present invention provides an intuitive way for a user to search video surveillance data on a mobile user device 160. Narrowing down the video data streamed to the user device 160 by searching for events within a user-defined search area greatly reduces the amount of video a user has to review, and allows the user to readily find events of interest in the video stream.
For example, a user may be viewing recorded video from a door entry system in which recording is triggered by motion, but a tree in the field of view is moving in the wind and triggering recording unnecessarily. This causes many hours of footage to be recorded when the user only wishes to view video footage of people at the door.
The user could narrow down the video data viewed in several ways using embodiments of the present invention. Firstly, the user could set an area of interest that excludes the moving tree and look for motion only within that area; this will exclude video data in which the only motion is the motion of the tree. Secondly, the user could keep the area of interest large but set a higher threshold for motion detection to exclude video where only the tree is moving. Thirdly, the user could define an event as the presence of a person, to search for metadata indicating that a person is present.
More complicated queries can combine multiple annotation types in a Boolean search. For example, if there is metadata from IoT (Internet of Things) devices such as wind or temperature sensors, and a user wishes to search video data from a bridge for bicycles when there are high winds, the user can combine object classification metadata for bicycles in an AND operation with a high-wind condition derived from the wind sensor metadata.
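Continuing the same illustrative sketch, such a combined query could be expressed as a conjunction of a video-metadata predicate and a sensor-metadata predicate; the wind-speed data format and threshold below are assumptions made only for illustration.

```python
def bicycle_in_area(meta, area, min_probability=0.5):
    """Video-metadata predicate: a bicycle detected inside the area of interest."""
    return any(
        obj.object_type == "bicycle"
        and obj.probability >= min_probability
        and cell_in_area(obj.cell_row, obj.cell_col, area)
        for obj in meta.objects
    )


def high_wind(wind_readings, timestamp, threshold_ms=15.0):
    """Sensor-metadata predicate: wind speed above a threshold at this time.

    wind_readings is assumed to be a dict mapping rounded timestamps to wind
    speeds in m/s recorded by an IoT wind sensor; the format is illustrative.
    """
    return wind_readings.get(round(timestamp), 0.0) >= threshold_ms


def matches_combined_query(meta, area, wind_readings):
    """Boolean AND of the two annotation types, as described in the text."""
    return bicycle_in_area(meta, area) and high_wind(wind_readings, meta.timestamp)
```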
While the present invention has been described with reference to embodiments, it is to be understood that the invention is not limited to the disclosed embodiments. The present invention can be implemented in various forms without departing from the principal features of the present invention as defined by the claims.

Claims (19)

1. A method of viewing a video data stream on a display of a user device, the method comprising:
selecting, on the display, an area of interest of a video image; and displaying on the display only segments of the video data stream in which an event occurs in the selected area of interest.
2. The method according to claim 1, wherein the user device is a mobile device and the display is a touch screen display.
3. The method according to claim 2, wherein the selection of the area of interest is by a pinch gesture.
4. The method according to any preceding claim, wherein the displaying comprises displaying only the area of interest of the segments of the video data stream.
5. The method according to any one of claims 1 to 3, wherein the displaying comprises displaying the entire area of the segments of the video data stream.
6. The method according to any preceding claim, further comprising setting at least one condition for determining if an event occurs in the area of interest.
7. The method according to claim 6, wherein a condition for determining if an event occurs comprises whether motion above a threshold level is detected in the area of interest.
8. The method according to claim 7, wherein the method comprises setting the threshold level for motion detection.
9. The method according to any of claims 6 to 8, wherein a condition for determining if an event occurs comprises the detection of a type of object in the area of interest.
10. The method according to any preceding claim, wherein the method further comprises determining the segments of the video data stream in which an event occurs in the selected area of interest.
11. A method of streaming video data to a user device, the method comprising:
receiving a user selection from a user device, the user selection defining an area of interest of a video image;
determining segments of the video data stream in which an event occurs in the selected area of interest; and streaming only the determined segments to the user device.
12. The method according to claim 11, wherein the step of determining segments in which an event occurs comprises searching metadata associated with the video data stream.
13. The method according to claim 12, wherein the metadata includes metadata indicating motion.
14. The method according to claim 12 or 13, wherein the metadata includes object classification metadata defining a type of object identified in the video data.
15. The method according to any of claims 11 to 14, wherein the method comprises receiving from the user device at least one condition for determining if an event occurs.
16. A computer program which, when executed by a programmable apparatus, causes the apparatus to perform the method of any one of Claims 1 to 15.
17. A user device including a display for viewing a video data stream comprising:
means for selecting, on the display, an area of interest of a video image;
means for displaying on the display only segments of the video data stream in which an event occurs in the selected area of interest.
18. An apparatus for streaming video data to a user device, the apparatus comprising:
means for receiving a user selection from a user device, the user selection defining an area of interest of a video image;
means for determining segments of the video data stream in which an event occurs in the selected area of interest;
means for streaming only the determined segments to the user device.
19. A video surveillance system comprising:
a recording server according to claim 18, having video surveillance data stored thereon; and a user device according to claim 17.
GB1801424.1A 2018-01-29 2018-01-29 A method and user device for displaying video data, a method and apparatus for streaming video data and a video surveillance system Withdrawn GB2570498A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1801424.1A GB2570498A (en) 2018-01-29 2018-01-29 A method and user device for displaying video data, a method and apparatus for streaming video data and a video surveillance system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1801424.1A GB2570498A (en) 2018-01-29 2018-01-29 A method and user device for displaying video data, a method and apparatus for streaming video data and a video surveillance system

Publications (2)

Publication Number Publication Date
GB201801424D0 GB201801424D0 (en) 2018-03-14
GB2570498A true GB2570498A (en) 2019-07-31

Family

ID=61558116

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1801424.1A Withdrawn GB2570498A (en) 2018-01-29 2018-01-29 A method and user device for displaying video data, a method and apparatus for streaming video data and a video surveillance system

Country Status (1)

Country Link
GB (1) GB2570498A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021059139A1 (en) * 2019-09-27 2021-04-01 Ricoh Company, Ltd. Apparatus, image processing system, communication system, method for setting, image processing method, and recording medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006104903A1 (en) * 2005-03-31 2006-10-05 Honeywell International, Inc. Methods for defining, detecting, analyzing, indexing and retrieving events using video image processing
US20070250898A1 (en) * 2006-03-28 2007-10-25 Object Video, Inc. Automatic extraction of secondary video streams
WO2012115593A1 (en) * 2011-02-21 2012-08-30 National University Of Singapore Apparatus, system, and method for annotation of media files with sensor data
US20140079126A1 (en) * 2012-09-18 2014-03-20 Vid Scale, Inc. Method and Apparatus for Region of Interest Video Coding Using Tiles and Tile Groups
US20140176708A1 (en) * 2012-12-21 2014-06-26 Robert Bosch Gmbh System And Method For Detection Of High-Interest Events In Video Data
US20150201198A1 (en) * 2014-01-15 2015-07-16 Avigilon Corporation Streaming multiple encodings encoded using different encoding parameters
US20150365687A1 (en) * 2013-01-18 2015-12-17 Canon Kabushiki Kaisha Method of displaying a region of interest in a video stream
WO2018049321A1 (en) * 2016-09-12 2018-03-15 Vid Scale, Inc. Method and systems for displaying a portion of a video stream with partial zoom ratios


Also Published As

Publication number Publication date
GB201801424D0 (en) 2018-03-14

Similar Documents

Publication Publication Date Title
US7594177B2 (en) System and method for video browsing using a cluster index
JP4847165B2 (en) Video recording / reproducing method and video recording / reproducing apparatus
EP2717564B1 (en) Method, device and system for realizing video retrieval
US10192588B2 (en) Method, device, and computer-readable medium for tagging an object in a video
EP1873732A2 (en) Image processing apparatus, image processing system and filter setting method
US11308158B2 (en) Information processing system, method for controlling information processing system, and storage medium
WO2012033758A2 (en) Video system with intelligent visual display
KR20160097870A (en) System and method for browsing summary image
US20110096994A1 (en) Similar image retrieval system and similar image retrieval method
CN110740290B (en) Monitoring video previewing method and device
KR101212082B1 (en) Image Recognition Apparatus and Vison Monitoring Method thereof
US8411141B2 (en) Video processing method for reducing the load of viewing
JP6203188B2 (en) Similar image search device
KR20170098139A (en) Apparatus and method for summarizing image
CN105338293A (en) Output display method and device for alarm event
JP6214762B2 (en) Image search system, search screen display method
JP2002152721A (en) Video display method and device for video recording and reproducing device
KR101964230B1 (en) System for processing data
GB2570498A (en) A method and user device for displaying video data, a method and apparatus for streaming video data and a video surveillance system
KR101049574B1 (en) Event processing device and method using bookmarks
CN112437270B (en) Monitoring video playing method and device and readable storage medium
JP2013015916A (en) Image management device, image management method, image management program, and recording medium
US11172159B2 (en) Monitoring camera system and reproduction method
KR101498608B1 (en) Apparatus for searching image data
GB2572007A (en) A method and user device for displaying video data, a method and apparatus for streaming video data and a video surveillance system

Legal Events

Date Code Title Description
COOA Change in applicant's name or ownership of the application

Owner name: CANON KABUSHIKI KAISHA

Free format text: FORMER OWNER: CANON EUROPA N.V.

WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)