WO2008136007A2 - Acquiring regions of interest at a high frame rate - Google Patents

Acquiring regions of interest at a high frame rate

Info

Publication number
WO2008136007A2
Authority
WO
WIPO (PCT)
Prior art keywords
acquiring
interest
subarea
full
roi
Prior art date
Application number
PCT/IL2008/000654
Other languages
French (fr)
Other versions
WO2008136007A3 (en)
Inventor
Amihay Halamish
Original Assignee
Amihay Halamish
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Amihay Halamish
Publication of WO2008136007A2
Publication of WO2008136007A3

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/61 Control of cameras or camera modules based on recognised objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/40 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N 25/44 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by partially reading an SSIS array
    • H04N 25/443 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by partially reading an SSIS array by reading pixels from selected 2D regions of the array, e.g. for windowing or digital zooming
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/50 Control of the SSIS exposure
    • H04N 25/53 Control of the integration time
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20092 Interactive image processing based on input by user
    • G06T 2207/20104 Interactive definition of region of interest [ROI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20112 Image segmentation details
    • G06T 2207/20132 Image cropping

Abstract

A method, module and camera in a video acquisition system with a maximal frame rate for acquiring full frame images. The method, module and camera enable acquiring regions of interest at a higher frame rate than the maximal frame rate for acquiring full frame images. The higher frame rate is achieved by acquiring only a subarea of the full image corresponding to the region of interest.

Description

ACQUIRING REGIONS OF INTEREST AT A HIGH FRAME
RATE
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Patent Application 60/916,576 filed May 8, 2007, which is incorporated herein by reference.
FIELD OF THE INVENTION
[0002] The present invention generally relates to the field of image processing. More particularly, the present invention relates to acquiring regions of interest.
BACKGROUND OF THE INVENTION
[0003] US2004240546, which is incorporated herein by reference in its entirety, discloses a method and/or apparatus for analyzing the content of a surveillance image, improving upon conventional approaches by implementing content analysis in the acquisition side of the system. The volume of data recorded can be reduced by identifying and eliminating "uninteresting" content. US2005200714, which is incorporated herein by reference in its entirety, discloses a digital video system using networked cameras.
[0004] WO2007079590, which is incorporated herein by reference in its entirety, discloses an interactive input system comprising at least two imaging devices associated with a region of interest. EP0604009, which is incorporated herein by reference in its entirety, discloses a video camera that is attached to a computer having image processing facilities. US2007030342, which is incorporated herein by reference in its entirety, discloses an apparatus and method for capturing a scene using staggered triggering of dense camera arrays.
[0005] WO2006086141, which is incorporated herein by reference in its entirety, discloses a video processing system for generating a foveated video display with sections having different resolutions. US2006098882, which is incorporated herein by reference in its entirety, discloses a compression of a scene of an image sequence using a number of foveation zones, each foveation zone being weighted based on a probability of a viewer looking at a corresponding portion of the first scene. US6252989, which is incorporated herein by reference in its entirety, discloses a foveated imaging system, which can be implemented on a general purpose computer and greatly reduces the transmission bandwidth of images.
[0006] US2002064314, which is incorporated herein by reference in its entirety, discloses a client-server system and method that enables efficient, low bit rate transmission of image data over a network from an image server (e.g., active cameras) to a client for, e.g., distributed surveillance. US2006140495, which is incorporated herein by reference in its entirety, discloses methods and systems for compression of digital images (still or motion sequences), wherein predetermined criteria may be used to identify a plurality of areas of interest in the image, and each area of interest is encoded with a corresponding quality level (Q-factor). US6393056, which is incorporated herein by reference in its entirety, discloses a system for compression of information from one detector as a function of information from another detector.
SUMMARY OF THE INVENTION
[0007] The present invention discloses a method, a module and a camera in a video acquisition system with a maximal frame rate for acquiring full frame images. The method, module and camera enable acquiring regions of interest (ROI's) at a higher frame rate than the maximal frame rate for acquiring full frame images. The method comprises the steps: acquiring at least one consequent full frame, identifying an object in the frames, defining a subarea of a full frame around the object, and acquiring the subarea at a higher frame rate for a predefined period of time, wherein the higher frame rate is achieved by acquiring only the subarea corresponding to the ROI. The camera comprises at least one sensor for acquiring full frame images, a tracking module for identifying objects and tracking their movements, and a control module for defining an ROI around the objects. The control module is operatively associated with the sensor and is configured to operate only at least one subarea of the sensor corresponding to the ROI. Restricting the acquisition to the subarea of the sensor permits a higher frame rate than the acquisition frame rate of full frame images. The module defines an ROI and acquires only the defined ROI. In some embodiments, restricting the acquisition to the subarea of the sensor permits a higher quality of acquisition than the acquisition quality of full frame images. In some embodiments, the module comprises a controlling module in the controller arranged to define an ROI, and an acquisition module in the sensor arranged to acquire a subarea of the full image.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] The subject matter regarded as the invention will become more clearly understood in light of the ensuing description of embodiments herein, given by way of example and for purposes of illustrative discussion of the present invention only, with reference to the accompanying drawings (Figures, or simply "FIGS."), wherein:
Fig. 1 comprises two graphs illustrating the relation between the size of ROI and the respective frame rate, according to some embodiments of the invention.
Fig. 2 illustrates the frame flow in an embodiment of the invention in comparison to the prior art, according to some embodiments of the invention.
Fig. 3 is a flowchart illustrating a method for acquiring images of a region of interest at a high frame rate, according to some embodiments of the invention.
Fig. 4 is a block diagram of a system for acquiring images of a region of interest at a high frame rate, according to some embodiments of the invention.
Fig. 5 is a block diagram of a camera for acquiring images of objects and regions of interest at a high frame rate, according to some embodiments of the invention.
Fig. 6 is a frame image illustrating the ROI control data, according to some embodiments of the invention.
Fig. 7 is a block diagram illustrating an integrated circuit frame for acquiring images of objects and regions of interest at a high frame rate, according to some embodiments of the invention.
DETAILED DESCRIPTION OF SOME EMBODIMENTS OF THE INVENTION
[0009] The present invention discloses a system and method for acquiring regions of interest (ROI's) at a higher frame rate than the maximal frame rate at which full frame images can be acquired using the prior art.
[0010] The following shorthand is used in the current disclosure: ROI for region of interest, GUI for graphical user interface, CCD for charge-coupled device, CMOS for complementary metal-oxide semiconductor. The terms CCD and CMOS are used in this disclosure to denote an image sensor comprising pixels. In a CCD, a pixel stands for a single photoelectric light sensor. In a CMOS sensor, a pixel refers to an active pixel sensor. In other image sensors, a pixel may stand for a single cell, a single sensing element or a small area of an acquired image.
[0011] Fig. 1 comprises two graphs illustrating the relation between the size of ROI and the respective frame rate, according to some embodiments of the invention. The graphs depict the size of ROI 140 and the frame rate 150 along a common time axis 100. Three possible sizes of ROI 140 are schematically represented as small, medium and large. Three possible frame rates 150 are schematically represented as low, medium and high. Starting with a large ROI (e.g. a full frame) and a low (e.g. maximal using prior art) frame rate 105, the system recognizes (at time T1) a small ROI (e.g. including an interesting moving object), which is acquired at a high frame rate 110, leaving out the rest of the frame. After following the small ROI for a period, the system switches (at time T2) to acquiring a larger ROI, which is still smaller than the full frame, at a medium frame rate 115 (e.g. to follow the movement of the object outside the small ROI). Consequently (at time T3, e.g. upon losing a followed object), the system acquires the full frame again (large ROI, low frame rate 120). Upon detecting the object again (at time T4), the system returns to acquiring a small ROI at a high frame rate (125).
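By way of illustration only (not part of the original disclosure), the following Python sketch estimates the achievable ROI frame rate under the assumption that sensor readout time scales with the number of pixels read plus a fixed per-frame overhead; the overhead fraction and the example sizes are illustrative numbers, not values taken from the disclosure.

    # Illustrative only: assumes readout time ~ per-pixel time * pixels read + fixed overhead.
    def roi_frame_rate(full_width, full_height, roi_width, roi_height,
                       full_frame_rate_hz, overhead_fraction=0.05):
        """Rough upper bound on the ROI frame rate for a sensor whose readout
        time is dominated by the number of pixels read out."""
        full_period = 1.0 / full_frame_rate_hz
        overhead = overhead_fraction * full_period            # fixed per-frame cost
        per_pixel = (full_period - overhead) / (full_width * full_height)
        roi_period = overhead + per_pixel * roi_width * roi_height
        return 1.0 / roi_period

    # Example: a 1024 x 768 sensor read at 30 fps full frame, with a 100 x 200 ROI.
    print(round(roi_frame_rate(1024, 768, 100, 200, 30.0)))   # roughly a few hundred fps

Under this simple model the small and medium ROIs of Fig. 1 map to the high and medium frame rates, while the full frame maps back to the low rate.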
[0012] Fig. 2 illustrates the frame flow in an embodiment of the invention in comparison to the prior art, according to some embodiments of the invention. According to the prior art, a compressor, e.g. one implementing MPEG compression, compresses the full frames outputted by the sensor and converts them into a frame flow that comprises frames of a constant size 200, 210, 215. Such a frame flow may comprise I frames 200 (reference full images), P frames 215 (relative frames encoded in relation to a former frame) and B frames 210 (relative frames encoded in relation to both former and future frames). According to some embodiments of the invention, the frame flow comprises reference frames 205 and ROI frames 220 having different sizes and different frame rates. For example, the reference frames may be taken in pairs (205A, 205B) at the prior-art frame rate, while the ROI frames 220 may be taken in large numbers at a high frame rate. In the depicted example, n ROI frames are taken in the time four full frames are taken according to the prior art (n >> 4). Moreover, each ROI frame holds all the data of the ROI, unlike P frames 215 and B frames 210 in the prior art, which hold only relative changes to their reference frames. The single ROI frames may therefore be processed using single-image compression protocols, e.g. JPEG2000.
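The frame flow of Fig. 2 can be written down as a simple schedule. The sketch below is an illustration with assumed counts, not the patent's encoder: it interleaves pairs of full reference frames with n self-contained ROI frames.

    # Illustrative schedule for the Fig. 2 frame flow: reference frames in pairs,
    # n self-contained ROI frames in between (n >> 4 in the depicted example).
    def frame_flow(reference_pairs=2, roi_frames_per_gap=16):
        flow = []
        for pair in range(reference_pairs):
            flow.append(("REF", pair, "A"))        # full reference frame 205A
            flow.append(("REF", pair, "B"))        # full reference frame 205B
            for i in range(roi_frames_per_gap):
                # Each ROI frame carries all the ROI data (no P/B dependencies),
                # so it can be compressed on its own, e.g. with a still-image codec.
                flow.append(("ROI", pair, i))
        return flow

    for descriptor in frame_flow(reference_pairs=1, roi_frames_per_gap=4):
        print(descriptor)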
[0013] According to prior art, upon pausing a recording of a video acquisition system, the last I frame is the latest image a user may receive from the system, whereas according to the disclosed invention, the user may receive the last acquired image of the ROI which comprises a later acquired image than the last I frame.
[0014] Fig. 3 is a flowchart illustrating a method for acquiring images of a region of interest at a high frame rate, according to some embodiments of the invention. The method comprises the steps:
• Acquiring at least one consequent full image (step 300).
• Identifying a region of interest in the images (step 310). The region of interest may comprise an object.
• Defining a subarea of a full frame around the region of interest (step 320).
• Acquiring the subarea at a high frame rate for a predefined period of time (step 330).
• Acquiring full images to track or change the region of interest (repeat step 300).
The method may be implemented in a video acquisition system with a maximal frame rate for acquiring full frame images. The method acquires regions of interest at a higher frame rate than the maximal frame rate for full frames of the acquisition system. According to some embodiments of the invention, the method acquires regions of interest in a higher image quality than the acquisition quality of full frames, e.g. in a larger number of bits per pixel, such as a higher color depth.
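Read as pseudocode, steps 300-340 amount to a small control loop. The sketch below is an illustration only: the camera interface (read_full_frame, set_roi, read_roi, clear_roi), the ROI-detection and frame-processing callbacks, and the margin and dwell-time values are all assumed names and numbers, not an API defined by this disclosure.

    import time

    def acquisition_loop(camera, identify_roi, process_roi,
                         subarea_margin=20, roi_dwell_s=1.0):
        while True:
            # Step 300: acquire at least one consequent full frame.
            frames = [camera.read_full_frame(), camera.read_full_frame()]
            # Step 310: identify a region of interest (manually, automatically
            # or semi-automatically; here delegated to a callback).
            roi = identify_roi(frames)
            if roi is None:
                continue                      # nothing of interest yet
            # Step 320: define a subarea of the full frame around the ROI,
            # padded so a moving object stays inside it for a while.
            x, y, w, h = roi
            subarea = (max(0, x - subarea_margin), max(0, y - subarea_margin),
                       w + 2 * subarea_margin, h + 2 * subarea_margin)
            # Steps 330/340: acquire only the subarea, at a higher frame rate,
            # for a predefined period of time.
            camera.set_roi(subarea)
            deadline = time.monotonic() + roi_dwell_s
            while time.monotonic() < deadline:
                process_roi(camera.read_roi())
            camera.clear_roi()                # back to full frames (repeat step 300)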
[0015] According to some embodiments of the invention, identifying a region of interest (step 310) may be carried out manually (e.g. in applications of large area surveillance), automatically (e.g. following fast moving single objects), or semi- automatically (e.g. following a moving object with priorities set by a user).
[0016] According to some embodiments of the invention, defining a subarea around an object (step 320) may be carried out automatically (e.g. after estimating the velocity of a moving object), semi-automatically (e.g. with a default that may be changed by supervisor) or manually (e.g. according to expected movements or frame rate considerations).
[0017] According to some embodiments of the invention, identifying a region of interest (step 310) and defining a subarea around an object (step 320) may be carried out utilizing two full frames and comparing differences between them.
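A minimal way to carry out this comparison, sketched here as an illustration (NumPy-based; the threshold and margin values are assumptions), is to difference two consequent full frames and take the bounding box of the changed pixels as the region of interest:

    import numpy as np

    def roi_from_difference(frame_a, frame_b, threshold=25, margin=10):
        """Return (x, y, width, height) of the bounding box of changed pixels
        between two consequent full frames, or None if nothing changed."""
        diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
        changed = diff > threshold
        if changed.ndim == 3:                  # collapse colour channels, if any
            changed = changed.any(axis=2)
        ys, xs = np.nonzero(changed)
        if xs.size == 0:
            return None
        x0 = max(int(xs.min()) - margin, 0)
        y0 = max(int(ys.min()) - margin, 0)
        x1 = min(int(xs.max()) + margin, changed.shape[1] - 1)
        y1 = min(int(ys.max()) + margin, changed.shape[0] - 1)
        return (x0, y0, x1 - x0 + 1, y1 - y0 + 1)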
[0018] According to some embodiments of the invention, identifying a region of interest (step 310) and defining a subarea around an object (step 320) may be carried out according to expected movement, such as an expected initiation of motion of an object in the full frame or an expected entry of an object into the full frame. In these cases acquiring a single full frame may be sufficient for identifying a region of interest (step 310) and defining a subarea around an object (step 320), allowing a subarea to be acquired already after a single full frame.
[0019] According to some embodiments of the invention, multiple ROIs may be identified (step 310), defined (step 320) and acquired (step 330). A control application may manage acquiring several ROIs, comprising identifying objects and regions of interest, defining the sizes of the ROIs and subareas, and handling priorities.
[0020] According to some embodiments of the invention, the method may be carried out in real-time.
[0021] According to some embodiments of the invention, the method may further comprise defining the size of the region of interest according to a required frame rate
(step 340) as defined by a user or automatically to achieve a certain task.
[0022] According to some embodiments of the invention, the method may further comprise identifying an occurrence of an event within the region of interest (step 350).
[0023] According to some embodiments of the invention, acquiring a subarea of the full frame (step 330) comprises reading a subset of pixels from the entirety of pixels contained in a full frame.
[0024] According to some embodiments of the invention, the method may further comprise detecting ejection of a projectile in a region of interest and detecting the location of impact of the projectile. According to some embodiments of the invention, the method may further comprise calculating at least one function depending on the velocity of the projectile.
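For the projectile case, the high ROI frame rate is what makes a velocity estimate from successive detections meaningful. The sketch below is an illustration under strong assumptions (straight-line motion, two detections, a planar target at x = x_target); it is not the computation claimed by the disclosure.

    def estimate_velocity(p0, p1, roi_frame_rate_hz):
        """p0, p1: (x, y) positions of the projectile in consecutive ROI frames."""
        dt = 1.0 / roi_frame_rate_hz
        return ((p1[0] - p0[0]) / dt, (p1[1] - p0[1]) / dt)

    def predict_impact(p, velocity, x_target):
        """Straight-line extrapolation to the plane x = x_target (ignores gravity and drag)."""
        vx, vy = velocity
        if vx == 0:
            return None                        # never reaches the target plane
        t = (x_target - p[0]) / vx
        if t < 0:
            return None                        # moving away from the target plane
        return (x_target, p[1] + vy * t)

    # Example: detections 1 ms apart (1000 fps ROI acquisition).
    v = estimate_velocity((100.0, 300.0), (108.0, 298.0), roi_frame_rate_hz=1000.0)
    print(v)                                   # (8000.0, -2000.0) units per second
    print(predict_impact((108.0, 298.0), v, x_target=500.0))   # (500.0, 200.0)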
[0025] Fig. 4 is a block diagram of a system for acquiring images of a region of interest at a high frame rate, according to some embodiments of the invention. The video acquisition system comprises a camera 400 communicating with a control unit 450. Control unit 450 may be operated or supervised by a user 99. The video acquisition system has a maximal frame rate for acquiring full frame images. Camera 400 comprises at least one sensor 410 for acquiring images (e.g. a CCD), a tracking module 420 for identifying objects and tracking their movements and, more generally, for defining regions of interest, a control module 430 for controlling the acquired ROI's (e.g. parts of the CCD area) and a communication module 440 for communicating data to control unit 450 and receiving instructions from control unit 450. Control unit 450 comprises an interactive display 460 controlled by a graphical user interface (GUI) 470, a control application 480 and a database 490 storing regions of interest, objects and data related to the former operation of the system. User 99 may control the identification of objects and regions of interest and the definition of ROI's and subareas, as well as other parameters, using interactive display 460 and GUI 470. Objects from database 490 may be compared to acquired objects (manually, automatically or semi-automatically). The control module 430 may be configured to further comprise data relating to the ROI. The control module 430 may be configured to acquire the ROI at a higher frame rate than required for acquisition of the full frame images, and optionally in a higher image quality than the acquisition quality of full frames.
[0026] According to some embodiments of the invention, the control module 430 may be configured to detect the ejection of a projectile in the ROI and detect the location of at least one moving object in the ROI. According to some embodiments of the invention, the control module 430 may be configured to calculate at least one function depending on the velocity of the moving objects, and to calculate the impact location of at least one moving object.
[0027] Fig. 5 is a block diagram of a camera for acquiring images of objects and regions of interest at a high frame rate, according to some embodiments of the invention. The camera comprises a lens 515, a sensor 510 and a controller 500 for acquiring an image 520. The sensor 510 provides acquired data 530 to the controller according to control data provided by the controller. The control data comprises prior art control data 540 and ROI control data 550 according to some embodiments of the invention. The prior art control data 540 comprises horizontal synchronization, vertical synchronization and clock signals, for synchronizing and timing the delivery of full frame data from the sensor 510. The ROI control data 550 comprises coordinates of start and end pixels of the ROI. For example, the coordinates may comprise the horizontal and vertical coordinates of the starting pixel, and the number of rows and columns included in the ROI. In another embodiment, the coordinates may comprise the horizontal and vertical coordinates of the starting pixel, and the horizontal and vertical coordinates of the ending pixel of the ROI. The ROI control data 550 may further comprise data related to the quality and frame rate of the ROI, e.g. the resolution of the analog to digital converter (e.g. 12 bit for the ROI versus 8 bit for the full frame) and the frame rate (e.g. 900 frames per second for the ROI versus 30 frames per second for the full frame). The sensor 510 may use the same acquired data 530 channel for providing ROI data as for full frame images.
[0028] According to some embodiments of the invention, the sensor 510 may comprise a CCD. According to some embodiments of the invention, the sensor 510 may comprise a digital CMOS image sensor. A subarea of the sensor may comprise some of the pixels in the sensor, corresponding to an area that is defined around an ROI.
[0029] According to some embodiments of the invention, a module for defining an ROI and acquiring only the ROI may be added to a video camera comprising a controller 500 and at least one sensor 510 acquiring full frame images. According to some embodiments of the invention, the module may comprise a controlling module 570 in the controller 500 operatively associated with an acquisition module 580 in the sensor 510. The controlling module 570 is arranged to define an ROI around an acquired object. The acquisition module 580 is arranged to acquire a subarea of the full image corresponding to the ROI. Acquiring only the ROI is carried out at a higher frame rate than the maximal frame rate of the video camera when acquiring full images. According to some embodiments of the invention, acquiring only the ROI is carried out in a higher image quality than the acquisition quality of full frames.
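To make the ROI control data 550 of paragraph [0027] concrete, the sketch below bundles the fields mentioned there into a small structure; the field names, the dataclass itself and the default values are assumptions for illustration, since the disclosure does not define a register layout or wire format.

    from dataclasses import dataclass

    @dataclass
    class RoiControlData:
        start_row: int            # vertical coordinate of the starting pixel
        start_col: int            # horizontal coordinate of the starting pixel
        num_rows: int             # number of rows included in the ROI
        num_cols: int             # number of columns included in the ROI
        adc_bits: int = 12        # e.g. 12 bit for the ROI versus 8 bit for full frames
        frame_rate_hz: int = 900  # e.g. 900 fps for the ROI versus 30 fps for full frames

        def end_pixel(self):
            """The alternative end-pixel encoding of the same ROI (a zero-based,
            inclusive convention is assumed here)."""
            return (self.start_row + self.num_rows - 1,
                    self.start_col + self.num_cols - 1)

    roi = RoiControlData(start_row=140, start_col=280, num_rows=100, num_cols=200)
    print(roi.end_pixel())        # (239, 479)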
[0030] Fig. 6 is a frame image illustrating the ROI control data, according to some embodiments of the invention. The figure presents an ROI frame 610 within a full frame 600. The corner pixels of the full frame are indicated as (0,0), (0,M), (N,0) and (N,M), N and M denoting the number of lines and columns in the full frame respectively (e.g. 1024 and 768). The corner pixels of the ROI may change according to the specific ROI defined by the controller and the system. In Fig. 6 a momentary ROI is defined by the corner pixels (X,Y), (X,Y+B), (X+A,Y) and (X+A,Y+B) (e.g. a 100 x 200 ROI with the corner pixels (140,280), (140,480), (240,280), (240,480)). According to some embodiments of the invention, vertical and horizontal synchronization signals may specify the numbers of the start and end lines and of the start and end columns, respectively. The synchronization signals may be sent from a controlling module to the camera, and may define a subarea of the full frame that is smaller than the full frame.
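A sketch of the bookkeeping implied by Fig. 6 (the function and its argument names are assumptions, not part of the disclosure): given the corner parameters X, Y, A and B, derive the start and end lines and columns that the synchronization signals would specify, and check that the subarea fits inside the N x M full frame.

    def roi_window(x, y, a, b, n_lines, m_columns):
        """Start/end lines and columns for a momentary ROI with corner pixels
        (x, y) .. (x + a, y + b), checked against an N x M full frame."""
        if not (0 <= x and x + a <= n_lines and 0 <= y and y + b <= m_columns):
            raise ValueError("ROI does not fit inside the full frame")
        return {"start_line": x, "end_line": x + a,
                "start_column": y, "end_column": y + b}

    # The example of Fig. 6: a 100 x 200 ROI inside a 1024 x 768 full frame.
    print(roi_window(x=140, y=280, a=100, b=200, n_lines=1024, m_columns=768))
    # {'start_line': 140, 'end_line': 240, 'start_column': 280, 'end_column': 480}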
[0031] Fig. 7 is a block diagram illustrating an integrated circuit for acquiring images of objects and regions of interest at a high frame rate, according to some embodiments of the invention. The integrated circuit comprises an array of photo sensitive cells 700 connected to a plurality of wiring elements 710. The wiring elements 710 are arranged to be controlled by a controller 730 (not part of the integrated circuit), with or without an intermediate switching element 720 that may mediate control commands from the controller 730 to the wiring elements 710. The wiring elements 710 may be connected to a large proportion of the photo sensitive cells 700, and arranged to allow access to subsets of photo sensitive cells 700. The photo sensitive cells 700 may comprise a CCD or a CMOS sensor.
[0032] According to some embodiments of the invention, the photo sensitive cells 700 may be organized into subsets 740 of photo sensitive cells. The subsets 740 may partially overlap and be of different sizes. The wiring elements 710 may be connected to the photo sensitive cells 700 in such a way that each subset 740 of photo sensitive cells is connected to at least one wiring element 710. The wiring elements 710 may be arranged to allow individual access to each subset 740 of photo sensitive cells 700 and permit output of data from each subset 740 on its own, without having to output data from other photo sensitive cells 700 outside the subset 740.
[0033] According to some embodiments of the invention, the wiring elements 710 may be operatively associated with a controller 730. The controller 730 may enable reading data from a subset 740 of the plurality of photo sensitive cells 700 alone. The integrated circuit may further comprise the switching element 720 connected to the wiring elements
710 in the integrated circuit and to the controller 730 outside the integrated circuit. The wiring elements 710 may be operatively associated with the switching element 720. The switching element 720 may mediate the control commands from the controller 730 to the wiring elements 710.
[0034] According to some embodiments of the invention, the integrated circuit may be constructed in a way that allows outputting data from a subset of photo sensitive cells 700 comprising a region of interest. The wiring elements 710 may be connected to a subgroup of photo sensitive cells 700, and the controller 730 or switching element 720 may decide, for each region of interest, which of the wiring elements 710 should be used to receive data from the appropriate subset 740 comprising the region of interest and possibly some peripheral photo sensitive cells 700. Taking data only from a subset 740 of photo sensitive cells 700 in the integrated circuit, and not from all photo sensitive cells 700, may allow acquiring data from this subset 740 more frequently than acquiring data from all photo sensitive cells 700, thus allowing the acquisition of the region of interest at a higher frame rate.
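A toy model of the Fig. 7 readout scheme, given here as an illustration only: the cell array is assumed to be split into rectangular tiles with one wiring element per tile, and the controller (or switching element) selects just the wiring elements whose tiles cover the requested region of interest. The tile size and frame dimensions are illustrative assumptions.

    TILE = 64   # tile edge length in cells; an assumed, illustrative value

    def wiring_elements_for_roi(x0, y0, width, height):
        """Return the set of (tile_row, tile_col) wiring elements needed to read
        the ROI; cells outside these tiles are not read out at all."""
        rows = range(y0 // TILE, (y0 + height - 1) // TILE + 1)
        cols = range(x0 // TILE, (x0 + width - 1) // TILE + 1)
        return {(r, c) for r in rows for c in cols}

    full_tiles = wiring_elements_for_roi(0, 0, 768, 1024)     # whole 768 x 1024 array
    roi_tiles = wiring_elements_for_roi(280, 140, 200, 100)   # a 200 x 100 subarea
    # Reading only a handful of tiles (8 of 192 here) is what allows the subset
    # to be read out more frequently than the full array.
    print(len(roi_tiles), "of", len(full_tiles), "wiring elements used for the ROI")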
[0035] In the above description, an embodiment is an example or implementation of the inventions. The various appearances of "one embodiment," "an embodiment" or "some embodiments" do not necessarily all refer to the same embodiments.
[0036] Although various features of the invention may be described in the context of a single embodiment, the features may also be provided separately or in any suitable combination. Conversely, although the invention may be described herein in the context of separate embodiments for clarity, the invention may also be implemented in a single embodiment.
[0037] Reference in the specification to "some embodiments", "an embodiment", "one embodiment" or "other embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments, of the inventions.
[0038] It is understood that the phraseology and terminology employed herein are not to be construed as limiting and are for descriptive purposes only.
[0039] The principles and uses of the teachings of the present invention may be better understood with reference to the accompanying description, figures and examples.
[0040] It is to be understood that the details set forth herein are not to be construed as a limitation on the applications of the invention.
[0041] Furthermore, it is to be understood that the invention can be carried out or practiced in various ways and that the invention can be implemented in embodiments other than the ones outlined in the description above.
[0042] It is to be understood that where the claims or specification refer to "a" or "an" element, such reference is not to be construed as meaning that there is only one of that element.
[0043] It is to be understood that where the specification states that a component, feature, structure, or characteristic "may", "might", "can" or "could" be included, that particular component, feature, structure, or characteristic is not required to be included.
[0044] Where applicable, although state diagrams, flow diagrams or both may be used to describe embodiments, the invention is not limited to those diagrams or to the corresponding descriptions. For example, flow need not move through each illustrated box or state, or in exactly the same order as illustrated and described.
[0045] Methods of the present invention may be implemented by performing or completing manually, automatically, or a combination thereof, selected steps or tasks.
[0046] The term "method" may refer to manners, means, techniques and procedures for accomplishing a given task including, but not limited to, those manners, means, techniques and procedures either known to, or readily developed from known manners, means, techniques and procedures by practitioners of the art to which the invention belongs.
[0047] The descriptions, examples, methods and materials presented in the claims and the specification are not to be construed as limiting but rather as illustrative only.
[0048] Meanings of technical and scientific terms used herein are to be commonly understood as by one of ordinary skill in the art to which the invention belongs, unless otherwise defined.
[0049] The present invention can be implemented in the testing or practice with methods and materials equivalent or similar to those described herein.
[0050] While the invention has been described with respect to a limited number of embodiments, these should not be construed as limitations on the scope of the invention, but rather as exemplifications of some of the preferred embodiments. Those skilled in the art will envision other possible variations, modifications, and applications that are also within the scope of the invention. Accordingly, the scope of the invention should not be limited by what has thus far been described, but by the appended claims and their legal equivalents.

Claims

CLAIMS
What is claimed is:
1. In a video acquisition system with a maximal frame rate for acquiring full frame images, a method for acquiring regions of interest at a higher frame rate than said maximal frame rate, said method comprising: acquiring at least one consequent full frame, identifying at least one region of interest in said frames, defining at least one subarea of a full frame around said at least one region of interest, acquiring said subarea at said higher frame rate for a predefined period of time, wherein said higher frame rate is achieved by acquiring only said subarea corresponding to said region of interest.
2. The method of claim 1, wherein said acquiring said subarea comprises reading a subset of pixels from the entirety of pixels of said full frames.
3. The method of claim 1, wherein said acquiring said subarea is carried out in a higher image quality than the acquisition quality of full frames.
4. The method of claim 1, wherein said acquiring said subarea is carried out in a larger number of bits per pixel than acquisition quality of full frames.
5. The method of claim 1, further comprising displaying the latest acquired image of said subarea upon pausing the acquisition by said video acquisition system.
6. The method of claim 1, wherein said identifying a region of interest in said frames is carried out manually by a user.
7. The method of claim 1, wherein said defining a subarea of a full frame around said region of interest is carried out manually by a user.
8. The method of claim 1, wherein said identifying a region of interest in said frames is carried out automatically by identifying differences between said at least one consequent full frame acquired.
9. The method of claim 1, wherein said defining a subarea of a full frame around said region of interest is carried out automatically and relating to said region of interest.
10. The method of claim 1, wherein said identifying a region of interest in said frames is carried out according to expected movement in said at least one full frame.
11. The method of claim 1, wherein said method is carried out in real-time.
12. The method of claim 1, further comprising defining the size of said region of interest according to a required frame rate.
13. The method of claim 1, further comprising identifying an occurrence of an event within said region of interest.
14. The method of claim 13, wherein said event comprises ejection of a projectile, the method further comprising detecting the location of impact of said projectile.
15. The method of claim 13, wherein said event comprises ejection of a projectile, the method further comprising calculating at least one function depending on the velocity of said projectile.
16. An integrated circuit comprising: a plurality of photo sensitive cells exhibiting photo sensitive cells subsets; a plurality of wiring elements connected to a plurality of said photo sensitive cells such that each subset of photo sensitive cells is connected to at least one wiring element; wherein wiring elements are operatively associated with a controller and arranged to allow individual access to each subset of photo sensitive cells.
17. The integrated circuit of claim 16, further comprising a switching element operatively associated with said plurality of wiring elements and with said controller, said switching element mediates the control commands from said controller to said wiring elements.
18. The integrated circuit of claim 16, wherein said plurality of photo sensitive cells comprises one of the following: a CCD, a CMOS sensor.
19. A module in a video camera comprising a controller and at least one sensor acquiring full frame images, said module comprising: a controlling module in said controller, said controlling module arranged to define an ROI around an acquired object, an acquisition module in said at least one sensor, said acquisition module operatively associated with said controlling module and arranged to acquire a subarea of the full image corresponding to said ROI.
20. The module of claim 19, wherein acquiring said subarea is carried out in a higher frame rate than the maximal frame rate of said video camera when acquiring full images.
21. The module of claim 19, wherein acquiring said subarea is carried out in a higher image quality than the image quality of full images.
22. A camera for acquiring images of objects at a high frame rate, said camera comprising: at least one sensor for acquiring full frame images, a tracking module for identifying objects and tracking their movements, said tracking module using at least two images acquired by said at least one sensor, a control module for defining an ROI around said objects, said control module operatively associated with said at least one sensor and configured to operate only at least one subarea of said at least one sensor corresponding to said ROI; wherein restricting the acquisition to said at least one subarea of said at least one sensor corresponding to said ROI permits a higher frame rate than required for acquisition of said full frame images.
23. The camera of claim 22, wherein said at least one sensor is a CCD, and wherein said at least one subarea comprises a non complete plurality of the pixels in said CCD.
24. The camera of claim 22, wherein said at least one sensor is a CMOS sensor.
25. The camera of claim 22, wherein said control module is configured to acquire said ROI at a higher image quality than the acquisition quality of full frames.
26. The camera of claim 22, wherein said control module is configured to calculate at least one function depending on the velocity of objects in said ROI.
27. The camera of claim 22, wherein said control module is configured to calculate impact location of at least one moving object in said ROI.
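
Illustrative sketch (not part of the claimed subject matter): the core workflow of claims 8-9 and 19-22 — acquire full frames, locate a region of interest by differencing consecutive full frames, then restrict readout to a sub-window around that region so a higher frame rate becomes possible — can be outlined roughly as follows in Python. The sensor calls (read_full_frame, set_subwindow, read_subwindow) are hypothetical placeholders for whatever sensor or driver API is actually used; only the control flow is meant to mirror the claims.

    import numpy as np

    def find_roi(prev_frame: np.ndarray, curr_frame: np.ndarray,
                 threshold: int = 25, margin: int = 16):
        """Locate a region of interest by differencing two consecutive full frames.

        Returns (x, y, w, h) of the bounding box around changed pixels,
        or None if no significant change is detected.
        """
        diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
        mask = diff > threshold
        if not mask.any():
            return None
        ys, xs = np.nonzero(mask)
        x0 = max(int(xs.min()) - margin, 0)
        y0 = max(int(ys.min()) - margin, 0)
        x1 = min(int(xs.max()) + margin, curr_frame.shape[1] - 1)
        y1 = min(int(ys.max()) + margin, curr_frame.shape[0] - 1)
        return x0, y0, x1 - x0 + 1, y1 - y0 + 1

    def acquire(sensor, num_fast_frames: int = 100):
        """Acquire full frames until motion is found, then switch to a fast
        sub-window readout around the detected ROI (hypothetical sensor API)."""
        prev = sensor.read_full_frame()
        while True:
            curr = sensor.read_full_frame()
            roi = find_roi(prev, curr)
            if roi is not None:
                sensor.set_subwindow(*roi)        # restrict readout to the ROI
                return [sensor.read_subwindow()   # higher frame rate is now possible
                        for _ in range(num_fast_frames)]
            prev = curr
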
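Claims 12, 20 and 23 tie the size of the acquired subarea to the attainable frame rate. A simplified first-order model, assuming the readout time dominates the frame period and scales linearly with the number of rows read, gives the following back-of-the-envelope sizing helper; the function name and the linear-scaling assumption are illustrative and not taken from the application.

    def max_roi_rows(full_rows: int, full_frame_rate: float,
                     required_frame_rate: float) -> int:
        """First-order estimate of the largest ROI height (in rows) that still
        meets a required frame rate, assuming readout time scales linearly with
        the number of rows read and dominates the frame period."""
        if required_frame_rate <= full_frame_rate:
            return full_rows                  # full frames are already fast enough
        return max(1, int(full_rows * full_frame_rate / required_frame_rate))

    # Example: a 1080-row sensor limited to 60 fps for full frames could sustain
    # roughly a 108-row sub-window at 600 fps under this simplified model.
    print(max_roi_rows(1080, 60.0, 600.0))    # -> 108
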
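Claims 13-15 and 26-27 mention detecting an event such as ejection of a projectile, calculating velocity-dependent functions and locating the impact. A minimal sketch of one way this could be done on the high-rate sub-window stream — assuming a static background, a single moving object, a calibrated meters-per-pixel scale and a known frame rate; all function and parameter names here are illustrative.

    import numpy as np

    def centroid(frame: np.ndarray, background: np.ndarray, threshold: int = 25):
        """Centroid (x, y) of pixels that differ from the background, or None."""
        mask = np.abs(frame.astype(np.int16) - background.astype(np.int16)) > threshold
        if not mask.any():
            return None
        ys, xs = np.nonzero(mask)
        return float(xs.mean()), float(ys.mean())

    def velocity_and_impact(frames, background, frame_rate_hz: float,
                            meters_per_pixel: float, stop_eps: float = 0.5):
        """Estimate projectile speed (m/s) between consecutive sub-window frames
        and report the first position where motion effectively stops (impact)."""
        velocities, positions = [], []
        prev = None
        for frame in frames:
            pos = centroid(frame, background)
            if pos is None:
                continue
            positions.append(pos)
            if prev is not None:
                dx, dy = pos[0] - prev[0], pos[1] - prev[1]
                pixels_per_frame = (dx * dx + dy * dy) ** 0.5
                velocities.append(pixels_per_frame * frame_rate_hz * meters_per_pixel)
                if pixels_per_frame < stop_eps:   # motion stopped -> treat as impact
                    return velocities, pos
            prev = pos
        return velocities, (positions[-1] if positions else None)
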
PCT/IL2008/000654 2007-05-08 2008-05-11 Acquiring regions of interest at a high frame rate WO2008136007A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US91657607P 2007-05-08 2007-05-08
US60/916,576 2007-05-08

Publications (2)

Publication Number Publication Date
WO2008136007A2 (en) 2008-11-13
WO2008136007A3 (en) 2010-02-25

Family

ID=39944108

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2008/000654 WO2008136007A2 (en) 2007-05-08 2008-05-11 Acquiring regions of interest at a high frame rate

Country Status (1)

Country Link
WO (1) WO2008136007A2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107194925A (en) * 2017-05-31 2017-09-22 上海联影医疗科技有限公司 Image processing method and system
CN109753957B (en) * 2018-12-07 2020-11-27 东软集团股份有限公司 Image significance detection method and device, storage medium and electronic equipment

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5393064A (en) * 1988-06-20 1995-02-28 Beard, Iii; Bryce P. Apparatus and method for determining projectile impact locations
US6618084B1 (en) * 1997-11-05 2003-09-09 Stmicroelectronics, Inc. Pixel correction system and method for CMOS imagers
US20030174772A1 (en) * 2001-09-12 2003-09-18 Transchip, Inc. Systems and methods for utilizing activity detection information in relation to image processing
US20060062306A1 (en) * 2004-09-18 2006-03-23 Samsung Electronics Co., Ltd. Method for motion estimation based on hybrid block matching and apparatus for converting frame rate using the method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CARUSO ET AL.: 'Image Editing with Adobe Photoshop 6.0.' RADIOGRAPHICS, [Online] vol. 22, no. 4, July 2002, pages 993 - 1002 Retrieved from the Internet: <URL:http://radiographics.rsnajnls.org/cgi/reprint/22/4/993> [retrieved on 2008-10-19] *

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010133210A1 (en) * 2009-05-19 2010-11-25 Mobotix Ag Digital video camera
US9100564B2 (en) 2009-05-19 2015-08-04 Mobotix Ag Digital video camera
US11947096B2 (en) 2009-10-28 2024-04-02 Alentic Microscience Inc. Microscopy imaging
US10900999B2 (en) 2009-10-28 2021-01-26 Alentic Microscience Inc. Microscopy imaging
US10866395B2 (en) 2009-10-28 2020-12-15 Alentic Microscience Inc. Microscopy imaging
US11294160B2 (en) 2009-10-28 2022-04-05 Alentic Microscience Inc. Microscopy imaging
US11635447B2 (en) 2009-10-28 2023-04-25 Alentic Microscience Inc. Microscopy imaging
CN102088560A (en) * 2009-12-02 2011-06-08 精工爱普生株式会社 Imaging device, imaging method and imaging program
EP2355491A1 (en) * 2009-12-02 2011-08-10 Seiko Epson Corporation Imaging device, imaging method and imaging program
US8964094B2 (en) 2009-12-02 2015-02-24 Seiko Epson Corporation Imaging device, imaging method and imaging program for producing image data on the basis of a plurality of signals transmitted from an image element
EP2565860A1 (en) * 2011-08-30 2013-03-06 Kapsch TrafficCom AG Device and method for detecting vehicle identification panels
US9025028B2 (en) 2011-08-30 2015-05-05 Kapsch Trafficcom Ag Device and method for detecting vehicle license plates
EP2786556A4 (en) * 2011-12-23 2015-09-02 Nokia Technologies Oy Controlling image capture and/or controlling image processing
US9473702B2 (en) 2011-12-23 2016-10-18 Nokia Technologies Oy Controlling image capture and/or controlling image processing
GB2503481B (en) * 2012-06-28 2017-06-07 Bae Systems Plc Surveillance process and apparatus
US9418299B2 (en) 2012-06-28 2016-08-16 Bae Systems Plc Surveillance process and apparatus
WO2014001800A1 (en) * 2012-06-28 2014-01-03 Bae Systems Plc Surveillance process and apparatus
GB2503481A (en) * 2012-06-28 2014-01-01 Bae Systems Plc Increased frame rate for tracked region of interest in surveillance image processing
WO2014016286A1 (en) * 2012-07-27 2014-01-30 Robert Bosch Gmbh Method and device for determining situation data based on image data
US10768078B2 (en) 2013-02-06 2020-09-08 Alentic Microscience Inc. Sample processing improvements for quantitative microscopy
US11598699B2 (en) 2013-02-06 2023-03-07 Alentic Microscience Inc. Sample processing improvements for quantitative microscopy
EP3570138A1 (en) * 2013-04-29 2019-11-20 Tobii AB Power efficient image sensing apparatus, method of operating the same and eye/gaze tracking system
US10809512B2 (en) 2013-06-26 2020-10-20 Alentic Microscience Inc. Sample processing improvements for microscopy
US11874452B2 (en) 2013-06-26 2024-01-16 Alentic Microscience Inc. Sample processing improvements for microscopy
US10746979B2 (en) 2013-06-26 2020-08-18 Alentic Microscience Inc. Sample processing improvements for microscopy
US10824896B2 (en) 2017-05-31 2020-11-03 Shanghai United Imaging Healthcare Co., Ltd. Method and system for image processing
US11798168B2 (en) 2017-05-31 2023-10-24 Shanghai United Imaging Healthcare Co., Ltd. Method and system for image processing
US11461990B2 (en) 2017-05-31 2022-10-04 Shanghai United Imaging Healthcare Co., Ltd. Method and system for image processing
US11796389B2 (en) 2018-02-15 2023-10-24 Viavi Solutions Inc. Sensor device and methods of use
EP4194821A1 (en) * 2018-02-15 2023-06-14 Viavi Solutions Inc. Sensor device and methods of use
US11223761B2 (en) 2018-09-04 2022-01-11 Samsung Electronics Co., Ltd. Electronic device for obtaining images by controlling frame rate for external moving object through point of interest, and operating method thereof
EP3621292A1 (en) * 2018-09-04 2020-03-11 Samsung Electronics Co., Ltd. Electronic device for obtaining images by controlling frame rate for external moving object through point of interest, and operating method thereof
US11736823B2 (en) 2020-11-03 2023-08-22 Samsung Electronics Co., Ltd. Integrated high-speed image sensor and operation method thereof
EP3993401A1 (en) * 2020-11-03 2022-05-04 Samsung Electronics Co., Ltd. Integrated high-speed image sensor and operation method thereof
CN113286096B (en) * 2021-05-19 2022-08-16 中移(上海)信息通信科技有限公司 Video identification method and system
CN113286096A (en) * 2021-05-19 2021-08-20 中移(上海)信息通信科技有限公司 Video identification method and system

Also Published As

Publication number Publication date
WO2008136007A3 (en) 2010-02-25

Similar Documents

Publication Publication Date Title
WO2008136007A2 (en) Acquiring regions of interest at a high frame rate
KR101116789B1 (en) Supervisory camera apparatus and video data processing method
JP4140591B2 (en) Imaging system and imaging method
JP4847165B2 (en) Video recording / reproducing method and video recording / reproducing apparatus
KR100883632B1 (en) System and method for intelligent video surveillance using high-resolution video cameras
US8792681B2 (en) Imaging system and imaging method
US20120140067A1 (en) High Definition Imaging Over Legacy Surveillance and Lower Bandwidth Systems
US9154700B2 (en) Apparatus and method for image capture using image stored in camera
US8780203B2 (en) Video recording apparatus, video recording system and video recording method executed by video recording apparatus
CN102348067B (en) Image capturing apparatus and control method therefor
EP3225022B1 (en) Method and apparatus for improving resolution in a tdi image
US20170142315A1 (en) Imaging apparatus equipped with a flicker detection function
EP3261331A2 (en) Dual mode image sensor and method of using same
KR101025133B1 (en) Video surveillance system and video surveillance method thereof
JP2007281555A (en) Imaging apparatus
JP2006203395A (en) Moving body recognition system and moving body monitor system
US8699750B2 (en) Image processing apparatus
KR100995949B1 (en) Image processing device, camera device and image processing method
WO2006067547A1 (en) Method for extracting of multiple sub-windows of a scanning area by means of a digital video camera
EP2713608B1 (en) A device and a method for image acquisition
JP5069091B2 (en) Surveillance camera and surveillance camera system
KR100763969B1 (en) Auto image pickup apparatus and computer-readable recording medium recorded with program for controlling the same
KR101077777B1 (en) Network camera system, method for processing video data thereof and method for managing thereof
US9253450B2 (en) Total bus surveillance system
US11128835B2 (en) Data transmission method, camera and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 08751347; Country of ref document: EP; Kind code of ref document: A2)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 08751347; Country of ref document: EP; Kind code of ref document: A2)