WO2015067750A1 - Method and apparatus for acquiring images - Google Patents

Method and apparatus for acquiring images

Info

Publication number
WO2015067750A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
interest
loop buffer
previous
Application number
PCT/EP2014/074036
Other languages
French (fr)
Inventor
Franciscus Martinus Wilhelmus KANTERS
Thomas Hans Donatus BERGER
Original Assignee
Incatec B.V.
Application filed by Incatec B.V.
Publication of WO2015067750A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/67: Focus control based on electronic image sensor signals
    • H04N 23/673: Focus control based on electronic image sensor signals based on contrast or high frequency components of image signals, e.g. hill climbing method
    • H04N 23/69: Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • H04N 23/70: Circuitry for compensating brightness variation in the scene
    • H04N 23/73: Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N 23/743: Bracketing, i.e. taking a series of images with varying exposure conditions

Definitions

  • the images in which the object of interest is not in the FoV are automatically deleted from the loop buffer (this can be part of action 26 of figure 2).
  • This has two advantages: it frees up space in the loop buffer so that more relevant images can be held, and it ensures that the user will only have to make a selection from relevant images (images showing the bird) and not from "failed" images in which the bird was out of the field of view.
  • Figure 9 schematically shows an image acquisition apparatus 100 according to an embodiment of the invention.
  • the apparatus can be configured to execute any of the methods as described above.
  • the apparatus 100 comprises a controller 101 for controlling the subsystems of the apparatus.
  • the controller 101 will generally comprise a programmable microprocessor.
  • the controller is connected to a video processor 102 (e.g. an FPGA, ASIC, DSP, CPU or GPU, or any combination thereof), which in turn is connected to a video capture unit 103, such as a camera unit, and to a touch screen 105.
  • the video capture unit can receive zoom, focus, exposure time, and aperture settings from the controller 101 and can record images at an acquisition rate F.
  • the images are stored in a loop buffer in memory 106.
  • the touch screen 105 is configured to display images from the capture unit 103 or the memory 106. It can also receive user inputs and provide the inputs to the controller 101.
  • the apparatus can have a further input module 107 for handling user inputs, for example trigger buttons provided on the apparatus housing, and an audio capture unit 104 for capturing audio.
  • the apparatus can also comprise a motion sensor 108 and/or an orientation sensor 109.
  • the sensors 108, 109 provide their inputs to the controller, so that the camera's motion and/or orientation (including derivative values such as acceleration and rate of change of inclination) can be a factor in image processing algorithms. In particular, the inputs can be used to help determine an object of interest, or to locate said object in subsequently recorded images (object tracking).
  • the recorded data is stored and processed in no particular order. Indeed, it is clear to the skilled person that the order of storing a captured image and processing the image is not important. It is possible to first process the image on the fly (e.g. by processing the stream of data as produced by the sensor chip of the capturing device) and then store it, or to first (temporarily) store the generated data and then process the stored data.

Abstract

The invention provides a method (20, 30) and apparatus (100) for acquiring images. The method comprises: recording (22, 31) an image using an image capture device (103); determining (34, 35) an object of interest in the image; storing the image in a loop buffer; processing (23, 32, 36) the image to obtain one or more image metric values; calculating (37), based on the image metric, an updated value for at least one operating setting of the image capture device, wherein the updated value is optimized to capture the object of interest; adjusting the operating setting of the image capture device using the updated value; repeating the above actions until a trigger is received.

Description

Method and apparatus for acquiring images
Field of the invention
[0001] The invention relates to a method and apparatus for acquiring images. In particular, it relates to a method and apparatus for acquiring images using a loop buffer.
Background of the invention
[0002] A camera 11 with a video loop buffer 10 is schematically shown in figure 1. The camera is configured to, after receiving a user trigger, start capturing images at a predetermined rate (e.g. 5 images per second). The video loop buffer 10 is configured to hold the n most recently captured images. When the user expects that a good image has been recorded, he or she presses the trigger again. The camera now stops acquiring images, so that the content of the loop buffer 10 is frozen. The camera 11 is provided with a (touch) screen 12 for displaying the images from the loop buffer and arrows for selecting a previous or next image. By browsing through the images, the user can select the best image for permanent storage. The remaining images can be deleted or later overwritten.
[0003] When taking pictures of moving objects (items, persons or animals), it is very hard to continually keep the objects in focus and to correctly time the shutter button. Current auto-focus features assist the user only in a limited way, because they focus at fixed positions (typically 5-30 points in the image), which makes them nearly useless when a specific moving object must be targeted over time. Furthermore, auto-focus features might even hinder the user by adjusting focus just as a shot is taken, i.e. during the exposure time of the light sensor or image recording device. While the above-described use of a video loop buffer can help reduce timing issues and enable "after the fact" photography, it does not solve the focus problem. Taking the perfect picture is therefore still a combination of skill and luck.
[0004] Known AutoFocus techniques do little to address this problem. Current AutoFocus systems typically have problems with fast-moving and relatively small objects. Especially when objects briefly disappear from the camera view, they are not easily found again by the AutoFocus system.
[0005] WO 2012/166044 provides a method of capturing images of a view using a camera. A number of different camera settings are used in sequence, so that the view is captured using a variety of settings. The user can then later select which camera setting was appropriate for the view. The sequence of camera settings can be predetermined before capture, or adaptively determined during the capture of the images based on high-level analysis of the captured images.
[0006] It is a goal of the invention to further improve a camera to overcome or reduce at least one of the above-mentioned drawbacks.
Summary of the invention
[0007] The invention provides a method for acquiring images, the method comprising:
- recording an image using an image capture device;
- storing the image in a loop buffer;
- determining an object of interest in the image;
- processing the image to obtain one or more image metric values;
- calculating, based on the image metric, an updated value for at least one operating setting of the image capture device, wherein the updated value is optimized to capture the object of interest;
- adjusting the operating setting of the image capture device using the updated value;
- repeating the above actions until a trigger is received.
[0008] The trigger thus stops the capturing of images in the loop buffer, so that (older) images are no longer overwritten after the trigger is received. The trigger can be a user trigger (e.g. the user presses a "capture" button on the camera) or the occurrence of a predetermined event (e.g. the loop buffer is filled, or a predetermined amount of time has passed since an earlier event).
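As an illustration only, the loop described in paragraph [0007] could be organised in software roughly as in the following Python sketch. It is a minimal sketch, not the claimed implementation: the camera object with its capture_frame and default_settings calls, and the compute_metrics, update_settings and trigger_received callables, are hypothetical placeholders standing in for the image capture device, the image metrics and the trigger described above.

```python
from collections import deque

def acquire_until_trigger(camera, buffer_size, compute_metrics,
                          update_settings, trigger_received):
    """Minimal sketch of the claimed loop: record an image, store it in a loop
    buffer, derive image metric values, update the operating settings, and
    repeat until a trigger freezes the buffer. All callables are placeholders."""
    loop_buffer = deque(maxlen=buffer_size)        # oldest image is overwritten automatically
    settings = camera.default_settings()
    while not trigger_received():
        image = camera.capture_frame(settings)     # record an image with the current settings
        loop_buffer.append(image)                  # store it in the loop buffer
        metrics = compute_metrics(list(loop_buffer))    # e.g. motion, blur, brightness
        settings = update_settings(settings, metrics)   # optimise for the object of interest
    return list(loop_buffer)                       # frozen buffer content, offered to the user
```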
[0009] In an embodiment according to the invention, the image capture device is adapted to capture images at a high frame rate, for example at least 50 frames/second, 100 frames/second, 200 frames/second, 300 frames/second, or more. Higher frame rates allow better and more reliable object tracking of objects of interest. Object tracking can be performed using feature point tracking methods, using SIFT or SURF or optical flow based methods, or a combination thereof.
[0010] Higher frame rates also advantageously allow an algorithm to try many different operating settings (e.g. focus, exposure time, iso, aperture) and to converge on an ideal set of capture settings. The term "operating setting" thus means any setting that influences the way the image capture device captures images.
[0011] Preferably, the components (image capture unit, memory, processor units, any dedicated signal processing units, etc) needed to perform the method are integrated in a single device, such as a camera or smartphone apparatus.
[0012] Currently, image capture devices that can operate at 300 frames/second or even 900 frames/second are commercially available. These devices can advantageously be used in an apparatus according to the invention.
[0013] In an embodiment according to the invention, the processing comprises detecting an imaged object in one or, preferably, a plurality of captured images. That way, an object of interest can be tracked in a plurality of images in the loop buffer. The operating setting of the image capture device can be optimized for optimal imaging of the object of interest.
[0014] In an embodiment according to the invention, processing the image to obtain an image metric comprises evaluating a function which takes the last m images as input, where m equals at least 2. Algorithms that use more than 2 images as input can be more robust against false detections. An object can be more reliably tracked.
[0015] In an embodiment according to the invention, processing the image to obtain one or more image metric values comprises at least one of:
- calculating one or more motion vectors, for the complete image or for a tracked object;
- calculating a blur metric; and
- calculating a brightness metric.
[0016] In an embodiment according to the invention, the operating setting of the image capture device comprises one of:
- zoom factor;
- focus;
- aperture;
- iso; and
- exposure time.
[0017] In an embodiment according to the invention, after S images are captured, where S is an integer larger than one, S-1 of the most recent S images are deleted from the loop buffer. In an embodiment, selected images are deleted from the loop buffer at regular intervals. For example, images in which the object of interest is not present may advantageously be deleted. It is also possible to delete every m-th image (where m is an integer > 1) to free up space in the loop buffer, at the cost of reducing the effective frame rate at which images are stored in the loop buffer.
[0018] In an embodiment according to the invention, the method comprises determining an object of interest based on the one or more image metric values.
[0019] In an embodiment according to the invention, the object of interest is determined based on a determined motion vector field. In an embodiment, an object is determined and/or tracked using feature points.
[0020] The invention further provides an apparatus for acquiring images, the apparatus comprising:
- a controller;
- an image capture unit for capturing images;
- a memory for storing a loop buffer,
wherein the controller is configured to implement a method as described in this application.
[0021] In an embodiment according to the invention, the apparatus further comprises:
- an acceleration sensor for determining motion of the apparatus;
wherein the controller is configured to use the signal from the acceleration sensor in an image processing function.
[0022] In an embodiment according to the invention, the apparatus further comprises:
- an orientation sensor for determining an orientation of the apparatus,
wherein the controller is configured to use the signal from the orientation sensor in an image processing function.
[0023] In an embodiment according to the invention, the image processing function uses the signal from the sensor to compensate calculated motion vectors for motion of the apparatus.
[0024] The invention further provides a computer storage medium comprising a computer program which, when executed on a processing unit of an image acquisition apparatus, causes said apparatus to behave according to any one of the methods described in this application.
[0025] The advantages of embodiments of the invention can be understood from the following use case. A nature photographer intends to capture a focused image of a frog in mid-jump. While the frog is sitting, the photographer aims his camera at the frog and enables the continuous loop buffer recording. He may manually select the frog as the imaged object of interest, or he may leave the camera on an automatic setting. As soon as the frog jumps, the automatic detection algorithm would identify the moving frog as the imaged object of interest. At high frame rates, e.g. at 300 frames/second, even a fast event such as the jump of the frog results in a series of captured images in which the movement of the frog is sufficiently gradual to allow the camera to track the frog by identifying the frog in each captured image in the loop buffer, and to adjust operating settings accordingly so that the frog in the captured images is optimised (e.g. optimal sharpness, depth of focus, lighting, etc). Immediately after the jump, the photographer disables the loop buffer recording. He can then, at leisure, select an image from the loop buffer for permanent storage. Alternatively, the loop buffer recording can be stopped automatically when the system detects large movements of the object of interest.
[0026] In an advantageous embodiment, the object tracking algorithms make use of a model or other description of the object of interest. This will allow the algorithms to re-find an object of interest, for example after it has moved off-frame for a while. This, for example, frequently occurs in animal photography. When a photographer follows a flying bird in loop buffer recording mode, the bird may occasionally be out of sight. An object tracking algorithm making use of persistent knowledge of the object to be tracked (e.g. a model) can continue tracking the bird after it comes back into view again.
[0027] The timing issue is resolved by utilizing the loop buffer, which continually stores images or frames for a limited time period upon activation via a trigger, so that the user is able to perfectly time a photograph even after the event of interest has taken place (provided, of course, the buffer has not exceeded its cycle time for the desired frame). Accessing the frames of the loop buffer can be done using the trigger, upon which the device halts input of new frames into the buffer and ceases deletion of frames already in the buffer. The user is then able to select the required frame(s).
[0028] The invention adds real-time "automated image processing" to this concept to increase the quality of stored frames dynamically and conditionally. The automated image processing will adjust focus, zoom, aperture and shutter speed (hereafter: exposure time), either separately or in any combination, dynamically, via object tracking and light intensity measurements processed in real time.
[0029] Application of these principles can drastically improve the image quality of stored frames. According to an embodiment of the invention, to obtain optimal focus for moving objects on the images in the video loop buffer, real-time automated image processing can be utilized to either follow a selected object or automatically detect an object to follow. After selection of an object, whether automatic or manual, its movement, relative movement and speed can be tracked and calculated; subsequently, the focus of the camera can be continually and progressively adjusted using predictive algorithms. The real-time automated image processing provides the required data for the algorithms. This enables a camera according to the invention to automatically adjust zoom and focus before exposure of the light sensor or image recording device, thus vastly increasing image quality. The same method can be used to automatically and dynamically adjust other features of the camera, such as aperture, shutter speed or exposure time, in any combination or separately.
[0030] For instance, envision a soccer ball travelling towards a camera at a reasonably high speed. Just using a video loop buffer will enable the user to photograph the ball at precisely the right time; however, with prior-art cameras it is nearly impossible to have that shot in optimal focus as well, because the user would have to dynamically adjust the focus to follow the ball while recording to the video loop buffer. The invention allows a user to capture a loop buffer with images where the ball is in focus. This is achieved by detecting that the object is moving towards the camera and the rate at which it is approaching, and by adjusting focus pre-emptively, accordingly and smoothly. When the user so wishes, the lighting conditions (aperture and exposure time) may also be adjusted for the required depth of field and lighting conditions, to further improve the image.
Brief description of the Figures
[0031] On the attached drawing sheets,
• figure 1 schematically shows a loop buffer and a camera device using a loop buffer;
• figure 2 schematically shows a method for acquiring an image according to an embodiment of the invention;
• figure 3a schematically shows a method for recording and processing an image according to an embodiment of the invention;
• figures 3b and 3c schematically show images in a loop buffer, according to an embodiment of the invention;
• figure 4 schematically shows a method for processing an image according to an embodiment of the invention;
• figures 5 and 6 schematically show examples of image processing actions according to an embodiment of the invention;
• figures 7 and 8 schematically show further examples of image processing actions according to an embodiment of the invention; and
• figure 9 schematically shows an image acquisition apparatus according to an embodiment of the invention.
Detailed description
[0032] Figure 1, which was briefly introduced in the background of the invention, schematically shows a loop buffer and a camera device 11 using a loop buffer 10. A loop buffer is also termed a circular buffer. It can be seen as a memory or storage area that has been divided into a number of slots, in figure 1 labelled 1, 2, 3, 4 ... n. Each slot can hold an image or frame. The circular aspect of the buffer is that when a next image, for example image n+1, is stored, it overwrites the oldest image (in this case, image 1). The loop buffer with capacity n thus contains the n most recently stored images. Of course, there is no need for a rigid division of the loop buffer's memory area into a number of equal-sized slots. The division can also be done ad hoc. That is, a memory manager can manage the available memory so that new images are stored as long as there is sufficient space. When an image is to be stored and there is not sufficient memory left in the loop buffer area, the memory manager will delete images, starting with the oldest, until enough memory is left. That way, the loop buffer can efficiently deal with images that have a variable size (for example, JPEG coded images).
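The "ad hoc" variant described above, where variable-size images (for example JPEG data) are kept within a fixed memory budget and the oldest images are deleted first when space runs out, could be sketched as follows. This is a sketch only; the byte-budget interface is an assumption, not part of the patent.

```python
from collections import deque

class VariableSizeLoopBuffer:
    """Sketch of a loop buffer for variable-size images: images are kept as long
    as they fit in a fixed memory budget; when a new image does not fit, the
    oldest images are deleted first. The budget is a hypothetical parameter."""

    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.used = 0
        self.images = deque()                    # oldest on the left, newest on the right

    def store(self, image_bytes):
        while self.images and self.used + len(image_bytes) > self.budget:
            oldest = self.images.popleft()       # delete the oldest image until enough space is left
            self.used -= len(oldest)
        self.images.append(image_bytes)
        self.used += len(image_bytes)
```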
[0033] The camera 11 includes a loop buffer. When the user uses the camera to take a picture, the camera actually records n images in the loop buffer. The user can then select, for example by pressing arrows on touch screen 12, the best image in the loop buffer for permanent storage.
[0034] Figure 2 schematically shows a method 20 for acquiring an image by an image capturing apparatus (such as a camera device or a mobile phone with camera) according to an embodiment of the invention. In action 21, it is determined if the apparatus is in a "sampling" state. If not, the action is terminated. In an embodiment, the apparatus enters the sampling state when receiving a trigger from the user, e.g. a "start sampling" command.
[0035] In the sampling state, the apparatus continuously, at least until another trigger is received in action 24, records images in the loop buffer (action 22) and processes them in action 23. The processing action 23 can advantageously refer not only to the latest added image, but also to earlier images. The processing action 23 can also comprise adjusting camera settings for the next image capture in action 22. Examples of processing will be explained in reference to figures 3-6.
[0036] The loop of capturing 22 and processing 23 images is repeated until a (user) trigger is received in action 24. When the trigger is received, the status of the apparatus is set to "selecting". The loop buffer is then effectively frozen: no new images are added and no older images are overwritten. However, in optional step 26, some images may be automatically deleted from the loop buffer. The images to be deleted may be automatically determined according to an algorithm.
[0037] The trigger in action 24 is not necessarily a user-provided trigger. It can also be a predetermined trigger. For example, the trigger can be generated when a loop buffer is filled, just before an oldest image is about to be overwritten. It can also be a predetermined moment after a first event, such as the start of motion of an object of interest. More examples of automatic triggers are described in reference to figures 7 and 8.
[0038] For example, images having too little or too much exposure may be automatically deleted. Also, images having a wrong focus may be automatically deleted. As will be discussed in reference to figures 3a-6, the apparatus may use adaptive algorithms to determine the optimal camera settings, such as focus, zoom, aperture, iso (gain), and exposure time. Especially in high-frequency acquisition modes, the camera can attempt several settings and evaluate the results. For example, the camera may try three levels of exposure time before settling on a good value. The images with an exposure time that (as it later turned out) was sub-optimal may then be automatically deleted.
[0039] Especially if the camera is equipped for high-frequency image capture using a loop buffer, it is advantageous if a number of redundant and sub-optimal images are automatically deleted, so that the user only has to choose between a set of quality images which have an adequately small time interval between them.
[0040] Where reference is made above to deleting an image from the loop buffer, it is understood that this can also comprise (temporarily) disabling the image so that it is excluded from the set offered to the user.
[0041] In action 27, a set of images from the loop buffer is provided to the user, for example in the manner described in reference to figure 1. The user can select one or more images for permanent storage in action 28.
[0042] Figure 3a schematically shows a method 30 for recording and processing an image according to an embodiment of the invention. It can be seen as an example implementation of actions 22 and 23.
[0043] In action 31, an image is captured using actual camera settings for zoom, focus, exposure time and aperture. The image is stored in the loop buffer. In action 32, the recent image or a number of recent images is processed. This processing can for example involve calculating image parameters such as brightness and sharpness. It can also comprise calculating a motion vector field, wherein the motion vectors indicate a displacement relative to an earlier image.
[0044] If the apparatus is set for automatic object tracking, as checked in action 33, the apparatus determines, in action 34, a main object based on a number of the most recent images. For example, it can be an object that is, on average, centered in the previous m images. It can also be an object that is moving, as determined by the motion vectors, relative to a stable background. The object can also be detected using pattern recognition methods using models of the object or templates from previous frames (or previous movies).
[0045] Alternatively, if the user has indicated an object to be tracked (for example, using a touch screen of the camera device), the apparatus finds the selected object in the recent image in action 35.
[0046] With the object identified, the apparatus evaluates the camera settings used to acquire the most recent image. These parameters can comprise, among others, zoom, focus, exposure time, and aperture. Based on the evaluation, adjusted parameters may be determined in action 37, for example using prediction algorithms to estimate the position of the object in the next frame. The camera can also mark an image for deletion if it can already be determined that the image is/will be sub-optimal, in action 38. The apparatus can evaluate a previous number of images that way.
[0047] In an embodiment, a camera parameter is updated for every new image capture moment. Alternatively, the parameter update cycle can occur at a lower rate than the image capture cycle. This is necessary if the camera hardware cannot adjust to the setting update in the (short) time between image captures.
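The patent does not prescribe a particular prediction algorithm for action 37. One simple possibility, shown purely as an illustration, is a constant-velocity extrapolation of the object's centre position over the most recent frames.

```python
import numpy as np

def predict_next_position(positions):
    """Constant-velocity predictor: estimate where the tracked object will be in
    the next frame from its centre positions in the most recent frames (oldest
    first, at least one (x, y) pair). This is one possible choice, not the
    prescribed algorithm."""
    pts = np.asarray(positions, dtype=float)     # shape (m, 2)
    if len(pts) < 2:
        return pts[-1]                           # no motion history yet
    velocity = pts[-1] - pts[-2]                 # displacement over the last frame interval
    return pts[-1] + velocity                    # extrapolate one frame ahead
```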
[0048] In an embodiment, the camera is configured to reduce the effective acquisition of images from F images/second to F/S images per second, so that the loop buffer can be used to store images for a longer time span. For example, assume that a buffer can hold 600 images. If the maximum acquisition rate F of the camera is 300 images per second, the buffer can hold images spanning two seconds. In an exemplary embodiment, the apparatus is configured to automatically determine picture quality (using a picture quality algorithm as described above and in the following examples) of the most recently acquired images, and keeps only the best image for every S input images. If S equals 4, the effective acquisition rate is reduced to F/S = 75 images per second, and the loop buffer can hold images spanning 8 seconds. This increases the chances that the user can find an image corresponding to a moment of time of interest.
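The numbers used in this example can be checked with a few lines of arithmetic (values taken directly from the paragraph above):

```python
buffer_slots = 600                     # images the loop buffer can hold
max_rate_fps = 300                     # maximum acquisition rate F
keep_one_in = 4                        # S: keep only the best image of every S captures

full_rate_span = buffer_slots / max_rate_fps       # 2.0 seconds at the full rate
effective_rate = max_rate_fps / keep_one_in        # 75 images per second
reduced_rate_span = buffer_slots / effective_rate  # 8.0 seconds at the reduced rate
print(full_rate_span, effective_rate, reduced_rate_span)
```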
[0049] In an embodiment, the device makes use of this redundancy by adopting a trial-and-error pattern. If the capture rate reduction factor is S, then various camera settings are tried for each set of S-1 images n-S+1, ..., n-1. After the S-1 experiments, the optimal settings are used for image n. At a later stage (e.g. after image n has been stored), images n-S+1, ..., n-1 can be deleted from the loop buffer to free space in the buffer. Then the cycle repeats itself, and again S-1 trial images are temporarily stored in the buffer, after which an optimal image is acquired and stored.
[0050] This is schematically illustrated in figure 3b. The frame n-3 is captured with the focus operating parameter of the acquisition unit set to value "A" and the exposure time operating parameter set to "E". In frame n-2, the focus was "B" and the exposure time was "E", while in frame n-1, the focus was "C" and the exposure time was "F". The image quality algorithm then determines that under current conditions, the optimal value for focus is "B" and the optimal value for exposure time is "F". Image n is then captured using these optimal values. After image n has been captured, images n-3, n-2 and n-1 may be deleted. In the previous iteration, after capture of image n-4, images n-7 through n-5 had been deleted.
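A possible software organisation of this trial-and-error cycle is sketched below. The camera interface and the quality scoring function are hypothetical placeholders; the paragraph above only requires that S-1 trial images are captured with varying settings, the best settings are reused for image n, and the trials can later be deleted.

```python
def bracket_and_capture(camera, candidate_settings, loop_buffer, quality):
    """Sketch of the figure 3b cycle: capture S-1 trial images with different
    settings, reuse the settings of the best-scoring trial for image n, and
    return the trial images so they can later be deleted from the loop buffer."""
    trials = []
    for settings in candidate_settings:             # the S-1 experiments
        image = camera.capture_frame(settings)
        loop_buffer.append(image)
        trials.append((quality(image), settings, image))
    best_settings = max(trials, key=lambda t: t[0])[1]
    final_image = camera.capture_frame(best_settings)   # image n, with the optimal settings
    loop_buffer.append(final_image)
    expendable = [t[2] for t in trials]             # trial images, deletable later
    return final_image, expendable
```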
[0051] Figure 3c schematically shows a further aspect of a loop buffer according to the invention. When the user is recording an event and the loop buffer is filling up, the loop buffer can be configured to automatically delete every m-th image in the loop buffer when the loop buffer is nearly full. This will reduce the choice for the user to select a captured image, but allows a longer time span to be captured in the loop buffer. So, in case an event (for example the flight of the bird in figure 8) takes longer than expected, the loop buffer can automatically start deleting intermediate images (indicated with a cross in figure 3b) so that a range of images representing a longer time period can be held in the loop buffer. The time lapse between images stored in the loop buffer thus becomes effectively dependent on the time lapse between the start and end of loop buffer capturing.
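The decimation behaviour of figure 3c could be sketched as follows; the fill threshold is an illustrative assumption.

```python
def decimate_when_nearly_full(buffer, m, capacity, fill_threshold=0.9):
    """Sketch of figure 3c: once the loop buffer is nearly full, drop every m-th
    image so that the remaining slots can cover a longer time span."""
    if len(buffer) >= fill_threshold * capacity:
        return [img for i, img in enumerate(buffer) if (i + 1) % m != 0]
    return list(buffer)
```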
[0052] Figure 4 schematically shows a method 40 for processing an image according to an embodiment of the invention. In action 41, motion vectors are calculated using a motion estimation algorithm. The motion vectors describe, for each pixel or block of pixels, the displacement between a previous image (e.g. image n-1) and a next image (e.g. image n). The skilled person will have access to a variety of motion estimation algorithms, as they are widely used in e.g. video encoding algorithms such as MPEG and various computer vision applications.
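As one possible motion estimation algorithm (the text leaves the choice open), dense optical flow as implemented in OpenCV can produce the per-pixel motion vectors of action 41; block matching as used in MPEG encoders would serve equally well. The sketch assumes 8-bit grayscale input images.

```python
import cv2

def motion_vectors(prev_gray, next_gray):
    """Dense motion vector field between image n-1 and image n using Farneback
    optical flow (one possible motion estimation algorithm)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    return flow                                   # shape (H, W, 2): displacement in x and y
```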
[0053] In action 42, a blur metric is calculated for each pixel or block of pixels in a recent image (e.g. image n). A blur metric can be based on high frequency presence (FFT), and/or based on spatial Gaussian derivatives (differential geometry). A blur metric can be used in an algorithm to determine if the focus level is optimal. In action 43, a brightness metric is calculated for each pixel or block of pixels in a recent image. This metric can be used to evaluate exposure time or aperture. The brightness metric can include a global intensity value, and/or local intensity and local contrast values.
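A simple per-block implementation of the blur and brightness metrics of actions 42 and 43 might look as follows. The variance of the Laplacian is used here as a stand-in for the high-frequency or Gaussian-derivative measures mentioned above, and the block size is an illustrative assumption.

```python
import cv2
import numpy as np

def block_metrics(gray, block=32):
    """Per-block blur and brightness metrics for a recent image (8-bit grayscale).
    Low Laplacian variance indicates a blurry block; the mean gives local intensity."""
    h, w = gray.shape
    blur, brightness = {}, {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            roi = gray[y:y + block, x:x + block]
            blur[(y, x)] = cv2.Laplacian(roi, cv2.CV_64F).var()
            brightness[(y, x)] = float(roi.mean())
    return blur, brightness
```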
[0054] Other possible metrics (not shown in figure 4) include object recognition, for example based on feature points (e.g. SIFT, SURF, etc) or based on motion (e.g. visual attention based on motion). Processing can include object tracking, for example based on feature points (e.g. SIFT, SURF, etc, for example as determined in one of the metrics). Object tracking can also or additionally be based on a flow field (using e.g. optical flow algorithms).
[0055] The invention is thus not limited to the metrics shown in figure 4. Other metrics may advantageously be calculated to assist the processing algorithms.
[0056] Figures 5 and 6 schematically show examples 50, 60 of image processing steps according to an embodiment of the invention. The series of images 51-54 corresponds to a sports scene where a ball B moves towards a goal. The background is held more or less steady; the ball B is the main moving object in the scene.
[0057] In a scene like this, the user will typically not have enough time to point out the object of interest using a touch screen user interface. However, since the camera unit according to the invention automatically processes images recorded in the loop buffer, the motion vector field can be calculated. The vector field corresponding to image 54 is shown schematically in field 55. The ball object B is clearly distinguished by the relatively large motion vectors in a small area of the screen. The camera algorithm can use this cue in the object determination action 34 of figure 3, and determine that the object B is in fact the object that should be in focus.
[0058] Figure 6 shows an alternative range 60 of images 61-64. Now the ball B is relatively steady in the image, whereas the background is moving rapidly (the user of the camera is tracking the ball). Again, the ball, and therefore the object of interest, can be found by evaluating the motion vector field 65. In this case, the area of the screen with a motion that is sharply different from the background (panning) motion may be detected as corresponding to a relevant object. Again, the ball can be found in that manner.
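Both situations (figure 5, a fast ball against a steady background, and figure 6, a steady ball against a panning background) can be handled by the same simple rule: flag the image region whose motion deviates strongly from the dominant motion of the field. The sketch below uses the median as the dominant motion and an illustrative deviation threshold.

```python
import numpy as np

def salient_motion_mask(flow, threshold=3.0):
    """Mark pixels whose motion differs sharply from the dominant motion in a
    dense flow field of shape (H, W, 2). The marked region is a candidate
    object of interest (ball B in figures 5 and 6)."""
    dominant = np.median(flow.reshape(-1, 2), axis=0)    # background or panning motion
    deviation = np.linalg.norm(flow - dominant, axis=2)  # per-pixel deviation magnitude
    return deviation > threshold                         # boolean mask of salient motion
```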
[0059] In an embodiment, the apparatus can make use of further cues. For example, the central position of the distinguishing motion vectors may be a cue that this is an object of interest. In addition, the apparatus can comprise a motion sensor so that a rotational motion of the camera can be detected. That way, the camera can determine, by comparing the calculated motion vectors with the input from the motion sensor, which parts of the image correspond to the background (i.e. are stationary in the fixed scene and thus have motion vectors that track the motion sensor input) and which are moving relative to the fixed scene.
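A minimal sketch of this background/foreground separation is given below. It assumes that the motion sensor reading has already been converted into an expected background displacement in pixels (a conversion that depends on the optics and is outside the scope of this sketch), and the names and tolerance are illustrative assumptions.

    import numpy as np

    def split_background_foreground(flow, expected_bg_motion, tol=2.0):
        """Classify pixels as background (motion consistent with the sensed camera
        rotation/panning) or foreground (moving relative to the fixed scene)."""
        deviation = np.linalg.norm(flow - np.asarray(expected_bg_motion, dtype=np.float32),
                                   axis=2)
        foreground = deviation > tol
        return ~foreground, foreground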
[0060] Figure 7 schematically shows a further example of image processing actions according to an embodiment of the invention. Figure 7 comprises a series of images with boundaries or frames 71-78. In image 71, the user selects the frog F as object of interest, for example by touching the touch screen of a camera on or near the location where the frog is displayed. A square is drawn in image 71 to indicate that the frog F is selected as object of interest. In an embodiment, the loop buffer is already operating (i.e. capturing images). In an alternative embodiment, the act of selecting the object of interest starts the collection of images in the loop buffer. In a further embodiment, the collection of images is only started when the object of interest starts moving.
[0061] After selection of the object of interest, the camera determines feature points of the object, for example using SIFT or SURF algorithms. Exemplary feature points P are indicated in frame 72. While the object is stationary, it is easily tracked.
[0062] When the object starts to move (image 73), it will be tracked automatically. The camera detects in frame 73 the new position of frog F. This can be done using, for example, motion vectors M combined with the previous location of F as an indication. Alternatively or in addition, the object is tracked by finding the feature points P again (image 74).
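The feature-point based re-detection of frames 73-74 could, for example, look like the sketch below, which matches the stored descriptors of the object against the descriptors of the new frame. ORB and a brute-force matcher are assumptions of this example, not requirements of the disclosure.

    import cv2
    import numpy as np

    def track_object(object_descriptors, next_img, max_matches=30):
        """Re-locate the object of interest in the next frame; returns the estimated
        position (centroid of matched keypoints) or None if the object is not found."""
        orb = cv2.ORB_create(nfeatures=1000)
        gray = cv2.cvtColor(next_img, cv2.COLOR_BGR2GRAY)
        keypoints, descriptors = orb.detectAndCompute(gray, None)
        if object_descriptors is None or descriptors is None:
            return None  # object not visible, e.g. it has left the field of view
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(object_descriptors, descriptors),
                         key=lambda m: m.distance)
        if not matches:
            return None
        pts = np.float32([keypoints[m.trainIdx].pt for m in matches[:max_matches]])
        return pts.mean(axis=0)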
[0063] The frog F can be tracked throughout the jump, in frames 75-77, using any of the object tracking mechanisms disclosed in this application. While the object is being tracked, the camera parameters are continually optimized to capture the object's image. When the object stops moving again, the camera can automatically end the loop buffer capturing. This can also be selected automatically. In a further alternative embodiment, the loop buffer capturing is automatically stopped when the loop buffer is about to overwrite the images corresponding to the start of the movement. That way, the user can be assured that at least the start of the movement is not overwritten, even if the total movement takes more time than the loop buffer can hold images for. Alternatively, the automatic image decimating feature as explained in reference to figure 3b can be used to ensure that the images corresponding to the start of the movement are not (all) overwritten.
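The guard that stops capturing before the start of the movement is overwritten can be expressed very simply; the sketch below counts frames from the first recorded frame and is an illustrative assumption of how the check could be made.

    def should_stop_capturing(frames_written, buffer_capacity, movement_start_index):
        """Return True when recording one more frame would cause the loop buffer to
        overwrite the frame in which the tracked movement started."""
        oldest_after_next_write = frames_written + 1 - buffer_capacity
        return oldest_after_next_write > movement_start_index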
[0064] From the complete set S of captures (composite frame 78), the user can select the best capture(s) for long-term storage.
[0065] Figure 8 shows a further example of object tracking. In this case, bird B is selected (either automatically or manually) as object of interest in sketch 81. The inner rectangle, indicated FoV, represents the frame or Field of View of the camera. In sketch 82, the bird B is still in the FoV. Feature points P are used to track the bird.
[0066] In sketch 83, the bird B has left the FoV and is thus not visible to the camera; the bird's image is therefore not present in the images being captured. The feature points P cannot be found in the images being captured. In sketch 84, the bird B is back in the FoV. The feature points P are found again by the detection algorithms, so that the camera can continue tracking the object of interest, bird B.
[0067] In the experience of the applicant, many prior art cameras have a tendency to select, after an object of interest has left the FoV, a more or less random object X (for example the tree in sketch 84) as object of interest, for example because it just happens to be in focus. Embodiments of the present invention overcome that drawback through improved object tracking.
[0068] In an embodiment of the invention, the images in which the object of interest is not in the FoV (and which thus do not capture the object of interest) are automatically deleted from the loop buffer (this can be part of action 26 of figure 2). This has two advantages: it frees up space in the loop buffer so that more relevant images can be held, and it ensures that the user only has to make a selection from relevant images (images showing the bird) and not from "failed" images in which the bird was out of the field of view.
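A sketch of this pruning step (which can be part of action 26) is given below; object_visible stands for any of the detection mechanisms described above and is an illustrative placeholder.

    def prune_loop_buffer(loop_buffer, object_visible):
        """Keep only the images in which the object of interest was actually found,
        freeing loop-buffer space and sparing the user from reviewing "failed" images."""
        return [img for img in loop_buffer if object_visible(img)]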
[0069] Figure 9 schematically shows an image acquisition apparatus 100 according to an embodiment of the invention. The apparatus can be configured to execute any of the methods described above.
[0070] The apparatus 100 comprises a controller 101 for controlling the subsystems of the apparatus. The controller 101 will generally comprise a programmable microprocessor. The controller is connected to a video processor 102 (e.g. an FPGA, ASIC, DSP, CPU or GPU, or any combination thereof), which in turn is connected to a video capture unit 103, such as a camera unit, and to a touch screen 105. The video capture unit can receive zoom, focus, exposure time, and aperture settings from the controller 101 and can record images at an acquisition rate F. The images are stored in a loop buffer in memory 106. The touch screen 105 is configured to display images from the capture unit 103 or the memory 106. It can also receive user inputs and provide the inputs to the controller 101. The apparatus can have a further input module 107 for handling user inputs, for example trigger buttons provided on the apparatus housing, and an audio capture unit 104 for capturing audio. The apparatus can also comprise a motion sensor 108 and/or an orientation sensor 109. The sensors 108, 109 provide their inputs to the controller, so that the camera's motion and/or orientation (including derivative values such as acceleration and rate of change of inclination) can be a factor in image processing algorithms. In particular, the inputs can be used to help determine an object of interest, or to locate said object in subsequently recorded images (object tracking).
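By way of illustration, the controller 101 could drive the capture/process/adjust cycle roughly as sketched below; all callables (record, process, compute_settings, trigger) are illustrative placeholders rather than interfaces defined in this disclosure.

    def acquisition_loop(capture_unit, loop_buffer, process, compute_settings, trigger):
        """Simplified control loop: record an image, store it in the loop buffer,
        derive metrics and the object of interest, re-optimise the capture settings
        for that object, and repeat until the user trigger arrives."""
        settings = capture_unit.current_settings()
        while not trigger.is_set():                      # e.g. a threading.Event
            image = capture_unit.record(settings)
            loop_buffer.add(image)
            metrics, object_of_interest = process(image, loop_buffer)
            settings = compute_settings(metrics, object_of_interest)
        return loop_buffer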
[0071] It will be clear to the skilled person that a variety of types of consumer devices can implement functionality according to embodiments of the invention, e.g. video and still cameras, smartphones, tablet computers, notebooks, etc.
[0072] In the foregoing description of the figures, the invention has been described with reference to specific embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the scope of the invention as summarized in the attached claims.
[0073] In particular, combinations of specific features of various aspects of the invention may be made. An aspect of the invention may be further advantageously enhanced by adding a feature that was described in relation to another aspect of the invention. For example, various exemplary embodiments of loop buffers and loop buffer functionality are provided in the disclosure. Various examples of image processing for object tracking are also shown, which can be advantageously used to track an object of interest. It will be clear to a skilled person that all loop buffer variants (including the trivial loop buffer of size 1 as used in several classical still photography cameras) and image processing can be advantageously combined to form embodiments of the invention, including combinations that are not specifically disclosed as examples. In the context of this application, "recording" an image means generating digital data representing said image (e.g. the stream of digital data as produced by a sensor chip of the capturing device). The recorded data is stored and processed, in no particular order. Indeed, it is clear to the skilled person that the order of storing a captured image and processing the image is not important. It is possible to first process the image on the fly (e.g. by processing the stream of data as produced by the sensor chip of the capturing device) and then store it, or to first (temporarily) store the generated data and then process the stored data. While the use of a loop buffer which can hold a significant number of images is advantageous, both to allow the user to later select the ideal image and to enable temporal processing for object tracking and for calculating the camera settings, it will be clear to the skilled person that a (loop) buffer which holds more than one image is not strictly essential to gain the advantages of the object tracking features as described in this application.
[0074] It is to be understood that the invention is limited by the annexed claims and their technical equivalents only. In this document and in its claims, the verb "to comprise" and its conjugations are used in their non-limiting sense to mean that items following the word are included, without excluding items not specifically mentioned. In addition, reference to an element by the indefinite article "a" or "an" does not exclude the possibility that more than one of the element is present, unless the context clearly requires that there be one and only one of the elements. The indefinite article "a" or "an" thus usually means "at least one".

Claims

1. Method (20, 30) for acquiring images, the method comprising:
- recording (22, 31) an image using an image capture device (103);
- storing (22) the image in a loop buffer;
- determining (34, 35) an object of interest in the image;
- processing (23, 32, 36) the image to obtain one or more image metric values;
- calculating (37), based on the one or more image metric values, an updated value for at least one operating setting of the image capture device, wherein the updated value is optimized to capture the object of interest;
- adjusting the operating setting of the image capture device using the updated value; and
- repeating the above actions until a trigger is received.
2. Method (20, 30) of claim 1, wherein images are captured and processed at a rate of at least 50 images/second, 100 images/second, 200 images/second, or 300 images/second.
3. Method (20, 30) of claim 1 or 2, wherein calculating an updated value for at least one operating setting of the image capture device comprises using prediction algorithms to estimate the position of the object in the next frame.
4. Method (20, 30) of any one of the previous claims, comprising:
- initially selecting, by the user or automatically, an object of interest;
- starting the repeated recording of an image when an event of the object of interest is detected, such as a change in movement.
5. Method (20, 30) of any one of the previous claims, wherein determining the object of interest comprises one of determining (34) the object of interest as a main object based on a number of recent images and identifying (35) the object of interest as a user selected main object.
6. The method (20, 30) of any one of the previous claims, wherein processing the image to obtain one or more image metric values comprises evaluating a function which takes the last m images as input, where m equals at least 2.
7. The method (20, 30) of any one of the previous claims, wherein processing the image to obtain one or more image metric values comprises at least one of:
- calculating (41) one or more motion vectors, preferably for the complete image or for the tracked object;
- calculating (42) a blur metric; and
- calculating (43) a brightness metric.
8. The method (20, 30) of any one of the previous claims, wherein the operating setting of the image capture device (103) comprises one of:
- zoom factor;
- focus;
- aperture;
- ISO; and
- exposure time.
9. The method (20, 30) of any one of the previous claims, wherein after S images are captured, where S is an integer larger than one, of the most recent S images, S-1 images are deleted from the loop buffer.
10. The method (20, 30) of any one of the previous claims, comprising
- determining (34) an object of interest based on the one or more image metric values.
11. The method (20, 30) of claim 10, wherein the object of interest is determined based on a determined motion vector field.
12. The method (20, 30) of any one of the previous claims, comprising
- deleting (25), prior to repeating the actions, selected images from the loop buffer.
13. Apparatus (100) for acquiring images, the apparatus comprising:
- a controller (101);
- an image capture unit (103) for capturing images;
- a memory (106) for storing a loop buffer, wherein the controller is configured to implement a method according to any one of claims 1-12.
14. Apparatus (100) according to claim 13, further comprising:
- an acceleration sensor (108) for providing a signal representing a motion of the apparatus;
wherein the controller is configured to use the signal from the acceleration sensor in an image processing function.
15. Apparatus (100) according to claim 13 or 14, further comprising:
- an orientation sensor (109) for providing a signal representing an orientation of the apparatus;
wherein the controller is configured to use the signal from the orientation sensor in an image processing function.
16. Apparatus (100) according to claim 14 or 15, wherein the image processing function uses the signal from the sensor to compensate calculated motion vectors for motion of the apparatus (100).
17. Computer storage medium comprising a computer program which, when executed on a processing unit of an apparatus (100) for acquiring images, causes said apparatus to function according to any one of claims 1-13.
PCT/EP2014/074036 2013-11-08 2014-11-07 Method and apparatus for acquiring images WO2015067750A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
NL2011771A NL2011771C2 (en) 2013-11-08 2013-11-08 Method and apparatus for acquiring images.
NL2011771 2013-11-08

Publications (1)

Publication Number Publication Date
WO2015067750A1 true WO2015067750A1 (en) 2015-05-14

Family

ID=50114480

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2014/074036 WO2015067750A1 (en) 2013-11-08 2014-11-07 Method and apparatus for acquiring images

Country Status (2)

Country Link
NL (1) NL2011771C2 (en)
WO (1) WO2015067750A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020005895A1 (en) * 1997-08-05 2002-01-17 Mitsubishi Electric, Ita Data storage with overwrite
US6734902B1 (en) * 1997-12-12 2004-05-11 Canon Kabushiki Kaisha Vibration correcting device
US20110279691A1 (en) * 2010-05-10 2011-11-17 Panasonic Corporation Imaging apparatus
WO2012166044A1 (en) 2011-05-31 2012-12-06 Scalado Ab Method and apparatus for capturing images

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3621292A1 (en) * 2018-09-04 2020-03-11 Samsung Electronics Co., Ltd. Electronic device for obtaining images by controlling frame rate for external moving object through point of interest, and operating method thereof
US11223761B2 (en) 2018-09-04 2022-01-11 Samsung Electronics Co., Ltd. Electronic device for obtaining images by controlling frame rate for external moving object through point of interest, and operating method thereof

Also Published As

Publication number Publication date
NL2011771C2 (en) 2015-05-11

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14798768

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14798768

Country of ref document: EP

Kind code of ref document: A1