WO2016009199A2 - Minimisation of blur in still image capture - Google Patents

Minimisation of blur in still image capture

Info

Publication number
WO2016009199A2
WO2016009199A2 (PCT/GB2015/052042)
Authority
WO
WIPO (PCT)
Prior art keywords
image
blur
motion
camera unit
axis
Prior art date
Application number
PCT/GB2015/052042
Other languages
French (fr)
Other versions
WO2016009199A3 (en)
Inventor
James Andrew DALLAS
James Alexander LEIGH
Original Assignee
Omg Plc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Omg Plc filed Critical Omg Plc
Publication of WO2016009199A2 publication Critical patent/WO2016009199A2/en
Publication of WO2016009199A3 publication Critical patent/WO2016009199A3/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N 23/682 Vibration or motion blur correction
    • H04N 23/684 Vibration or motion blur correction performed by controlling the image sensor readout, e.g. by controlling the integration time
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N 23/681 Motion detection
    • H04N 23/6812 Motion detection based on additional sensors, e.g. acceleration sensors

Definitions

  • the present invention relates to a camera unit that comprises an image sensor that is operable to capture still images, and in some embodiments the camera unit captures images intermittently without triggering by the user.
  • the present invention is concerned with blurring of captured images caused by motion of the camera unit, referred to herein as motion blur, as opposed to motion of objects within a scene being imaged.
  • a camera unit that comprises a control circuit that controls the image sensor to capture images intermittently without a user triggering capture of the individual images.
  • Such a camera unit may, for example, capture images in response to sensors that sense physical parameters of the camera unit or its surroundings. That allows for intelligent decisions on the timing of image captures, in a way that increases the chances of the images being of scenes that are significant to the user.
  • the camera unit captures images without a user triggering capture, the user does not know the intermittent times at which image capture will occur.
  • image capture occurs whilst the user moves naturally through variable lighting conditions; there is a greater chance of the camera unit being directed at a scene that is difficult to expose correctly; there is no possibility of the user taking any positive action to correct or improve exposure, for example by pointing, framing and/or adjusting exposure; and the conditions are not stable during the image capture operation.
  • such a camera unit might typically have a relatively wide field of view. This is to compensate for the fact that the camera unit will typically not be directed at a scene that has a natural point of interest since the user does not know when image capture will occur.
  • a wide field of view increases the chances of a situation where the scene being imaged has a greater dynamic range than the image sensor, which may be a small image sensor suited to a wearable device.
  • Reduction of motion blur can be achieved by a mechanical optical image stabilisation (OIS) system that moves one or more of the optical components or sensor to compensate for motion of the camera unit that may be detected by a motion sensor.
  • OIS optical image stabilisation
  • a flash consumes significant power and impacts battery drain.
  • the flash of light from the flash unit may be unacceptable to the user in some situations.
  • the flash of light might typically be unacceptable to the user for a camera unit that captures images intermittently, because such a camera unit may capture relatively large numbers of images during activities in which the user will not wish to be interrupted.
  • Use of a still or stabilized platform is inconvenient to set up in a way that may be entirely unacceptable to the user.
  • Use of a still or stabilized platform might typically be unacceptable to the user for a camera unit that captures images intermittently, because the user does not know the intermittent times at which image capture will occur and will typically bear the camera unit while undertaking other activity.
  • the present invention is concerned with tackling motion blur.
  • a method of controlling a camera unit that comprises an image sensor arranged to capture still images and a motion sensor arranged to detect motion of the camera unit, the method comprising:
  • determining whether the image is of acceptable quality, taking into account at least the degree of blurring indicated by the detected motion, and either storing the captured image if it is of acceptable quality, or else repeating the steps of capturing an image, detecting motion of the camera unit, and determining whether the image is of acceptable quality until an image of acceptable quality is captured and stored.
  • the present invention involves detection of the motion of the camera unit during image capture using the motion sensor, and use of the detected motion to indicate the degree of blurring. This is used to decide whether an image that has been captured has acceptable quality, and if not the image capture is repeated until an image of acceptable quality is captured.
  • the method will tend to continue capturing images until the motion changes such that the blurring is reduced.
  • This provides for capture and storage of an image of acceptable quality without the disadvantages associated with using a mechanical OIS system to reduce motion blur during image capture or with post-processing to remove motion blur from captured images.
  • the method is simple to implement and the processing requirement is very low since it involves merely analysis of the detected motion and an assessment of whether the image is of acceptable quality.
  • the determination of whether an image is of acceptable quality may also take into account other parameters such as the brightness of the image, although typically the indicated degree of blurring will be the predominant factor in the determination.
  • the present invention has particular advantage when applied to a camera unit in which the method is performed intermittently without triggering by a user, for example a camera unit that comprises plural sensors arranged to sense physical parameters of the camera unit or its surroundings, in which case the method may be performed intermittently in response to the outputs of the sensors.
  • a camera unit that comprises plural sensors arranged to sense physical parameters of the camera unit or its surroundings, in which case the method may be performed intermittently in response to the outputs of the sensors.
  • the blurring of an image may be indicated from the detected motion as follows.
  • Motion of a camera unit can be broken into two components, that is rotational motion and translational motion. In general both components contribute to motion blur and so may be used together.
  • the motion sensor is a gyroscope sensor and the detected motion of the camera unit is angular motion of the camera unit around at least one axis, preferably around three orthogonal axes.
  • rotational motion typically provides a larger degree of motion blur than translational motion, during a single image exposure, and also because the amount of image blur caused by rotation is independent of scene depth.
  • use of rotational motion to indicate the degree of blurring has been observed to provide greater effectiveness than translational motion.
  • rotation around each axis may be considered separately.
  • further advantage may be achieved by use of a blur measure derived from a combination of the angular motion around each of the three orthogonal axes.
  • the blur is influenced by the rotation around each axis in combination, in practice use of a combined measure has been observed to provide better results than considering measures of rotation around each axis separately.
  • the combination may be a weighted sum of the angular motion around each of three orthogonal axes, wherein the weights might typically not be identical.
  • the weights may be scaled relative to each other by factors that are based on the amount of blur measured relative to the pixel pitch.
  • the method may be applied to an image sensor that is globally shuttered or an image sensor that is rolling shuttered.
  • the blur measure may be the weighted sum of the angular motion around each of the three orthogonal axes detected by the gyroscope sensor, detected over the exposure time of the image sensor.
  • the blur measure may be a combination, e.g. a weighted sum, of blur values in respect of each line that are the weighted sum of the angular motion around each of the three orthogonal axes detected by the gyroscope sensor, detected over the exposure time of respective lines.
  • a camera unit comprising an image sensor and a control circuit for controlling the camera unit that is arranged to perform an image capture operation similar to the method.
  • Fig. 1 is a schematic block diagram of a camera
  • Fig. 2 is a schematic view of the camera showing the alignment of rotational axes with the image sensor
  • Fig. 3 is a sample image
  • Fig. 4 is a graph of the pixel blur over time during capture of the sample image of Fig. 3;
  • Fig. 5 is a graph of a sigmoid function of weight against blur value
  • Fig. 6 is a flow chart of a first method of capturing images that is implemented in the camera.
  • Fig. 1 is a schematic block diagram of a camera 1 comprising a camera unit 2 mounted in a housing 3.
  • the camera 1 is wearable.
  • the housing 3 has a fitment 4 to which is attached a lanyard 5 that may be placed around a user's neck.
  • Other means for wearing the camera 1 could alternatively be provided, for example a clip to allow attachment to a user's clothing.
  • the camera unit 2 comprises an image sensor 10 and a camera lens assembly 11 in the front face of the housing 3.
  • the camera lens assembly 11 focuses an image of a scene 16 on the image sensor 10, which captures the image and may be of any suitable type, for example a CMOS (complementary metal-oxide-semiconductor) device.
  • the camera lens assembly 11 may include any number of lenses and may provide a fixed focus that preferably has a wide field of view.
  • the size of the image sensor 10 has a consequential effect on the size of the other components and hence the camera unit 2 as a whole.
  • the image sensor 10 may be of any size, but since the camera 1 is to be worn, the image sensor 10 is typically relatively small.
  • the image sensor 10 may typically have a diagonal of 6.00mm (corresponding to a 1/3" format image sensor) or less, or more preferably 5.68mm (corresponding to a 1/3.2" format image sensor) or less.
  • the image sensor has 5 megapixels in a 2592-by-1944 array in a standard 1/3.2" format with 1.75 μm square pixels, producing an 8-bit raw RGB Bayer output, having an exposure time of the order of milliseconds and an analogue gain multiplier.
  • In normal use, the camera unit 2 will be directed generally in the same direction as the user, but might not be directed at a scene that has a natural point of interest since the user does not know when image capture will occur. For this reason, it is desirable that the camera lens assembly 11 has a relatively wide field of view ("wide angle").
  • the camera lens assembly 11 may typically have a diagonal field of view of 85 degrees or more, or more preferably 100 degrees or more.
  • the camera unit 2 includes a control circuit 12 that controls the entire camera unit 2.
  • the control circuit 12 controls the image sensor 10 to capture still images that may be stored in a memory 13.
  • the control circuit 12 may be implemented by a processor running an appropriate program.
  • the control circuit 12 may include conventional elements to control the parameters of operation of the image sensor 10 such as exposure time.
  • the memory 13 may take any suitable form, a non-limitative example being a flash memory that may be integrated or provided in a removable card.
  • a buffer 14 is included to buffer captured images prior to permanent storage in the memory 13.
  • the buffer 14 may be an integrated element separate from the memory 13, or may be a region of the memory 13 selected by the control circuit 12.
  • the camera unit 2 further includes plural sensors 15 that sense different physical parameters of the camera unit 2 or its surroundings (three sensors 15a to 15c being shown in Fig. 1 for illustration, although any number may be provided).
  • the sensors 15 include a gyroscope sensor 15a arranged to detect angular motion, in particular angular velocity of the camera unit 2 around three orthogonal axes.
  • the gyroscope sensor 15a may be implemented by a MEMS (Micro-Electro- Mechanical System) gyroscope.
  • the amount of angular rotation may be obtained by integrating the detected angular velocity around each axis.
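  • As a purely illustrative sketch of that integration (not taken from the patent; the sample values, the sampling rate and the simple rectangular integration are assumptions), the rotation about each axis during an exposure might be accumulated from gyroscope samples as follows:

```python
import numpy as np

# Hypothetical gyroscope samples: angular velocity in deg/s about the
# X, Y and Z axes, sampled at 1 kHz over a 5 ms exposure.
sample_rate_hz = 1000.0
omega = np.array([
    # wx,   wy,   wz
    [ 2.0,  5.0, -1.0],
    [ 2.5,  4.0, -0.5],
    [ 3.0,  3.5,  0.0],
    [ 2.0,  3.0,  0.5],
    [ 1.5,  2.5,  0.0],
])  # one row per sample

# Integrate angular velocity over the exposure (simple rectangular sum) to get
# the amount of rotation about each axis during the exposure, in degrees.
dt = 1.0 / sample_rate_hz
theta_x, theta_y, theta_z = omega.sum(axis=0) * dt
print(theta_x, theta_y, theta_z)
```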
  • the gyroscope sensor 15a is an example of a motion sensor that detects motion of the camera unit 2.
  • the sensors 15 may include other types of motion sensor that detect motion of the camera unit 2, for example translational and/or angular motion, that may be velocity and/or acceleration.
  • the sensors 15 may include an accelerometer that detects translational acceleration of the camera unit 2.
  • sensing of location of the camera unit 2 for example using a GPS (global positioning system) receiver; sensing of ambient light using a light sensor; sensing of magnetic fields using a magnetometer; sensing of motion of external objects using an external motion sensor, that may be for example an infra-red motion sensor; sensing of temperature using a thermometer; and sensing of sound.
  • GPS global positioning system
  • the control circuit 12 performs the image capture operation intermittently without being triggered by the user.
  • the control circuit 12 may perform the image capture operation based on various criteria, for example in response to the outputs of the sensors 15, or based on the time elapsed since the previous image capture operation, or on a combination of these and/or other criteria.
  • the user does not generally know when image capture will occur and so will not be taking any specific action to improve image quality.
  • capture of images may be triggered when the outputs of the sensors 15 indicate a change or a high level on the basis that this suggests occurrence of an event that might be of significance to the user. Capture may be triggered based on a single sensor or a combination of sensors 15. That allows for intelligent decisions on the timing of image captures, in a way that increases the chances of the images being of scenes that are in fact significant to the user. Images are captured intermittently over a period of time, for example by capturing an image when the period since the last capture exceeds a limit, or by over time reducing the thresholds on the outputs of the sensors used for triggering. Thus, image capture occurs whilst the user moves naturally through variable lighting conditions.
  • Exposure of the image sensor 10 during image capture may be controlled by the control circuit 12. Exposure control may be performed by controlling the operation of the image sensor 10 to vary the exposure time.
  • the camera lens assembly 11 may also be controllable to vary the exposure, for example by varying the optical aperture, but this might not be implemented in a low-cost camera unit 2. If so implemented, then exposure control may also be performed by controlling the camera lens assembly 11.
  • the control circuit 12 performs an image capture operation to capture and store an image, in a manner described in more detail below. During the performance of such an image capture operation, the control circuit 12 derives and uses a blur measure representing the degree of blurring for captured images from the angular motion detected by the gyroscope sensor 15a.
  • Fig. 2 illustrates the orientation of three orthogonal axes with respect to the image sensor 10 of the camera 1. These axes are a first axis X and a second axis Y each in the plane of the image sensor 10 and a third axis Z perpendicular to the plane of the image sensor 10. The first and second axes are aligned with the major axes of the rectangular shape of the image sensor 10. In the orientation shown in Fig.2 the first axis X is horizontal, but of course the camera 1 could be used in any orientation.
  • The three axes shown in Fig. 2 are used as a reference frame in this example. This is convenient, because the axes are aligned with the geometry of the image sensor 10. However, in general any set of orthogonal axes could be used as a reference frame.
  • the blur measure is derived from a combination of the angular motion detected around each of three axes.
  • This combination may be a blur value B that is a weighted sum given by the equation B = w_x·θ_x + w_y·θ_y + w_z·θ_z, where θ_i are the amounts of angular motion around the respective axes that occur during exposure and w_i are the weights in respect of each of the three axes.
  • the weights w_i are scaled relative to each other by factors that are based on the amount of blur measured relative to the pixel pitch, as follows.
  • the amount of blur measured relative to the pixel pitch, that is, in units of the pixel pitch, will be referred to as the pixel blur.
  • the pixel blur may be derived from the amounts of angular motion θ_i around the respective axes as follows.
  • the pixel blur P_x along the first axis X and the pixel blur P_y along the second axis Y are scaled by respective scaling factors S_x and S_y relative to the amounts of angular motion θ_y around the second axis Y and θ_x around the first axis X during exposure, respectively, using the equations P_x = S_x·θ_y and P_y = S_y·θ_x, where S_x = R_x/F_x and S_y = R_y/F_y,
  • R_x and R_y are the pixel resolutions (in pixels) of the image sensor 10 along the first axis X and the second axis Y, and
  • F_x and F_y are the fields of view (in the same angular units as the angular motion) of the camera unit 2 along the first and second axes.
  • the pixel blur caused by a rotation about the third axis Z varies with the distance from the centre of rotation. Therefore, the pixel blur P_z around the third axis Z is considered to be the pixel blur along the largest circle that fits in the image.
  • the length of such a circle is π·d, where d is the size of the image sensor 10 in the direction of its shortest axis, and so the pixel blur P_z around the third axis Z may be derived taking into account the pixel resolution in that direction.
  • the pixel blur P_z around the third axis Z may be scaled by a scaling factor S_z relative to the amount of angular motion θ_z around the third axis Z using the equation P_z = S_z·θ_z, where S_z = π·R_y/T,
  • R_y is the pixel resolution (in pixels) of the image sensor 10 along the second axis Y, and
  • T is a full turn (in the same angular units as the angular motion).
  • the weights w_i are scaled relative to each other by factors that are based on these scaling factors S_i, to take account of the pixel blur caused by the rotation in different directions, the weight for each axis being the scaling factor of the pixel blur that rotation about that axis produces, multiplied by an adjustment factor, i.e. w_x = a_x·S_y, w_y = a_y·S_x and w_z = a_z·S_z, where a_x, a_y and a_z are the adjustment factors.
  • the adjustment factors may all have the same value, which may be one, to provide an equal contribution, or their ratios may be varied to some degree. So that the rotation around each axis does provide some contribution, no adjustment factor is less than a fifth of any other, i.e. the ratio of any pair of adjustment factors is in the range from 0.2 to 5.
  • although separate weights w_i are applied to the rotation around each axis, what matters is their relative size, in the sense that a common scaling applied to all the weights w_i simply scales the overall magnitude of the blur measure M.
  • the weights w_x and w_z may therefore be expressed relative to the weight w_y, as illustrated in the sketch below.
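  • A minimal sketch of how the scaling factors, adjustment factors and weighted sum might be put together is given below. It assumes the reading of the relationships above (S_x = R_x/F_x, S_y = R_y/F_y, S_z = π·R_y/T, each weight taken from the scaling factor of the blur that the corresponding rotation produces); the numeric geometry values and the use of absolute rotation amounts are illustrative assumptions, not figures from the patent.

```python
import math

# Assumed example geometry for the image sensor and lens described above.
R_x, R_y = 2592, 1944      # pixel resolution along the first (X) and second (Y) axes
F_x, F_y = 100.0, 80.0     # fields of view along X and Y, in degrees (assumed values)
T = 360.0                  # a full turn, in the same angular units

# Scaling factors relating rotation to pixel blur (pixels per degree).
S_x = R_x / F_x            # blur along X per unit rotation about the Y axis
S_y = R_y / F_y            # blur along Y per unit rotation about the X axis
S_z = math.pi * R_y / T    # blur along the largest inscribed circle per unit rotation about Z

# Adjustment factors a_i; all equal to one gives an equal contribution per axis.
a_x = a_y = a_z = 1.0

# Weights of the weighted sum: each weight is the scaling factor of the pixel blur
# that rotation about the corresponding axis produces, times its adjustment factor.
w_x, w_y, w_z = a_x * S_y, a_y * S_x, a_z * S_z

def blur_value(theta_x, theta_y, theta_z):
    """Blur value B from the amounts of rotation (degrees) during an exposure."""
    return w_x * abs(theta_x) + w_y * abs(theta_y) + w_z * abs(theta_z)

print(blur_value(0.05, 0.02, 0.01))   # roughly 1.9 pixel-weighted blur for this example
```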
  • the blur measure M is derived differently depending on whether the image sensor 10 is globally shuttered or rolling shuttered, to take account of the differing exposures in each case, as follows.
  • the image sensor 10 may be globally shuttered. In that case, the entire image, including each row, is exposed at the same time. This means that each pixel experiences the same rotation across the exposure time.
  • the blur measure M is simply the blur value B that is the weighted sum described above derived from the rotation detected over the exposure time of the image sensor 10.
  • the amount of rotation about each axis used to derive the blur value B is an integral of the angular velocity about that axis detected by the gyroscope sensor 15a over the exposure time.
  • the image sensor 10 may be rolling shuttered. In that case, rows of pixels are exposed successively and read out sequentially. Hence, the rows of pixels have exposure periods that are offset in time.
  • each row of pixels may therefore experience a different rotation across its respective exposure time as the motion of the camera unit 2 changes.
  • a blur value B that is the weighted sum described above is derived in respect of lines of the image that are exposed at different times, from the rotation detected over the exposure times of those lines. This may be done by windowing the angular velocity detected by the gyroscope sensor 15a by a "blur window" that corresponds to the exposure time of successive lines.
  • the amount of rotation about each axis used to derive the blur value B in respect of the line is an integral of the angular velocity about that axis detected by the gyroscope sensor 15a over the exposure time of the line.
  • the length of the blur window is determined by the exposure time of the camera unit 2 and the sampling rate of the gyroscope sensor 15a.
  • the blur window has a length of 5 measurements from the gyroscope sensor 15a. This allows the blur values B to track the motion that occurs during the overall readout time of the image.
  • the sampling rate of the gyroscope sensor 15a may be insufficient to derive a different blur value for every line of pixels. That is, the blur window is updated at the sampling rate of the gyroscope sensor 15a, resulting in derivation of blur values B at that sampling rate. For example, at a typical frame rate of 30 fps and a sampling rate of the gyroscope sensor 15a of 1 kHz, 33 blur values B are derived in respect of an image.
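  • To make the blur-window idea concrete, the sketch below (illustrative only; the synthetic gyroscope data, the window length derived from the line exposure time, and the weight values are assumptions) slides a window over the angular-velocity samples and derives one blur value B per window position:

```python
import numpy as np

def rolling_shutter_blur_values(omega, sample_rate_hz, line_exposure_s, weights):
    """Blur values B for a rolling-shuttered image.

    omega           -- (N, 3) gyroscope angular-velocity samples (deg/s) covering
                       the readout of the frame, columns ordered X, Y, Z
    line_exposure_s -- exposure time of one line, i.e. the length of the blur window
    weights         -- (w_x, w_y, w_z), e.g. as derived from the scaling factors
    """
    dt = 1.0 / sample_rate_hz
    window = max(2, int(round(line_exposure_s * sample_rate_hz)))  # e.g. 5 samples
    blur_values = []
    for start in range(len(omega) - window + 1):
        # Rotation about each axis over this blur window (simple rectangular integration).
        theta = omega[start:start + window].sum(axis=0) * dt
        blur_values.append(sum(w * abs(t) for w, t in zip(weights, theta)))
    return blur_values

# Roughly one frame of gyro data at 1 kHz with a 5 ms line exposure (synthetic values).
rng = np.random.default_rng(0)
omega = rng.normal(0.0, 3.0, size=(33, 3))   # deg/s
values = rolling_shutter_blur_values(omega, 1000.0, 0.005, (24.3, 25.9, 17.0))
print(len(values), max(values))
```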
  • Fig. 3 shows a sample image
  • Fig. 4 illustrates the pixel blurs generated from the angular rotation detected during the capture of that sample image.
  • Fig. 4 illustrates the pixel blurs P x , P y and P z in respect of the angular rotation about each axis that are combined to derive the blur value B as described above.
  • Fig. 4 shows that (1) the pixel blurs P x , P y and P z in respect of the rotation about the different axes vary from one another, and (2) the degree of blurring, and hence the blur value B, is greater in the middle of the image than in the top or bottom of the image, both of which effects can be seen in the sample image itself in Fig. 3.
  • the blur measure M for the image as a whole is derived as a combination of the blur values B generated across the overall image. This combination may be a weighted sum of the blur values B.
  • the blur measure M may be derived in accordance with the equation M = Σ_i x_i·B_i,
  • where B_i is the i-th blur value,
  • x_i is a weight in respect of the i-th blur value, and the summation occurs over all the blur values generated for an image.
  • In the simplest case, the weights x_i are all the same, for example taking the value of one.
  • Alternatively, the weights may take account of the perception of blur to a viewer.
  • For example, the weights x_i may have a value of zero when the blur value B_i is below a perception threshold, and values that increase with the blur value above the perception threshold. This takes account of an observation that people do not tend to perceive low levels of blur.
  • Similarly, the weights x_i may increase with the blur value B_i up to a saturation threshold of the blur value B_i, above which the weights x_i have a constant value. This takes account of an observation that people tend to perceive blur up to a saturation point after which further increases in blur are not perceived.
  • Both these examples of perceptually influenced weighting may be implemented by using weights x_i that are a sigmoid function of the blur values B_i, for example as shown in Fig. 5.
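  • One possible form of such a sigmoid weighting is sketched below; the midpoint and steepness parameters are illustrative assumptions rather than values from the patent:

```python
import math

def sigmoid_weight(b, midpoint=2.0, steepness=3.0):
    """Perceptual weight x_i for a blur value B_i.

    Near zero below the perception threshold, rising through the midpoint,
    and saturating towards 1.0 for large blur values.
    """
    return 1.0 / (1.0 + math.exp(-steepness * (b - midpoint)))

def blur_measure(blur_values):
    """Blur measure M: weighted sum of the blur values derived across the image."""
    return sum(sigmoid_weight(b) * b for b in blur_values)

print(blur_measure([0.2, 0.5, 3.1, 4.0, 1.8]))
```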
  • An image capture operation performed by the control circuit 12 using such a blur measure M is shown in Fig. 6 and is performed as follows.
  • This example is intended for a wearable camera in which the image capture operation is performed intermittently without triggering by a user, as described above.
  • the image sensor 10 and the gyroscope sensor 15a may be powered down between performances of the image capture operation.
  • First, the image sensor 10 and the gyroscope sensor 15a are supplied with power so that the image sensor 10 starts to capture images and the gyroscope sensor 15a starts to detect the angular motion of the camera unit 2.
  • the following steps are performed in an exposure selection stage used to derive the desired exposure.
  • In step S2, a still image is captured.
  • the exposure is controlled, the exposure taking a predetermined initial value the first time that step S2 is performed.
  • the captured image is stored in the buffer 14.
  • the exposure is controlled by varying the exposure time of the image sensor 10, and optionally also the aperture if the lens assembly 11 permits that.
  • In step S3, the angular motion of the camera unit 2 is detected by the gyroscope sensor 15a.
  • In step S4, the blur measure M in respect of the captured image is derived from the angular motion detected in step S3, in the manner described above.
  • steps S2 to S4 are repeated to capture plural images during the exposure selection stage, resulting in separate blur measures M in respect of each image being derived in repeated performances of step S4. All the thus-derived blur measures M are stored for use in step S9 as will be described below.
  • In step S5, a brightness measure of the brightness of the captured image is derived from the image captured in step S2.
  • the brightness may be measured by the luminance of the image or any other type of brightness measure.
  • the measured brightness may be the overall brightness.
  • the brightness measure may be derived from a light sensor separate from the image sensor 10 that measures the brightness of illumination, for example a TTL (through the lens) light sensor.
  • the brightness measure may be derived from the image in any manner suitable for automatic exposure control.
  • the brightness measure might in the simplest case be the average brightness of the image, or in a more complicated case be derived from the brightness of areas of the captured image weighted by an exposure mask.
  • Such an exposure mask comprises different weights corresponding to different areas of the image. This causes the response of the exposure control to be biased towards areas which have a relatively high weight, causing those areas to be more correctly exposed, at the expense of areas having a relatively low weight being less correctly exposed.
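  • As an illustration of such a mask-weighted brightness measure (the centre-weighted mask and the synthetic image are assumptions made for the example), the brightness of each area is weighted by the exposure mask before averaging:

```python
import numpy as np

def masked_brightness(image, exposure_mask):
    """Brightness measure: average pixel brightness weighted by an exposure mask.

    image         -- 2D array of pixel brightness (e.g. luminance)
    exposure_mask -- 2D array of weights of the same shape, biasing the measure
                     towards areas that should be more correctly exposed
    """
    return float(np.sum(image * exposure_mask) / np.sum(exposure_mask))

# Example: a synthetic 6x8 image and a centre-weighted mask.
image = np.linspace(0, 255, 48).reshape(6, 8)
mask = np.ones((6, 8))
mask[2:4, 3:5] = 4.0            # weight the central region more heavily
print(masked_brightness(image, mask))
```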
  • In step S6, the brightness measure is analysed to determine if the exposure has converged to the desired level, taking into account the brightness measure. This analysis may be performed in accordance with any automatic exposure technique. For example, the brightness measure may be compared to a target level to determine if the exposure has been brought to that target level. If not, then the method returns via step S7 to steps S2 and S3 so that another image is captured.
  • In step S7, the exposure (exposure time and, if variable, aperture) is adjusted in accordance with the automatic exposure technique.
  • the adjustment may drive the brightness measure for the subsequently captured image towards the target level, by using the difference between the brightness measure from step S5 and the target level as a feedback parameter for the adjustment.
  • steps S6 and S7 cause step S2 to be performed repeatedly to capture images in a cycle with the exposure of the captured images being varied in dependence on the brightness measures of previously captured images.
  • the camera unit 2 may typically also perform an auto-white balance (AWB) procedure.
  • steps S5 to S7 may also adjust the white balance (colour balance), by step S5 additionally comprising derivation of an appropriate colour measure indicating the white balance, step S6 also involving analysis of the colour measure to determine if AWB convergence has occurred, and step S7 also involving adjustment of the colour balance, for example by adjustment of the relative gain of different colour channels.
  • an autofocus procedure may also be performed at the same time.
  • When it is determined in step S6 that the exposure has converged, the method proceeds to step S8, in which an exposure (exposure time and, if variable, aperture) for a subsequent capture stage is initially selected as the exposure to which convergence has occurred in the preceding steps.
  • This amounts to step S8 selecting an exposure for the capture stage on the basis of the determined brightness of illumination of the captured images alone.
  • In step S9, the degree of blurring for future capture of images is predicted from the plural blur measures M derived in step S4 from the motion detected during the capture of images in the repeated performances of step S2.
  • the prediction may be performed by deriving a predicted blur measure Mp in any manner, for example by taking a simple average of the plural blur measures M, by low-pass filtering the blur measures M, or by a prediction that weights more recent blur measures M more greatly on the basis that they have a greater predictive power.
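  • A recency-weighted prediction along those lines might look like the following sketch, where the exponential decay factor is an assumption chosen to weight more recent blur measures more heavily:

```python
def predict_blur_measure(blur_measures, recency=0.6):
    """Predicted blur measure Mp from the blur measures M of the exposure
    selection stage, weighting more recent measures more heavily.

    recency -- decay factor in (0, 1]; a value of 1.0 gives a simple average.
    """
    weights = [recency ** (len(blur_measures) - 1 - i)
               for i in range(len(blur_measures))]
    return sum(w * m for w, m in zip(weights, blur_measures)) / sum(weights)

# Blur measures from the repeated performances of step S4 (example values).
print(predict_blur_measure([4.2, 3.8, 5.1, 6.0]))
```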
  • In step S10, an exposure for the subsequent capture stage is selected taking into account the degree of blurring predicted in step S9.
  • In particular, the exposure (exposure time and, if variable, aperture) initially selected in step S8 may be modified if the predicted degree of blurring is not acceptable.
  • In that case, in step S10 the exposure time for the subsequent capture stage is reduced from the exposure time initially selected in step S8.
  • the aperture if variable, may be controlled to maintain the same value to maximise depth of field, or to increase so as to limit the overall reduction in exposure.
  • the reduction in the exposure time is performed because that limits the amount of motion blur. This may be done at the expense of some degree of under-exposure on the premise that a darker image is preferable to the user than a blurry one or a noisy one if the reduced exposure is compensated by increased gain.
  • the determination in step S10 of whether or not the predicted degree of blurring is acceptable may be made by comparing the blur measure M representing the predicted degree of blurring with a threshold. That threshold may be selected based on experimental observation of image capture of typical scenes under typical operating conditions.
  • the threshold may be set in a number of ways, some non-limitative examples being as follows.
  • the threshold may be fixed.
  • the threshold may be dependent on the exposure time initially selected in step S8, for example being increased to allow increased blurring when the exposure time is relatively high on the basis that users may be more accepting of a blurry image if it has high brightness.
  • the threshold may be set or adjusted by the user.
  • the threshold may be adjusted to manage power consumption and hence battery life.
  • The overall effect of steps S8 and S10 is to select the exposure time for the capture stage on the basis of both (a) the determined brightness of illumination of the captured images, and (b) the predicted blur measure Mp and hence also the detected motion from which it is derived.
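  • One way the combined effect of steps S8 and S10 might be expressed in code is sketched below; the blur threshold, the proportional reduction of the exposure time and the gain compensation are illustrative assumptions:

```python
def select_capture_exposure(converged_exposure_s, predicted_blur,
                            blur_threshold=5.0, min_exposure_s=0.001):
    """Select the exposure time for the capture stage.

    Starts from the exposure time to which automatic exposure converged (step S8)
    and reduces it if the predicted degree of blurring is unacceptable (step S10).
    Returns (exposure_time_s, extra_gain), where extra_gain compensates the
    reduced exposure at the expense of noise.
    """
    if predicted_blur <= blur_threshold:
        return converged_exposure_s, 1.0
    # Reduce exposure time roughly in proportion to the excess predicted blur.
    factor = blur_threshold / predicted_blur
    exposure = max(min_exposure_s, converged_exposure_s * factor)
    extra_gain = converged_exposure_s / exposure
    return exposure, extra_gain

print(select_capture_exposure(0.020, predicted_blur=8.0))
```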
  • In step S11, a still image is captured, using the exposure (exposure time and, if variable, aperture) selected in the exposure selection stage. The captured image is stored in the buffer 14.
  • In step S12, the angular motion of the camera unit 2 is detected by the gyroscope sensor 15a.
  • In step S13, the blur measure M in respect of the captured image is derived from the angular motion detected in step S12, in the manner described above.
  • In step S14, it is determined whether the image captured in step S11 is of acceptable quality, taking into account the blur measure M derived in step S13.
  • the blur measure M may be taken into account in various ways, some non-limitative options being as follows.
  • In one type of approach, the determination takes account of only the blur measure M derived in step S13, as for example in the following first and second options.
  • the first option is simply to compare the blur measure M with a threshold. In that case, the blur measure M being below the threshold may indicate acceptable quality and vice versa.
  • the second option is to take account of the blur measure M in some more complicated way.
  • In another type of approach, the determination also takes account of one or more measures of another parameter representing quality of the image, as for example in the following third and fourth options.
  • suitable parameters include a brightness measure of the brightness of the captured image derived from the image as discussed above; a measure indicating how well exposed the image is; or another measure of the light conditions including the colour measure used in the AWB procedure.
  • the third option is to derive a quality metric that combines the blur measure with the one or more other measures.
  • the combination may be performed in any manner, for example a weighted sum of the measures.
  • the blur measure will have the dominant effect on the quality metric.
  • the quality metric will be compared with a threshold.
  • a fourth option is to have separate conditions on the blur measure M (for example comparing it with a threshold as in the first option), and on measures of other parameters.
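  • The first and fourth options might be expressed as in the following sketch; the threshold values and the choice of a brightness measure as the additional parameter are assumptions for illustration:

```python
def acceptable_first_option(blur_measure, blur_threshold=5.0):
    """First option: the image is acceptable if the blur measure M is below a threshold."""
    return blur_measure < blur_threshold

def acceptable_fourth_option(blur_measure, brightness, blur_threshold=5.0,
                             min_brightness=40.0, max_brightness=220.0):
    """Fourth option: separate conditions on the blur measure M and on another
    parameter (here a brightness measure on an 8-bit scale)."""
    return (blur_measure < blur_threshold
            and min_brightness <= brightness <= max_brightness)

print(acceptable_first_option(3.2), acceptable_fourth_option(3.2, 120.0))
```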
  • If step S14 determines that the image is of acceptable quality, the method proceeds to step S15, in which the image captured in step S11 is stored in the memory 13.
  • If step S14 determines that the image is not of acceptable quality, the method proceeds back to steps S11 and S12, via steps S16 and S17 which will be described below. In this manner, steps S11 to S14 are repeated until an image of acceptable quality is captured and then stored in step S15.
  • the overall result is that the image stored in the memory 13 is of acceptable quality taking into account the degree of blurring indicated by the detected motion.
  • Steps S16 and S17 are used to reduce the exposure time in the event that the detected motion is not indicative of an acceptable degree of blurring over an extended period of time, as follows.
  • In step S16, it is determined whether a predetermined period has elapsed since the first time an image was captured in step S11. If not, then the method proceeds directly to steps S11 and S12 to repeat the image capture.
  • If it is determined in step S16 that the predetermined period has elapsed, then the method proceeds to step S17, in which the exposure time to be used in step S11 is reduced from that previously used, as originally selected in the exposure selection stage. Thereafter, step S11 is performed with the reduced exposure time.
  • the aperture if controlled, may maintain the same value to maximise depth of field, or may be increased to limit the overall reduction in exposure.
  • the exposure time is reduced on the basis that the failure to obtain an acceptable degree of blurring over the predetermined period suggests that the unacceptable blurring is likely to continue.
  • the reduction in the exposure time is performed because that limits the amount of motion blur. This is done at the expense of some degree of underexposure on the premise that a darker image is preferable to the user than a blurry one, or a noisy one if the reduced exposure is compensated by increased gain.
  • A fixed amount of reduction of the exposure time may be used in step S17, although further advantage may be achieved by the amount of reduction being dependent on the blur measures M derived in the repeated performances of step S13, for example by increasing the amount of reduction when the blur measures M are relatively high, as illustrated in the sketch below.
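  • A sketch of the overall retry loop of steps S11 to S17 is given below; the camera interface functions (capture_image, read_gyro_motion, derive_blur_measure) and all numeric values are hypothetical placeholders rather than parts of the patent:

```python
import time

def capture_acceptable_image(capture_image, read_gyro_motion, derive_blur_measure,
                             exposure_s, blur_threshold=5.0, period_s=2.0,
                             reduction_factor=0.5, min_exposure_s=0.001):
    """Keep capturing until an image of acceptable quality is obtained.

    capture_image, read_gyro_motion and derive_blur_measure stand in for the
    camera hardware interface and the blur-measure derivation described above.
    """
    start = time.monotonic()
    while True:
        image = capture_image(exposure_s)                 # step S11: capture into the buffer
        motion = read_gyro_motion()                       # step S12: angular motion during capture
        blur = derive_blur_measure(motion, exposure_s)    # step S13: blur measure M
        if blur < blur_threshold:                         # step S14: acceptable quality?
            return image                                  # step S15: store the image
        if time.monotonic() - start > period_s:           # step S16: predetermined period elapsed?
            exposure_s = max(min_exposure_s,
                             exposure_s * reduction_factor)   # step S17: reduce the exposure time
            start = time.monotonic()
```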
  • Steps S16 and S17 also have the effect of finishing the capture operation even when the camera unit 2 remains in a state of motion, which has the advantage of reducing power consumption.
  • In step S18, sharpness processing is performed on the image stored in step S15.
  • the sharpness processing may be performed to a degree that is dependent on the blur measure M, for example increasing the degree to which sharpness is increased when the blur measure M is indicative of a relatively high degree of blurring.
  • In step S19, the image sensor 10 and the gyroscope sensor 15a cease to be supplied with power.
  • In the example above, a single blur measure M is derived, combining the detected angular motion around all three axes of rotation.
  • Alternatively, separate blur measures may be derived from the detected angular motion around each axis of rotation.
  • In that case, the image capture operation may be modified to use the separate blur measures with separate conditions on each, for example by comparing each separate blur measure to a threshold, with any one of the blur measures exceeding the threshold being taken to indicate an image of unacceptable quality.
  • However, the combination of the angular motion around each of the three orthogonal axes into a single blur measure provides better results, because the blur of the image in fact results from a combination of the angular motions around different axes.

Abstract

A camera unit that comprises an image sensor and a motion sensor performs an image capture operation in which still images are captured intermittently without triggering by a user. It is determined whether a captured image is of acceptable quality taking into account at least the degree of blurring indicated by the detected motion. If so, the captured image is stored, or else further images are captured until an image of acceptable quality is captured and stored.

Description

Minimisation of Blur in Still Image Capture
The present invention relates to a camera unit that comprises an image sensor that is operable to capture still images, and in some embodiments the camera unit captures images intermittently without triggering by the user.
The present invention is concerned with blurring of captured images caused by motion of the camera unit, referred to herein as motion blur, as opposed to motion of objects within a scene being imaged.
Many standard forms of automatic exposure control can produce motion blur that may be significant for a proportion of images in normal usage.
Most digital still camera images are taken while the camera unit is stationary, with a user pointing the camera unit, framing the shot, checking the image quality in a preview mode, and altering the camera direction, framing, or other settings such as exposure until they have a satisfactory shot. This minimises changes to the scene being imaged during the image capture operation. However, even in this situation, motion blur is in practice evident in a proportion of images captured by typical users.
These problems are exacerbated in the case of a camera unit that comprises a control circuit that controls the image sensor to capture images intermittently without a user triggering capture of the individual images. Such a camera unit may, for example, capture images in response to sensors that sense physical parameters of the camera unit or its surroundings. That allows for intelligent decisions on the timing of image captures, in a way that increases the chances of the images being of scenes that are significant to the user.
In general, since the camera unit captures images without a user triggering capture, the user does not know the intermittent times at which image capture will occur. Thus, image capture occurs whilst the user moves naturally through variable lighting conditions; there is a greater chance of the camera unit being directed at a scene that is difficult to expose correctly; there is no possibility of the user taking any positive action to correct or improve exposure, for example by pointing, framing and/or adjusting exposure; and the conditions are not stable during the image capture operation.
Furthermore, such a camera unit might typically have a relatively wide field of view. This is to compensate for the fact that the camera unit will typically not be directed at a scene that has a natural point of interest since the user does not know when image capture will occur. However, such a wide field of view increases the chances of a situation where the scene being imaged has a greater dynamic range than the image sensor, which may be a small image sensor suited to a wearable device.
Some existing approaches for dealing with this issue are as follows.
Reduction of motion blur can be achieved by a mechanical optical image stabilisation (OIS) system that moves one or more of the optical components or sensor to compensate for motion of the camera unit that may be detected by a motion sensor.
However, such a mechanical OIS system is expensive and increases the size and complexity of the camera unit, adding difficulty during manufacture. Furthermore, such a mechanical OIS system also consumes power during operation which reduces battery life, this being a particular issue for a wearable camera.
Reduction of motion blur can be achieved by image processing. However, this is difficult to perform effectively in practice and generally it is considered better to avoid motion blur in an image than to correct it after capture. Such image processing techniques generally assume a global shuttering and so perform less well for rolling shuttering which is more common on an image sensor of low cost. Furthermore, such image processing techniques are relatively complex and slow, and increase the processing requirement. This makes such techniques less attractive as the number of images captured increases, as for example is typical in a camera unit that captures images intermittently.
Many motion blur solutions have previously been aimed at general purpose cameras, or machine vision systems, both of which have different constraints from a camera unit that captures images intermittently.
Dealing with low light scenes is usually addressed by use of a flash unit or use of a still or stabilized platform to support the camera. However, these approaches are not practical in all situations. A flash consumes significant power and impacts battery drain. Furthermore, the flash of light from the flash unit may be unacceptable to the user in some situations. The flash of light might typically be unacceptable to the user for a camera unit that captures images intermittently, because such a camera unit may capture relatively large numbers of images during activities in which the user will not wish to be interrupted. Use of a still or stabilized platform is inconvenient to set up in a way that may be entirely unacceptable to the user. Use of a still or stabilized platform might typically be unacceptable to the user for a camera unit that captures images intermittently, because the user does not know the intermittent times at which image capture will occur and will typically bear the camera unit while undertaking other activity.
An additional point is that the issues are more difficult for still images than for video images. This is because video tends to converge to the correct exposure over time as the lighting context changes. In contrast, individual still images tend to be viewed for a longer time and so the perceived quality threshold may be higher than for video.
The present invention is concerned with tackling motion blur.
According to the present invention, there is provided a method of controlling a camera unit that comprises an image sensor arranged to capture still images and a motion sensor arranged to detect motion of the camera unit, the method comprising:
capturing a still image;
during capture of the image, detecting motion of the camera unit; and
determining whether the image is of acceptable quality, taking into account at least the degree of blurring indicated by the detected motion, and either storing the captured image if it is of acceptable quality, or else repeating the steps of capturing an image, detecting motion of the camera unit, and determining whether the image is of acceptable quality until an image of acceptable quality is captured and stored.
Thus, the present invention involves detection of the motion of the camera unit during image capture using the motion sensor, and use of the detected motion to indicate the degree of blurring. This is used to decide whether an image that has been captured has acceptable quality, and if not the image capture is repeated until an image of acceptable quality is captured. By taking account of the degree of blurring indicated by the detected motion, in the event of the motion of the camera unit causing blurring, the method will tend to continue capturing images until the motion changes such that the blurring is reduced.
This provides for capture and storage of an image of acceptable quality without the disadvantages associated with using a mechanical OIS system to reduce motion blur during image capture or with post-processing to remove motion blur from captured images. The method is simple to implement and the processing requirement is very low since it involves merely analysis of the detected motion and an assessment of whether the image is of acceptable quality.
The determination of whether an image is of acceptable quality may also take into account other parameters such as the brightness of the image, although typically the indicated degree of blurring will be the predominant factor in the determination.
The present invention has particular advantage when applied to a camera unit in which the method is performed intermittently without triggering by a user, for example a camera unit that comprises plural sensors arranged to sense physical parameters of the camera unit or its surroundings, in which case the method may be performed intermittently in response to the outputs of the sensors. As discussed above, in such a camera unit image capture occurs whilst the user moves naturally through variable lighting conditions, without the user taking action to improve image quality, and so there is a greater likelihood of the conditions being unstable during capture of any particular image in a way resulting in motion blur.
The blurring of an image may be indicated from the detected motion as follows. Motion of a camera unit can be broken into two components, that is rotational motion and translational motion. In general both components contribute to motion blur and so may be used together.
However, particular advantage is achieved when the motion sensor is a gyroscope sensor and the detected motion of the camera unit is angular motion of the camera unit around at least one axis, preferably around three orthogonal axes. This has proven effective, because during normal use rotational motion typically provides a larger degree of motion blur than translational motion, during a single image exposure, and also because the amount of image blur caused by rotation is independent of scene depth. Thus, use of rotational motion to indicate the degree of blurring has been observed to provide greater effectiveness than translational motion.
When using rotational motion around three orthogonal axes, rotation around each axis may be considered separately. However, further advantage may be achieved by use of a blur measure derived from a combination of the angular motion around each of the three orthogonal axes. As the blur is influenced by the rotation around each axis in combination, in practice use of a combined measure has been observed to provide better results than considering measures of rotation around each axis separately.
The combination may be a weighted sum of the angular motion around each of three orthogonal axes, wherein the weights might typically not be identical. In particular, the weights may be scaled relative to each other by factors that are based on the amount of blur measured relative to the pixel pitch.
Furthermore, the method may be applied to an image sensor that is globally shuttered or an image sensor that is rolling shuttered. In the case of global shuttering, the blur measure may be the weighted sum of the angular motion around each of the three orthogonal axes detected by the gyroscope sensor, detected over the exposure time of the image sensor. In the case of rolling shuttering, the blur measure may be a combination, e.g. a weighted sum, of blur values in respect of each line that are the weighted sum of the angular motion around each of the three orthogonal axes detected by the gyroscope sensor, detected over the exposure time of respective lines.
Further according to the present invention, there is provided a camera unit comprising an image sensor and a control circuit for controlling the camera unit that is arranged to perform an image capture operation similar to the method.
An embodiment of the present invention will now be described by way of non- limitative example with reference to the accompanying drawings, in which:
Fig. 1 is a schematic block diagram of a camera;
Fig. 2 is a schematic view of the camera showing the alignment of rotational axes with the image sensor;
Fig. 3 is a sample image;
Fig. 4 is a graph of the pixel blur over time during capture of the sample image of Fig. 3;
Fig. 5 is a graph of a sigmoid function of weight against blur value;
Fig. 6 is a flow chart of a first method of capturing images that is implemented in the camera.
Fig. 1 is a schematic block diagram of a camera 1 comprising a camera unit 2 mounted in a housing 3. The camera 1 is wearable. To achieve this, the housing 3 has a fitment 4 to which is attached a lanyard 5 that may be placed around a user's neck. Other means for wearing the camera 1 could alternatively be provided, for example a clip to allow attachment to a user's clothing.
The camera unit 2 comprises an image sensor 10 and a camera lens assembly 11 in the front face of the housing 3. The camera lens assembly 11 focuses an image of a scene 16 on the image sensor 10, which captures the image and may be of any suitable type, for example a CMOS (complementary metal-oxide-semiconductor) device. The camera lens assembly 11 may include any number of lenses and may provide a fixed focus that preferably has a wide field of view.
The size of the image sensor 10 has a consequential effect on the size of the other components and hence the camera unit 2 as a whole. In general, the image sensor 10 may be of any size, but since the camera 1 is to be worn, the image sensor 10 is typically relatively small. For example, the image sensor 10 may typically have a diagonal of 6.00 mm (corresponding to a 1/3" format image sensor) or less, or more preferably 5.68 mm (corresponding to a 1/3.2" format image sensor) or less. In one implementation, the image sensor has 5 megapixels in a 2592-by-1944 array in a standard 1/3.2" format with 1.75 μm square pixels, producing an 8-bit raw RGB Bayer output, having an exposure time of the order of milliseconds and an analogue gain multiplier.
In normal use, the camera unit 2 will be directed generally in the same direction as the user, but might not be directed at a scene that has a natural point of interest since the user does not know when image capture will occur. For this reason, it is desirable that the camera lens assembly 11 has a relatively wide field of view ("wide angle"). For example, the camera lens assembly 11 may typically have a diagonal field of view of 85 degrees or more, or more preferably 100 degrees or more.
The camera unit 2 includes a control circuit 12 that controls the entire camera unit 2. The control circuit 12 controls the image sensor 10 to capture still images that may be stored in a memory 13. The control circuit 12 may be implemented by a processor running an appropriate program. The control circuit 12 may include conventional elements to control the parameters of operation of the image sensor 10 such as exposure time.
Similarly, the memory 13 may take any suitable form, a non-limitative example being a flash memory that may be integrated or provided in a removable card.
A buffer 14 is included to buffer captured images prior to permanent storage in the memory 13. The buffer 14 may be an integrated element separate from the memory 13, or may be a region of the memory 13 selected by the control circuit 12.
The camera unit 2 further includes plural sensors 15 that sense different physical parameters of the camera unit 2 or its surroundings (three sensors 15a to 15c being shown in Fig. 1 for illustration, although any number may be provided).
The sensors 15 include a gyroscope sensor 15a arranged to detect angular motion, in particular angular velocity of the camera unit 2 around three orthogonal axes. As an example, the gyroscope sensor 15a may be implemented by a MEMS (Micro-Electro- Mechanical System) gyroscope. The amount of angular rotation may be obtained by integrating the detected angular velocity around each axis. Thus, the gyroscope sensor 15a is an example of a motion sensor that detects motion of the camera unit 2.
More generally, the sensors 15 may include other types of motion sensor that detect motion of the camera unit 2, for example translational and/or angular motion, that may be velocity and/or acceleration. As an example, the sensors 15 may include an accelerometer that detects translational acceleration of the camera unit 2.
Other non-limitative examples of the types of sensing and sensors 15 include:
sensing of location of the camera unit 2 for example using a GPS (global positioning system) receiver; sensing of ambient light using a light sensor; sensing of magnetic fields using a magnetometer; sensing of motion of external objects using an external motion sensor, that may be for example an infra-red motion sensor; sensing of temperature using a thermometer; and sensing of sound.
The control circuit 12 performs the image capture operation intermittently without being triggered by the user. The control circuit 12 may perform the image capture operation based on various criteria, for example in response to the outputs of the sensors 15, or based on the time elapsed since the previous image capture operation, or on a combination of these and/or other criteria. The user does not generally know when image capture will occur and so will not be taking any specific action to improve image quality.
In the case of triggering in response to the outputs of the sensors 15, capture of images may be triggered when the outputs of the sensors 15 indicate a change or a high level on the basis that this suggests occurrence of an event that might be of significance to the user. Capture may be triggered based on a single sensor or a combination of sensors 15. That allows for intelligent decisions on the timing of image captures, in a way that increases the chances of the images being of scenes that are in fact significant to the user. Images are captured intermittently over a period of time, for example by capturing an image when the period since the last capture exceeds a limit, or by over time reducing the thresholds on the outputs of the sensors used for triggering. Thus, image capture occurs whilst the user moves naturally through variable lighting conditions.
Exposure of the image sensor 10 during image capture may be controlled by the control circuit 12. Exposure control may be performed by controlling the operation of the image sensor 10 to vary the exposure time. Optionally, the camera lens assembly 11 may also be controllable to vary the exposure, for example by varying the optical aperture, but this might not be implemented in a low-cost camera unit 2. If so implemented, then exposure control may also be performed by controlling the camera lens assembly 11.
The control circuit 12 performs an image capture operation to capture and store an image, in a manner described in more detail below. During the performance of such an image capture operation, the control circuit 12 derives and uses a blur measure representing the degree of blurring for captured images from the angular motion detected by the gyroscope sensor 15a. The derivation of the blur measure will now be described.
Fig. 2 illustrates the orientation of three orthogonal axes with respect to the image sensor 10 of the camera 1. These axes are a first axis X and a second axis Y each in the plane of the image sensor 10 and a third axis Z perpendicular to the plane of the image sensor 10. The first and second axes are aligned with the major axes of the rectangular shape of the image sensor 10. In the orientation shown in Fig.2 the first axis X is horizontal, but of course the camera 1 could be used in any orientation.
The three axes shown in Fig. 2 are used as a reference frame in this example. This is convenient, because the axes are aligned with the geometry of the image sensor 10. However, in general any set of orthogonal axes could be used as a reference frame.
Different reference frames are related by linear combinations representing the rotational transformation between them. This means that the calculations described below applied to a different reference frame produce the same blur measure irrespective of the reference frame. Similarly, if the reference frame of the angular rotations detected by the gyroscope sensor 15a is not already aligned with the reference frame shown in Fig. 2, then they may be converted into that reference frame by simple linear combinations of the detected angular rotations.
It has been appreciated that the angular motion around each of three orthogonal axes generates blurring of captured images. Rotation around the first axis X generates motion blur linearly along the second axis Y. Rotation around the second axis Y generates motion blur linearly along the first axis X. Rotation around the third axis Z generates motion blur circularly around the centre of the image. Thus, the motion blur caused by rotation around the first axis X and the second axis Y will be the same at each location on the image that is exposed simultaneously, whereas the motion blur caused by rotation around the third axis Z will be of different direction and magnitude at different locations on the image that are exposed simultaneously. These motion blurs combine to produce an overall blurring of the image, which may be different at different locations on the image either in the case of rotation around the third axis Z or in the case of different locations on the image being exposed at different times.
As the motion blurs combine, the blur measure is derived from a combination of the angular motion detected around each of the three axes. This combination may be a blur value B that is a weighted sum given by the following equation:

B = ∑i wi·θi = wx·θx + wy·θy + wz·θz

where θi are the amounts of angular motion around the respective axes that occur during exposure and wi are the weights in respect of each of the three axes. The weights wi are scaled relative to each other by factors that are based on the amount of blur measured relative to the pixel pitch, as follows.
By way of definition, the amount of blur measured relative to the pixel pitch, that is in units of the pixel pitch, will be referred to as the pixel blur. The pixel blur may be derived from the amounts of angular motion θi around the respective axes as follows.
By assuming the field of view varies linearly across the image, the pixel blur Px along the first axis X and the pixel blur Py along the second axis Y may be scaled by respective scaling factors Sx and Sy relative to the amount of angular motion θy around the second axis Y and the amount of angular motion θx around the first axis X during exposure, respectively, using the following equations:

Px = Sx·θy, where Sx = Rx / Fx
Py = Sy·θx, where Sy = Ry / Fy

where Rx and Ry are the pixel resolutions (in pixels) of the image sensor 10 along the first axis X and the second axis Y, and Fx and Fy are the fields of view (in the same angular units as the angular motion) of the camera unit 2 along the first and second axes.
The pixel blur caused by a rotation about the third axis Z varies with the distance from the centre of rotation. Therefore, the pixel blur Pz around the third axis Z is taken to be the pixel blur along the largest circle that fits in the image. The circumference of such a circle is π·d, where d is the size of the image sensor 10 in the direction of its shortest axis, and so the pixel blur Pz around the third axis Z may be derived taking into account the pixel resolution in that direction. In the case that the second axis Y is shorter than the first axis X, the pixel blur Pz around the third axis Z may be scaled by a scaling factor Sz relative to the amount of angular motion θz around the third axis Z using the following equation:

Pz = Sz·θz, where Sz = π·Ry / T

where Ry is the pixel resolution (in pixels) of the image sensor 10 along the second axis Y, and T is a full turn (in the same angular units as the angular motion). In the case that the first axis X is shorter than the second axis Y, the same equation is used substituting x for y.
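Purely by way of illustration, this scaling could be sketched in Python as follows (the function and parameter names are hypothetical and the numerical example is arbitrary; it is not part of the described camera unit):

```python
import math

def pixel_blurs(theta_x, theta_y, theta_z,
                res_x, res_y, fov_x, fov_y, full_turn=360.0):
    """Scale amounts of angular motion during exposure (same angular units as
    full_turn) into pixel blurs, assuming the field of view varies linearly
    across the image and that the second axis Y is the shorter one."""
    p_x = (res_x / fov_x) * theta_y                 # rotation about Y blurs along X
    p_y = (res_y / fov_y) * theta_x                 # rotation about X blurs along Y
    p_z = (math.pi * res_y / full_turn) * theta_z   # blur along the largest inscribed circle
    return p_x, p_y, p_z

# Example: 1920x1080 sensor with a 70x40 degree field of view
print(pixel_blurs(0.05, 0.08, 0.02, 1920, 1080, 70.0, 40.0))
```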
The weights wi are scaled relative to each other by factors that are based on these scaling factors Si, to take account of the pixel blur caused by the rotation in different directions, by using weights wi given by the following equations:

wx = ax·Sy = ax · (Ry / Fy)
wy = ay·Sx = ay · (Rx / Fx)
wz = az·Sz = az · (π·Ry / T)
where ai are adjustment factors that adjust the relative contribution of the rotations around the three axes. Generally, this means that the weights wi in respect of each axis are not identical.
The adjustment factors may all have the same value, which may be one, to provide an equal contribution, or their ratios may be varied to some degree. So that the rotation around each axis does provide some contribution, no adjustment factor is less than a fifth of any other, i.e. the ratio of any pair of adjustment factors is in the range from 0.2 to 5.
Since weights wi are applied to the rotation around each axis, what matters is their relative size, in the sense that a common scaling applied to all the weights wi simply scales the overall magnitude of the blur measure M. Thus, the weights wx and wy are scaled relative to the weight wz by the equations:

(wx / wz) = (Ry / π·Ry) · (T / Fy) · (ax / az)
(wy / wz) = (Rx / π·Ry) · (T / Fx) · (ay / az)
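A corresponding sketch of how the weights and the blur value B might be formed from the detected rotations, under the same assumptions (the adjustment factors are set to one, and rotation magnitudes are used so that motion in either direction contributes blur, which is an assumption of this sketch only):

```python
import math

def blur_value(theta, res_x, res_y, fov_x, fov_y,
               full_turn=360.0, adjust=(1.0, 1.0, 1.0)):
    """Weighted sum B = wx*|theta_x| + wy*|theta_y| + wz*|theta_z|, with the
    weights scaled by the pixel-blur scaling factors so that each axis
    contributes in units of the pixel pitch."""
    w_x = adjust[0] * res_y / fov_y                 # rotation about X -> blur along Y
    w_y = adjust[1] * res_x / fov_x                 # rotation about Y -> blur along X
    w_z = adjust[2] * math.pi * res_y / full_turn   # rotation about Z -> circular blur
    return w_x * abs(theta[0]) + w_y * abs(theta[1]) + w_z * abs(theta[2])
```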
The blur measure M is derived differently depending on whether the image sensor 10 is globally shuttered or rolling shuttered, to take account of the differing exposures in each case, as follows.
The image sensor 10 may be globally shuttered. In that case, the entire image, including each row, is exposed at the same time. This means that each pixel experiences the same rotation across the exposure time. In this case, the blur measure M is simply the blur value B that is the weighted sum described above derived from the rotation detected over the exposure time of the image sensor 10. The amount of rotation about each axis used to derive the blur value B is an integral of the angular velocity about that axis detected by the gyroscope sensor 15a over the exposure time.
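For a globally shuttered sensor, the integration of the detected angular velocity over the exposure time could be sketched as follows (a simple rectangular integration of uniformly sampled gyroscope data is assumed; the names are illustrative only):

```python
def rotation_during_exposure(angular_velocity, sample_rate_hz, exposure_time_s):
    """Integrate angular velocity samples about one axis (e.g. deg/s at a fixed
    sampling rate) over the exposure time of a globally shuttered sensor to
    obtain the amount of rotation used in the blur value B. The samples are
    assumed to start at the opening of the shutter."""
    n = max(1, round(exposure_time_s * sample_rate_hz))
    dt = 1.0 / sample_rate_hz
    return sum(angular_velocity[:n]) * dt
```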
The image sensor 10 may be rolling shuttered. In that case, rows of pixels are exposed successively and read out sequentially. Hence, the rows of pixels have
successively different exposure times, albeit having exposure times of the same length. This means that each row of pixels may experience a different rotation across its respective exposure time as the motion of the camera unit 2 changes.
In this case, a blur value B that is the weighted sum described above is derived in respect of lines of the image that are exposed at different times, from the rotation detected over the exposure times of those lines. This may be done by windowing the angular velocity detected by the gyroscope sensor 15a with a "blur window" that corresponds to the exposure time of successive lines. The amount of rotation about each axis used to derive the blur value B in respect of a line is an integral of the angular velocity about that axis detected by the gyroscope sensor 15a over the exposure time of the line. The length of the blur window is determined by the exposure time of the camera unit 2 and the sampling rate of the gyroscope sensor 15a. For example, if the exposure time is 5 ms and the sampling rate of the gyroscope sensor 15a is 1 kHz, then the blur window has a length of 5 measurements from the gyroscope sensor 15a. This allows the blur values B to track the motion that occurs during the overall readout time of the image.
In practice the sampling rate of the gyroscope sensor 15a may be insufficient to derive a different blur value for every line of pixels. That is, the blur window is updated at the sampling rate of the gyroscope sensor 15a, resulting in derivation of blur values B at that sampling rate. For example, at a typical frame rate of 30 fps and a sampling rate of the gyroscope sensor 15a of 1 kHz, 33 blur values B are derived in respect of an image.
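A sketch of the blur-window processing for a rolling shutter, assuming uniformly sampled three-axis gyroscope data and a caller-supplied function (for example the blur_value sketch above) that turns a rotation triple into a blur value B:

```python
def rolling_shutter_blur_values(gyro_samples, sample_rate_hz,
                                exposure_time_s, readout_time_s, blur_value_fn):
    """gyro_samples: sequence of (wx, wy, wz) angular-velocity samples covering
    the readout of one frame. A blur window equal to the exposure time slides
    forward one sample at a time, producing blur values at the gyroscope
    sampling rate (e.g. ~33 values for a 33 ms readout sampled at 1 kHz)."""
    dt = 1.0 / sample_rate_hz
    window = max(1, round(exposure_time_s * sample_rate_hz))   # e.g. 5 samples for 5 ms at 1 kHz
    count = max(1, round(readout_time_s * sample_rate_hz))
    values = []
    for i in range(count):
        chunk = gyro_samples[i:i + window]
        theta = tuple(sum(s[axis] for s in chunk) * dt for axis in range(3))
        values.append(blur_value_fn(theta))
    return values
```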
By way of illustration, Fig. 3 shows a sample image and Fig. 4 illustrates the pixel blurs generated from the angular rotation generated during the capture of that sample image. In particular, Fig. 4 illustrates the pixel blurs Px, Py and Pz in respect of the angular rotation about each axis that are combined to derive the blur value B as described above. In the case of this sample image, Fig. 4 shows that (1) the pixel blurs Px, Py and Pz in respect of the rotation about the different axes vary from one another, and (2) the degree of blurring, and hence the blur value B, is greater in the middle of the image than in the top or bottom of the image, both of which effects can be seen in the sample image itself in Fig. 3.
The blur measure M for the image as a whole is derived as a combination of the blur values B generated across the overall image. This combination may be a weighted sum of the blur values B. For example, the blur measure M may be derived in accordance with the equation
M = ∑j (xj · Bj)

where Bj is the j-th blur value, xj is the weight in respect of the j-th blur value, and the summation occurs over all the blur values generated for an image.
In one possibility, the weights are the same, for example taking the value of one.
In another possibility, the weights may take account of the perception of blur to a viewer. In one example of such a perceptually influenced weighting, the weights xj may have a value of zero when the blur value Bj is below a perception threshold and increase with the blur value above the perception threshold. This takes account of an observation that people do not tend to perceive low levels of blur. In another example of such a perceptually influenced weighting, the weights xj may increase with the blur value Bj up to a saturation threshold of the blur value Bj, above which the weights xj have a constant value. This takes account of an observation that people tend to perceive blur up to a saturation point, after which further increases in blur are not perceived.
For example, both these examples of perceptually influenced weighting may be implemented by using weights xj that are a sigmoid function of the blur values Bj, for example as shown in Fig. 5.
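As an illustration only, such a sigmoid weighting could be sketched as follows (the perception and saturation thresholds and the curve shape are placeholders, not values taken from the description):

```python
import math

def blur_measure(blur_values, perception_threshold=1.0, saturation_threshold=8.0):
    """Combine per-window blur values B into an overall blur measure M using
    sigmoid weights: close to zero below the perception threshold, rising and
    levelling off towards one around the saturation threshold."""
    mid = 0.5 * (perception_threshold + saturation_threshold)
    scale = max((saturation_threshold - perception_threshold) / 8.0, 1e-6)

    def weight(b):
        return 1.0 / (1.0 + math.exp(-(b - mid) / scale))

    return sum(weight(b) * b for b in blur_values)
```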
An alternative option for taking account of such saturation in the perception of blur is for the blur values B in respect of different lines to be clipped at a saturation threshold, prior to being combined. In that case, they may be combined simply by summation.
An image capture operation performed by the control circuit 12 and using such a blur measure M is shown in Fig. 6 and performed as follows.
This example is intended for a wearable camera in which the image capture operation is performed intermittently without triggering by a user, as described above. In this case, the image sensor 10 and the gyroscope sensor 15a may be powered down between performances of the image capture operation. Thus, in the first step S1 the image sensor 10 and the gyroscope sensor 15a are supplied with power so that the image sensor 10 starts to capture images and the gyroscope sensor 15a starts to detect the angular motion of the camera unit 2. Thereafter, the following steps are performed in an exposure selection stage used to derive the desired exposure.
In step S2, a still image is captured. During the image capture, the exposure is controlled, the exposure taking a predetermined initial value the first time that step S2 is performed. The captured image is stored in the buffer 14. As discussed above, the exposure is controlled by varying the exposure time of the image sensor 10, and optionally also the aperture if the lens assembly 11 permits that.
At the same time as the image is captured in step S2, in step S3 the angular motion of the camera unit 2 is detected by the gyroscope sensor 15a.
In step S4, the blur measure M in respect of the captured image is derived from the angular motion detected in step S3, in the manner described above. As will be described below, steps S2 to S4 are repeated to capture plural images during the exposure selection stage, resulting in separate blur measures M in respect of each image being derived in repeated performances of step S4. All the thus-derived blur measures M are stored for use in step S9 as will be described below.
In step S5, a brightness measure of the brightness of the captured image is derived from the image captured in step S2. This effectively uses the image sensor 10 as a light sensor for determining the brightness of illumination. The brightness may be measured by the luminance of the image or any other type of brightness measure. The measured brightness may be the overall brightness. In step S5, as an alternative to the brightness measure being derived from the captured image, the brightness measure may be derived from a light sensor separate from the image sensor 10 that measures the brightness of illumination, for example a TTL (through the lens) light sensor.
The brightness measure may be derived from the image in any manner suitable for automatic exposure control. For example, the brightness measure might in the simplest case be the average brightness of the image, or in a more complicated case be derived from the brightness of areas of the captured image weighted by an exposure mask. Such an exposure mask comprises different weights corresponding to different areas of the image. This causes the response of the exposure control to be biased towards areas which have a relatively high weight, causing those areas to be more correctly exposed, at the expense of areas having a relatively low weight being less correctly exposed. In step S6, the brightness measure is analysed to determine if the exposure has converged to the desired level taking into account the brightness measure. This analysis may be performed in accordance with any automatic exposure technique. For example, the brightness measure may be compared to a target level to determine if the exposure has been brought to that target level. If not, then the method returns via step S7 to steps S2 and S3 so that another image is captured.
In step S7, the exposure (exposure time and, if variable, aperture) is adjusted in accordance with the automatic exposure technique. For example, the adjustment may drive the brightness measure for the subsequently captured image towards the target level, by using the difference between the brightness measure from step S5 and the target level as a feedback parameter for the adjustment.
In this manner, steps S6 and S7 cause step S2 to be performed repeatedly to capture images in a cycle with the exposure of the captured images being varied in dependence on the brightness measures of previously captured images.
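One possible sketch of a single iteration of this adjustment, using a simple proportional feedback on the brightness measure (the gain and limits are placeholders; real automatic exposure techniques vary):

```python
def adjust_exposure(exposure_time_s, brightness, target_brightness,
                    gain=0.8, min_exposure_s=1e-4, max_exposure_s=0.1):
    """One iteration of the convergence loop: drive the brightness measure of
    the next captured image towards the target level by scaling the exposure
    time in proportion to the brightness error."""
    if brightness <= 0:
        return max_exposure_s
    new_exposure = exposure_time_s * (1.0 + gain * (target_brightness / brightness - 1.0))
    return min(max(new_exposure, min_exposure_s), max_exposure_s)
```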
The camera unit 2 may typically also perform an auto-white balance (AWB) procedure. In that case, steps S5 to S7 may also adjust the white balance (colour balance), by step S5 additionally comprising derivation of an appropriate colour measure indicating the white balance, step S6 also involving analysis of the colour measure to determine if AWB convergence has occurred, and step S7 also involving adjustment of the colour balance, for example by adjustment of the relative gain of different colour channels. In a similar manner, if the lens assembly 11 provides a variable focus that is controllable, rather than a fixed focus, then an autofocus procedure may also be performed at the same time.
When it is determined in step S6 that the exposure has converged, the method proceeds to step S8 in which an exposure (exposure time and, if variable, aperture) for a subsequent capture stage is initially selected as the exposure to which convergence has occurred in the preceding steps. Thus, in itself, step S8 selects an exposure for the capture stage on the basis of the determined brightness of illumination of the captured images alone.
In step S9, the degree of blurring for future capture of images is predicted from the plural blur measures M derived in step S4 from the motion detected during the capture of images in the repeated performances of step S2. The prediction may be performed by deriving a predicted blur measure Mp in any manner, for example by taking a simple average of the plural blur measures M, by low-pass filtering the blur measures M, or by a prediction that weights more recent blur measures M more greatly on the basis that they have a greater predictive power.
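A sketch of one such prediction, weighting more recent blur measures more heavily (the decay factor is a placeholder; a value of 1.0 reduces it to a plain average):

```python
def predict_blur(blur_measures, decay=0.7):
    """Form the predicted blur measure Mp from the blur measures M collected
    during the exposure selection stage, weighting more recent measures more
    heavily on the basis that they have greater predictive power."""
    if not blur_measures:
        return 0.0
    n = len(blur_measures)
    weights = [decay ** (n - 1 - i) for i in range(n)]
    return sum(w * m for w, m in zip(weights, blur_measures)) / sum(weights)
```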
In step S10, an exposure for the subsequent capture stage is selected taking into account the degree of blurring predicted in step S9. In the event that the degree of blurring indicated by the predicted blur measure Mp is acceptable, then in step S10 the exposure (exposure time and, if variable, aperture) for the subsequent capture stage is selected to be that initially selected in step S8.
On the other hand, in the event that the degree of blurring indicated by the predicted blur measure Mp is unacceptable, then in step S10 the exposure time for the subsequent capture stage is reduced from the exposure time initially selected in step S8. The aperture, if variable, may be controlled to maintain the same value to maximise depth of field, or may be increased so as to limit the overall reduction in exposure.
The reduction in the exposure time is performed because that limits the amount of motion blur. This may be done at the expense of some degree of under-exposure on the premise that a darker image is preferable to the user than a blurry one or a noisy one if the reduced exposure is compensated by increased gain.
The determination in step S10 of whether or not the predicted degree of blurring is acceptable may be made by comparing the predicted blur measure Mp with a threshold. That threshold may be selected based on experimental observation of image capture of typical scenes under typical operating conditions. The threshold may be set in a number of ways, some non-limitative examples being as follows. The threshold may be fixed. The threshold may be dependent on the exposure time initially selected in step S8, for example being increased to allow increased blurring when the exposure time is relatively high, on the basis that users may be more accepting of a blurry image if it has high brightness. The threshold may be set or adjusted by the user. The threshold may be adjusted to manage power consumption and hence battery life.
In this manner, as step S10 can reduce the exposure time for the capture stage from that selected in step S8, the overall effect of steps S8 and S10 is to select the exposure time for the capture stage on the basis of both (a) the determined brightness of illumination of the captured images, and (b) the predicted blur measure Mp and hence also the detected motion from which it is derived.
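By way of illustration, the selection in step S10 with an exposure-dependent threshold could be sketched as follows (the threshold function, its constants and the reduction factor are placeholders chosen only to show the shape of the decision):

```python
def blur_threshold(initial_exposure_s, base=3.0, slope=200.0):
    """Threshold on the predicted blur measure Mp; raised for longer exposure
    times on the basis that users accept more blur in brighter images."""
    return base + slope * initial_exposure_s

def select_capture_exposure(initial_exposure_s, predicted_blur, reduction=0.5):
    """Step S10: keep the exposure selected in step S8 if the predicted
    blurring is acceptable, otherwise reduce the exposure time."""
    if predicted_blur <= blur_threshold(initial_exposure_s):
        return initial_exposure_s
    return initial_exposure_s * reduction
```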
Thereafter, the following steps are performed in a capture stage using the exposure selected in the exposure selection stage.
In step S11, a still image is captured. In at least the first performance of step S11, the exposure (exposure time and, if variable, aperture) is that selected in the exposure selection stage. The captured image is stored in the buffer 14.
At the same time as the image is captured in step S11, in step S12 the angular motion of the camera unit 2 is detected by the gyroscope sensor 15a.
In step S13, the blur measure M in respect of the captured image is derived from the angular motion detected in step S12, in the manner described above.
In step S14, it is determined whether the image captured in step S11 is of acceptable quality, taking into account the blur measure M derived in step S13. The blur measure M may be taken into account in various ways. Two non-limitative options are as follows.
In some options, the determination takes account of only the blur measure M derived in step S13, as for example in the following first and second options.
The first option is simply to compare the blur measure M with a threshold. In that case, the blur measure M being below the threshold may indicate acceptable quality and vice versa.
The second option is to take account of the blur measure M in some more complicated way.
In other options, the determination also takes account of one or more measures of another parameter representing quality of the image as for example in the following third and fourth options. By way of example, suitable parameters include a brightness measure of the brightness of the captured image derived from the image as discussed above; a measure indicating how well exposed the image is; or another measure of the light conditions including the colour measure used in the AWB procedure.
The third option is to derive a quality metric that combines the blur measure with the one or more other measures. The combination may be performed in any manner, for example a weighted sum of the measures. Typically the blur measure will have the dominant effect on the quality metric. Then, the quality metric will be compared with a threshold. A fourth option is to have separate conditions on the blur measure M (for example comparing it with a threshold as in the first option), and on measures of other parameters.
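A sketch of the third option, in which a quality metric is formed as a weighted sum dominated by the blur measure (the weights, the other measures and the threshold are placeholders):

```python
def quality_metric(blur_measure, exposure_error=0.0, colour_error=0.0,
                   w_blur=1.0, w_exposure=0.3, w_colour=0.1):
    """Weighted sum of the blur measure and measures of other quality
    parameters, with the blur measure carrying the dominant weight
    (lower values indicate better quality)."""
    return (w_blur * blur_measure
            + w_exposure * abs(exposure_error)
            + w_colour * abs(colour_error))

def image_acceptable(blur_measure, exposure_error, colour_error, threshold=5.0):
    """Step S14 decision under the third option: accept the image when the
    combined quality metric is below a threshold."""
    return quality_metric(blur_measure, exposure_error, colour_error) <= threshold
```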
In the event that it is determined in step S14 that the image is of acceptable quality, then the method proceeds to step S15 in which the image captured in step S11 is stored in the memory 13.
However, in the event that it is determined in step S14 that the image is not of acceptable quality, then the method proceeds back to steps S11 and S12, via steps S16 and S17 which will be described below. In this manner, steps S11 to S14 are repeated until an image of acceptable quality is captured and then stored in step S15. The overall result is that the image stored in the memory 13 is of acceptable quality taking into account the degree of blurring indicated by the detected motion.
Steps S16 and S17 are used to reduce the exposure time in the event that the detected motion is not indicative of an acceptable degree of blurring over an extended period of time, as follows.
In step S16, it is determined whether a predetermined period has elapsed since the first time an image was captured in step S11. If not, then the method proceeds directly to steps S11 and S12 to repeat the image capture.
However, if it is determined in step S16 that a predetermined period has elapsed, then the method proceeds to step S17 in which the exposure time to be used in step S11 is reduced from that previously used, as originally selected in the exposure selection stage. Thereafter, step S11 is performed with the reduced exposure time. The aperture, if controlled, may maintain the same value to maximise depth of field, or may be increased to limit the overall reduction in exposure.
The exposure time is reduced on the basis that the failure to obtain an acceptable degree of blurring over the predetermined period suggests that the unacceptable blurring is likely to continue. Hence the reduction in the exposure time is performed because that limits the amount of motion blur. This is done at the expense of some degree of underexposure on the premise that a darker image is preferable to the user than a blurry one, or a noisy one if the reduced exposure is compensated by increased gain.
In step S17, although the amount of reduction of the exposure time may be predetermined, advantage may be achieved by the amount of reduction being dependent on the blur measures M derived in the repeated performances of step S13, for example by increasing the amount of reduction when the blur measures M are relatively high.
Steps S16 and S17 have the advantage of finishing the capture operation even when the camera unit 2 remains in a state of motion. This has the advantage of reducing the power consumption of the camera unit 2, which is particularly important to maximise battery life in a wearable camera. Noise-free images require a low gain value, which favours longer exposure times, which in turn increases the likelihood of blurry images. However, this method uses the indicated degree of blur to trade off the time spent taking a high-quality image against the quality of the image.
In optional step S18, sharpness processing is performed on the image stored in step S15 to increase the sharpness of the stored image. In this step, the sharpness processing may be performed to a degree that is dependent on the blur measure M, for example increasing the degree to which sharpness is increased when the blur measure M is indicative of a relatively high degree of blurring.
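As an illustration of such sharpness processing, an unsharp mask whose strength scales with the blur measure M could be sketched as follows (unsharp masking via OpenCV is assumed here purely for illustration; it is not necessarily the processing used by the described unit, and the scaling constants are placeholders):

```python
import cv2

def sharpen_for_blur(image, blur_measure, max_amount=1.5, blur_scale=10.0, sigma=2.0):
    """Unsharp mask whose strength grows with the blur measure M: no sharpening
    for M = 0, approaching max_amount as M approaches blur_scale."""
    amount = max_amount * min(blur_measure / blur_scale, 1.0)
    if amount <= 0:
        return image
    softened = cv2.GaussianBlur(image, (0, 0), sigma)
    return cv2.addWeighted(image, 1.0 + amount, softened, -amount, 0)
```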
Finally, in step S19, the image sensor 10 and the gyroscope sensor 15a cease to be supplied with power.
In the above example, a single blur measure M is derived, combining the detected angular motion around all three axes of rotation. As an alternative, separate blur measures may be derived from the detected angular motion around each axis of rotation. In that case, the image capture operation may be modified to use the separate blur measures with separate conditions on each, for example by comparing each separate blur measure to a threshold, with any one of the blur measures exceeding the threshold being taken to indicate an image of unacceptable quality. However, it has been observed that the combination of the angular motion around each of the three orthogonal axes into a single blur measure in practice provides better results, because the blur of the image in fact results from a combination of the angular motions around different axes.

Claims
1. A method of controlling a camera unit that comprises an image sensor arranged to capture still images and a motion sensor arranged to detect motion of the camera unit, the method comprising:
capturing a still image;
during capture of the image, detecting motion of the camera unit; and
determining whether the image is of acceptable quality taking into account at least the degree of blurring indicated by the detected motion, and either storing the captured image if it is of acceptable quality or else repeating the steps of capturing an image, detecting motion of the camera unit, and determining whether the image is of acceptable quality until an image of acceptable quality is captured and stored.
2. A method according to claim 1, wherein the step of determining whether the image is of acceptable quality also takes account of one or more measures of another parameter representing quality of the image.
3. A method according to claim 1 or 2, wherein the motion sensor is a gyroscope sensor and the detected motion of the camera unit is angular motion of the camera unit around at least one axis.
4. A method according to claim 3, wherein the detected motion of the camera unit is angular motion of the camera unit around three orthogonal axes.
5. A method according to claim 3 or 4, further comprising deriving at least one blur measure representing the degree of blurring for captured images from the angular motion detected by the gyroscope sensor, and said step of determining whether the image is of acceptable quality taking into account at least the degree of blurring indicated by the detected motion is performed by taking into account at least the degree of blurring represented by the at least one blur measure.
6. A method according to claim 5, wherein the at least one blur measure comprises a blur measure derived from a combination of the angular motion around each of three orthogonal axes detected by the gyroscope sensor.
7. A method according to claim 6, wherein said combination of the angular motion around each of three orthogonal axes is a weighted sum of the angular motion around each of three orthogonal axes.
8. A method according to claim 7, wherein the weighted sum of the angular motion around each of the three orthogonal axes has weights in respect of each of the three orthogonal axes that are not identical.
9. A method according to claim 7 or 8, wherein the weighted sum of the angular motion around each of the three orthogonal axes has weights in respect of each of the three orthogonal axes that are scaled relative to each other by factors that are based on the amount of blur measured relative to the pixel pitch.
10. A method according to claim 9, wherein
the three orthogonal axes comprise a first and second axes in the plane of the image sensor and a third axis perpendicular to the plane of the image sensor,
the weight in respect of the first axis is scaled relative to the weight in respect of the third axis by the product of (a) the pixel resolution of the image sensor along the second axis divided by the pixel resolution of the image sensor along the largest circle that fits in the image, (b) a full turn divided by the angular field of view in a plane containing the second axis, and (c) a first adjustment ratio having a value in the range from 0.2 to 5, and the weight in respect of the second axis is scaled relative to the weight in respect of the third axis by the product of (a) the pixel resolution of the image sensor along the first axis divided by the pixel resolution of the image sensor along the largest circle that fits in the image, (b) a full turn divided by the angular field of view in a plane containing the first axis, and (c) a second adjustment ratio in respect of the third axis having a value in the range from 0.2 to 5, and the ratio of the first to second adjustment ratio having a value in the range from 0.2 to 5.
11. A method according to any one of claims 5 to 10, wherein the image sensor is globally shuttered and the blur measure is derived from the angular motion detected by the gyroscope sensor over the exposure time of the image sensor.
12. A method according to any one of claims 5 to 10, wherein the image sensor is rolling shuttered and the blur measure is a combination of blur values in respect of lines of pixels of the image that are exposed at different times, the blur values being derived from the angular motion detected by the gyroscope sensor over the exposure time of respective lines.
13. A method according to claim 12, wherein said combination of blur values is a weighted sum of the blur values in respect of each line.
14. A method according to claim 13, wherein the weighted sum of the blur values in respect of each line has weights that are zero when the blur value is below a perception threshold and that increase with the blur value above the perception threshold.
15. A method according to claim 12 or 13, wherein the weighted sum of the blur values in respect of each line has weights that increase with the blur value up to a saturation threshold above which the weights have a constant value.
16. A method according to any one of claims 13 to 15, wherein the weighted sum of the blur values in respect of each line has weights that are a sigmoid function of the blur values.
17. A method according to claim 12, wherein the blur values in respect of each line are clipped at a saturation threshold.
18. A method according to any one of the preceding claims, further comprising performing sharpness processing that increases the sharpness of the stored image to a degree that is dependent on the degree of blurring indicated by the detected motion.
19. A method according to any one of the preceding claims, wherein the method is performed intermittently without triggering by a user.
20. A method according to claim 19, wherein the camera unit comprises plural sensors arranged to sense physical parameters of the camera unit or its surroundings, and the method is performed intermittently in response to the outputs of the sensors.
21. A camera unit comprising:
an image sensor arranged to capture still images;
a motion sensor arranged to detect motion of the camera unit; and
a control circuit for controlling the camera unit, the control circuit being arranged to perform an image capture operation comprising:
capturing an image;
during capture of the image, detecting motion of the camera unit; and
determining whether the image is of acceptable quality taking into account at least the degree of blurring indicated by the detected motion, and either storing the captured image if it is of acceptable quality else repeating the steps of capturing an image, detecting motion of the camera unit, and determining whether the image is of acceptable quality until an image of acceptable quality is captured and stored.
22. A camera comprising a housing and a camera unit according to claim 21 mounted in a housing.
PCT/GB2015/052042 2014-07-18 2015-07-15 Minimisation of blur in still image capture WO2016009199A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1412818.5 2014-07-18
GBGB1412818.5A GB201412818D0 (en) 2014-07-18 2014-07-18 Minimisation of blur in still image capture

Publications (2)

Publication Number Publication Date
WO2016009199A2 true WO2016009199A2 (en) 2016-01-21
WO2016009199A3 WO2016009199A3 (en) 2016-03-10

Family

ID=51494821

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2015/052042 WO2016009199A2 (en) 2014-07-18 2015-07-15 Minimisation of blur in still image capture

Country Status (2)

Country Link
GB (1) GB201412818D0 (en)
WO (1) WO2016009199A2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017140566A1 (en) * 2016-02-19 2017-08-24 Fotonation Limited A method for correcting an acquired image
WO2018103314A1 (en) * 2016-12-07 2018-06-14 中兴通讯股份有限公司 Photograph-capture method, apparatus, terminal, and storage medium
US11477382B2 (en) 2016-02-19 2022-10-18 Fotonation Limited Method of stabilizing a sequence of images

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4586534B2 (en) * 2004-12-28 2010-11-24 セイコーエプソン株式会社 Imaging apparatus, camera shake correction apparatus, mobile phone, and camera shake correction method
US7509038B2 (en) * 2005-09-29 2009-03-24 Seiko Epson Corporation Determining maximum exposure time to limit motion blur during image capture
US8823813B2 (en) * 2011-06-06 2014-09-02 Apple Inc. Correcting rolling shutter using image stabilization
US8913140B2 (en) * 2011-08-15 2014-12-16 Apple Inc. Rolling shutter reduction based on motion sensors
US9596398B2 (en) * 2011-09-02 2017-03-14 Microsoft Technology Licensing, Llc Automatic image capture
GB201116566D0 (en) * 2011-09-26 2011-11-09 Skype Ltd Video stabilisation
US9503645B2 (en) * 2012-05-24 2016-11-22 Mediatek Inc. Preview system for concurrently displaying multiple preview images generated based on input image generated by image capture apparatus and related preview method thereof

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017140566A1 (en) * 2016-02-19 2017-08-24 Fotonation Limited A method for correcting an acquired image
US11477382B2 (en) 2016-02-19 2022-10-18 Fotonation Limited Method of stabilizing a sequence of images
WO2018103314A1 (en) * 2016-12-07 2018-06-14 中兴通讯股份有限公司 Photograph-capture method, apparatus, terminal, and storage medium
US10939035B2 (en) 2016-12-07 2021-03-02 Zte Corporation Photograph-capture method, apparatus, terminal, and storage medium

Also Published As

Publication number Publication date
WO2016009199A3 (en) 2016-03-10
GB201412818D0 (en) 2014-09-03

Similar Documents

Publication Publication Date Title
US20160173749A1 (en) Still image capture with exposure control
JP5276444B2 (en) Camera exposure optimization technology considering camera and scene movement
US9912864B2 (en) Methods and apparatus for using a camera device to support multiple modes of operation
KR101142316B1 (en) Image selection device and method for selecting image
US20210360138A1 (en) Smart shutter in low light
US9706120B2 (en) Image pickup apparatus capable of changing priorities put on types of image processing, image pickup system, and method of controlling image pickup apparatus
CN109194882B (en) Image processing method, image processing device, electronic equipment and storage medium
TWI394435B (en) Method and system for determining the motion of an imaging apparatus
CN111034170A (en) Image capturing apparatus with stable exposure or white balance
US20160044222A1 (en) Detecting apparatus, detecting method and computer readable recording medium recording program for detecting state in predetermined area within images
JP4420906B2 (en) Imaging device
CN110493522A (en) Anti-fluttering method and device, electronic equipment, computer readable storage medium
KR102592745B1 (en) Posture estimating apparatus, posture estimating method and computer program stored in recording medium
WO2016009199A2 (en) Minimisation of blur in still image capture
US9143684B2 (en) Digital photographing apparatus, method of controlling the same, and computer-readable storage medium
US20160156825A1 (en) Outdoor exposure control of still image capture
JP5118590B2 (en) Subject tracking method and imaging apparatus
JP2010028418A (en) Imaging apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15741264

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15741264

Country of ref document: EP

Kind code of ref document: A2