WO2007057498A1 - Imaging system with adjustable optics - Google Patents

Imaging system with adjustable optics

Info

Publication number
WO2007057498A1
Authority
WO
WIPO (PCT)
Application number
PCT/FI2005/050409
Other languages
French (fr)
Inventor
Ossi Kalevo
Hannu Kakkori
Mika Kunnari
Mikko Ollila
Original Assignee
Nokia Corporation
Application filed by Nokia Corporation
Priority to EP05808852A (EP1949671A4)
Priority to PCT/FI2005/050409 (WO2007057498A1)
Priority to CNA2005800523022A (CN101326814A)
Priority to KR1020107003295A (KR20100023056A)
Priority to JP2008540635A (JP5086270B2)
Publication of WO2007057498A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/67: Focus control based on electronic image sensor signals
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00: Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/00127: Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture
    • H04N1/00281: Connection or combination of a still picture apparatus with a telecommunication apparatus, e.g. a switched network of teleprinters for the distribution of text-based information, a selective call terminal
    • H04N1/00307: Connection or combination of a still picture apparatus with a mobile telephone apparatus
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70: Circuitry for compensating brightness variation in the scene
    • H04N23/73: Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2101/00: Still video cameras
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N2201/00: Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N2201/0077: Types of the still picture apparatus
    • H04N2201/0084: Digital still camera


Abstract

This invention provides a solution for determining a non-exposure time during which imaging optics can be adjusted without affecting the image being captured. In the solution an image sequence comprising at least two images is acquired, at least one of which is used as a measurement image and at least one other of which is used as a final image. Exposure times are determined for the measurement image and the final image. By means of these exposure times, the non-exposure time can be determined. As a result, the imaging optics can be adjusted during the non-exposure time.

Description

IMAGING SYSTEM WITH ADJUSTABLE OPTICS
Field of the Invention
This invention relates generally to the field of imaging, and particularly to imaging with an imaging system having adjustable optics.
Background of the Invention
In the past years digital imaging systems, such as digital cameras, have taken a remarkable role in imaging technology. Digital cameras are characterized by one or more built-in processors and they record images in digital form. Because of its electronic nature, a digital camera (or a digital camera module) can be readily integrated into another electronic device, of which a mobile telecommunication device (mobile terminal) is nowadays a common example. Depending on the master device (i.e. the device the camera module is integrated with), the camera module can communicate with several other components and systems of said device. E.g. in a camera phone, the camera module typically communicates operatively with one or more processors, and in the case of a digital camera, the device can comprise some other type of dedicated signal processing components.
Adjustable optics in the context of a digital imaging system relates to the possibility to use electronically controlled image focusing, such as auto-focusing, and optical zoom functions to adjust the properties of the image to be captured. These operations are becoming more and more important in imaging devices. Auto-focusing and zooming may be accomplished with traditional lens optics with moving lens components, or today they can also be accomplished using optical systems based on lenses with an adjustable shape or other adjustable means to affect their refractive power.
The imaging system comprises a lens system that focuses light in order to create an image of a scene. The light is focused onto a semiconductor device that records light electrically. This semiconductor device can typically be e.g. a CMOS (Complementary Metal Oxide Semiconductor) or CCD (Charge-Coupled Device) sensor. The sensor is mainly composed of a collection of light-sensitive pixels which convert light into electrical charge, which charge is further converted into digital image data. On such sensors a technique called binning can be used. Binning combines the charge in adjacent pixels in order to increase the effective sensitivity of the imaging system and to reduce the number of pixels in the image.
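As an illustration of the binning principle described above, the following minimal sketch (Python with NumPy, not part of the patent text) sums the charge of each 2x2 pixel block; the function name and array shapes are illustrative assumptions.

```python
import numpy as np

def bin_2x2(raw: np.ndarray) -> np.ndarray:
    """Combine the charge of each 2x2 pixel block into one output pixel.

    This increases the effective sensitivity per output pixel and
    reduces the pixel count by a factor of four.
    """
    h, w = raw.shape
    h -= h % 2                            # drop an odd edge row/column, if any
    w -= w % 2
    blocks = raw[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.sum(axis=(1, 3))        # summing models charge combination

# Example: a 1200 x 1600 sensor frame becomes a 600 x 800 binned image.
frame = np.random.randint(0, 1024, (1200, 1600), dtype=np.int64)
print(bin_2x2(frame).shape)               # (600, 800)
```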
The imaging system also comprises shutter means. The main types of shutters are the global shutter and the rolling shutter. Currently the rolling shutter is used together with a CMOS sensor and the global shutter with a CCD sensor, but the shutters can also be used in a different manner. The shutter means are used to restrict the exposure of the image sensor. The shutter operation consists at least of reset, exposure and read operations, but operations such as open and close can also take place. The shutter means, both global and rolling, can be implemented electronically or mechanically; in a mechanical implementation also a variable aperture or Neutral Density (ND) filters may be used. The rolling shutter is known for exposing the image substantially line by line, whereas the global shutter aims to expose all the pixels in the image at substantially the same time.
The imaging system also comprises a focus detector that measures current focus values, typically from one or multiple regions of the image, and the results are used in a control function also included in the imaging system. The measurement of focus is typically based on the contrast between adjacent areas of the image, and therefore the control function tries to find the best focus for the image by maximizing the contrast in the image.
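The contrast measurement described above can be sketched, for illustration only, as a simple gradient-energy metric; the patent does not prescribe a specific formula, so this squared-difference measure is an assumption.

```python
import numpy as np

def focus_measure(region: np.ndarray) -> float:
    """Contrast-based focus value: the sum of squared differences
    between adjacent pixels. A sharper image has higher local
    contrast and therefore a higher score."""
    region = region.astype(np.float64)
    dx = np.diff(region, axis=1)   # horizontal neighbour differences
    dy = np.diff(region, axis=0)   # vertical neighbour differences
    return float((dx ** 2).sum() + (dy ** 2).sum())
```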
The imaging system also comprises an exposure detector that measures the current level of light exposure in the image pixels; its result is also used in the control function. The control function compares the current exposure level to the target exposure level. Based on this comparison, the exposure time, analogue gain, digital gain, aperture and ND filters are controlled. The control function also utilizes information received from the user interface. For example, if the user wants to zoom into the image, the control function starts to change lens positions. An optics driver is used when the lens system is moved; it is typically controlled by I2C (Inter-Integrated Circuit) commands or by Pulse Width Modulation (PWM) signals.
The imaging system may also comprise, or be connected to, input devices (e.g. control buttons for zoom, scene selection and shutter control). A flash is also typically used in the imaging system. All the image processing, including the focus detector, the exposure detector, the control function and the actual image processing, can be done in the camera module, in the camera processor, in the application engine, in the baseband engine, or in any combination of those. The processing can also be implemented by using software or hardware processing blocks. At least the detector and control functions of the image processing have to operate in real time.
In this description imaging may refer to still imaging, video imaging or viewfinder imaging. Still imaging produces visual information that is characterized by being non-moving. A still image is stored into a memory right after it has been taken. Video imaging produces a moving visual representation that changes with time. In video imaging a series of visual representations is taken in order to give an impression of motion when shown in succession. Viewfinder imaging provides images for the viewfinder display. The viewfinder of a digital imaging system is typically an integrated color display that provides a preview of the scene that the user is capturing. The viewfinder image that is seen on the display is typically taken from the image sensor and, after scaling down from its original resolution in the sensor or in the processor, displayed on the viewfinder display. Typically there is no need for storing the viewfinder images. Viewfinder images should preferably be updated on the viewfinder display promptly and with minimum delay in order to provide a good real-time feel and response to the user.
Focusing can be done either automatically (auto-focus) or manually, involving user interaction. Further, the auto-focus (AF) function can be implemented by using single shot auto-focusing or by using continuous auto-focusing. Single shot auto-focusing is typically applied when capturing still images, and continuous auto-focusing is applied in video imaging.
Single shot auto-focus is typically implemented in such a way that the lens is moved through its range, using fixed increments, and focus detector values are recorded. When the scanning has finished, the lens is moved to the position where the contrast was found to be at its maximum. Single shot auto-focus can be activated, for example, by pressing the image capture button halfway. Therefore, when the capture button is pressed all the way down, the imaging optics has already been adjusted properly and hence the image can be captured immediately, giving a good user experience. The performance of the focusing system can be characterized by the time it takes to find the best focus and by the accuracy of the focused image. In continuous auto-focusing the focus detector values are determined from captured images substantially continuously, and focusing is improved by adjusting the imaging optics whenever the focus detector values indicate that this is necessary. Typically, and especially in video imaging, captured images are also displayed on the viewfinder display in real time. The advantage of continuous auto-focusing is that the optics can be kept continuously in focus and therefore also the viewfinder image stays in focus all the time. In video recording this is a clear necessity, but it is highly beneficial also when recording still images, and a single still image can then be captured without delay, or after a short delay, by fine tuning the basic continuous focusing with a quick single shot focusing procedure.
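As a sketch of the single shot scan just described (the `camera` driver object and its methods are hypothetical; `measure` can be e.g. the focus_measure() sketched earlier):

```python
def single_shot_autofocus(camera, measure, steps: int = 16) -> float:
    """Scan the lens through its range at fixed increments, record a
    contrast value at each position, then move back to the position
    where the contrast was at its maximum."""
    lo, hi = camera.lens_range()                       # hypothetical API
    positions = [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
    scores = []
    for pos in positions:
        camera.move_lens(pos)                          # step the optics
        scores.append(measure(camera.capture_detection_area()))
    best = positions[scores.index(max(scores))]
    camera.move_lens(best)                             # settle at best focus
    return best
```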
From what has been said above, it is clear that focusing for still images, video imaging and the viewfinder display have somewhat different requirements. Depending on whether rolling shutter or global shutter type exposure control is used, a lens movement for auto-focus (or zooming) during exposure can cause different types of artefacts. Especially when the rolling shutter is used, the blanking time between images (when no optical image information is collected by the image sensor) is usually very short, and hence there is not enough time available for the lens movement during that time without causing artefacts in the image. Also, when modern high-resolution sensors are used, the viewfinder image usually needs to be sub-sampled, binned or downscaled because of the bandwidth limitation in the interface between the camera module and the subsequent parts of the image processing chain. Therefore the quality of the auto-focus detection in the later parts of the image processing chain is limited because of the limited resolution of the viewfinder images.
In the most time critical applications, where the rolling shutter is used, the auto-focus detection information can be calculated by dedicated hardware or software immediately when the image information from the detection regions is available for the focus detector. In other words, the auto-focusing need not be based on sub-sampled, binned or downscaled viewfinder images but can be performed on a selected part of the image using the full resolution of that part. Typically, the regions of the focus detection are located in the middle of the image area. In this case, the decision for the next frame's lens movement can be available before all the lines of the current image are fully exposed and transmitted. The problem then is that, if the lenses are moved immediately after the auto-focus processing is done for the center part of the image, the last lines of the current image are exposed with a moving lens and the resulting artefact can easily be seen in the captured or viewed image. A similar kind of artefact can be caused if the exposure of the next image frame is started before the lens movement has ended. This situation is possible with both the focus lens and the optical zoom lens movements. In this case, with the rolling shutter, the first lines of the image are corrupted, and with the global shutter the whole image is corrupted. The zoom lens can be moved only at the time when the image sensor is not exposing, i.e. between image frames. Timing of the commands is very critical: even if dedicated hardware is used, there might be a delay before the lenses actually move.
Image focusing, especially with a single shot auto-focus, takes a significant amount of time and may mean, in a rapidly changing situation, that the scene aimed to be captured is already unavailable when the camera system is finally ready and the image is focused.
Such a situation is typical for example when imaging sports or other activities, where the scene contains fast moving objects and rapidly changing situations.
Implementations for the usage of the rolling shutter with adjustable optics can be found in the related art. For example, the command for the lens movement can be given immediately after the decision for the lens movement has been made, without considering the effect on the image being captured. In these cases, typically the last lines of the image become corrupted. In another example, the command for the lens movement is given only after the whole image has been captured. In this case the start of the lens movement is effectively delayed until the whole image has been captured and then, depending on the length of the blanking and exposure times, the lens is moved during the blanking period. But due to the shortness of that period, quite often the first lines of the next image become corrupted because the lens movement continues too long.
The auto-focus detection is traditionally made by measuring auto-focus detection values frame by frame. This type of detection requires that the whole image frame, or the whole sub-sampled image frame, needs to be AD converted when the focus detection is performed. Quite often some frames are skipped due to the lack of time for a proper focus detection or for a proper image viewing. This increases the focusing time even more. With video images the frames are not usually skipped, but then the artefacts caused by the exposure and the lens movement can be seen in the recorded video sequence. It can be clearly seen that solutions for exposing the image properly, at the time when the lens also needs to be moved for focusing or zooming purposes, and without damaging the images to be captured, still need to be developed further in order to overcome the deficiencies of the state of the art.
According to the Applicant's understanding, the knowledge of the image sensor status has not yet been fully utilized when timing the adjustment of the imaging optics for focusing or zooming. This invention aims to provide a solution that maximizes the time available for adjustments of the optics and at the same time minimizes the artefacts caused to captured images. At the same time the invention aims to minimize the response times, providing an improved user experience.
Summary of the Invention
It is an object of the current invention to provide a solution for exposing the image properly together with the optics adjustment operation, while at the same time minimizing the artefacts caused to the captured images.
It is another object of the current invention to provide various methods for minimizing response times in the image focusing process.
These objects can be achieved by an imaging method, an imaging device, an imaging module and a computer program product for acquiring an image sequence comprising at least two images, at least one of which is used as a measurement image and at least one other of which is used as a final image; determining a measurement image exposure time and a final image exposure time; determining a non-exposure time between the measurement image exposure time and the final image exposure time; and allowing adjustment of the imaging optics during said non-exposure time.
These objects can also be achieved by methods, modules and computer program products for determining a non-exposure time as described in the characterizing portions of claims 32, 33, 34, 36, 37, 38.
The first example of the current invention is a so-called timing solution for the optics adjustment. In this example a proper timing is determined from the auto-focus detection values. The timing defines when the auto-focus and/or zoom optics can be adjusted.
Because the first example of the invention defines an exact point in time when the focus or zoom optics can be adjusted, image artefacts can be avoided. If the frame blanking times in a given situation are small, the invention provides a much larger time margin for the optics control than the blanking time alone.
Said first example also minimizes the latency in the control loop and improves the real-time performance, since it is guaranteed that the auto-focus statistics calculations have been finished for the previous frame and the optics adjustment has been applied before the next frame.
The settling time for the auto-focus/zoom hardware, i.e. the total time needed for the position of the optics to finally freeze after starting to move it, might be in the same range as the blanking time. Therefore it is important to be able to provide a long enough settling time before exposing the pixels of interest. A long settling time allows a small startup current in the auto-focus/zoom actuator controller, which is an advantage especially in portable applications where only a battery with limited capacity is available. When the whole non-exposure time can be used for the optics adjustment, the actuator does not have to be extremely fast, which means that less power is needed for the optics adjustment.
A second example of the current invention is to detect auto-focus from multiple optics positions within one frame. This can be done by adjusting the optics when the detection area pixels are not exposed, but still during the exposure of the total image frame. The detection area is the area of interest in the image which is used for the focus detection.
This second example enables a shorter time for finding the focus. In addition, lower power consumption can be achieved, because continuous focusing is not always needed. This example also improves usability.
A third example of the current invention is to control the blanking period, or the exposure and lens movement times.
This example provides video and viewfinder images that are not corrupted and that can also be provided at the maximum repetition frequency in the light and control conditions in question. However, if the maximum image frequency is not wanted, a fixed image frequency enables the optics adjustment and assures the maximum exposure time whether the optics are adjusted a lot, a little, or not at all. Similarly, if the exposure time is short, the zooming may be accelerated, or the temporal peak effect may be decreased when the optics are adjusted more slowly. The third example also enables a flexible automatic night/day mode, whereby the image frequency may slow down according to the exposure time, but not more than that.
Description of the Drawings
The accompanying drawings which are incorporated in and constitute a part of this specification, illustrate examples relating to this invention and, together with the description, explain the objects, advantages and principles of the invention. In the drawings
Figure 1 illustrates an example of an image sequence,
Figure 2 illustrates an example of a timing solution for the optics adjustment,
Figure 3 illustrates an example of an image frame comprising auto-focus windows,
Figure 4 illustrates an example of an auto-focus system,
Figure 5 illustrates an example of one optics position and adjustment during a frame period,
Figure 6 illustrates an example of measuring N optics positions within one frame,
Figure 7 illustrates an example of a focus measure as a function of an optics position,
Figure 8 illustrates an example of a full focus scan during one frame,
Figure 9 illustrates an example of a focus scan with the global shutter,
Figure 10 illustrates an example of a static blanking period solution,
Figure 11 illustrates an example of a dynamic blanking period solution, and
Figure 12 illustrates an example of a device according to the invention.
Detailed Description of the Invention
The current invention relates to an imaging system with adjustable optics. The imaging system may be a digital still image camera, a digital video camera, a mobile terminal capable of either still imaging or video imaging or both, or any other electronic device capable of imaging. The imaging system comprises adjustable optics that can be moved (e.g. an auto-focus lens or an optical zoom lens) and a sensor (e.g. a CCD sensor or a CMOS sensor). The system further comprises image processing means that relate to the image sensor and may be located on a camera module, on a separate processing circuit, on an application engine of a mobile device, or on a combination of the previous. The processing operation consists at least of forming an image, improvement functions for the image and real-time controlling, such as lighting (EC), white balance (WB) and sharpness (F). The real-time processing can be implemented automatically, whereupon no actions from the user are needed. The imaging system also comprises input devices, or is connected to such, by means of which it is possible to control the operations of the camera. These operations can be e.g. a zoom control, an object selection, a mode selection and a launcher that activates the image capturing or the video imaging. When speaking of a lens in the following description, optics comprising e.g. a traditional lens or a liquid lens or similar is meant. Therefore, when "lens movement" or "moving of a lens" is written in the description, the skilled person will appreciate that the moving is an actual movement for a traditional lens, but when e.g. a liquid lens is used, the moving is some other adjusting operation by means of which the light can be projected onto the image sensor and by means of which the image can be outlined.
Further, as said in the background of the invention, the imaging system also comprises shutter means, such as a global shutter or a rolling shutter. In the following description specific terms are used for the sake of clarity. These terms are not intended to define or limit the scope of the invention unnecessarily, but to form a better concept of the features of the current invention.
Figure 1 illustrates an example of an "image sequence" comprising at least two frames F1, F2. One of the frames is a measurement image F1, and the other is a final image F2. The final image is the one being stored, and the measurement image can be used for measuring e.g. the focus or the exposure time. A measurement area M1 can be defined in the measurement image F1 and it is used for the measurement. The final image can be a raw image obtained from a sensor; therefore digital image processing or other algorithms can be performed on such a final image before the actual storage. There can be several measurement images within one image frame. The measurement image is typically smaller than the final image, and it is not stored. However, when the image sequence is a video image sequence, typically also the measurement images are so-called final images and will be stored.
"Blanking time" relates to the time during which the sensor is not able to record meaningful image data due to frame/line or pixel resetting or any other sensor architecture originated reasons or due to user defined control. The "blanking time" not necessarily correspond time, when the image is not exposed, but the time when the pixels are not sent from the sensor forward. The blanking time is shown in Figure 1 between the two frames F1 , F2. In the rolling shutter the light is continuously received, but before the actual reading, the pixels and lines are reset before the amount of the exposure time. The time when the pixels in the sensor are not exposed is within the vertical blanking period. Still the sensor can be exposed during each blanking periods (e.g. at least the following line is exposed during line blanking). The blanking time can occur between image frames, between lines and also between pixels.
Both images have their own exposure times shown in Figure 1. The exposure of the final image F2 partially overlaps the blanking time.
However, the exposure of the measurement image F1 can behave completely differently from that of the final image F2. The exposure for the measurement image F1 starts before the readout of the measurement area M1. What should be noticed from Figure 1 is that the exposure of the final image F2 does not continue into the blanking time following the final image F2.
The "non-exposure time" between the frames F1 , F2 defines how long there is time from the beginning of the blanking time, during which the exposure is not done at least to the next pixel, line or image, and during which the lens can be moved. In figure 1 , the non-exposure time starts from when the exposure of the measurement image F1 has been ended. The non-exposure time can be extended depending on the time needed for the lens movement. In the case of video, the non-exposure time cannot be extended as much as with viewfinder or measurement images.
The extension of the non-exposure time can be implemented by increasing the channel rate, which creates a faster image readout. In such an implementation, because the image is read faster, there will be a longer time for blanking, which can be used for the lens movement. It should be noticed that the prior art teaches against increasing the channel rate, because of the costs and because the increase would cause EMC noise. But with smaller processes and differential serial interfaces this kind of increase becomes possible.
In the case of viewfinder images, the focus and the measurement are targeted at a smaller image, and therefore the image is sub-sampled, binned or downscaled, thus providing an image of a smaller size. The smaller image can in addition be read faster, whereby more time is left for the non-exposure. In the case of still images, the measurement images are smaller than the actual final image, and therefore time is left for the non-exposure automatically.
The basic method comprises steps for acquiring an image sequence formed of a series of image frames or a series of image sections, e.g. lines. At least a portion of the image sequence is used for controlling the lens. Blanking times are defined in the image sequence between images, between lines or between pixels. In addition, the non-exposure time is defined. The lens is in the end moved during the non-exposure time, which may partly consist of the blanking time. The purpose of the solution is not to let the lens movement (auto-focus, zoom) overlap the exposure. Therefore a time period for the lens movement, during which no exposure is done, i.e. the non-exposure time, is defined. The non-exposure time can basically be defined as the time gap between successive images and their exposures. The imaging system comprises means for enabling the operations during the non-exposure time. Those operations include e.g. defining the exposure time and defining the location in the image from which information is taken. In addition, during that time the system needs to be aware of the actions of the pixel cell and of the actions done earlier. In addition, said means are arranged to know all the delays of each operation, e.g. how much delay results from different amounts of lens movement. Further, said means are capable of knowing whether the focus has gone or is going wrong.
The following description discloses the different examples of the current invention.
1) Timing solution for the lens movement
Figure 2 shows a solution for achieving a proper timing for the lens movement. The pixel data is exposed and transmitted (1) from the sensor 100. The processor 110 receives the image data, being the measurement image, and includes e.g. auto-focus logic 112, where the auto-focus detection is made. The detection block 112 calculates auto-focus statistics, or the statistics are already calculated in the imaging sensor and transmitted to the control CPU 113 (e.g. through I2C or within the image frame). After being calculated, the auto-focus statistics are informed (3) to the control CPU 113. The control CPU 113 reads (4) the auto-focus statistics and makes the necessary conclusions about the needed non-exposure time, i.e. how the auto-focus and/or zoom lenses need to be altered. The control CPU 113 can also use the information received from a user interface when deciding the needs for the lens movement. In this example, line counters in the receiver logic 111 that receives the image data from the sensor 100 can be used. The control CPU 113 reads (5) the line counter register in the receiver 111 after the auto-focus statistics are calculated, and by knowing the number of pixels in the image sensor, the transmission clock frequency and the possible delay in the zoom hardware, it is possible to determine whether the lens can be moved or not. In one example, an image frame from a 1600 x 1200 sensor is received at a rate of 20 frames per second (f/s). The transmission time is then 40 microseconds per line, if there are 50 blanking lines per frame. If the decision on the lens adjustment is ready when line number 1020 has been received, there are still 180 lines left (1200 - 1020) to the end of the image, and it will take 7.2 milliseconds to complete the image transmission. This means that if the delay in the zoom hardware is 1 millisecond, which corresponds to 25 lines (1000 us / 40 us), one needs to wait until line number 1175 (1200 - 25) is received before the command to the zoom hardware (114, 115) can be given (6, 7), so that the zoom optics only start to move after completion of the current image.
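The arithmetic of this example can be replayed with a short script; the constants are taken from the paragraph above, and the function name is an illustrative assumption:

```python
# Figures from the example: 1600 x 1200 sensor, 20 frames/s, 50 blanking lines.
VISIBLE_LINES  = 1200
BLANKING_LINES = 50
FPS            = 20
LINE_TIME_US   = 1e6 / (FPS * (VISIBLE_LINES + BLANKING_LINES))   # 40 us/line

def earliest_command_line(hw_delay_us: float) -> int:
    """Line counter value after which a lens-movement command can be
    issued so that the optics only start to move once the current
    image is complete (the hardware delay is 'absorbed' by the lines
    still being read out)."""
    delay_lines = round(hw_delay_us / LINE_TIME_US)
    return VISIBLE_LINES - delay_lines

print(LINE_TIME_US)                  # 40.0
print(earliest_command_line(1000))   # 1175, as in the example above
```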
The command (6) for the lens driver 114 is not given until it can be guaranteed that the moving of the lenses does not disturb the current image exposure. The control CPU 113 also controls the exposure values and the blanking periods so that the lens movements will not corrupt the next image frame. When the current or next image frames are not needed, the control CPU 113 takes care that the next focus detection regions are not corrupted by the lens movements, for example when the single shot focus is attempted as soon as possible. If the global shutter is used, the timing of the global shutter needs to be known in order to start the lens movement as quickly as possible.
Figure 3 illustrates auto-focus locations inside a measurement image frame. The auto-focus locations can be considered as measurement areas and they are presented with reference numbers 101 - 108 in the sensor array 100. One rolling shutter exposure example, relating to the first five lines of a (CMOS) sensor using a three-line exposure time, is illustrated by the following table 1.

|        | Step 1: First line reset | Step 2: Integration starts | Step 3: Integration continues | Step 4: First line output |
|--------|--------------------------|----------------------------|-------------------------------|---------------------------|
| Line 1 | Reset                    | Exposure                   | Exposure                      | Read out                  |
| Line 2 | -                        | Reset                      | Exposure                      | Exposure                  |
| Line 3 | -                        | -                          | Reset                         | Exposure                  |
| Line 4 | -                        | -                          | -                             | Reset                     |
| Line 5 | -                        | -                          | -                             | -                         |

|        | Step 5: Second line output | Step 6: Third line output | Step 7: Fourth line output | Step 8: Picture complete |
|--------|----------------------------|---------------------------|----------------------------|--------------------------|
| Line 1 | -                          | -                         | -                          | -                        |
| Line 2 | Read out                   | -                         | -                          | -                        |
| Line 3 | Exposure                   | Read out                  | -                          | -                        |
| Line 4 | Exposure                   | Exposure                  | Read out                   | -                        |
| Line 5 | Reset                      | Exposure                  | Exposure                   | Read out                 |

TABLE 1: Exposure example for timing solution
In order to describe the current example, step 4 of the table is taken into consideration. In step 4 the first line (line 1) is read, the following two lines (lines 2, 3) are exposed, and the fourth line (line 4) is reset. After this the following steps proceed until the fifth line (line 5) has been read in step 8. The previous steps 1-3 initialize the exposure operation, and this typically happens during the vertical blanking period. If the actions of line 1 in steps 1-4 are traced, it can be seen that the line in question is first reset, then exposed during two lines, and read at the end. If, on the other hand, the reset is traced, it can be seen that the reset moves from line 1 in step 1 one line forward per step until it is at line 5 during step 5. In this example the following lines and the blanking time are not illustrated, and the blanking time is assumed to be greater than the exposure time, whereby line 1 does not need to be reset for the next image until the last line of the current image has been read.
In this example line counters are described in the context of the receiver logic 111 for evaluating the status of the imaging sensor. In addition, the line counter can also be used for time measurement purposes. This example works as such in a situation where the exposure time is very short and where the lens movement is very quick. A short exposure time means that it is shorter than the vertical blanking period, whereby there is time for the lens movement within the blanking time. Typically the short exposure time is smaller than e.g. 50/(20*(1200+50)) s = 1/500 s = 2 ms, where 50 is the amount of blanking lines (= the maximum exposure time in this example) and 20 is the amount of frames being read per second (a 2 Mpix image comprises 1200 lines). Therefore, if the exposure time is shorter than this, there is enough time for the lens movement within the blanking period; otherwise the first line of the following image would be exposed before the last line of the current image has been read. It should further be noticed that if the amount of blanking lines is e.g. doubled, the blanking time is however not quite doubled, because 100/(20*(1200+100)) s = 1/260 s ~= 3.85 ms. It should also be noticed that the reading of the sensor pixels should be accelerated at the same time, because otherwise it is not possible to get 20 frames per second from the sensor. Typically at least 15 frames per second is required in order to keep the distortion of the image caused by the rolling shutter tolerable. It is also possible to increase the static blanking time as much as possible, because then the actual reading of the image will happen in a shorter time and the distortion of the image will be reduced. In addition, more time can be given to the exposure and the lens movement. It is further possible to keep the blanking time small and to capture a double amount of frames from the sensor, but in practice the lens movement will then affect the imaging and the maximum exposure time will also be reduced. In other situations an example described later (3. Moving of lenses and exposure of image by utilizing blanking time) should be used, where the blanking time is increased statically (3.1) or dynamically (3.2).
Table 1 presents an example of the first five lines of a sensor having e.g. 1200 visible image lines and e.g. 50 blanking lines. The reading of the first line begins in step 4, and steps 1-3 occur during the blanking time. The exposure time lasts three lines; by connecting the exposure time to the example, where the sensor has 1200 lines and 50 blanking lines and where the reading of a line takes 40 μs, this results in an exposure time of 120 μs. Therefore 47 lines would be left for the lens movement (= non-exposure time), which means 47*40 μs = 1.88 ms. The total blanking time for that sensor in this example is 50*40 μs = 2 ms.
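A small sketch, using the numbers of this example, reproduces the non-exposure budget calculated above (the function name is an assumption):

```python
# Sensor of the example: 1200 visible lines, 50 blanking lines, 20 frames/s.
LINE_TIME_US = 40.0

def non_exposure_time_us(blanking_lines: int, exposure_lines: int) -> float:
    """Time left for lens movement inside the blanking period when the
    exposure is shorter than the blanking time (rolling shutter)."""
    return (blanking_lines - exposure_lines) * LINE_TIME_US

# A three-line exposure (120 us) leaves 47 lines, i.e. 1.88 ms, for the lens:
print(non_exposure_time_us(50, 3))   # 1880.0 (us)
```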
2) Fast focus detection solution
Figure 4 illustrates an example of an auto-focus system. The auto-focus system comprises at least a sensor 300, a focus detection module 312, an auto-focus control module 314, an optics driver 315 and optics 316. The basic method is to move the lens through its range and to record the contrast values, and then to move the lens to the position with the best contrast. The current example of the invention makes it possible to find the focus faster than by using implementations of the related art. The idea of the current invention is to measure focus from one or multiple lens positions within one or many frames, thus making the focus search shorter. This example is described by the following methods. In the methods the positions for the lenses need not be at fixed increments; the auto-focus control takes the responsibility for selecting the new lens position for the measurement.
2.1 Measuring one lens position within one frame

Figure 5 illustrates an example where one lens position is measured within one frame reading time Tf. Contrast is detected from the measurement area M in the image frame. The measurement value for the lens position is obtained by gathering high frequency (and/or band-pass) content from sub-areas of the measurement area M. It is also possible that only a set of the measured sub-areas is used in the evaluation phase. The lens is moved between positions Pn and Pn+1 during a lens movement time TLens, between the readout of the last line of the measurement area M and the start of the exposure of the first line of the measurement area M in the next frame N+1 (not shown in figure 5). If the lens is moved outside this time window, the lines in the measurement area will get mixed data and the measurement area no longer corresponds to only one lens position. Position Pn+1 is measured in the next frame N+1.
The time allocated for the lens movement, i.e. non-exposure time, is:
$$ T_{lens} = T_{frame} - T_{frame}\cdot\frac{M\ \text{lines}}{\text{total lines}} - T_{exp} $$
wherein Texp stands for the exposure time.
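For illustration, the formula can be evaluated directly; the numeric values below are assumed examples, not taken from the patent:

```python
def lens_window_us(t_frame_us: float, m_lines: int, total_lines: int,
                   t_exp_us: float) -> float:
    """Non-exposure window of example 2.1: frame time minus the readout
    time of the measurement area minus the exposure time."""
    return t_frame_us - t_frame_us * m_lines / total_lines - t_exp_us

# Assumed: 50 ms frame, 200-line measurement area in a 1250-line frame,
# 2 ms exposure; this leaves 40 ms for the lens movement.
print(lens_window_us(50_000, 200, 1250, 2_000))   # 40000.0 (us)
```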
2.2 Measuring N lens positions within one frame
In figure 6 two lens positions are measured within one exposed frame. The contrast is detected from areas M1 and M2 in an image frame I. The measurement value is obtained by gathering the high frequency (and/or band-pass) content from the sub-areas. It is also possible to use only a set of the measured sub-areas in the evaluation phase. The exposure of the first line in the area M1 is started at the readout of line Lread; this means that the first line in the area M1 starts its exposure during the readout of line Lread. The lens is moved between positions Pn and Pn+1 during time TlensM1-M2, between the readout of the last line of the area M1 in image frame N and the start of the exposure of the first line of the area M2. The lens is moved between positions Pn+1 and Pn+2 during time TlensM2-M1 (not shown in the figure), between the readout of the last line of the area M2 in image frame N and the start of the exposure of the first line of the area M1 in the next frame N+1. If the lens is moved outside these time windows, the lines in the measurement areas will get mixed data, and a measurement area no longer corresponds to only one lens position. Positions Pn+2 and Pn+3 are measured in the next frame. The exposure time Texp is typically constant in one frame, but the skilled person will appreciate that it can also vary within one frame.
$$ T_{lensM1-M2} = T_{frame}\cdot\frac{(M2_{start} - M1_{end})\ \text{lines}}{\text{total lines}} - T_{exp} $$

$$ T_{lensM2-M1} = T_{frame} - T_{frame}\cdot\frac{(M2_{end} - M1_{start})\ \text{lines}}{\text{total lines}} - T_{exp} $$
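Similarly, both windows of example 2.2 can be computed from the line indices of the two measurement areas; the demonstration values below are assumptions:

```python
def lens_windows_us(t_frame_us: float, t_exp_us: float, total_lines: int,
                    m1_start: int, m1_end: int,
                    m2_start: int, m2_end: int) -> tuple[float, float]:
    """The two non-exposure windows of example 2.2: within the frame
    (between areas M1 and M2) and across the frame boundary
    (between M2 of frame N and M1 of frame N+1)."""
    within = t_frame_us * (m2_start - m1_end) / total_lines - t_exp_us
    across = t_frame_us - t_frame_us * (m2_end - m1_start) / total_lines - t_exp_us
    return within, across

# Assumed: 50 ms frame, 2 ms exposure, 1250 total lines, M1 on lines
# 300-500 and M2 on lines 700-900; yields 6 ms and 24 ms windows.
print(lens_windows_us(50_000, 2_000, 1250, 300, 500, 700, 900))
```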
Figure 7 shows a result after scanning the lens movement range. Figure 7 relates to figure 6 and shows curves relating to two separate measurement areas, where one area has more information than the other. The peak focus position can be estimated by combining the curves.
The exposure time Texp is used to define how many measurements can be done within one frame. Also lens characterization values, e.g. MTF/PSF (Modulation Transfer Function/Point Spread Function), can be utilized when the decision for focus is evaluated.
The example 2.2 describes two measurement areas, but the skilled person will appreciate that the amount is not limited to two. Similarly, the lens positions are sequential in said example, but the positions can be different. The distance between the lens positions does not always need to be the same. The size and place of the areas can vary depending on the exposure time and the time needed for the lens movement.
2.3 Continuous movement within one frame (or two frames)

This example (see Figure 8) is similar to the example 2.1, but in this example the lens is not stopped at a particular lens position. The sub-areas of area M therefore contain data from a subinterval of the total lens movement range. The interpretation of the focus values has to take this into account. Also the exposure time needs to be taken into account in these calculations. The current example is useful when an image is taken of a flat object, such as a document or a business card.
In this example the lens can be moved with a fixed speed, but it can also be moved with a varying speed and trajectory, for example several cycles over the range during the frame. In another implementation the lens can move from minimum to maximum during the first frame and back from maximum to minimum during the second frame. In this way two curves can be created, and so the effect of different contrast areas in different parts of the image can be reduced. Also the lens characterization values (e.g. MTF/PSF) can be utilized when the decision of focus is evaluated.
2.4 Fast focusing with global shutter
As said, the global shutter is traditionally used with CCD sensors. However, CMOS sensors can also contain a global reset and a global shutter. Figure 9 shows an example where a cropped image 810 is used for auto-focus metering and a full image 800 can be used as a viewfinder image. The first timing chart 801 shows the normal operation mode, where the frame rate and the focus speed are usually limited by the ADC (Analog to Digital Conversion) speed. Naturally, if the exposure time is very long, the exposure time can also be the limiting factor. The second timing chart 802 shows a system where the focus speed is maximized but the viewfinder image is not captured at all. In this case the focus speed is limited by the lens movement, the reset and the exposure time. The third timing chart 803 shows an example where fast focusing can be achieved but the preview image can still be shown at a reasonable frame rate. When the cropping is done, all the charges outside the cropping area window can be ignored and not AD converted.
The fast focus detection solution has considerable advantages with respect to the time needed to find focus. For example, if "X" corresponds to the amount of needed measurements, then the search can be shortened to X frames by the example 2.1 (by limiting the exposure time); by the example 2.2 the time can be shortened to X/N frames; by the example 2.3 the time can be shortened to one frame; and by the example 2.4 the time can be shortened by increasing the frame rate for the measurements. The example 2.3 also lowers power consumption because continuous focusing is not needed.
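The frame-count comparison above can be summarized in a small helper (a sketch; the method labels merely mirror the subsection numbers):

```python
import math

def search_frames(x_measurements: int, method: str, n_areas: int = 1) -> int:
    """Frames needed to collect X focus measurements under the schemes
    of examples 2.1-2.3."""
    if method == "2.1":                        # one lens position per frame
        return x_measurements
    if method == "2.2":                        # N positions per frame
        return math.ceil(x_measurements / n_areas)
    if method == "2.3":                        # full scan during one frame
        return 1
    raise ValueError(method)

print(search_frames(12, "2.1"))              # 12
print(search_frames(12, "2.2", n_areas=3))   # 4
print(search_frames(12, "2.3"))              # 1
```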
3) Moving of lenses and exposure of image by utilizing blanking time
This example describes a method where either the blanking periods are controlled (dynamically varied blanking period) or the exposure and lens movement times are controlled (static blanking period). When the dynamically varied blanking period is used, the maximum image frame rate can be achieved within the known exposure and lens movement times without corrupting the image information. It also enables the use of automatic night and daylight scene variation within the maximum image frame rate, with or even without the lens movements. The frame rate is not constant when the dynamically varied blanking period is used. However, the static blanking period enables a constant frame rate.
Both blanking time scenarios can be used with the rolling shutter and the global shutter. With the global shutter it should be noticed that the lens movement can begin immediately after the shutter has been closed, although the whole image has not yet been transmitted from the sensor. It is also important to remember that both blanking solutions can be implemented without changing the system clock in the imaging sensor. This is a big benefit, because there is then no need to skip image frames or transmit bad quality image frames while the Phase-Locked Loop (PLL) is settling.
3.1 Static blanking period
In one implementation of the current example (Figure 10), the desired image frequency is achieved by means of the maximum blanking time based on the bus rate being used, after which the exposure time is limited to its maximum in such a way that the lens can be moved within the limits of the blanking time, defined depending on the way the lenses are moved. The exposure time can be limited in such a manner that a shortened exposure time is compensated with analogue or digital gain. Reference signs 96a, 96b represent the non-exposure time of the complete visible image frame, whereas signs 97a, 97b represent the non-exposure time of the AF-statistics block. In figure 10, frame blanking 92a, 92b, 92c, line blanking 91a, 91b, and embedded/ancillary data 94a, 94b, 94c, 94d are illustrated. The reading of the lines is referred to by sign 98, whereas the exposure time is referred to by sign 95 and the visible data by 99.
With this system, the total amount of lines is set constant to assure that the image frequency remains constant. There is still enough time for a limited exposure and for the limited time needed for the lens movement, within the limits of the bus rate and the read rate, without corrupting the image in any other way than by increasing the gain and thus reducing the dynamic range.
When the static blanking period is used, the exposure and lens movement times are restricted so that the images can be captured without any artefacts. This means that the blanking time has to be as long as possible, which is enabled by the interface within the required frame rate. In addition, if long exposure or long lens movement times are required, then one or the other, or both, have to be limited. This means that the exposure time has to be compensated by using e.g. analogue gain, and the speed of zooming is reduced.
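A minimal sketch of the static-blanking trade-off described above, assuming the exposure loss is compensated linearly with gain (the function name and numbers are illustrative):

```python
def limit_exposure(requested_exp_us: float, max_exp_us: float):
    """Static blanking (3.1): the frame timing is fixed, so a too-long
    exposure request is capped and the shortfall is compensated with
    analogue or digital gain."""
    exp = min(requested_exp_us, max_exp_us)
    gain = requested_exp_us / exp      # gain factor replacing lost exposure
    return exp, gain

# A 4 ms request against a 2 ms budget yields a 2 ms exposure at 2x gain:
print(limit_exposure(4_000, 2_000))    # (2000, 2.0)
```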
3.2 Dynamic blanking period

In another implementation of the current example, for the rolling shutter, such a frame blanking time 102a, 102b, 102c is used for each image as is needed for the lens movement and for exposing the first lines of the next image, the amount of which is defined by the exposure time 105a, 105b. Therefore the frame blanking times 102a, 102b, 102c vary from image to image, and they need to be bigger than the corresponding exposure times 105a, 105b in order to have enough non-exposure time for a complete image. This example is illustrated in figure 11.
The lens movement can be started immediately after the last visible line of the previous image (Frame N-1) is exposed. The control of the lens movement can thus be started at the time when it is known in which direction the lens is supposed to be moved (zoom control or auto-focus control) and when only the delay needed for starting the lens movement remains of the exposure of the last visible pixel (line). In addition, right after the lens has taken its place, the exposure of the first pixel (line) can be started by resetting the pixels of the line in question. The lens movement and the time (106a, 107a, 106b, 107b) required by it, which are done for the next image (Frame N+1), are already known before the current image (Frame N) is exposed, as long as it is known which operation is in action - zooming or auto-focus control - and what the amount and direction of the lens movement are. In the case of viewfinder images only focusing is important, and therefore the time (107a, 107b) for the lens movement is different from the time (106a, 106b) for the lens movement with still or video images. The zoom control results from the user, and there has to be enough hysteresis in a continuous auto-focus control in order not to move the lenses continuously back and forth with great frequency. In addition, the exposure (105b) of the next image (Frame N+1) is already known, whereby it is easy to calculate the amount of the needed blanking lines in the frame blanking area (102b) of the current image (Frame N). In this example the reading rate of the sensor is not changed, and therefore there is no need to impact the sensor's system clock, but only the amount of visible and blanking lines to be transmitted and included in the image. The blanking pixels in the line blanking areas (101a, 101b) are pixels which are not included in the image being displayed, i.e. the visible image 109a, 109b. There can also be other non-visible image areas, for example vertical blanking, ancillary/embedded data, dummy pixels, dark pixels, black pixels and manufacturer specific data. Changing the amount of visible lines corresponds to image cropping, which is usually done when images are zoomed digitally.
It should be noticed that Figure 11 illustrates changing the frame blanking by inserting blanking lines, but at least with sensors of the SMIA (Standard Mobile Imaging Architecture) specification it is possible to change also the line blanking by inserting pixels to the end of the line (101a, 101b). SMIA sensors are originally intended to keep a constant image frequency without changing the system clock. Similarly, these sensors are designed to implement long exposure times without a need to change the read rate or to continue the exposure of the image over the frame limit. By using the control structure of this example it is possible to achieve as great an image rate as possible. In addition, the lens movement is not visible at any phase in the image. This system therefore provides as great an image frequency as possible, without corrupting the image and with the desired exposure time, by exposing the image in such a manner that the lens can be moved when desired. It should also be noticed that the blanking areas can be increased dynamically even a little more than absolutely needed, for achieving more suitable viewfinder frame updating: e.g. blanking lines are added so that the time from the current frame start to the next frame start is 1, 2, ..., n/60 s.
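As a rough sketch of the per-frame calculation implied by this example, assuming the 40 µs line time of the earlier timing example (the function name is an assumption):

```python
import math

LINE_TIME_US = 40.0   # assumed line readout time, as in the earlier example

def frame_blanking_lines(t_lens_us: float, t_exp_next_us: float) -> int:
    """Dynamic blanking (3.2): the frame blanking of the current image
    must cover the known lens movement time plus the exposure of the
    first lines of the next image, so it is recomputed for every frame."""
    return math.ceil((t_lens_us + t_exp_next_us) / LINE_TIME_US)

# A 1.5 ms lens move followed by a 2 ms exposure needs 88 blanking lines:
print(frame_blanking_lines(1_500, 2_000))   # 88
```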
3.3 General
Figures 10 and 11 illustrate the static blanking period solution and the dynamic blanking period solution, respectively. In both Figures 10 and 11 the time for the lens movement, the exposure time and the focusing data area are described. The biggest difference between Figures 10 and 11 is that in Figure 10 the blanking period is constant, and the time for the lens movement (96a, 97a, 96b, 97b) varies based on the required exposure time (95a, 95b) (or vice versa). In Figure 11 the time for the lens movement (106a, 107a, 106b, 107b) and the time for the exposure (105a, 105b) are known, and the blanking period varies based on those.
The previous processes (3.1, 3.2) also work with the global shutter in the following manner. The exposure of the image is ended by closing the global shutter, after which the lens movement can be started. The lens movement is started regardless of how long the reading of the image in question from the sensor still lasts. Similarly, the global reset of the pixels (or the opening of the global shutter), and thus the exposure of the new image, will be started right after the lens has been moved to the right location, the previous visible image has been read out from the sensor, and the global shutter has been opened after said reading. It should be noticed that in this case the non-exposure time is the time during which the sensor is read after closing the shutter, plus the blanking time before the first used line of the next image is reset (typically a global reset). During the non-exposure time those pixels of the sensor which are visible in the final image or are used in measurements do not receive light (or the light is discarded).
It should also be noticed that when the process targeting the maximum image frequency (3.2) is not used, the shorter the exposure time is, the more time can be used for the lens movement. Due to this, the optical zoom/auto-focus lens moves either faster, or along the same path with a smaller temporal peak, than it normally would.
In some situations there is no need to prevent temporary corruption of the viewfinder image, because such an image is not stored for further use. Therefore, for both processes (3.1, 3.2), as quick an auto-focus as possible with a long exposure time can be achieved by moving the lens also during the exposure of pixels or pixel lines that do not belong to the focus control. This moving is visible in the viewfinder image, but not in the final and stored still images. Also for video images it is better to implement the first auto-focus control as quickly as possible, and thus the image can be corrupted as long as the areas used for statistics are not damaged.
It should be noticed that there is rarely a need for controlling the lenses, due to which the exposure time can almost always be controlled at the read rate, within its limited value. Therefore, in the implementation targeting the maximum image frequency (3.2), the blanking time can often be set to zero (or to the sensor limit). In addition, said implementation targeting the maximum image frequency works well in an automatic control of a night/day mode (with or without the lens movement), whereby the blanking is increased depending on the lighting conditions, but the viewfinder image is not slowed down more than needed.
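A hedged sketch of such night/day control built on scheme 3.2 follows: as the light level drops, the exposure time grows, and blanking lines are added only when the exposure no longer fits within the readout time, so the viewfinder is never slowed more than necessary. The thresholds and the simple exposure model are assumptions for illustration only.

```python
def night_day_blanking(lux: float, readout_s: float, line_time_s: float) -> int:
    """Return the number of extra blanking lines needed at a given light level."""
    exposure_s = min(0.25, 1.0 / max(lux, 4.0))   # longer exposure when dark
    extra_s = max(0.0, exposure_s - readout_s)    # stretch only if needed
    return round(extra_s / line_time_s)
```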
The previous implementations result in images which are not corrupted and which are produced, if desired, at the maximum image frequency possible in the lighting and control conditions in question. In addition, if the maximum image frequency is not targeted, it is possible to provide the lens movement with a fixed image frequency and to ensure the maximum exposure time according to the way the lenses are moved. Similarly, if the exposure time is short, it is possible to speed up the zooming or to reduce the temporal peak effect when the lens movement is slower.
Implementation
The previous examples can be implemented on the control CPU of an imaging system that is part of an electronic device, such as a mobile device, a digital camera or a web camera. Dedicated hardware implementations may be needed in the imaging sensor or in the receiver block in order to achieve faster and more accurate timings.
Figure 12 illustrates a possible configuration of the electronic device.
The device 1200 in Figure 12 comprises communication means 1220 having a transmitter 1221 and a receiver 1222, or is connected to such.
There can also be other communication means 1280 having a transmitter 1281 and a receiver 1282. The first communication means 1220 can be adapted for telecommunication, and the other communication means 1280 can be short-range communication means, such as a Bluetooth™ system, a WLAN system (Wireless Local Area Network) or another system which suits local use and communicating with another device. The device 1200 according to the example in Figure 12 also comprises a display 1240 for displaying visual information and the imaging data. Further, the device 1200 may comprise interaction means, such as a keypad 1250 for inputting data etc. In addition to or instead of the keypad 1250, the device can comprise a stylus, if the display is a touch-screen display. The device 1200 comprises audio means 1260, such as an earphone 1261 and a microphone 1262, and optionally a codec for coding (and decoding, if needed) the audio information. The device 1200 comprises an imaging system 1210 or is connected to such. A control unit 1230 may be incorporated in the device 1200 for controlling functions and running applications in the device 1200. The control unit 1230 may comprise one or more processors (CPU, DSP). Further, the device comprises memory 1270 for storing e.g. data, applications and computer program code. One skilled in the art will appreciate that the imaging system may also incorporate any number of capabilities and functionalities which suitably enhance the efficiency of the system.
The foregoing detailed description is provided for clearness of understanding only, and no unnecessary limitation should be read therefrom into the claims herein.

Claims

1. An imaging method comprising at least steps of
- acquiring an image sequence comprising at least two images, at least one of which is used as measurement image and at least one other of which is used as final image,
- determining a measurement image exposure time and a final image exposure time,
- determining a non-exposure time between the measurement image exposure time and the final image exposure time, and
- allowing adjustment of imaging optics during said non-exposure time.
2. The imaging method according to claim 1, wherein a smaller image in the image sequence is used as the measurement image, and wherein a larger image is used as the final image.
3. The imaging method according to claim 1, wherein the measurement image is used as the final image in a case of a video image or a viewfinder image.
4. The imaging method according to claim 1, wherein at least the final image is stored.
5. The imaging method according to claim 1, wherein the non-exposure time is extended by one of the following ways or their combination: controlling a size of the measurement image, sub-sampling the measurement image, changing a channel rate used for reading the image sequence, controlling the measurement image exposure time or the final image exposure time or both.
6. The method according to claim 1, wherein said measurement image is one of the following: a measurement image frame, a measurement area in an image frame.
7. The method according to claim 1, wherein auto-focus statistics are calculated from the measurement image.
8. The method according to claim 7, wherein the non-exposure time is determined by the auto-focus statistics and by information including an amount of pixels in an image sensor, a transmission clock frequency and a possible delay in a zoom hardware.
9. The method according to claim 1, wherein at least one measurement area is defined at least in the measurement image, whereby an auto-focus is measured in said at least one measurement area from at least one lens position.
10. The method according to claim 9, wherein a focus measurement value is obtained by gathering high or band-pass frequency content from sub-areas of said at least one measurement area.
11. The method according to claim 10, wherein the optics is adjusted between a readout of a last line of a first measurement area and a start of an exposure of a first line of a second measurement area in a same image; between an exposure of a first measurement area in a first image and an exposure of the second measurement area in a second image; or constantly during the imaging.
12. The method according to claim 1, wherein blanking time is utilized when the non-exposure time is determined.
13. The method according to claim 12, wherein a maximum blanking time is used for defining the non-exposure time for the imaging optics adjustment.
14. The method according to claim 12, wherein the blanking time is controlled according to an exposure time in order to define the non-exposure time.
15. The method according to claim 1, wherein the image sequence is formed of still images, video images or viewfinder images, or a combination of these.
16. A method for determining a non-exposure time to be used for imaging optics adjustment, comprising at least steps of
- acquiring an image sequence comprising at least one measurement image and at least one final image,
- calculating auto-focus statistics from the measurement image,
- determining a non-exposure time by means of the auto-focus statistics, which non-exposure time is to be used for the final image.
17. A method for determining a non-exposure time to be used for imaging optics adjustment, comprising at least steps of
- acquiring an image sequence, comprising at least one measurement image and at least one final image,
- defining at least one measurement area at least in the measurement image,
- measuring an auto-focus from at least one lens position in the measurement area,
- defining the non-exposure time to be between a readout of a last line of the measurement area and a start of an exposure of a first line of a next measurement area.
18. The method according to claim 17, wherein the optics is adjusted between readout of a last line of the measurement area in one image and a start of an exposure of a first line of the measurement image in a successive image; during time between an exposure of one measurement area in one image and exposure of the next measurement area in the same image; or constantly during imaging.
19. A method for determining a non-exposure time to be used for imaging optics adjustment, comprising at least steps of
- acquiring an image sequence, comprising at least one measurement image and at least one final image,
- defining at least one blanking time occurring in the image sequence,
- defining the non-exposure time either by controlling the imaging optics adjustment within the blanking time, or by controlling the blanking time.
20. An imaging device comprising adjustable imaging optics, at least an image sensor for gathering light to be provided as an image sequence to a processor, said image sequence comprising at least two images, at least one of which is a measurement image and at least one other of which is a final image, and exposure control means for controlling exposure of images, wherein the imaging device is capable of
- determining a measurement image exposure time and a final image exposure time,
- determining a non-exposure time between the measurement image exposure time and the final image exposure time, and
- allowing adjustment of the imaging optics during said non-exposure time.
21. The device according to claim 20 being further capable of using a smaller image in the image sequence as the measurement image, and a larger image as the final image.
22. The device according to claim 20, wherein the measurement image is the final image in a case of a video image or a viewfinder image.
23. The device according to claim 20, being further capable of storing the final image.
24. The device according to claim 20, being further capable of calculating auto-focus statistics from the measurement image.
25. The device according to claim 24, being further capable of determining the non-exposure time by the auto-focus statistics and by information including an amount of pixels in an image sensor, a transmission clock frequency and a possible delay in a zoom hardware.
26. The device according to claim 20, being further capable of defining at least one measurement area at least in the measurement image, and of measuring auto-focus in said at least one measurement area from at least one lens position.
27. The device according to claim 20, being further capable of using a maximum blanking time for defining the non-exposure time.
28. The device according to claim 20, being further capable of controlling a blanking time according to an exposure time in order to define the non-exposure time.
29. The device according to claim 20, wherein the image sequence is formed of still images, video images or viewfinder images, or a combination of these.
30. The device according to claim 20, wherein the shutter means is either a rolling shutter or a global shutter.
31. An imaging module for determining a non-exposure time being capable of implementing the method according to at least one of the claims 1 to 15.
32. An imaging module for determining a non-exposure time being capable of
- acquiring an image sequence comprising at least one measurement image and at least one final image,
- calculating auto-focus statistics from the measurement image,
- determining a non-exposure time by means of the auto-focus statistics, which non-exposure time is to be used for the final image.
33. An imaging module for determining a non-exposure time being capable of
- acquiring an image sequence, comprising at least one measurement image and at least one final image,
- defining at least one measurement area at least in the measurement image,
- measuring an auto-focus from at least one lens position in the measurement area,
- defining the non-exposure time to be between a readout of a last line of the measurement area and a start of an exposure of a first line of a next measurement area.
34. An imaging module for determining a non-exposure time being capable of
- acquiring an image sequence, comprising at least one measurement image and at least one final image,
- defining at least one blanking time occurring in the image sequence,
- defining the non-exposure time either by controlling the imaging optics adjustment within the blanking time, or by controlling the blanking time.
35. A computer program product for imaging comprising code means stored on a readable medium, adapted, when run on a computer, to implement the method according to at least one of the claims 1 to 15.
36. A computer program product for imaging comprising code means stored on a readable medium, adapted, when run on a computer, to implement the method according to the claim 16.
37. A computer program product for imaging comprising code means stored on a readable medium, adapted, when run on a computer, to implement the method according to at least one of the claims 17 to 18.
38. A computer program product for imaging comprising code means stored on a readable medium, adapted, when run on a computer, to implement the method according to claim 19.

Priority Applications (5)

Application Number Priority Date Filing Date Title
EP05808852A EP1949671A4 (en) 2005-11-15 2005-11-15 Imaging system with adjustable optics
PCT/FI2005/050409 WO2007057498A1 (en) 2005-11-15 2005-11-15 Imaging system with adjustable optics
CNA2005800523022A CN101326814A (en) 2005-11-15 2005-11-15 Imaging system with adjustable optical device
KR1020107003295A KR20100023056A (en) 2005-11-15 2005-11-15 Imaging system with adjustable optics
JP2008540635A JP5086270B2 (en) 2005-11-15 2005-11-15 Imaging system with adjustable optics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/FI2005/050409 WO2007057498A1 (en) 2005-11-15 2005-11-15 Imaging system with adjustable optics

Publications (1)

Publication Number Publication Date
WO2007057498A1

Family

ID=38048319

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI2005/050409 WO2007057498A1 (en) 2005-11-15 2005-11-15 Imaging system with adjustable optics

Country Status (5)

Country Link
EP (1) EP1949671A4 (en)
JP (1) JP5086270B2 (en)
KR (1) KR20100023056A (en)
CN (1) CN101326814A (en)
WO (1) WO2007057498A1 (en)


Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5400406B2 (en) 2009-02-06 2014-01-29 キヤノン株式会社 Imaging device
JP2012521673A (en) 2009-03-19 2012-09-13 フレクストロニクス エイピー エルエルシー Dual sensor camera
JP5471004B2 (en) * 2009-04-22 2014-04-16 カシオ計算機株式会社 Focus adjustment apparatus, focus adjustment method, and program
WO2012132122A1 (en) * 2011-03-31 2012-10-04 富士フイルム株式会社 Imaging device, and focus control method therefor
WO2014078735A1 (en) * 2012-11-16 2014-05-22 Molecular Devices, Llc System and method of acquiring images with a rolling shutter camera while asynchronously sequencing microscope devices
CN111355895B (en) * 2018-12-05 2021-07-16 北京图森智途科技有限公司 Image exposure and gain adjustment method, imaging device and vehicle
CN111818272B (en) * 2020-06-30 2021-09-03 浙江大华技术股份有限公司 Method for eliminating image flicker, electronic device and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0834551B2 (en) * 1988-07-19 1996-03-29 松下電器産業株式会社 Automatic focus adjustment device
JPH04229783A (en) * 1990-12-27 1992-08-19 Sony Corp Video camera
US6683651B1 (en) * 1999-10-28 2004-01-27 Hewlett-Packard Development Company, L.P. Method of automatically adjusting focus in a shutterless digital camera
JP2001296470A (en) * 2000-04-14 2001-10-26 Hitachi Ltd Electronic still camera
US20040165090A1 (en) * 2003-02-13 2004-08-26 Alex Ning Auto-focus (AF) lens and process

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5177525A (en) * 1990-09-26 1993-01-05 Nikon Corporation Control method for an auto focus apparatus
US5563658A (en) * 1994-12-16 1996-10-08 Eastman Kodak Company Electronic camera with rapid automatic focus of an image upon an image sensor
EP0720360A1 (en) * 1994-12-30 1996-07-03 Eastman Kodak Company An electronic camera with rapid automatic focus of an image upon an image sensor
JP2001177771A (en) * 1999-12-16 2001-06-29 Toshiba Corp Solid-state image sensing device
JP2006064855A (en) * 2004-08-25 2006-03-09 Konica Minolta Opto Inc Automatic focusing device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP1949671A4 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2164243A4 (en) * 2007-06-04 2011-01-19 Sharp Kk Portable terminal, control method for portable terminal, control program for portable terminal, and computer readable recording medium having recorded the program therein
EP2083321A1 (en) * 2008-01-22 2009-07-29 Canon Kabushiki Kaisha Imaging apparatus and lens apparatus
US7822332B2 (en) 2008-01-22 2010-10-26 Canon Kabushiki Kaisha Imaging apparatus and lens apparatus
EP2592821A1 (en) * 2011-11-10 2013-05-15 Research In Motion Limited Camera autofocus apparatus and associated method
EP2847636A4 (en) * 2012-06-27 2015-12-16 Nokia Technologies Oy Imaging and sensing during an auto-focus procedure
US10136046B2 (en) 2012-06-27 2018-11-20 Nokia Technologies Oy Imaging and sensing during an auto-focus procedure
WO2016069226A1 (en) * 2014-10-31 2016-05-06 Qualcomm Incorporated Time extension for image frame processing
US9462188B2 (en) 2014-10-31 2016-10-04 Qualcomm Incorporated Time extension for image frame processing

Also Published As

Publication number Publication date
CN101326814A (en) 2008-12-17
EP1949671A4 (en) 2008-11-05
JP2009516448A (en) 2009-04-16
JP5086270B2 (en) 2012-11-28
EP1949671A1 (en) 2008-07-30
KR20100023056A (en) 2010-03-03

Similar Documents

Publication Publication Date Title
WO2007057498A1 (en) Imaging system with adjustable optics
CN101052101B (en) Apparatus and method for image pickup
US7689113B2 (en) Photographing apparatus and method
JP4546565B2 (en) Digital image processing
US9596398B2 (en) Automatic image capture
JP5802520B2 (en) Imaging device
US20020030749A1 (en) Image capturing apparatus
CN102739961B (en) Can the image processing apparatus of generating wide angle
CN110198418B (en) Image processing method, image processing device, storage medium and electronic equipment
JP2006324976A (en) Moving imaging apparatus and program thereof
CN101355651A (en) Image pickup device
EP2472852A2 (en) Digital imaging with autofocus
WO2011145342A1 (en) Imaging device
US7729601B1 (en) Shutter for autofocus
JP4210189B2 (en) Imaging device
JP7320024B2 (en) IMAGE SENSOR, IMAGING DEVICE, IMAGE DATA PROCESSING METHOD, AND PROGRAM
KR102368625B1 (en) Digital photographing apparatus and the method for the same
CN110266967B (en) Image processing method, image processing device, storage medium and electronic equipment
JP2006148550A (en) Image processor and imaging device
JP2008301161A (en) Image processing device, digital camera, and image processing method
JP4534250B2 (en) Movie imaging apparatus and program thereof
CN110463184B (en) Image processing apparatus, image processing method, and non-volatile tangible medium
KR20080057345A (en) Imaging system with adjustable optics
US20150085172A1 (en) Image capturing apparatus and control method thereof
JP4993683B2 (en) Imaging apparatus and control method thereof

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase (Ref document number: 200580052302.2; Country of ref document: CN)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase (Ref document number: 2005808852; Country of ref document: EP)
ENP Entry into the national phase (Ref document number: 2008540635; Country of ref document: JP; Kind code of ref document: A)
WWE Wipo information: entry into national phase (Ref document number: 1020087011579; Country of ref document: KR)
NENP Non-entry into the national phase (Ref country code: DE)
WWP Wipo information: published in national office (Ref document number: 2005808852; Country of ref document: EP)
WWE Wipo information: entry into national phase (Ref document number: 1020107003295; Country of ref document: KR)