CN114208153A - Multi-shot image capture without anti-shake - Google Patents

Multi-shot image capture without anti-shake

Info

Publication number
CN114208153A
Authority
CN
China
Prior art keywords
image
handheld device
sensing cells
deviation
movement
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201980099166.4A
Other languages
Chinese (zh)
Other versions
CN114208153B (en)
Inventor
山本隆師
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN114208153A
Application granted
Publication of CN114208153B
Status: Active
Anticipated expiration


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N 23/681 Motion detection
    • H04N 23/6812 Motion detection based on additional sensors, e.g. acceleration sensors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/633 Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/10 Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N 25/11 Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N 25/13 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N 25/134 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/48 Increasing resolution by shifting the sensor relative to the scene

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Studio Devices (AREA)

Abstract

A method and apparatus by which a smartphone obtains a high-resolution image by combining separate images captured with in-plane deviations, each deviation corresponding to the spacing of adjacent sensing cells in a Bayer-array-filtered image sensor in the smartphone. No anti-shake mechanism is used. The deviations arise from the natural shaking of the user's hand and/or from movement in response to an indication provided to the user, instructing the user to move the smartphone along a path that produces deviations matching the desired spacing of the sensing cells.

Description

Multi-shot image capture without anti-shake
Technical Field
The present invention relates to obtaining high-resolution images using hardware that exploits naturally occurring camera shake, and/or movement provided by a human user in response to an indication, to displace the image sensor during the capture of a series of images. In particular, the invention relates to image sensors that use Bayer-array color filters in handheld devices (e.g., smartphones).
Background
In a CMOS image sensor combining a monochrome sensor with a color filter array (CFA), such as a Bayer-array color filter, a single pixel (also referred to herein as a sensing cell, or sensel) does not receive the full color information of the image: each cell records only the color passed by its filter. In the Bayer layout, green-filtered pixels occur twice as frequently as red-filtered or blue-filtered pixels.
To compensate for this, a common technique is to obtain the color information of each pixel by interpolating among 4 pixels (red, green, and blue). Because there are fewer red and blue pixels, they carry only lower spatial-frequency information compared with green. Pixels covered by the green filter of the Bayer filter array are referred to herein as green-filtered sensing cells; the red- and blue-filtered cells are named analogously.
Fig. 1 shows an example of a Bayer array. Point W obtains color information from the neighboring G, B, R, and G cells; in this way, the full color information at point W can be estimated. The full color information at point X is calculated in the same manner, and point X reuses the same blue information as point W. Similarly, point Z shares red information with point X, and point Y uses red information common to points X and Z. Red and blue thus carry, in principle, only half the frequency information of green along one direction.
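To make the interpolation concrete, here is a minimal sketch in Python/NumPy (the patent does not specify an algorithm; the RGGB layout, function names, and simple neighbor-averaging are illustrative assumptions):

```python
import numpy as np

def demosaic_point(mosaic, y, x):
    """Estimate full (R, G, B) color at sensing cell (y, x) of an RGGB
    Bayer mosaic by averaging the nearest neighbors of each missing
    color -- a minimal bilinear sketch of the interpolation above."""
    h, w = mosaic.shape

    def color_at(j, i):
        # RGGB: R at (even, even), G at (even, odd) and (odd, even),
        # B at (odd, odd).
        if j % 2 == 0:
            return 'R' if i % 2 == 0 else 'G'
        return 'G' if i % 2 == 0 else 'B'

    ring = [(y + dj, x + di) for dj in (-1, 0, 1) for di in (-1, 0, 1)]

    def mean_of(color):
        vals = [mosaic[j, i] for j, i in ring
                if 0 <= j < h and 0 <= i < w and color_at(j, i) == color]
        return sum(vals) / len(vals)

    return tuple(mosaic[y, x] if color_at(y, x) == c else mean_of(c)
                 for c in 'RGB')

mosaic = np.random.randint(0, 256, (4, 4)).astype(float)
print(demosaic_point(mosaic, 1, 1))  # full color at a B cell, like point W
```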
To overcome this limitation of the Bayer array, a technique called pixel-shift multi-shot is used: an anti-shake mechanism moves the image sensor by 1 pixel in the vertical and horizontal directions while a plurality of images is taken, and the images are combined so that full color information is available within each 1-pixel region, yielding a higher-resolution image. Fig. 2 shows a graphical representation of this movement.
However, this method is not suitable for an image capture apparatus with size limitations. In particular, it is not suitable for a smartphone camera, for the following reasons:
It is difficult to scale the sensor displacement mechanism down to a size suitable for a smartphone camera.
Even in high-resolution models, the pixel pitch of a DSLR (in fig. 2, the pixel pitch is, for example, the center-to-center distance between the G-covered pixel in the top row and the B pixel directly below it) is about 4 microns, and more typically about 6 microns. The pixel pitch of a smartphone sensor, however, is about 1 micron, which is very small compared with a DSLR. It is difficult to accurately control movement over a distance of 1 micron at high speed.
As described above, it is difficult to improve resolution by pixel-shift multi-shot in a smartphone. The conventional approach is therefore to increase apparent resolution through software enhancement. However, software-enhanced resolution always departs from reality to some degree, and the results can vary significantly with the subject and situation.
Disclosure of Invention
Methods, apparatuses, chips, processors with memory and computer programs for implementing the invention are provided herein.
Embodiments are provided that implement any aspect disclosed herein, or portions thereof.
In the present invention, a high-resolution image may be obtained by combining data from individual images captured with deviations corresponding to the spacing of adjacent sensing cells in a Bayer-array-filtered image sensor in a handheld device. In one embodiment, this is facilitated by determining which deviations match the spacing of the sensing cells.
In embodiments of the invention, the deviation is due to movement provided by the natural camera shake of a human user. In other embodiments, the movement is provided in response to an indication given to the human user. These two modes of operation may be combined in embodiments or used independently.
The indication encourages the human user to move the handheld device in a particular direction, or toward a target, while images are captured during the capture process. The indication may be obtained by calculation such that movement of the handheld device, according to the calculation, should cause the device to follow a spatial path that moves the sensing cells by a deviation matching the sensing cell spacing/positions. Viewed another way, the sensing cells of the image sensor undergo relative motion with respect to the imaging radiation, so that imaging radiation from the same object falls on several different adjacent sensing cells. The indication thus reflects a calculated estimate of how the device should preferably be moved.
The indication may be updated during the capture process, e.g., in real time, since the actual movement provided by the human user may follow the indication only to a varying extent. Because natural camera-shake movement and/or movement provided in response to an indication is used, no mechanical anti-shake mechanism is needed in the method.
Over any period of time, some of the deviations will match the sensing cell spacing of one or more sensing cells. By evaluating which images have corresponding deviations that match the sensing cell spacing, separate images of the same scene can be obtained in which, for imaging rays originating from the same source point, the image sensor records data at a different sensing cell in each image. This can improve image resolution compared with using a single image, because the individual images can be combined with a reference image to form a combined image in which the data for any resulting pixel is a combination of the sensing cell data from each individual image. Preferably, there is a defined relationship between each resulting pixel and the sensing cell data. For example, the sensing cell data contributing to a resulting pixel preferably comes from four filtered sensing cells: a red-filtered cell in one image, a blue-filtered cell in another, and a green-filtered cell in each of two further images. Preferably, these sensing cells are adjacent; fewer or more than four sensing cells may also be used. Using data from these filtered sensing cells captures more color information. Furthermore, resolution improves and interpolation can be avoided, since multiple sensing cells each contribute individually to any resulting pixel. Data from any one sensing cell in a captured image need not be shared between resulting pixels (though it may be, if desired).
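As a sketch of this combination step (all names are illustrative; the patent does not prescribe an implementation, and the frames are assumed already registered so that index (i, j) in every array refers to the same scene point):

```python
import numpy as np

def combine_matched_frames(ref_g, match_r, match_g2, match_b):
    """Assemble full-color pixels from four frames whose deviations match
    the spacing of adjacent sensing cells: each output pixel combines a
    measured R, B and two measured G values, with no interpolation.

    Each argument holds, per scene point, the value recorded by the
    sensing cell onto which that point fell in the respective frame.
    """
    r = match_r                    # red measured directly
    g = 0.5 * (ref_g + match_g2)   # two green measurements, averaged
    b = match_b                    # blue measured directly
    return np.stack([r, g, b], axis=-1)

# Toy usage: four 2x2 "frames" of registered sensing-cell data.
ref_g, match_r, match_g2, match_b = (
    np.full((2, 2), v) for v in (100.0, 150.0, 160.0, 90.0))
rgb = combine_matched_frames(ref_g, match_r, match_g2, match_b)
print(rgb.shape)  # (2, 2, 3): one full-color pixel per sensing cell
```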
In one aspect, there is provided an image capture method for a handheld device, the handheld device comprising an image sensor and a color filter array, the method comprising:
obtaining a plurality of images over a time period t;
determining a time at which a deviation matches a relative displacement between positions of two or more sensing cells of the image sensor, the deviation corresponding to the deviation between a first image at a time point t1 and a second image at a time point t2 within the time period t;
wherein the deviation is due to movement of the handheld device.
The image capture method may be performed by a processor of a handheld device.
The handheld device may be any device adapted to be supported by a person's body and held in position throughout the capture of a plurality of pictures over the time period t. Preferably, the handheld device is adapted to be supported by one or both hands; the skilled person will appreciate the size and mass appropriate to these requirements. Preferably, the handheld device is a terminal such as a mobile phone (e.g., a smartphone with a screen). It should be understood that the method may also apply to other devices that are subject to external movement (e.g., vibration) and/or can be controlled to move, but that cannot use an anti-shake mechanism due to limitations such as size.
The color filter array may be a Bayer filter array, or any other color filter array.
The image sensor may be of any suitable type, for example a CMOS image sensor.
In the prior art, combining a color filter array with monochrome image capture yields a resolution lower than that of the originally captured data, because the data for any resulting full-color pixel of a single captured image must be obtained by interpolation from the surrounding four (red, green, and blue) sensing cells.
The pixel pitch of the image sensor may be about 1 micron; this is the distance between adjacent sensing cells in the horizontal and vertical directions of the array. Other pixel pitches or sensing cell layouts may be used. The pixel pitch thus corresponds to the spacing of adjacent sensing cells. Each sensing cell occupies a position in the array, and between any two sensing cells there is a relative displacement, typically specified as horizontal and vertical distances. Sensing cells are adjacent when the relative displacement between their positions is minimal.
The plurality of images may be obtained by an image sensor or from a memory.
Obtaining the images may be an ongoing process performed in real time, and therefore need not terminate before the determining step; the determining and the obtaining may be performed continuously or simultaneously.
Multiple images may be captured during the time period t, each at a different point in time tx, e.g., time point t1, time point t2, and so on. Thus, a first image may be captured at time point t1, a second image at time point t2, and so forth. Time point t1 may be earlier or later than time point t2. Time point t1 may correspond to the time at which, or shortly after which, image capture is initiated. The first image may thus be the reference image, although other images may also serve as the reference image for calculating the deviation.
The duration of the time period t may be variable or predetermined. The capture of images, and the start of the time period t, may be initiated by a command, for example pressing a "shoot" button. The time period t may terminate after a period reasonable for a human user to take a picture, or when the method has obtained enough matching images. A practical upper limit for the process may be 2-4 seconds.
During all or part of the time period t, the handheld device is moving. Because the image sensor does not move relative to the handheld device, the two move together.
The deviation is the deviation between the first (e.g., reference) image and the second image. It may be an estimated in-plane deviation of the image or of the image sensor, obtained in various ways.
A deviation may be determined to be a match when its value is approximately equal to, or more preferably equal to, the value of the relative displacement between the positions; various methods of determining such a match are encompassed. For example, a deviation may be determined to match when its magnitude is approximately equal to, or more preferably equal to, the pixel pitch. In another example, a deviation may match when the horizontal and vertical displacements corresponding to the deviation (e.g., of the handheld device) are each approximately equal to, or more preferably equal to, the pixel pitch. In another example, the deviation may match when its horizontal and vertical displacements are approximately equal to, or more preferably equal to, the horizontal and vertical displacements between the positions of the two sensing cells. A suitable cutoff for "approximately equal" may be a deviation within 30% of the pixel pitch, or of the relative displacement between positions (in total or per component). Other methods of determining a match may be used, for example depending on how the deviation is calculated.
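The match test might look like the following sketch (the 30% cutoff is the example tolerance given above; the function name and micron units are illustrative assumptions):

```python
def deviation_matches(dev_xy, cell_xy, pixel_pitch, tol=0.3):
    """Return True if an in-plane deviation (dx, dy) matches the relative
    displacement of a target sensing cell, all values in microns.

    dev_xy      -- measured deviation of this frame vs. the reference frame
    cell_xy     -- relative displacement of the target sensing cell
    pixel_pitch -- spacing of adjacent sensing cells (~1 micron here)
    tol         -- accept components within 30% of the pixel pitch
    """
    return all(abs(d - c) <= tol * pixel_pitch
               for d, c in zip(dev_xy, cell_xy))

# Target: the cell one pitch to the right of the reference cell.
print(deviation_matches((0.93, 0.12), (1.0, 0.0), 1.0))  # True
```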
Determining a match may also involve determining a particular point in time as a matching point in time, or a particular image as a matching image; these are alternative ways of identifying image data for subsequent use. The data of matching images may be combined as disclosed herein; data of non-matching images may be discarded.
An identifier may be output for each such point in time or image.
The method may be performed so as to determine a match between two adjacent sensing cells, or among three, four, or more adjacent sensing cells; there will accordingly be one, two, or three matching images, plus one reference image. The method may terminate when the deviation has passed through all three sensing cell positions adjacent to the central reference sensing cell.
In one implementation, the movement of the handheld device is due to a force externally applied to the handheld device.
A force applied to the handheld device is transmitted to the image sensor, since there is no relative motion between the two; this may be due to a rigid connection between the image sensor and the handheld device, or because no anti-shake mechanism is applied. During image capture, the direction and magnitude of the force may change over time, changing the direction, distance, and speed of the handheld device's movement.
In one implementation, the movement of the handheld device is a handheld shaking vibration.
This may be the result of an external force. The skilled person will know the typical characteristics of handheld shaking vibration, which varies from person to person, and can arrange suitable methods to measure the movement and/or calculate the resulting deviation.
The handheld shaking vibration may be the well-known camera-shake vibration provided by a human user, typically caused by the small movements that occur when the shutter button is pressed and/or while the person holds the device.
In one implementation, the image capture method further comprises:
obtaining an aiming indication, a first function of which is to indicate a target position or direction of movement of the handheld device, such that movement of the handheld device in or towards the indicated direction of movement should result in a deviation that matches the relative displacement between the positions of the two or more sensing cells.
The indicated target position or direction of movement is obtained by calculation. It can thus be predicted how moving the handheld device in response to the aiming indication should produce the desired deviation. However, because the response is provided by a human user, the actual movement may deviate from the indication and a match may not occur.
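One possible calculation (a hedged sketch; the patent leaves the computation open, and the display-scale parameter is an assumption) subtracts the current deviation from the desired sensing-cell displacement to place the aiming point on screen:

```python
def aiming_offset(current_dev_xy, target_cell_xy, px_per_um=200.0):
    """Compute the on-screen offset of the aiming point (B) from the
    reference point (A): the remaining movement that should bring the
    deviation onto the target sensing cell, magnified so a sub-micron
    deviation is visible on the display."""
    dx = target_cell_xy[0] - current_dev_xy[0]
    dy = target_cell_xy[1] - current_dev_xy[1]
    return dx * px_per_um, dy * px_per_um

# Current deviation 0.2 um right, 0.1 um up; target cell one pitch right
# and one pitch down at (1.0, -1.0) um.
print(aiming_offset((0.2, 0.1), (1.0, -1.0)))  # (160.0, -220.0)
```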
In one implementation, the image capture method further comprises: determining whether movement of the handheld device in response to the aiming indication results in a deviation that matches the relative displacement between the locations of the two or more sensing cells. Thus, the method can be iterated.
In one implementation, the image capture method further comprises:
obtaining the deviation from sensor data of the handheld device, the sensor data including gyroscope data and/or accelerometer data;
and/or
obtaining the deviation by performing image recognition analysis on the plurality of images.
The deviation may be measured or determined by various methods; it may be calculated or estimated as in-plane horizontal and vertical deviations with respect to the image sensor.
The gyroscope data and the image frame data may include timestamps, which can be used to correlate the gyroscope and image data correctly even if image processing takes a long time.
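A sketch of how timestamped gyroscope samples might be turned into an in-plane deviation (the small-angle conversion through the effective focal length is an assumption; the patent only states that timestamps correlate the gyroscope and image data):

```python
def deviation_from_gyro(samples, focal_length_um):
    """Integrate timestamped angular-rate samples into an in-plane
    deviation at the sensor, in microns.

    samples         -- list of (timestamp_s, wx_rad_s, wy_rad_s) tuples
    focal_length_um -- effective focal length; for small angles, a
                       rotation of theta radians shifts the image by
                       approximately f * theta
    """
    theta_x = theta_y = 0.0
    for (t0, wx0, wy0), (t1, wx1, wy1) in zip(samples, samples[1:]):
        dt = t1 - t0
        theta_x += 0.5 * (wx0 + wx1) * dt   # trapezoidal integration
        theta_y += 0.5 * (wy0 + wy1) * dt
    return focal_length_um * theta_x, focal_length_um * theta_y

# 4 ms of 1 kHz gyro samples during slight hand shake; f ~ 4 mm.
samples = [(t / 1000.0, 0.05, -0.02) for t in range(5)]
print(deviation_from_gyro(samples, 4000.0))  # ~ (0.8, -0.32) microns
```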
In one implementation, the image capture method further comprises: the aiming indication is obtained and/or updated so as to obtain an image having a corresponding deviation that matches a relative displacement between any of the two or more sensing cells for which a match has not been determined.
The aiming indication may be obtained or updated based on an analysis of the matching images already obtained, or based on the deviations of matches already obtained. The goal may be to obtain color data for four sensing cells, which requires enough matching images; by updating the aiming indication, all the color data of the sensing cells can be obtained. However, even if the method achieves only one match, some improvement is obtained.
In one implementation, when the determination is performed for more than two sensing cells, the sensing cells include any three sensing cells selected from the eight sensing cells surrounding a central reference sensing cell, such that the central reference sensing cell and the three selected sensing cells preferably comprise two green-filtered sensing cells, one blue-filtered sensing cell, and one red-filtered sensing cell.
In one implementation, the three sensing cells selected from the eight sensing cells include any three adjacent sensing cells.
The Bayer filter array comprises a regular two-dimensional array of sensing cells. Away from any edge of the array, every green-filtered sensing cell is vertically or horizontally adjacent to two red pixels and two blue pixels. To obtain all the data of four adjacent (red, green, and blue filtered) sensing cells in such an array, there are four possible choices of nearest adjacent cells, any of which may be used. Each choice consists of three sensing cells arranged in an L-shape around the central reference cell.
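The four choices can be enumerated explicitly. In the sketch below (coordinates are illustrative, in units of pixel pitch, with the central reference G cell at the origin), each L-shaped set completes one of the four 2x2 blocks containing the reference cell:

```python
# Offsets (dx, dy) from the central reference G cell, in pixel pitches.
# Each set pairs the reference cell with the three other cells of one of
# the four 2x2 blocks that contain it (one R, one B, one second G).
L_SHAPED_SETS = {
    "upper-right": [(1, 0), (0, 1), (1, 1)],
    "upper-left":  [(-1, 0), (0, 1), (-1, 1)],
    "lower-right": [(1, 0), (0, -1), (1, -1)],
    "lower-left":  [(-1, 0), (0, -1), (-1, -1)],
}

def forms_two_by_two_block(cells):
    """Check that the reference cell plus the three offsets span exactly
    a 2x2 block of the sensing cell array."""
    xs = sorted({0} | {dx for dx, _ in cells})
    ys = sorted({0} | {dy for _, dy in cells})
    return xs in ([-1, 0], [0, 1]) and ys in ([-1, 0], [0, 1])

for name, cells in L_SHAPED_SETS.items():
    print(name, cells, forms_two_by_two_block(cells))  # all True
```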
In one implementation, the image capture method further comprises:
the aiming indication is provided in a manner perceptible to a human user, the manner including display on a display screen, output to any of a graphical user interface and a directional indicator.
The aiming indication may be provided in various ways, and may be provided in real time.
In one implementation of the image capture method, the real-time aiming indication is an overlay, a crosshair, a target indicator relative to the current direction, or an overlaid (desired) image relative to the current image of the plurality of images.
In one implementation, the image capture method further comprises:
obtaining and/or updating the aiming indication, wherein a second function of the aiming indication is to indicate a target position or direction of movement of the handheld device, such that movement in or towards the indicated direction of movement should cause the handheld device to return towards an original direction or position of the handheld device.
The second function is to minimize the overall movement of the handheld device while maintaining small-scale movement. This addresses the problem that some human users tend to move the handheld device relatively far from its original direction or position (e.g., the direction or position at the start time, or that corresponding to the reference image) while capturing images. In that case the deviation may be large (e.g., greater than the pixel pitch), and/or the handheld device may not return, or may return only occasionally or slowly, to the original orientation. It then becomes difficult to obtain matching images within a reasonable period, or only some such images can be obtained, because the deviations may exceed the pixel pitch or correspond only to the relative displacements of certain sensing cells.
In one implementation, the image capture method further comprises:
the first function and/or the second function of the sighting indication is obtained from gyroscope data and/or accelerometer data or by performing a recognition analysis on an image of the plurality of images.
In one implementation, the image capture method includes:
when there is a match between the deviation and the relative displacement, the image at the point in time is a matching image, the method further comprising:
combining the one or more matching images with the first image to form a composite image of relatively higher resolution.
In another aspect, an apparatus is provided for performing any of the methods and/or implementations disclosed herein. The steps of the method may be performed by respective functions of the apparatus, for example by modules or units of the apparatus configured with the corresponding functions.
In another aspect, a processor and non-volatile memory for a handheld device are provided, wherein the memory stores instructions that, when executed by the processor, cause the processor to perform any of the methods and/or implementations disclosed herein.
In another aspect, a computer program product is provided, comprising computer program code which, when run, instructs a computer to perform any of the methods and/or implementations disclosed herein.
Herein, "and/or" means three possibilities for each option alone or both together, e.g. a and/or B may mean any one of A, B or both a and B.
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus and methods may be implemented in other ways. The described apparatus embodiment is merely an example: the unit division is only a logical function division, and other divisions are possible in actual implementation. For example, various elements or components may be combined or integrated into another system, or some features may be omitted or not implemented. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be implemented in electronic, mechanical, or other forms.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed in a plurality of positions. Some or all of the units may be selected according to actual needs to achieve the objectives of the embodiment solutions.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
When the integrated unit is implemented in the form of a software functional unit, it may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and comprises instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform some of the steps of the methods described in the embodiments of the present invention. The storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It will be clearly understood by those skilled in the art that the above division into functional modules is described for convenience and brevity. In practical applications, the functions may be allocated to different functional modules as required; that is, the internal structure of the device may be divided into different functional modules to implement all or part of the functions described above. For the specific working process of the device, reference may be made to the corresponding process in the foregoing method embodiments; details are not repeated here.
Finally, it should be noted that the above embodiments are intended only to describe the technical solutions of the present invention, not to limit it. Although the present invention has been described in detail with reference to the above embodiments, those skilled in the art will recognize that modifications may still be made to the technical solutions described above, or equivalent replacements made to some or all of their technical features, without departing from the scope of the technical solutions of the embodiments of the present invention.
Drawings
Fig. 1 is a schematic diagram of a bayer filter array.
Fig. 2 is a diagram of movement of an image sensor using a pixel shift multi-shot technique.
Fig. 3 is a flow chart of an image capture method disclosed herein.
Fig. 4 is an illustration of how movement of the handheld device results in deviations matching the positions of multiple sensing cells, or of only one sensing cell.
Fig. 5 is an illustration of four different ways of obtaining a matching image from sensing cells surrounding a central reference sensing cell.
Fig. 6 is an illustration of an aiming indication.
Fig. 7 is an illustration of an apparatus for performing the image capture methods disclosed herein.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below with reference to the accompanying drawings. The embodiments described are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without creative effort fall within the scope of the present invention.
As described in the background, fig. 1 shows a conventional Bayer filter array.
Fig. 2 is an illustration of how the Bayer filter array of fig. 1 is used in pixel-shift multi-shot. In the upper half of fig. 2, an exemplary movement is indicated by a circular arrow around four adjacent sensing cells. The lower part of fig. 2 shows, from left to right, the step-by-step sequence of controlled image sensor movement in the pixel-shift multi-shot technique, applied to a group of four adjacent sensing cells. The starting sensing cell can be chosen arbitrarily. The dashed circles around the sensing cells indicate where the image-forming rays from the same source point fall. The arrows indicate the direction of movement leading to the next step in the sequence.
Fig. 3 is a flow chart of an embodiment of an image capture method disclosed herein; not all steps are shown. The steps may be performed by a processor, or by its sub-units or modules, as directed by instructions stored in memory. The processor and memory may be on a handheld device that captures the plurality of images. The handheld device has an image sensor with an array of sensing cells and a Bayer filter array. While images are captured, the handheld device is moving; this movement is camera shake and/or directed movement provided in response to an aiming indication. Hardware-provided anti-shake functionality, such as movement of the image sensor relative to the handheld device, need not be used.
A plurality of images are captured and made available during a time period t.
Within the time period t, a deviation is obtained for each point in time at which an image is captured. The deviation is compared with the known relative displacement between the positions of the sensing cells, and it is determined whether the deviation matches that relative displacement.
If a match is determined to exist, the image from the point in time at which the match occurred is deemed suitable for further processing. As shown in fig. 3, in an optional step, data from the matching images may be combined with the reference image (the reference image may itself be regarded as a matching image: its deviation is zero, which matches the origin, i.e., the position of the reference sensing cell). As described herein, moving the handheld device causes imaging radiation from the same scene object point to pass over adjacent sensing cells of the image sensor. By combining data from these separate images, each corresponding to a different sensing cell, the resolution of the combined image is increased.
In an optional step, or as a more detailed implementation of how the movement is obtained, a real-time aiming indication is computed and displayed. The aiming indication indicates the target position or direction of movement. By iteratively updating the real-time aiming indication and then determining whether a match occurs once the user has responded with movement, matches for all the desired sensing cells can be obtained; this feedback loop increases the efficiency of the process and enables more matches to be obtained.
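The feedback loop of fig. 3 might be sketched as follows (StubCamera, StubGyro, and the loop structure are hypothetical; a real implementation would use the device's camera pipeline and sensor APIs, and would update the aiming indication inside the loop):

```python
import random
import time

class StubCamera:
    def capture(self):
        return "frame@%.3f" % time.monotonic()   # placeholder frame object

class StubGyro:
    def deviation_um(self):
        # Hand shake modeled as a random in-plane offset within ~1.2 um.
        return (random.uniform(-1.2, 1.2), random.uniform(-1.2, 1.2))

def matches(dev, cell, pitch=1.0, tol=0.3):
    # Same 30% tolerance test as the earlier sketch.
    return all(abs(d - c) <= tol * pitch for d, c in zip(dev, cell))

def capture_loop(camera, gyro, targets, timeout_s=4.0):
    """Capture frames until every target sensing cell has a matching
    frame, or the practical 2-4 second limit expires."""
    reference = camera.capture()              # zero-deviation reference
    found = {}
    deadline = time.monotonic() + timeout_s
    while targets and time.monotonic() < deadline:
        frame, dev = camera.capture(), gyro.deviation_um()
        for cell in [c for c in targets if matches(dev, c)]:
            found[cell] = frame
            targets.remove(cell)
    return reference, found

targets = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]   # R, B and second G cells
print(capture_loop(StubCamera(), StubGyro(), targets))
```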
The upper part of fig. 4 shows the case of a perfect match, over time, for a group of 4 sensing cells. Owing to handheld camera shake, or to directed motion in response to the aiming indication, the sensing cells follow a spatial path, shown as 2 projections on the horizontal and vertical planes behind the group of 4 sensing cells. In the example shown, the value of the green (G) filtered pixel is recorded as a first frame at a first time; this first frame serves as the reference frame for subsequent deviations (other frames or sensing cells could be selected). The deviations are the X and Y deviations, plotted on the vertical axes, of the established position relative to the reference frame, calculated from the gyroscope data of the handheld device. The X and Y trajectories together intersect the horizontal reference lines at different points in time, labeled match R, match G, and match B; these points have been matched to their corresponding sensing cells.
The lower half of fig. 4 is similar to the upper half but shows the case of an incomplete match: only one point in time, marked match B, has X and Y trajectories that together intersect a reference line. The other sensing cells (R, G) have no deviation intersecting their reference lines, because the movement of the handheld device does not pass through the positions of the adjacent 4 pixels.
Fig. 5 shows the 4 different ways in which movements can be compared: there are 4 possible selections of three sensing cells adjacent to the central reference sensing cell (here, a green G-filtered sensing cell). Any combination of these 4 possibilities may be used.
Fig. 6 shows an embodiment of one possible implementation of the aiming indication. Three images are shown, which may be displayed, for example, on the display screen of a smartphone.
In the uppermost image, a reference point (A) is displayed on the screen at the moment the user starts the image capture method; this corresponds to the start time of image capture.
Then, at a subsequent point in time, the deviation of the angle of view from that at the start of image capture is calculated by any suitable method. The aiming indication is displayed superimposed on the screen, with the first reference point (A) to be brought onto an aiming point (B). The human user maneuvers the smartphone so that the reference point (A) coincides with the aiming point (B).
Any of the four ways disclosed in fig. 5, or combinations thereof, can be used.
During image capture, the camera image may be displayed in real time, or as a still image captured at the first shutter press, or as a superimposition of the real-time image and that still image.
Fig. 7 is a diagram of hardware that may be used or provided in a handheld device, such as a smartphone with a display screen (not shown), to perform the functions of the method. The hardware includes a processor that can calculate the deviation, determine whether the deviation matches the relative displacement of the sensing cell positions, and combine the data of the matching images, as disclosed herein. The memory may store images captured by the image sensor and provide them to the processor and/or the display screen. The processor, image sensor, and memory are connected by a communication channel labeled as a bus. The gyroscope/accelerometer is shown in dashed lines, as this function is optional in this embodiment.
The above disclosure presents only exemplary embodiments and is not intended to limit the scope of the invention. Those skilled in the art will understand that the above embodiments, and all or some of the other embodiments and modifications derivable within the scope of the claims of the present invention, fall within the scope of the present invention.

Claims (17)

1. An image capture method for a handheld device, the handheld device comprising an image sensor and a color filter array, the method comprising:
obtaining a plurality of images over a time period t;
determining a time at which a deviation matches a relative displacement between positions of two or more sensing cells of the image sensor, the deviation corresponding to the deviation between a first image at a time point t1 and a second image at a time point t2 within the time period t;
wherein the deviation is due to movement of the handheld device.
2. The image capture method of claim 1, wherein the movement of the handheld device is due to a force externally applied to the handheld device.
3. The image capture method of claim 1 or 2, wherein the movement of the handheld device is a handheld shaking vibration.
4. The image capturing method according to any one of claims 1 to 3, characterized by further comprising:
obtaining an aiming indication, a first function of which is to indicate a target position or direction of movement of the handheld device, such that movement of the handheld device in or towards the indicated direction of movement should result in a deviation that matches the relative displacement between the positions of the two or more sensing cells.
5. The image capturing method according to any one of claims 1 to 4, characterized by further comprising:
determining whether movement of the handheld device in response to the aiming indication results in a deviation that matches the relative displacement between the locations of the two or more sensing cells.
6. The image capturing method according to any one of claims 1 to 5, characterized by further comprising:
obtaining the deviation from sensor data of the handheld device, the sensor data including gyroscope data and/or accelerometer data;
and/or obtaining the deviation by performing image recognition analysis on the plurality of images.
7. The image capturing method according to any one of claims 1 to 6, characterized by further comprising:
the aiming indication is obtained and/or updated so as to obtain an image having a corresponding deviation that matches a relative displacement between any of the two or more sensing cells for which a match has not been determined.
8. The image capturing method according to any one of claims 1 to 7, wherein when the determination is performed for more than two sensing cells, the sensing cells include any three sensing cells selected from eight sensing cells surrounding a central reference sensing cell, such that the central reference sensing cell and the three selected sensing cells include two green filter sensing cells, one blue filter sensing cell, and one red filter sensing cell.
9. The image capture method of claim 8, wherein the three sensing cells selected from the eight sensing cells comprise any three adjacent sensing cells.
10. The image capturing method according to any one of claims 1 to 9, characterized by further comprising:
the aiming indication is provided in a manner perceptible to a human user, the manner including display on a display screen, output to any of a graphical user interface and a directional indicator.
11. The image capture method of any of claims 1 to 10, wherein the real-time aiming indication is an overlay, a crosshair, a target indicator relative to a current direction, or an overlaid image relative to a current image of the plurality of images.
12. The image capturing method according to any one of claims 1 to 11, characterized by further comprising:
obtaining and/or updating the aiming indication, wherein a second function of the aiming indication is to indicate a target position or direction of movement of the handheld device, such that movement in or towards the indicated direction of movement should cause the handheld device to return towards an original direction or position of the handheld device.
13. The image capturing method according to any one of claims 4 to 12, characterized by comprising:
the first function and/or the second function of the aiming indication is obtained from gyroscope data and/or accelerometer data, or by performing a recognition analysis on an image of the plurality of images.
14. The image capturing method according to any one of claims 1 to 13, characterized by comprising:
when there is a match between the deviation and the relative displacement, the image at the point in time is a matching image, the method further comprising:
combining the one or more matching images with the first image to form a composite image of relatively higher resolution.
15. An apparatus configured to perform the method of any one of the preceding claims.
16. A processor and non-volatile memory for a handheld device, wherein the memory stores instructions that, when executed by the processor, cause the processor to perform the method of any one of claims 1 to 14.
17. A computer program product, characterized in that it comprises computer program code which, when run, instructs a computer to perform the method according to any of claims 1 to 14.
CN201980099166.4A 2019-08-20 2019-08-20 Multi-shot image capture without anti-shake Active CN114208153B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/101473 WO2021031096A1 (en) 2019-08-20 2019-08-20 Multi-shot image capture without image stabilization

Publications (2)

Publication Number Publication Date
CN114208153A (en) 2022-03-18
CN114208153B (en) 2023-03-10

Family

ID=74659810

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980099166.4A Active CN114208153B (en) 2019-08-20 2019-08-20 Multi-shot image capture without anti-shake

Country Status (2)

Country Link
CN (1) CN114208153B (en)
WO (1) WO2021031096A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010060362A (en) * 2008-09-02 2010-03-18 Nikon Corp Apparatus and method of measuring displacement
US20100103294A1 (en) * 2008-10-24 2010-04-29 Samsung Electronics Co., Ltd. Image pickup devices and image processing methods using the same
US20110205423A1 (en) * 2009-09-09 2011-08-25 Nikon Corporation Focus detection device, photographic lens unit, image-capturing apparatus and camera system
CN103597811A (en) * 2011-06-09 2014-02-19 富士胶片株式会社 Image pickup device imaging three-dimensional moving image and two-dimensional moving image, and image pickup apparatus mounting image pickup device
CN105185302A (en) * 2015-08-28 2015-12-23 西安诺瓦电子科技有限公司 Correction method for light point position deviation among monochromatic images and application thereof

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004054884A (en) * 2002-05-31 2004-02-19 Sanyo Electric Co Ltd Image processing device
JP4869218B2 (en) * 2007-12-28 2012-02-08 オリンパス株式会社 Imaging display device
US8917327B1 (en) * 2013-10-04 2014-12-23 icClarity, Inc. Method to use array sensors to measure multiple types of data at full resolution of the sensor
CN104079904A (en) * 2014-07-17 2014-10-01 广东欧珀移动通信有限公司 Color image generating method and device
WO2016167814A1 (en) * 2015-04-17 2016-10-20 Pelican Imaging Corporation Systems and methods for performing high speed video capture and depth estimation using array cameras


Also Published As

Publication number Publication date
CN114208153B (en) 2023-03-10
WO2021031096A1 (en) 2021-02-25

Similar Documents

Publication Publication Date Title
KR102105189B1 (en) Apparatus and Method for Selecting Multi-Camera Dynamically to Track Interested Object
CN109544620B (en) Image processing method and apparatus, computer-readable storage medium, and electronic device
KR101034109B1 (en) Image capture apparatus and computer readable recording medium storing with a program
JP4957759B2 (en) Imaging apparatus and imaging method
US9282312B2 (en) Single-eye stereoscopic imaging device, correction method thereof, and recording medium thereof
US20120300051A1 (en) Imaging apparatus, and display method using the same
JP5469258B2 (en) Imaging apparatus and imaging method
US9462252B2 (en) Single-eye stereoscopic imaging device, imaging method and recording medium
CN102959943A (en) Stereoscopic panoramic image synthesis device, image capturing device, stereoscopic panoramic image synthesis method, recording medium, and computer program
JP4894939B2 (en) Imaging apparatus, display method, and program
CN103685925A (en) Imaging apparatus and imaging processing method
JP6222514B2 (en) Image processing apparatus, imaging apparatus, and computer program
JP5833254B2 (en) Image processing apparatus, imaging apparatus, image processing method, and image processing program
JP2005328497A (en) Image pickup apparatus and image pickup method
US9288472B2 (en) Image processing device and method, and image capturing device
JP2014164172A (en) Imaging apparatus and program
CN104580921A (en) Imaging apparatus and its control method
US10893223B2 (en) Systems and methods for rolling shutter compensation using iterative process
JP2011035642A (en) Multiple eye photographing device, and adjustment method and program thereof
CN114208153B (en) Multi-shot image capture without anti-shake
WO2015141185A1 (en) Imaging control device, imaging control method, and storage medium
CN111279352B (en) Three-dimensional information acquisition system through pitching exercise and camera parameter calculation method
KR100736565B1 (en) Method of taking a panorama image and mobile communication terminal thereof
JP6833772B2 (en) Image processing equipment, imaging equipment, image processing methods and programs
JP2014011639A (en) Imaging device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant