US8223223B2 - Image sensing apparatus and image sensing method - Google Patents
Image sensing apparatus and image sensing method
- Publication number
- US8223223B2 (granted from application US12/636,052)
- Authority
- US
- United States
- Prior art keywords
- image
- sensing
- image sensing
- sensing apparatus
- exposure time
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related, expires
Classifications
- H04N 5/265 (Details of television systems; Studio circuitry; Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Mixing)
- H04N 23/68 (Cameras or camera modules comprising electronic image sensors; Control thereof; Control for stable pick-up of the scene, e.g. compensating for camera body vibrations)
- H04N 23/73 (Control of cameras or camera modules; Circuitry for compensating brightness variation in the scene by influencing the exposure time)
- H04N 23/951 (Computational photography systems, e.g. light-field imaging systems, by using two or more images to influence resolution, frame rate or aspect ratio)
Definitions
- the present invention relates to an image sensing apparatus that obtains an image by image sensing and applies electrical blur correction processing to the image, and an image sensing method for the image sensing apparatus.
- An example of electrical blur correction processing is one that uses image merging.
- in the blur correction processing using image merging, a plurality of images are merged together to obtain an image containing less blur.
- the amount of blur is reduced by merging an image that is sensed with a longer exposure time (hereinafter referred to as second image) and an image that is sensed with a shorter exposure time (hereinafter referred to as first image).
- the second image contains blur but contains less noise.
- the first image contains less blur but contains more noise.
- an image containing less blur and less noise can be obtained by merging these images together.
- an image sensing apparatus is provided with: an image sensing portion that obtains an image by image sensing; a merging processing portion that produces a merged image by merging together first and second images obtained by the image sensing portion; and a control portion that controls image-sensing timing of the image sensing portion.
- the control portion controls the image sensing portion such that exposure time for the first image is shorter than exposure time for the second image, and such that the image sensing portion senses the second image after the first image.
- an image sensing method is provided with: a first image production step in which a first image is obtained by image sensing; a second image production step in which a second image is obtained by image sensing; and a merging step in which a merged image is produced by merging together the first image obtained in the first image production step and the second image obtained in the second image production step.
- an exposure time for image sensing performed in the first image production step is shorter than an exposure time for image sensing in the second image production step, and the first image production step is performed before the second image production step.
- FIG. 1 is a block diagram showing the basic structure of an image sensing apparatus according to an embodiment of the present invention
- FIG. 2 is a block diagram showing an example of the configuration of a merging processing portion of an image sensing apparatus according to an embodiment of the present invention
- FIG. 3 is a schematic diagram for illustrating a block matching method
- FIG. 4 is a graph showing an example of how merging is performed by a first merging portion
- FIG. 5 is a graph showing an example of how merging is performed by a second merging portion
- FIG. 6 is a flow chart showing an example of the operation of an image sensing apparatus according to an embodiment of the present invention.
- FIG. 7 is a graph showing an example of how camera shake of an image sensing apparatus occurs after an image-sensing-start instruction is inputted
- FIG. 8 is a perspective view of an image sensing apparatus illustrating yaw, pitch, and roll directions
- FIG. 9A shows different types of the image sensing apparatus
- FIG. 9B shows different shapes for a grip portion of a vertical-type image sensing apparatus.
- FIG. 9C shows different arrangements of a shutter button of a vertical-type image sensing apparatus.
- FIG. 1 is a block diagram showing the basic structure of an image sensing apparatus according to an embodiment of the present invention.
- the image sensing apparatus 1 is provided with an image sensing portion 2 that is provided with: an image sensor 3 that is a solid-state image sensor such as a CCD (charge coupled device) image sensor or a CMOS (complementary metal-oxide semiconductor) image sensor that converts light incident thereon into an electric signal; and a lens portion 4 that focuses an optical image of a subject on the image sensor 3 and adjusts light intensity and the like.
- the image sensing apparatus 1 is provided with: an AFE (analog front end) 5 that converts an image signal in the form of an analog signal outputted from the image sensor 3 into a digital signal; an image processing portion 6 that applies various kinds of image processing, including tone correction processing, to the image signal outputted in the form of a digital signal from the AFE 5; a sound collection portion 7 that converts sound it receives into an electric signal; a sound processing portion 8 that converts a sound signal fed thereto in the form of an analog signal from the sound collection portion 7 into a digital signal, and applies various kinds of sound processing, including noise reduction, to the sound signal; a compression processing portion 9 that applies compression/encoding processing for moving images, such as by an MPEG (Moving Picture Experts Group) compression method, to both an image signal outputted from the image processing portion 6 and a sound signal outputted from the sound processing portion 8, and that applies compression/encoding processing for still images, such as by a JPEG (Joint Photographic Experts Group) compression method, to an image signal outputted from the image processing portion 6; an external memory 10 in which the compressed/encoded signal outputted from the compression processing portion 9 is recorded; a driver portion 11 that records and reads signals in and from the external memory 10; and a decompression processing portion 12 that decompresses and decodes a compressed/encoded signal read from the external memory 10.
- the image sensing apparatus 1 is also provided with: an image output circuit portion 13 that converts an image signal resulting from the decoding by the decompression processing portion 12 into an analog signal to be displayed on a display portion (not shown) such as a display; and a sound output circuit portion 14 that converts a sound signal resulting from the decoding by the decompression processing portion 12 into an analog signal to be played back through a playback portion (not shown) such as a speaker.
- the image sensing apparatus 1 is further provided with: a CPU (control portion) 15 that controls the operation of the entire image sensing apparatus 1; a memory 16 that stores programs for different operations and that temporarily stores data during execution of programs; an operation portion 17, including buttons, for example, for starting image sensing and for adjusting image sensing conditions, via which a user inputs an instruction; a timing generator (TG) portion 18 that outputs a timing control signal for synchronizing timings of operations of different portions; a bus line 19 across which the CPU 15 exchanges data with different portions; and a bus line 20 across which the memory 16 exchanges data with different portions.
- the image processing portion 6 is provided with a merging processing portion 60 that merges together a plurality of image signals fed thereto to output the resulting merged signal as one image signal.
- the configuration of the merging processing portion 60 will be described later in detail.
- although the image sensing apparatus 1 dealt with in the above description is capable of producing both moving-image and still-image signals, it may instead be capable of producing still image signals alone.
- the image sensing apparatus 1 may be structured without portions such as the sound collection portion 7 , the sound processing portion 8 , and the sound output circuit portion 14 .
- the external memory 10 may be of any kind, as long as image and sound signals can be recorded therein.
- a semiconductor memory such as an SD (secure digital) card, an optical disk such as a DVD, or a magnetic disk such as a hard disk may be used as the external memory 10 .
- the external memory 10 may be detachable from the image sensing apparatus 1 .
- an image signal in the form of an electric signal is obtained as a result of photoelectric conversion that the image sensor 3 performs on light it receives through the lens portion 4. Then, the image sensor 3 feeds the image signal to the AFE 5 at a predetermined timing in synchronism with a timing control signal fed from the TG portion 18.
- the AFE 5 converts the image signal from an analog signal into a digital signal, and feeds the resulting digital image signal to the image processing portion 6 .
- the image signal, which has R (red), G (green), and B (blue) components, is converted into an image signal having components such as a luminance signal (Y) and color-difference signals (U, V), and is also subjected to various kinds of image processing, including tone correction and edge enhancement.
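- As an illustration of the color-space conversion mentioned above, the following sketch converts R, G, and B components into a luminance signal (Y) and color-difference signals (U, V); the BT.601 weighting coefficients and the function name are assumptions made for illustration and are not specified in the patent.

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Convert an H x W x 3 RGB image (float, 0..255) into Y, U, V planes.

    Uses the common BT.601 luma/chroma weights; the exact conversion used
    by the image processing portion 6 is not specified in the patent.
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance signal (Y)
    u = 0.492 * (b - y)                     # color-difference signal (U)
    v = 0.877 * (r - y)                     # color-difference signal (V)
    return y, u, v
```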
- the memory 16 operates as a frame memory that temporarily stores the image signal when the image processing portion 6 performs its processing.
- positions of various lenses are adjusted to adjust focus, the degree of aperture of an aperture stop is adjusted to adjust exposure, and the sensitivity (e.g., ISO (International Organization for Standardization) sensitivity) of the image sensor 3 is adjusted according to the image signal fed to the image processing portion 6 .
- the adjustments of focus, exposure and sensitivity are either automatically performed according to a predetermined program or manually performed according to the user's instruction to achieve optimal states of focus, exposure and sensitivity.
- in the image processing portion 6, a plurality of images are merged together by the merging processing portion 60.
- the detail of the operation of the merging processing portion 60 will be described later.
- when an image signal of a moving image is produced, sound is collected by the sound collection portion 7.
- the sound collected by the sound collection portion 7 is converted into an electric signal to be fed to the sound processing portion 8 .
- the sound signal fed thereto is converted into a digital signal and is subjected to various kinds of sound processing, including noise reduction and intensity control.
- an image signal outputted from the image processing portion 6 and a sound signal outputted from the sound processing portion 8 are both fed to the compression processing portion 9 , where the signals are compressed by a predetermined compression method.
- the image signal and the sound signal are associated with each other in terms of time such that there is no time lag between the image and sound in playback.
- a compressed/encoded signal compressed/encoded by and outputted from the compression processing portion 9 is recorded in the external memory 10 via the driver portion 11.
- an image signal outputted from the image processing portion 6 is fed to the compression processing portion 9, where the image signal is compressed by a predetermined compression method. Subsequently, the compression processing portion 9 outputs a compressed/encoded signal, which is then recorded in the external memory 10 via the driver portion 11.
- the compressed/encoded signal of the moving image recorded in the external memory 10 is read out by the decompression processing portion 12 based on the user's instruction.
- the decompression processing portion 12 decompresses and thereby decodes the compressed/encoded signal, and produces image and sound signals, and then feeds the image signal and the sound signal to the image output circuit portion 13 and the sound output circuit portion 14 , respectively.
- the image and sound signals are converted into formats that allow them to be played back on the display device and through the speaker, respectively, and are then outputted from the image output circuit portion 13 and the sound output circuit portion 14 , respectively.
- the compressed/encoded signal of the still image recorded in the external memory 10 is fed to the decompression processing portion 12 , which produces an image signal from the compressed/encoded signal. Then, the image signal is fed to the image output circuit portion 13 , where the image signal is converted into a format that can be played back on the display device.
- the display device and the speaker may be integrally formed with the image sensing apparatus 1 , or may instead be separate from the image sensing apparatus 1 and each connected with a cable or the like to a terminal provided in the image sensing apparatus 1 .
- the image signal outputted from the image processing portion 6 may be fed to the image output circuit portion 13 without being compressed.
- the image signal is compressed by the compression processing portion 9 and recorded in the external memory 10 ; here, the image signal may be simultaneously fed to a display device or the like via the image output circuit portion 13 .
- FIG. 2 is a block diagram showing an example of the configuration of the merging processing portion of an image sensing apparatus according to an embodiment of the present invention.
- the merging processing portion 60 is provided with: a displacement correction portion 61 that corrects displacement of the second image based on the first image; an LPF (low pass filter) portion 62 that filters out a high-frequency component having a frequency equal to or higher than a predetermined frequency from a spatial frequency of the first image to produce and output a third image; a difference value calculation portion 63 that finds the difference between the second image subjected to the displacement correction and the third image, and calculates a difference value; a first merging portion 64 that merges together the second image subjected to the displacement correction by the displacement correction portion 61 and the third image based on the difference value to produce and output a fourth image; an edge intensity value calculation portion 65 that extracts an edge (such as the outline of an object) from the third image to calculate an edge intensity value; and a second merging portion 66 that merges the first and fourth images together based on the edge intensity value to produce and output a merged image.
- the exposure time for the first image is preferably shorter than a camera-shake-limit exposure time (1/f seconds, where "f" is the focal length in millimeters in terms of 35 mm film cameras; an exposure time short enough that hardly any camera shake occurs).
- the exposure time for the second image should at least be longer than the exposure time for the first image.
- the second image may be sensed with a correct exposure time (an exposure time leading to correct brightness of an image produced at the image sensing portion 2 ).
- the correct exposure time may be automatically set based on values such as the focal length, the aperture value or the sensitivity, and the exposure value (indicating the brightness of the subject) to satisfy a predetermined relationship.
- first, as shown in FIG. 2, the first and second images obtained through image sensing by the image sensing portion 2 are fed to the merging processing portion 60.
- the second image is an image sensed with an exposure time that is longer than the exposure time for the first image.
- the sensitivity of the image sensor 3 is set lower when the second image is sensed than when the first image is sensed. Furthermore, the exposure time and the sensitivity are so set as to adjust the first and second images to be substantially equal in brightness.
- the timings with which the first and second images are sensed are controlled by the CPU 15 .
- the detail of the image-sensing timings for the first and second images will be described later.
- the first image, sensed with a shorter exposure time, contains less blur due to camera shake, and thus has sharp edges. However, since the sensitivity of the image sensor 3 is set high, the first image is likely to contain much noise.
- the second image is sensed with a longer exposure time and a lower sensitivity of the image sensor 3, and thus contains less noise than the first image.
- the second image is likely to contain blur due to camera shake, and thus is likely to have blurred edges.
- the first and second images are basically consecutively sensed and produced, and compositions of the first and second images are substantially equal. However, since the first and second images are not sensed perfectly simultaneously, there may be slight displacement between their compositions. Thus, the displacement correction portion 61 is provided to correct such displacement between the compositions of the first and second images.
- the displacement correction portion 61 detects “displacement” by, for example, searching for a part at which the first and second images are substantially equal. Then, according to the thus found “displacement”, coordinates of pixels in the second image are converted, and thereby correction is performed such that coordinates of a pixel in the first image and those of a pixel in the second image are substantially equal when the pixels indicate the same object. That is, processing is performed to specify pixel-to-pixel correspondence between the first and second images.
- Examples of the method of searching for a part at which the first and second images are substantially equal include various methods of finding an optical flow and a representative point matching method.
- here, a description will be given, with reference to FIG. 3, of a case in which a block matching method, which is a method of finding an optical flow, is adopted.
- FIG. 3 is a schematic diagram for illustrating a block matching method.
- reference numeral 100 denotes the first image and reference numeral 101 denotes an attention-focused block on which attention is focused in the first image.
- reference numeral 110 denotes the second image and reference numeral 111 denotes a candidate block in the second image on which attention is focused.
- Reference numeral 112 denotes a search block in which the candidate block 111 can be located.
- the displacement correction portion 61 calculates a correlation value between the attention-focused block 101 and the candidate block 111 .
- the candidate block 111 is moved within the search block 112 by one pixel at a time in the horizontal or the vertical direction, and a correlation value is calculated each time the candidate block 111 is moved.
- the correlation value may be, for example, a sum of all the absolute values of differences in brightness between pairs of corresponding pixels in the attention-focused block 101 and the candidate block 111 .
- Such a correlation value is generally called an SAD (sum of absolute differences).
- a sum of squared values of such differences in brightness may be used as the correlation value.
- from the location of the candidate block 111 at which the correlation value is the smallest, there can be found not only the location of a block in the second image that shows an image substantially equal to the image shown by the attention-focused block 101, but also the motion vector of the attention-focused block 101 between the first and second images (i.e., an optical flow, indicating the direction and magnitude of displacement between the first and second images). Based on the thus obtained motion vectors, the "displacement" can be found.
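- A minimal sketch of the block matching just described is given below, assuming 2-D luminance arrays, a 16-pixel block, and an 8-pixel search radius (all illustrative choices): the candidate block is moved one pixel at a time within the search range, the SAD correlation value is computed at each position, and the position with the smallest value gives the motion vector.

```python
import numpy as np

def find_displacement(first, second, top, left, block=16, radius=8):
    """Estimate the motion vector of one attention-focused block.

    first, second: 2-D luminance arrays of the first and second images.
    (top, left):   upper-left corner of the attention-focused block in `first`.
    Returns (dy, dx) minimising the SAD correlation value.
    """
    ref = first[top:top + block, left:left + block].astype(np.int64)
    best, best_vec = None, (0, 0)
    for dy in range(-radius, radius + 1):        # candidate block moved one
        for dx in range(-radius, radius + 1):    # pixel at a time
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > second.shape[0] or x + block > second.shape[1]:
                continue
            cand = second[y:y + block, x:x + block].astype(np.int64)
            sad = np.abs(ref - cand).sum()       # sum of absolute differences
            if best is None or sad < best:
                best, best_vec = sad, (dy, dx)
    return best_vec
```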
- the LPF portion 62 produces the third image by applying LPF processing to the first image.
- the LPF portion 62 filters out the high-spatial-frequency part of the first image (that is, performs noise reduction).
- the cutoff frequency of the LPF portion 62 is set to a value that does not cause edges to be extremely blurred along with the noise reduction (specifically, to a value at which edges can still be sharply extracted by the edge intensity value calculation portion 65, which will be described later).
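- The LPF processing can be pictured as a small spatial smoothing kernel applied to the first image; the 3×3 kernel below is only an assumed example of a filter whose cutoff is mild enough to leave edges extractable afterwards.

```python
import numpy as np

# Assumed 3x3 smoothing kernel; the patent only requires that the cutoff
# frequency leave edges sharp enough for later edge extraction.
KERNEL = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=np.float64) / 16.0

def lowpass(first):
    """Produce the third image by filtering out high spatial frequencies."""
    h, w = first.shape
    padded = np.pad(first.astype(np.float64), 1, mode="edge")
    third = np.zeros((h, w), dtype=np.float64)
    for i in range(3):
        for j in range(3):
            third += KERNEL[i, j] * padded[i:i + h, j:j + w]
    return third
```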
- the difference value calculation portion 63 performs difference processing between the second and third images, and calculates a difference value.
- the difference value is a value indicating the difference in color or brightness between corresponding pixels in the two images; the difference value at a pixel (x, y) is denoted by D(x, y).
- P2(x, y) indicates a signal value of the pixel(x, y) in the second image
- P3(x, y) indicates a signal value of the pixel(x, y) in the third image
- D(x, y) indicates a difference value obtained, as described above, from these signal values.
- the difference value D(x, y) may be calculated according to formula (1a) below.
- Formula (1a) deals with a case in which RGB values are used as the signal values P2(x, y) and P3(x, y) of the pixels (x, y) in formula (1).
- the values of the R, G, and B components of the signal value of the pixel (x, y) in the second image are represented by P2R(x, y), P2G(x, y), and P2B(x, y), respectively; and the values of the R, G, and B components of the signal value of the pixel (x, y) in the third image are represented by P3R(x, y), P3G(x, y), and P3B(x, y), respectively.
- the difference value D(x, y) is obtained by separately calculating absolute values of the differences of the R, G, and B components and adding up the separately calculated absolute values.
- D(x, y) = |P2R(x, y)−P3R(x, y)| + |P2G(x, y)−P3G(x, y)| + |P2B(x, y)−P3B(x, y)|    (1a)
- the difference value D(x, y) may be calculated according to formula (1b) below.
- Formula (1b) deals with the case where the RGB values are used, and the values of the components of the signal values are indicated in the same manner as in formula (1a).
- the difference value D(x, y) is obtained by squaring the difference of each of the R, G, and B components, adding up the results to a sum, and raising the sum to the one-half power.
- D(x, y) = [{P2R(x, y)−P3R(x, y)}^2 + {P2G(x, y)−P3G(x, y)}^2 + {P2B(x, y)−P3B(x, y)}^2]^(1/2)    (1b)
- the difference value D(x, y) may be calculated by using other methods.
- the difference value D(x, y) may be calculated by using YUV values instead of the RGB values in the method in which the RGB values are used (that is, by substituting the YUV values for the RGB values).
- the difference value D(x, y) may be calculated based merely on values of the Y components of the signal values of the second and third images.
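- The three variants of the difference value just described (formulas (1), (1a), and (1b)) can be written down directly; the array layout (2-D single-component arrays or H×W×3 RGB arrays) is an assumption made for illustration.

```python
import numpy as np

def diff_single(p2, p3):
    """Formula (1): absolute difference of single-component signal values."""
    return np.abs(np.asarray(p2, dtype=float) - np.asarray(p3, dtype=float))

def diff_rgb_abs(p2_rgb, p3_rgb):
    """Formula (1a): sum of the absolute R, G and B differences."""
    d = np.asarray(p2_rgb, dtype=float) - np.asarray(p3_rgb, dtype=float)
    return np.abs(d).sum(axis=-1)

def diff_rgb_sq(p2_rgb, p3_rgb):
    """Formula (1b): square root of the summed squared R, G and B differences."""
    d = np.asarray(p2_rgb, dtype=float) - np.asarray(p3_rgb, dtype=float)
    return np.sqrt((d ** 2).sum(axis=-1))
```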
- the first merging portion 64 merges the second and third images together to produce a fourth image. For example, weighted addition of the second and third images is performed to merge the images together. An example of the case where the merging by weighted addition is performed will be described with reference to FIG. 4 .
- FIG. 4 is a graph showing an example of how merging is performed by the first merging portion 64 .
- a mixing ratio ⁇ (x, y) is set based on the difference value D(x, y), and according to the mixing ratio ⁇ (x, y), weighted addition is performed.
- the second and third images are merged together according to the following formula (2).
- P4(x,y)=α(x,y)×P2(x,y)+{1−α(x,y)}×P3(x,y)    (2)
- the mixing ratio ⁇ (x, y) indicates an addition ratio (a merging ratio) that is used in performing weighted addition of the signal value P2(x, y) of a pixel at a position (x, y) in the second image and the signal value P3(x, y) of a pixel at a position (x, y) in the third image.
- the mixing ratio ⁇ (x, y) is the addition ratio for the second image, and thus the addition ratio for the third image is “1 ⁇ (x, y).”
- the mixing ratio ⁇ (x, y) is “1” when the difference value D(x, y) is smaller than a threshold value Th 1 _L, and “0” when the difference value D(x, y) is equal to or larger than a threshold value Th 1 _H.
- the mixing ratio ⁇ (x, y) is “1 ⁇ (D(x, y) ⁇ Th 1 _L/(Th 1 _H ⁇ Th 1 ⁇ L)” when the difference value D(x, y) is equal to or larger than the threshold value Th 1 _L and smaller than the threshold value Th 1 _H.
- the mixing ratio ⁇ (x, y) is linearly reduced from 1 to 0.
- the mixing ratio ⁇ (x, y) may be non-linearly reduced, it is preferable that the mixing ratio ⁇ (x, y) be monotonously reduced.
- a signal value P4(x, y) of a pixel at a position (x, y) in the fourth image is obtained.
- when the signal values P2(x, y) and P3(x, y) of the pixels in the second and third images each include YUV values, calculation may be separately performed for each of the Y, U, and V components to obtain the signal value P4(x, y) of the pixel in the fourth image.
- instead of signal values including YUV values, signal values including RGB values may be used.
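- A sketch of the first merging under these definitions is shown below: the mixing ratio α is 1 below Th1_L, 0 at or above Th1_H, and falls linearly in between, and the fourth image is the weighted sum of formula (2). The numeric threshold values and the per-component 2-D array layout are placeholder assumptions.

```python
import numpy as np

TH1_L, TH1_H = 8.0, 32.0   # placeholder values for the thresholds Th1_L and Th1_H

def first_merge(p2, p3, d):
    """Merge the second and third images into the fourth image (formula (2)).

    p2, p3, d are 2-D arrays for one signal component (the calculation may be
    repeated per component), d being the difference value D(x, y).
    """
    p2 = np.asarray(p2, dtype=float)
    p3 = np.asarray(p3, dtype=float)
    # mixing ratio alpha(x, y): 1 for small differences, 0 for large ones,
    # decreasing linearly between the two thresholds
    alpha = np.clip((TH1_H - np.asarray(d, dtype=float)) / (TH1_H - TH1_L), 0.0, 1.0)
    return alpha * p2 + (1.0 - alpha) * p3
```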
- the edge intensity value calculation portion 65 applies edge extraction processing to the third image, and calculates an edge intensity value.
- the edge intensity value is a value that indicates the amount of variation at a pixel (the amount of variation relative to surrounding pixels).
- the edge intensity value at a pixel(x, y) is indicated by E(x, y).
- the edge intensity value E(x, y) at the pixel(x, y) can be obtained by using formula (3) described below.
- P3 Y (x, y) indicates the Y component value of the signal value of the pixel(x, y) in the third image.
- a value corresponding to the Y component may be calculated by using the RGB components, and the value may be used as the Y component value.
- Fx(i, j) and Fy(i, j) each indicate a filter that extracts edges, that is to say, a filter that enhances edges in an image.
- as such a filter, for example, a differential filter such as a Sobel filter or a Prewitt filter can be used.
- formula (3) described above deals with, as an example, a case in which a 3×3 filter is employed.
- Fx(i, j) is a filter that extracts edges in the x direction (horizontal direction)
- Fy(i, j) is a filter that extracts edges in the y direction (vertical direction).
- the edge intensity value E(x, y) of the pixel (x, y) as shown by formula (3) can be obtained by adding up the absolute values of the following two values: a value obtained by multiplying the Y component values of pixels in the 3-by-3 region around the pixel (x, y) each by the value of the corresponding element of the 3-by-3 filter Fx(i, j), and then adding up the resulting products; and a value obtained by multiplying the Y component values of pixels in the 3-by-3 region around the pixel (x, y) each by the value of the corresponding element of the 3-by-3 filter Fy(i, j), and then adding up the resulting products.
- the method shown in formula (3) is just an example, and thus the edge intensity value E(x, y) may be obtained by using other methods.
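- Formula (3) itself is not reproduced above; from the verbal description it appears to amount to E(x, y) = |ΣiΣj Fx(i, j)×P3Y(x+i, y+j)| + |ΣiΣj Fy(i, j)×P3Y(x+i, y+j)| over a 3-by-3 neighbourhood. The sketch below implements that reading with the Sobel pair mentioned as an example; treating these particular filters as the patent's formula (3) is an assumption.

```python
import numpy as np

# Sobel filters Fx and Fy (one example of the differential filters mentioned)
FX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=np.float64)
FY = FX.T

def edge_intensity(p3_y):
    """Edge intensity E(x, y) computed from the Y component of the third image."""
    h, w = p3_y.shape
    padded = np.pad(p3_y.astype(np.float64), 1, mode="edge")
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            window = padded[i:i + h, j:j + w]
            gx += FX[i, j] * window
            gy += FY[i, j] * window
    return np.abs(gx) + np.abs(gy)   # |horizontal response| + |vertical response|
```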
- the second merging portion 66 merges the first and fourth images together to produce a merged image.
- This merging is performed, for example, by performing weighted addition of the first and fourth images.
- An example of the case in which merging is performed by weighted addition will be described with reference to FIG. 5 .
- FIG. 5 is a graph showing an example of how merging is performed by the second merging portion.
- a mixing ratio ⁇ (x, y) is set based on the edge intensity value E(x, y), and weighted addition is performed according to the mixing ratio ⁇ (x, y).
- the first and fourth images are merged together according to the following formula (4).
- P(x,y)=β(x,y)×P1(x,y)+{1−β(x,y)}×P4(x,y)    (4)
- the mixing ratio ⁇ (x, y) indicates an addition ratio (a merging ratio) that is used in performing weighted addition of the signal value P1(x, y) of a pixel at a position (x, y) in the first image and the signal value P4(x, y) of a pixel at a position (x, y) in the fourth image.
- the mixing ratio ⁇ (x, y) is the addition ratio for the first image, and thus the addition ratio for the fourth image is “1 ⁇ (x, y).”
- the mixing ratio ⁇ (x, y) is “0” when the edge intensity value E(x, y) is smaller than a threshold value Th 2 _L, and “1” when the edge intensity value E(x, y) is equal to or larger than a threshold value Th 2 _H.
- the mixing ratio ⁇ (x, y) is (E(x, y) ⁇ Th 2 _L)/(Th 2 _H ⁇ Th 2 _L) when the edge intensity value E(x, y) is equal to or larger than the threshold value Th 2 _L and smaller than the threshold value Th 2 _H.
- the mixing ratio ⁇ (x, y) is linearly increased from 0 to 1.
- the mixing ratio ⁇ (x, y) may be non-linearly increased, it is preferable that the mixing ratio ⁇ (x, y) be increased monotonously.
- a signal value P(x, y) of a pixel at a position (x, y) in a merged image is obtained.
- when the signal values P1(x, y) and P4(x, y) of the pixels in the first and fourth images each include YUV values, calculation may be separately performed for each of the YUV components to obtain the signal value P(x, y) of the pixel in the merged image.
- instead of the signal values of the YUV components, those of the RGB components may be used.
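- The second merging mirrors the first, except that β rises with edge intensity so that edge regions are taken from the first image and flat regions from the fourth image (formula (4)); the threshold values below are again placeholders.

```python
import numpy as np

TH2_L, TH2_H = 16.0, 64.0   # placeholder values for the thresholds Th2_L and Th2_H

def second_merge(p1, p4, e):
    """Merge the first and fourth images into the merged image (formula (4)).

    p1, p4, e are 2-D arrays for one signal component, e being the edge
    intensity value E(x, y).
    """
    p1 = np.asarray(p1, dtype=float)
    p4 = np.asarray(p4, dtype=float)
    # mixing ratio beta(x, y): 0 in flat areas, 1 on strong edges,
    # increasing linearly between the two thresholds
    beta = np.clip((np.asarray(e, dtype=float) - TH2_L) / (TH2_H - TH2_L), 0.0, 1.0)
    return beta * p1 + (1.0 - beta) * p4
```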
- since the merging of the second and third images is performed by using a difference value obtained from the second and third images, it is possible to prevent subject motion blur (blur caused by the subject moving during the exposure time) from the second image and noise from the first image from being reflected in the fourth image.
- since the merging of the first and fourth images is performed by using an edge intensity value obtained from the third image, the sharpness of the edges of the first image, which contains less blur, can be effectively reflected in the merged image, and noise from the first image can also be prevented from being reflected in the merged image.
- as a result, the merged image can be obtained as an image that has sharp edges, contains less blur due to camera shake or subject motion, and contains less noise.
- moreover, since the third image produced by applying LPF processing to the first image is used for calculating the edge intensity value, it is possible to prevent noise in the first image from causing the edge intensity value E(x, y) to be large in parts other than edges.
- the configuration of the merging processing portion 60 is merely an example, and thus the merging processing portion 60 may be otherwise configured.
- the merging processing portion 60 may be configured such that no intermediate images such as the third and fourth images (that is, images produced by converting, for example, the first and second images) are produced, but the first and second images are directly merged together.
- FIG. 6 is a flow chart showing an example of the operation of an image sensing apparatus according to an embodiment of the present invention.
- a user inputs an instruction to set image sensing conditions (STEP 1 ).
- the instruction is inputted to the image sensing apparatus 1 by, for example, half-pressing a shutter release button which is part of the operation portion 17 as shown in FIG. 1 .
- a preview may be performed in which an image inputted and stored in the image sensing apparatus 1 is displayed on a display portion or the like.
- the user can check the composition of an image to be produced by viewing the image displayed on the display portion, and according to the check result, the user inputs an instruction to set the image sensing conditions to the image sensing apparatus 1 .
- image sensing conditions for the second image are set based on the image inputted in the image sensing apparatus 1 (STEP 2). For example, image sensing conditions such as focus, exposure, and white balance are controlled by the image processing portion 6 checking the image as described above. At this time, based on the focus, the exposure value, etc., a second sensitivity is set as the sensitivity for the second image and a second exposure time is set as the exposure time for the second image. Incidentally, the second exposure time may be the correct exposure time as described above.
- the image sensing conditions for the first image are set based on the image sensing conditions for the second image that are set in STEP 2 (STEP 3).
- as the image sensing conditions for the first image, a first exposure time is set as the exposure time for the first image, and a first sensitivity is set as the sensitivity for the first image.
- the first exposure time is set shorter than the second exposure time, to thereby prevent the first image from containing blur.
- the first sensitivity is set higher than the second sensitivity, to thereby make the first and second images substantially equal in brightness. For example, if the first exposure time is set to 1/4 of the second exposure time, the first sensitivity is set four times as high as the second sensitivity. That is, the exposure time and the sensitivity are set such that the product of the exposure time and the sensitivity is constant.
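- The relation described above (exposure time × sensitivity held constant so that the two frames come out equally bright) can be expressed as a small helper; the 1/4 ratio is simply the example given in the text, and the function name is illustrative.

```python
def first_image_settings(second_exposure, second_iso, ratio=0.25):
    """Derive the first-image exposure time and sensitivity from the second.

    The exposure time is shortened by `ratio` and the sensitivity raised by
    1/ratio, so that exposure_time * sensitivity stays constant and the two
    images are substantially equal in brightness.
    """
    first_exposure = second_exposure * ratio
    first_iso = second_iso / ratio
    return first_exposure, first_iso

# Example: a 1/25 s, ISO 100 second image gives a 1/100 s, ISO 400 first image.
print(first_image_settings(1 / 25, 100))
```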
- the image-sensing standby time is the time from the input of an instruction to start image sensing until the start of the sensing of the first image. Details of the image-sensing standby time will be described later.
- next, the user inputs an instruction to start image sensing (hereinafter referred to as "image-sensing-start instruction") (STEP 4).
- the image-sensing-start instruction is inputted to the image sensing apparatus 1 by, for example, fully pressing the shutter release button, which is part of the operation portion 17 as shown in FIG. 1.
- the image sensing apparatus 1 senses the first image and the second image in this order.
- the image sensing apparatus 1 waits for the image-sensing standby time to elapse after the image-sensing-start instruction is inputted in STEP 4 (STEP 5 ), and then starts sensing the first image (STEP 6 ). Then, after the image sensing of the first image, the image sensing apparatus 1 consecutively senses the second image (STEP 7 ), and then finishes the image sensing.
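- STEP 4 through STEP 7 amount to the control flow sketched below; the camera-facing calls are hypothetical stand-ins for whatever interface the CPU 15 and the image sensing portion 2 actually expose.

```python
import time

def capture_pair(camera, standby_time, first_settings, second_settings):
    """Sense the short-exposure first image, then the long-exposure second image.

    `camera.sense(exposure, iso)` is a hypothetical driver call; the point is
    only the ordering: wait out the image-sensing standby time, sense the
    first image, then sense the second image as soon as the sensor is ready.
    """
    time.sleep(standby_time)                  # STEP 5: wait for the standby time
    first = camera.sense(*first_settings)     # STEP 6: short exposure, high sensitivity
    second = camera.sense(*second_settings)   # STEP 7: long exposure, low sensitivity
    return first, second
```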
- FIG. 7 is a graph showing an example of how camera shake of an image sensing apparatus occurs after an image-sensing-start instruction is inputted.
- the graph of FIG. 7 illustrates experimentally obtained temporal variation of the camera shake occurring when the user uses a certain image sensing apparatus 1 .
- the camera shake occurrence pattern unique to the certain image sensing apparatus 1 shown in this graph will also be referred to as “camera shake pattern.”
- the point 0 along the axis that indicates time is the time point when the image-sensing-start instruction is inputted.
- the amount of camera shake is expressed in terms of angular velocity.
- the angular velocity in the yaw direction is indicated by a broken line
- the angular velocity in the pitch direction is indicated by a dash-dot line
- the angular velocity in the roll direction is indicated by a thin line
- the mean value of the angular velocities is indicated by a thick line.
- FIG. 8 is a perspective view of the image sensing apparatus illustrating the yaw, pitch, and roll directions.
- an optical axis O is substantially parallel to the horizontal plane
- a vertical axis V is substantially perpendicular to the horizontal plane
- a horizontal axis H is substantially perpendicular to the optical axis O and the vertical axis V.
- the yaw direction is a direction of rotation around the vertical axis V
- the pitch direction is a direction of rotation around the horizontal axis H
- the roll direction is a direction of rotation around the optical axis O.
- the direction (indicated by an arrow in the figure) that is clockwise with respect to the image sensing apparatus 1 is the forward direction.
- camera shake is large in all the three directions immediately after the image-sensing-start instruction is inputted.
- Such camera shake occurs (for example, during the period from 0 to 0.08 seconds after the input of the instruction) when the whole image sensing apparatus 1 is moved due to, for example, the shutter release button being fully pressed by the user in order to input the image-sensing-start instruction.
- the camera shake gradually becomes larger with time (for example, during a period starting at 0.13 seconds after the input of the instruction: hereinafter, referred to as “camera-shake-increase period”) due to causes such as a reaction occurring when the user takes his/her finger off the shutter release button and gradual reduction of the user's tension with which the user holds the image sensing apparatus 1 .
- the image sensing apparatus 1 senses the first image before sensing the second image that is to be sensed with a longer exposure time. Thus, the first image is sensed before the camera-shake-increase period starts.
- it is preferable that the image sensing apparatus 1 having the tendency of camera shake shown in FIG. 7 sense the first image during a period, before the camera-shake-increase period, in which overall camera shake is smaller (for example, the period between 0.08 and 0.13 seconds after the input of the image-sensing-start instruction, during which the absolute value of the mean value of camera shake is 0.5 or less).
- in this case, the image-sensing standby time is set to approximately 0.08 seconds. If the user wants to reduce camera shake in a particular direction, he/she may set the image-sensing standby time according to a period during which camera shake is small in that direction. For example, in the case shown in FIG. 7, if the user wants to reduce to the minimum the amount of blur due to camera shake in the roll direction, which is particularly difficult to correct, it is preferable that the image-sensing standby time be set to approximately 0.1 seconds.
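- One way to turn a measured camera shake pattern such as FIG. 7 into an image-sensing standby time is sketched below: given mean angular-velocity samples recorded after the image-sensing-start instruction, find the earliest instant from which the shake stays under a threshold long enough to fit the first exposure. The 0.5 threshold, the data layout, and the fallback of not waiting at all are assumptions made for illustration.

```python
import numpy as np

def standby_time_from_pattern(t, mean_shake, first_exposure, limit=0.5):
    """Pick the start of the earliest window where |mean shake| <= limit
    for at least `first_exposure` seconds.

    t:          sample times (s) after the image-sensing-start instruction
    mean_shake: mean angular velocity at those times (the thick line of FIG. 7)
    """
    quiet = np.abs(np.asarray(mean_shake, dtype=float)) <= limit
    start = None
    for i, ok in enumerate(quiet):
        if ok and start is None:
            start = i                                   # quiet window begins
        elif not ok:
            if start is not None and t[i] - t[start] >= first_exposure:
                return t[start]                         # window was long enough
            start = None                                # window too short, reset
    if start is not None and t[-1] - t[start] >= first_exposure:
        return t[start]
    return 0.0   # no suitable quiet window found: do not wait
```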
- the second image starts to be sensed immediately after the first image is sensed. More specifically, sensing of the second image starts when the image sensing portion 2 has finished sensing the first image and is ready for the next image sensing. In this way, the image-sensing standby time can be reduced to the minimum, to thereby reduce the amounts of blur in the first and second images.
- the image sensing of the first image with a shorter exposure time is performed before the image sensing of the second image with a longer exposure time. This helps securely prevent the image sensing of the first image from being performed during the camera-shake-increase period. As a result, the amount of blur in the first image can be reduced, and thus the amount of blur in the merged image can be effectively reduced.
- the provision of the image-sensing standby time makes it possible for the image sensing of the first image to be performed during a period when the amount of camera shake is particularly small (for example, in the period between 0.08 and 0.13 seconds in FIG. 7 ).
- the first image can be prevented from containing blur, and thus the amount of blur in the merged image can be reduced more effectively.
- since the image-sensing standby time is set based on the tendency of camera shake attributable to the structural features of the image sensing apparatus, such as the shape and the weight of the image sensing apparatus, there is no need to provide the image sensing apparatus with a sensor (for example, an angular velocity sensor) for detecting camera shake.
- the tendency of camera shake shown in FIG. 7 is merely an example.
- the tendency of camera shake differs depending on structural features of the image sensing apparatus such as the shape (that is, the shape of the portion that the user holds, the arrangement of the shutter release button, the position of the lens (optical axis)) and the weight of the image sensing apparatus.
- it is preferable that the tendency of camera shake be experimentally found beforehand (for example, before shipping) with respect to each model of the image sensing apparatus, so as to set the image-sensing standby time with respect to each model of the image sensing apparatus.
- Typical variations of the shape of the image sensing apparatus are shown in FIGS. 9A to 9C.
- FIG. 9A shows different types of the image sensing apparatus.
- the left one provided with a grip portion G to be held by one hand is a vertical type image sensing apparatus, and the right one whose whole body is held by two hands is a lateral type image sensing apparatus.
- FIG. 9B shows different shapes of the grip portion G of the vertical type image sensing apparatus.
- the grip portion G of the left image sensing apparatus protrudes in a direction substantially perpendicular to the image sensing direction, and the protruding grip portion G is inclined in a direction opposite to the image sensing direction such that the grip portion G is further away from the image sensing direction.
- FIG. 9C shows different positions of the shutter release button of the vertical type image sensing apparatus.
- the shutter release button S of the left image sensing apparatus is provided on the optical axis O.
- the shutter release button S of the right image sensing apparatus is provided at a position off the optical axis (specifically, above the optical axis O).
- if a set first exposure time is much shorter than a predetermined standard exposure time (for example, the camera shake limit exposure time), that is, if the relation "the first exposure time < the standard exposure time × k (where k < 1)" is satisfied, it may be judged that the first image is less likely to contain blur.
- the image-sensing standby time may be set to 0.
- in this way, the sensing of the first image can be performed as soon as possible. This makes it possible to prevent the usability of the image sensing apparatus from becoming so degraded that the user misses a photo opportunity, the composition of an obtained image is displaced from a desired composition, or the user suspects that the image-sensing-start instruction has not been correctly inputted.
- the image-sensing standby time may be set to 0.
- the image sensing of the first image may be started at the time point when the image sensing of the first image becomes ready to be started.
- the image-sensing standby time may be set such that the first image is sensed during a period of small camera shake that exists after the time at which the image sensing apparatus becomes ready to sense the first image.
- a limit value may be provided for the image-sensing standby time. This helps prevent the image sensing of the first image from starting too late. This helps prevent deterioration of the operability of the image sensing apparatus.
- the image-sensing standby time may also be provided between the sensing of the first and second images.
- the user may be allowed to change the image-sensing standby time. More specifically, the image-sensing standby time may be set according not only to a tendency of camera shake attributable to the structural features of the image sensing apparatus, but also to a tendency of camera shake attributable to the user.
- portions such as the image processing portion 6 and the merging processing portion 60 may each be operated by a control device such as a microcomputer. Furthermore, all or part of the functions realized by such a control device may be prepared in the form of a computer program so that all or part of those functions are realized as the computer program is executed on a program execution apparatus (for example, a computer).
- the image sensing apparatus 1 shown in FIG. 1 and the merging processing portion 60 shown in FIG. 2 can be realized in hardware or in a combination of hardware and software.
- a block diagram showing the blocks realized with software serves as a functional block diagram of those blocks.
- the present invention relates to an image sensing apparatus that obtains an image by image sensing and applies electrical blur correction processing to the image, and an image sensing method of the image sensing apparatus.
- the present invention relates to an image sensing apparatus that performs correction by sensing a plurality of images and merging them together, and its image sensing method.
Abstract
Description
D(x,y)=|P2(x,y)−P3(x,y)| (1)
P4(x,y)=α(x,y)×P2(x,y)+{1−α(x,y)}×P3(x,y) (2)
P(x,y)=β(x,y)×P1(x,y)+{1−β(x,y)}×P4(x,y) (4)
Claims (4)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2008316708 | 2008-12-12 | ||
JP2008-316708 | 2008-12-12 | ||
JP2008316708A JP5261765B2 (en) | 2008-12-12 | 2008-12-12 | Imaging apparatus and imaging method |
Publications (2)
Publication Number | Publication Date |
---|---|
US20100149350A1 US20100149350A1 (en) | 2010-06-17 |
US8223223B2 true US8223223B2 (en) | 2012-07-17 |
Family
ID=42240035
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/636,052 Expired - Fee Related US8223223B2 (en) | 2008-12-12 | 2009-12-11 | Image sensing apparatus and image sensing method |
Country Status (3)
Country | Link |
---|---|
US (1) | US8223223B2 (en) |
JP (1) | JP5261765B2 (en) |
CN (1) | CN101753816A (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9213883B2 (en) * | 2012-01-10 | 2015-12-15 | Samsung Electronics Co., Ltd. | Method and apparatus for processing depth image |
EP3125527A4 (en) * | 2014-03-28 | 2017-05-03 | FUJIFILM Corporation | Image processing device, photography device, image processing method, and image processing program |
CN105472263B (en) * | 2014-09-12 | 2018-07-13 | 聚晶半导体股份有限公司 | Image acquisition method and the image capture equipment for using the method |
GB2537886B (en) * | 2015-04-30 | 2022-01-05 | Wsou Invest Llc | An image acquisition technique |
JP6729574B2 (en) * | 2015-06-18 | 2020-07-22 | ソニー株式会社 | Image processing apparatus and image processing method |
CN108650472B (en) * | 2018-04-28 | 2020-02-04 | Oppo广东移动通信有限公司 | Method and device for controlling shooting, electronic equipment and computer-readable storage medium |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5790490A (en) * | 1996-05-10 | 1998-08-04 | Olympus Optical Co., Ltd. | Anti-shake camera |
JP2001346093A (en) | 2000-05-31 | 2001-12-14 | Matsushita Electric Ind Co Ltd | Blurred image correction device, blurred image correction method, and recording medium for recording blurred image correction program |
US20020122133A1 (en) | 2001-03-01 | 2002-09-05 | Nikon Corporation | Digital camera and image processing system |
JP2002258351A (en) | 2001-03-01 | 2002-09-11 | Nikon Corp | Electronic camera and image processing system |
US6487369B1 (en) * | 1999-04-26 | 2002-11-26 | Olympus Optical Co., Ltd. | Camera with blur reducing function |
WO2007010891A1 (en) | 2005-07-19 | 2007-01-25 | Sharp Kabushiki Kaisha | Imaging device |
US7212230B2 (en) * | 2003-01-08 | 2007-05-01 | Hewlett-Packard Development Company, L.P. | Digital camera having a motion tracking subsystem responsive to input control for tracking motion of the digital camera |
US20070122139A1 (en) | 2005-11-29 | 2007-05-31 | Seiko Epson Corporation | Controller, photographing equipment, control method of photographing equipment, and control program |
JP2007324770A (en) | 2006-05-30 | 2007-12-13 | Kyocera Corp | Imaging apparatus and imaging method |
US20080166115A1 (en) * | 2007-01-05 | 2008-07-10 | David Sachs | Method and apparatus for producing a sharp image from a handheld device containing a gyroscope |
US7460773B2 (en) * | 2005-12-05 | 2008-12-02 | Hewlett-Packard Development Company, L.P. | Avoiding image artifacts caused by camera vibration |
US20080316334A1 (en) | 2007-06-25 | 2008-12-25 | Core Logic, Inc. | Apparatus and method for processing image |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006345338A (en) * | 2005-06-10 | 2006-12-21 | Seiko Epson Corp | Image pickup device and imaging method |
JP2007194945A (en) * | 2006-01-19 | 2007-08-02 | Seiko Epson Corp | Imaging apparatus, and control method and program |
CN101601276B (en) * | 2006-12-22 | 2011-05-18 | 国立大学法人电气通信大学 | Jiggle measuring system and jiggle measuring method |
- 2008-12-12: JP application JP2008316708A filed; granted as patent JP5261765B2 (status: Expired - Fee Related)
- 2009-12-11: CN application CN200910253286A filed; published as CN101753816A (status: Pending)
- 2009-12-11: US application US12/636,052 filed; granted as US8223223B2 (status: Expired - Fee Related)
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5790490A (en) * | 1996-05-10 | 1998-08-04 | Olympus Optical Co., Ltd. | Anti-shake camera |
US6487369B1 (en) * | 1999-04-26 | 2002-11-26 | Olympus Optical Co., Ltd. | Camera with blur reducing function |
JP2001346093A (en) | 2000-05-31 | 2001-12-14 | Matsushita Electric Ind Co Ltd | Blurred image correction device, blurred image correction method, and recording medium for recording blurred image correction program |
US20020122133A1 (en) | 2001-03-01 | 2002-09-05 | Nikon Corporation | Digital camera and image processing system |
JP2002258351A (en) | 2001-03-01 | 2002-09-11 | Nikon Corp | Electronic camera and image processing system |
US7212230B2 (en) * | 2003-01-08 | 2007-05-01 | Hewlett-Packard Development Company, L.P. | Digital camera having a motion tracking subsystem responsive to input control for tracking motion of the digital camera |
WO2007010891A1 (en) | 2005-07-19 | 2007-01-25 | Sharp Kabushiki Kaisha | Imaging device |
US20080259175A1 (en) | 2005-07-19 | 2008-10-23 | Sharp Kabushiki Kaisha | Imaging Device |
US20070122139A1 (en) | 2005-11-29 | 2007-05-31 | Seiko Epson Corporation | Controller, photographing equipment, control method of photographing equipment, and control program |
JP2007150802A (en) | 2005-11-29 | 2007-06-14 | Seiko Epson Corp | Control apparatus, photographing apparatus, control method of photographing apparatus, and control program |
US7460773B2 (en) * | 2005-12-05 | 2008-12-02 | Hewlett-Packard Development Company, L.P. | Avoiding image artifacts caused by camera vibration |
JP2007324770A (en) | 2006-05-30 | 2007-12-13 | Kyocera Corp | Imaging apparatus and imaging method |
US20080166115A1 (en) * | 2007-01-05 | 2008-07-10 | David Sachs | Method and apparatus for producing a sharp image from a handheld device containing a gyroscope |
US20080316334A1 (en) | 2007-06-25 | 2008-12-25 | Core Logic, Inc. | Apparatus and method for processing image |
Also Published As
Publication number | Publication date |
---|---|
CN101753816A (en) | 2010-06-23 |
US20100149350A1 (en) | 2010-06-17 |
JP5261765B2 (en) | 2013-08-14 |
JP2010141657A (en) | 2010-06-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8154634B2 (en) | Image processing device that merges a plurality of images together, image shooting device provided therewith, and image processing method in which a plurality of images are merged together | |
US8290356B2 (en) | Imaging device with image blurring reduction function | |
US8872937B2 (en) | Image capture apparatus and image capturing method | |
JP4872797B2 (en) | Imaging apparatus, imaging method, and imaging program | |
US10410061B2 (en) | Image capturing apparatus and method of operating the same | |
US8488840B2 (en) | Image processing device, image processing method and electronic apparatus | |
EP2323374A1 (en) | Image pickup apparatus, image pickup method, and program | |
JP4935302B2 (en) | Electronic camera and program | |
US20080101710A1 (en) | Image processing device and imaging device | |
JP2011530208A (en) | Improved image formation using different resolution images | |
US8223223B2 (en) | Image sensing apparatus and image sensing method | |
US20100073546A1 (en) | Image Processing Device And Electric Apparatus | |
US8441554B2 (en) | Image capturing apparatus capable of extracting subject region from captured image | |
US8451366B2 (en) | Image capturing device with automatic focus function | |
US20080181506A1 (en) | Imaging apparatus | |
US20110109770A1 (en) | Imaging apparatus, imaging method, and program | |
JP2010062952A (en) | Imaging device, image processing device, method for processing image, program, and recording medium | |
TWI492618B (en) | Image pickup device and computer readable recording medium | |
KR101469544B1 (en) | Image processing method and apparatus, and digital photographing apparatus | |
US8243154B2 (en) | Image processing apparatus, digital camera, and recording medium | |
US20080100724A1 (en) | Image processing device and imaging device | |
JP2008172395A (en) | Imaging apparatus and image processing apparatus, method, and program | |
JP6024135B2 (en) | Subject tracking display control device, subject tracking display control method and program | |
JP2011155582A (en) | Imaging device | |
JP2010183253A (en) | Information display device and information display program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | AS | Assignment | Owner name: SANYO ELECTRIC CO., LTD., JAPAN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: FUKUMOTO, SHIMPEI; HATANAKA, HARUO; IIJIMA, YASUHIRO. REEL/FRAME: 023642/0982. Effective date: 20091207 |
 | FEPP | Fee payment procedure | Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
 | AS | Assignment | Owner name: XACTI CORPORATION, JAPAN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: SANYO ELECTRIC CO., LTD. REEL/FRAME: 032467/0095. Effective date: 20140305 |
 | AS | Assignment | Owner name: XACTI CORPORATION, JAPAN. Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE INCORRECT PATENT NUMBER 13/446,454 AND REPLACE WITH 13/466,454 PREVIOUSLY RECORDED ON REEL 032467 FRAME 0095. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT OF ASSIGNORS INTEREST. ASSIGNOR: SANYO ELECTRIC CO., LTD. REEL/FRAME: 032601/0646. Effective date: 20140305 |
 | REMI | Maintenance fee reminder mailed | |
 | LAPS | Lapse for failure to pay maintenance fees | |
 | STCH | Information on status: patent discontinuation | Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
 | FP | Lapsed due to failure to pay maintenance fee | Effective date: 20160717 |