WO2009139123A1 - Image processor and imaging device using the image processor - Google Patents

Image processor and imaging device using the image processor

Info

Publication number
WO2009139123A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
frame image
unit
resolution
super
Prior art date
Application number
PCT/JP2009/001913
Other languages
English (en)
Japanese (ja)
Inventor
岡田茂之
石井裕夫
Original Assignee
三洋電機株式会社 (Sanyo Electric Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 三洋電機株式会社 (Sanyo Electric Co., Ltd.)
Publication of WO2009139123A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 - Details of television systems
    • H04N 5/76 - Television signal recording
    • H04N 5/91 - Television signal processing therefor
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 - Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/21 - Intermediate information storage
    • H04N 1/2104 - Intermediate information storage for one or a few pictures
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 - Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/21 - Intermediate information storage
    • H04N 1/2104 - Intermediate information storage for one or a few pictures
    • H04N 1/2112 - Intermediate information storage for one or a few pictures using still video cameras
    • H04N 1/212 - Motion video recording combined with still video recording

Definitions

  • The present invention relates to an image processing apparatus having a function of processing both moving images and still images, and to an imaging apparatus equipped with the image processing apparatus.
  • Digital movie cameras that can capture both moving and still images are becoming popular. Such a digital movie camera is generally designed such that an image captured in the moving image shooting mode has a lower resolution than an image captured in the still image shooting mode. In the moving image shooting mode, reading of pixel data from the image sensor and subsequent various signal processing must be executed at high speed, and the load is higher than in the still image shooting mode.
  • To that end, a technique is used that reduces the load by reducing the number of pixels of each frame image. For example, there is a method of using only the pixel data output from the central pixel region of the effective pixel region of the image sensor, and a method of thinning out the pixel data of each captured frame image. Of course, when these methods are used, the resolution is lower than that of a frame image captured in the still image shooting mode.
  • Patent Document 1 discloses an image reading method for reading, from a photographed input image, a mixed image and a thinned image used for super-resolution (JP 2008-33914 A).
  • Some digital movie cameras are equipped with a function that can capture still images during video recording. With this function, a favorite still image can be recorded as an independent file without editing from a moving image file.
  • Since the resolution of the frame image is set low during moving image capturing, the resolution of a still image captured during moving image capturing is also low. It is therefore conceivable to increase the resolution of the still image afterwards, for example by interpolating pixel data into the still image; however, the resulting image is blurred because interpolation does not restore the high-frequency components.
  • The present invention has been made in view of such circumstances, and an object thereof is to provide an image processing apparatus that can produce a high-definition still image captured during moving image capturing, and an imaging apparatus equipped with the image processing apparatus.
  • An image processing apparatus according to one aspect includes: an encoding unit that encodes, as a moving image, frame images continuously captured by an imaging element; a holding unit that holds frame images captured by the imaging element; and a control unit that, when an instruction to capture a single still image is received while a moving image is being captured, performs control so that the target frame image corresponding to the capture instruction and at least one adjacent frame image adjacent to it in the time direction are registered in the holding unit as frame images to be used for super-resolution processing.
  • FIG. 1 is a diagram illustrating the configuration of an imaging device equipped with an image processing apparatus according to Embodiment 1.
  • FIG. 2 is a diagram for explaining the detailed configuration of the encoding unit.
  • FIG. 3 is a diagram illustrating the pixel region of the image sensor.
  • FIG. 4 is a diagram illustrating a plurality of frame images captured in the moving image shooting mode, together with the target frame image and adjacent frame images used to generate a still image.
  • FIG. 5 is a diagram illustrating the configuration of an imaging device equipped with an image processing apparatus according to Embodiment 2.
  • FIG. 6 is a diagram illustrating the configuration of an imaging device equipped with an image processing apparatus according to Embodiment 3.
  • FIG. 7 is a diagram for explaining the frame image thinning processing performed by the thinning processing unit according to Embodiment 3.
  • FIG. 1 is a diagram illustrating a configuration of an imaging apparatus 500 equipped with the image processing apparatus 200 according to the first embodiment.
  • The imaging device 500 includes an imaging unit 100, an image processing apparatus 200, a recording unit 300, and an operation unit 400.
  • The image processing apparatus 200 includes an encoding unit 20, a holding unit 30, a control unit 40, and a super-resolution processing unit 50.
  • The configuration of the image processing apparatus 200 can be realized in hardware by the CPU, memory, and other LSIs of any computer, and in software by a program loaded into memory; here it is depicted as functional blocks realized through their cooperation. Those skilled in the art will therefore understand that these functional blocks can be realized in various forms by hardware only, software only, or a combination thereof.
  • The imaging unit 100 converts incident light into an electrical signal and supplies it to the image processing apparatus 200.
  • The imaging unit 100 includes an image sensor 10 and a signal processing unit 12.
  • A CCD (Charge-Coupled Device) sensor, a CMOS (Complementary Metal-Oxide-Semiconductor) image sensor, or the like can be employed as the image sensor 10.
  • The signal processing unit 12 converts the analog three primary color signals R, G, and B output from the image sensor 10 into a digital luminance signal Y and color-difference signals Cr and Cb.
  • The encoding unit 20 encodes the frame images continuously captured by the image sensor 10 as a moving image.
  • For example, the frame images are compression-encoded in accordance with H.264/AVC, MPEG-2, MPEG-4, or the like.
  • The detailed configuration of the encoding unit 20 will be described later.
  • The compression-encoded moving image file is recorded in the recording unit 300.
  • The holding unit 30 temporarily holds frame images captured by the image sensor 10.
  • As the holding unit 30, a part of a frame buffer (not shown) provided in the encoding unit 20 may be used.
  • When the control unit 40 receives an instruction to capture one still image from the operation unit 400 while a moving image is being captured, it performs control so that the frame image corresponding to the capture instruction (hereinafter referred to as the target frame image) and at least one frame image adjacent to the target frame image in the time direction (hereinafter referred to as an adjacent frame image) are registered in the holding unit 30 as frame images to be used for super-resolution processing.
  • The target frame image and the adjacent frame image registered in the holding unit 30 may be frame images that have been compression-encoded by the encoding unit 20, or frame images that have not been compression-encoded.
  • The target frame image may be obtained by extracting, from the frame images continuously generated during moving image capturing, the frame image captured at the timing closest to that of the capture instruction.
  • The adjacent frame image may be one or more frame images adjacent to the extracted frame image in the past direction, one or more frame images adjacent in the future direction, or both.
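As an illustration only (the helper name and parameters below are ours, not from the patent), the selection of a target frame and its temporal neighbors might be sketched as:

```python
def select_frames(frame_times, capture_time, n_past=1, n_future=1):
    """Pick the frame captured closest to the still-image instruction as
    the target, plus up to n_past/n_future temporal neighbors as
    adjacent frames. Returns (target_index, adjacent_indices)."""
    # Target = frame whose timestamp is closest to the instruction.
    target = min(range(len(frame_times)),
                 key=lambda i: abs(frame_times[i] - capture_time))
    past = list(range(max(0, target - n_past), target))
    future = list(range(target + 1,
                        min(len(frame_times), target + 1 + n_future)))
    return target, past + future

# Frames at ~30 fps; still-image instruction arrives at t = 0.105 s.
times = [i / 30 for i in range(10)]
target, adjacent = select_frames(times, 0.105)
# Frame 3 (t = 0.1 s) is nearest, with neighbors 2 (past) and 4 (future).
```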
  • The super-resolution processing unit 50 performs super-resolution processing using the target frame image and its adjacent frame images held in the holding unit 30, and generates a high-resolution still image.
  • Super-resolution processing is a technique for generating, from a plurality of images having slight positional deviations, an image with a resolution higher than theirs, and it can restore high-frequency components.
  • When the frame images held in the holding unit 30 are compression-encoded, they are decoded by a decoding unit (not shown) and then supplied to the super-resolution processing unit 50.
  • The still image generated by the super-resolution processing unit 50 is recorded in the recording unit 300.
  • The super-resolution processing unit 50 can use, for example, the super-resolution processing disclosed in Shin Aoki, "Super-resolution processing using a plurality of digital image data", Ricoh Technical Report No. 24, November 1998.
  • This super-resolution processing consists of three steps: a position estimation step, a broadband interpolation step, and a weighted-sum step.
  • In the position estimation step, the shift of the sampling position of each image is estimated from the given plurality of images themselves.
  • In the broadband interpolation step, each image is densified using a wideband low-pass filter. This low-pass filter passes all high-frequency components of the original signal, including the aliasing components.
  • In the weighted-sum step, a weighted sum is calculated using weights corresponding to the sampling positions of the densified data. Thereby, the aliasing distortion components cancel out and the high-frequency components of the original signal are restored.
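The three steps can be illustrated with a deliberately simplified 1-D sketch (our construction, not the patent's algorithm: the sub-pixel shift is taken as known rather than estimated, and linear interpolation stands in for the wideband low-pass filter):

```python
# Toy 1-D illustration: two low-res samplings of one "scene", offset
# by half a low-res pixel, are merged onto a grid of twice the density.
dense = [float((i * 7) % 13) for i in range(16)]  # stand-in scene
low_a = dense[0::2]            # sampled at positions 0, 2, 4, ...
low_b = dense[1::2]            # same scene, shifted half a pixel

# Step 1 - position estimation: here the half-pixel shift is assumed
# known; a real system estimates it from the image data itself.
shift_b = 0.5

# Step 2 - broadband interpolation: densify each image onto the 2x
# grid (linear interpolation stands in for the wideband filter).
def densify(img):
    out = []
    for i, v in enumerate(img):
        out.append(v)
        nxt = img[i + 1] if i + 1 < len(img) else v
        out.append((v + nxt) / 2)
    return out

up_a = densify(low_a)
up_b = densify(low_b)

# Step 3 - weighted sum: give full weight to whichever image actually
# sampled each dense position. low_b's true samples land one dense-grid
# step later because shift_b = 0.5 low-res pixels.
offset_b = int(shift_b * 2)
recon = [up_a[j] if j % 2 == 0 else up_b[j - offset_b]
         for j in range(len(dense))]
# Even positions come from low_a's true samples and odd ones from
# low_b's, so the toy reconstruction recovers the dense scene exactly.
```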
  • Preferably, the super-resolution processing unit 50 executes the super-resolution processing after the moving image has been captured, and in particular immediately after moving image capturing ends. Note that when the super-resolution processing is executed immediately after capturing ends, the control unit 40 performs control so that, even if the user turns off the imaging device 500, the power-on state is maintained until the super-resolution processing is completed and the power is turned off only afterwards.
  • The recording unit 300 includes a recording medium such as a memory card, a hard disk, or an optical disk.
  • The operation unit 400 includes various buttons and keys and accepts operation instructions from the user.
  • The operation unit 400 transmits these operation instructions to the control unit 40.
  • In particular, the operation unit 400 receives moving image capture instructions and still image capture instructions, and transmits them to the control unit 40.
  • FIG. 2 is a diagram for explaining a detailed configuration of the encoding unit 20.
  • The encoding unit 20 includes a screen dividing unit 21, an orthogonal transform unit 22, a quantization unit 23, an intra-picture prediction encoding unit 24, an inter-picture prediction encoding unit 25, a variable-length encoding unit 26, a buffer 27, a quantization step determination unit 28, and a stream generation unit 29.
  • The screen dividing unit 21 divides each frame image into a plurality of areas; in this embodiment, each frame image is assumed to be divided into macroblocks.
  • The orthogonal transform unit 22 orthogonally transforms each frame image in units of macroblocks.
  • Specifically, the luminance signal Y and the color-difference signals Cr and Cb are DCT-transformed to generate DCT coefficients.
  • The quantization unit 23 quantizes the DCT coefficients generated by the orthogonal transform unit 22 with reference to a predetermined quantization table.
  • The quantization table defines the quantization step by which each DCT coefficient should be divided.
  • Generally, the quantization steps corresponding to the low-frequency components of the DCT coefficients are set small, and those corresponding to the high-frequency components are set large; as a result, more information is discarded in the higher-frequency components.
  • All or some of the quantization steps defined in the quantization table are adaptively controlled by the quantization step determination unit 28, so that the compression rate can be kept within a certain range.
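Frequency-dependent quantization can be sketched as follows (the 4x4 table and coefficient values are illustrative choices of ours, not taken from any codec standard):

```python
# Toy quantization table: low-frequency coefficients (top-left) get
# small steps, high-frequency ones (bottom-right) large steps, so
# high-frequency detail is discarded more aggressively.
qtable = [
    [ 4,  6, 10, 16],
    [ 6, 10, 16, 24],
    [10, 16, 24, 32],
    [16, 24, 32, 40],
]

def quantize(dct, table):
    # Divide each coefficient by its per-frequency step, round to nearest.
    return [[round(c / s) for c, s in zip(row, steps)]
            for row, steps in zip(dct, table)]

def dequantize(q, table):
    # Multiply back by the step; rounded-away detail is gone for good.
    return [[c * s for c, s in zip(row, steps)]
            for row, steps in zip(q, table)]

dct = [
    [120, 33, 12,  5],
    [ 30, 14,  6,  3],
    [ 11,  7,  4,  2],
    [  5,  3,  2,  1],
]
q = quantize(dct, qtable)
rec = dequantize(q, qtable)
# The DC term survives exactly, while small high-frequency
# coefficients are rounded away entirely.
```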
  • The quantization unit 23 supplies the DCT coefficients quantized in units of macroblocks to the intra-picture prediction encoding unit 24 or the inter-picture prediction encoding unit 25. Specifically, data to be encoded as an I picture is supplied to the intra-picture prediction encoding unit 24, and data to be encoded as a P picture or B picture is supplied to the inter-picture prediction encoding unit 25. However, when the data to be encoded as a P picture or B picture belongs to the target frame image or an adjacent frame image, it is supplied not only to the inter-picture prediction encoding unit 25 but also to the intra-picture prediction encoding unit 24.
  • The intra-picture prediction encoding unit 24 performs intra-picture prediction encoding of the DCT coefficients of I pictures.
  • For this, the intra-picture prediction encoding defined in the MPEG series can be used.
  • The inter-picture prediction encoding unit 25 performs inter-picture prediction encoding of the DCT coefficients of P pictures and B pictures.
  • For this, the inter-picture prediction encoding defined in the MPEG series, that is, motion compensation, can be used.
  • For each motion compensation block, the inter-picture prediction encoding unit 25 searches past or future reference pictures for the prediction region with the smallest error (hereinafter referred to as the optimal prediction block), and determines a motion vector indicating the displacement between the motion compensation block and its optimal prediction block. Motion compensation is then performed for each motion compensation block using the motion vector, and a prediction error signal is generated.
  • The size of a motion compensation block can be appropriately selected from 16×16, 16×8, 8×16, 8×8, 8×4, 4×8, and 4×4. Note that motion compensation is actually performed on pixel values rather than DCT coefficients, but to simplify the description the inverse quantization and inverse orthogonal transform paths are omitted in FIG. 2.
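The motion vector search can be sketched as a toy full-search block matcher (our simplification; a real encoder adds sub-pixel refinement, rate-distortion criteria, and faster search patterns):

```python
def sad(a, b):
    """Sum of absolute differences between two flattened blocks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def block_at(frame, y, x, size):
    return [frame[y + dy][x + dx]
            for dy in range(size) for dx in range(size)]

def find_motion_vector(cur, ref, y, x, size=2, radius=2):
    """Search ref within a +/-radius window for the block that best
    matches the current block at (y, x); return ((dy, dx), error)."""
    target = block_at(cur, y, x, size)
    best_mv, best_err = (0, 0), float("inf")
    h, w = len(ref), len(ref[0])
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ry, rx = y + dy, x + dx
            if 0 <= ry <= h - size and 0 <= rx <= w - size:
                err = sad(target, block_at(ref, ry, rx, size))
                if err < best_err:
                    best_mv, best_err = (dy, dx), err
    return best_mv, best_err

# Reference frame with a bright 2x2 patch; in the current frame the
# patch has moved one pixel right and one pixel down.
ref = [[0] * 6 for _ in range(6)]
ref[1][1] = ref[1][2] = ref[2][1] = ref[2][2] = 9
cur = [[0] * 6 for _ in range(6)]
cur[2][2] = cur[2][3] = cur[3][2] = cur[3][3] = 9
mv, err = find_motion_vector(cur, ref, 2, 2)
# The optimal prediction block lies at (-1, -1) with zero error.
```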
  • The variable-length encoding unit 26 entropy-encodes the DCT coefficients, prediction error signals, motion vectors, and other parameters generated by the intra-picture prediction encoding unit 24, the inter-picture prediction encoding unit 25, and so on.
  • The buffer 27 temporarily stores the encoded data produced by the variable-length encoding unit 26.
  • The buffer 27 supplies the code amount of each encoded picture, or the buffer occupancy, to the quantization step determination unit 28.
  • The quantization step determination unit 28 adaptively changes the quantization step based on the code amount of each picture or the buffer occupancy.
  • For example, the quantization step determination unit 28 can change the quantization step based on a predetermined conversion table, set so that the quantization step increases as the code amount of each picture or the buffer occupancy increases, and decreases as they decrease.
  • The stream generation unit 29 converts the encoded data of each picture stored in the buffer 27 into a stream; header information may be added at this time. The encoded data streamed by the stream generation unit 29 is recorded in the recording unit 300.
  • The control unit 40 performs control so that the encoded data of the target frame image and the adjacent frame image stored in the buffer 27 is output to the holding unit 30.
  • This encoded data is obtained by intra-picture prediction encoding in the intra-picture prediction encoding unit 24, for use in super-resolution processing.
  • Note that the encoded data of the target frame image and the adjacent frame image that was encoded for the moving image by the intra-picture prediction encoding unit 24 or the inter-picture prediction encoding unit 25 is output not to the holding unit 30 but to the stream generation unit 29.
  • FIG. 3 is a diagram illustrating a pixel region of the image sensor 10.
  • The effective pixel area 11b is the area actually used for recording.
  • In the still image shooting mode, a frame image captured over the entire effective pixel area 11b is recorded.
  • In the moving image shooting mode, a frame image captured in a part of the effective pixel area 11b is recorded; specifically, a frame image captured in the central pixel region 11c of the effective pixel area 11b is recorded.
  • The number of pixels in the central pixel region 11c is about half that of the effective pixel area 11b.
  • Therefore, in the moving image shooting mode, the amount of data can be reduced to about half by using only the central pixel region 11c.
  • FIG. 4 is a diagram illustrating a plurality of frame images captured by the image sensor 10 in the moving image shooting mode, together with the target frame image and adjacent frame images used to generate a still image.
  • In FIG. 4, a hatched frame image indicates a frame image to be intra-picture prediction encoded, and an unhatched frame image indicates a frame image to be inter-picture prediction encoded.
  • Three frames in total, namely the target frame image F1, the first adjacent frame image F0 adjacent to it in the past direction, and the second adjacent frame image F2 adjacent to it in the future direction, are extracted from the group of frame images captured for the moving image.
  • The first adjacent frame image F0, the target frame image F1, and the second adjacent frame image F2 are all inter-picture prediction encoded for the moving image, but are intra-picture prediction encoded for the still image.
  • As described above, according to Embodiment 1, a high-resolution still image captured during moving image capturing can be generated by performing super-resolution processing using the target frame image and its adjacent frame images. Further, when the target frame image and adjacent frame images are temporarily stored during moving image capturing and super-resolution processing is performed on them after moving image capturing ends, increases in circuit scale and power consumption can be suppressed. By contrast, if super-resolution processing were executed in parallel during moving image capturing, high-spec hardware resources would be required and power consumption would increase, because super-resolution processing is a high-load process.
  • FIG. 5 is a diagram illustrating a configuration of an imaging apparatus 500 equipped with the image processing apparatus 200 according to the second embodiment.
  • The image processing apparatus 200 according to the second embodiment has a configuration in which an interpolation processing unit 60 is added to the image processing apparatus 200 according to the first embodiment illustrated in FIG. 1.
  • Hereinafter, processing different from the first embodiment will be described.
  • The interpolation processing unit 60 can increase the resolution of the target frame image by spatially interpolating predetermined pixel data into it.
  • The pixel data to be interpolated is generated by a simple linear interpolation process or by an interpolation process using an FIR filter.
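Spatial interpolation of pixel data can be sketched as a toy 2x bilinear upscaler (our illustration; an FIR-filter-based interpolator would weight more taps). Note that, as stated earlier, such interpolation cannot restore high-frequency components:

```python
def upscale_2x(img):
    """Double each dimension; every output pixel is the average of the
    (up to four) nearest source pixels, clamped at the border."""
    h, w = len(img), len(img[0])
    def src(y, x):                      # clamp to the image border
        return img[min(y, h - 1)][min(x, w - 1)]
    out = [[0.0] * (2 * w) for _ in range(2 * h)]
    for y in range(2 * h):
        for x in range(2 * w):
            y0, x0 = y // 2, x // 2
            y1, x1 = y0 + (y % 2), x0 + (x % 2)
            out[y][x] = (src(y0, x0) + src(y1, x0)
                         + src(y0, x1) + src(y1, x1)) / 4
    return out

img = [[0, 4],
       [8, 12]]
up = upscale_2x(img)   # 4x4 image; interpolated pixels are midpoints
```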
  • When the control unit 40 determines, by referring to at least one of the motion of the subject and the degree of correlation between the target frame image and an adjacent frame image, that the adjacent frame image is not suitable for super-resolution processing, it causes the interpolation processing unit 60 to increase the resolution of the target frame image.
  • Otherwise, the super-resolution processing unit 50 increases the resolution of the target frame image; in that case, the processing is the same as described in the first embodiment.
  • More precisely, the control unit 40 causes the interpolation processing unit 60 to increase the resolution of the target frame image only when it determines that none of the adjacent frame images is suitable for super-resolution processing. If any adjacent frame image suitable for super-resolution processing exists, the control unit 40 causes the super-resolution processing unit 50 to execute super-resolution processing using that adjacent frame image.
  • The control unit 40 determines at least one of the motion of the subject and the degree of correlation between the target frame image and an adjacent frame image; when there are a plurality of adjacent frame images, the determination is made for each of them.
  • For example, the control unit 40 acquires a motion vector for each motion compensation block between the target frame image and an adjacent frame image, and calculates the cumulative value or the average value of those motion vectors. When this cumulative or average value exceeds a predetermined first threshold, the adjacent frame image is determined to be unsuitable for super-resolution processing; when it is equal to or less than the first threshold, the adjacent frame image is determined to be suitable.
  • The control unit 40 can acquire the motion vector of each motion compensation block from the inter-picture prediction encoding unit 25 illustrated in FIG. 2; if frame images to be compression-encoded as I pictures are also designed to be supplied to the inter-picture prediction encoding unit 25, their motion vectors can be acquired as well.
  • Alternatively, the control unit 40 acquires, for each motion compensation block between the target frame image and an adjacent frame image, the pixel difference value from the optimal prediction block, and calculates the cumulative value of those pixel difference values. When this cumulative value exceeds a predetermined second threshold, the adjacent frame image is determined to be unsuitable for super-resolution processing; when it is equal to or less than the second threshold, the adjacent frame image is determined to be suitable.
  • The control unit 40 can likewise acquire the pixel difference value of each motion compensation block from the inter-picture prediction encoding unit 25 illustrated in FIG. 2.
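The two suitability criteria can be sketched together as follows (the function name and threshold values are illustrative; the patent states only that the thresholds are set by experiment or simulation):

```python
def suitable_for_sr(motion_vectors, pixel_diffs,
                    mv_threshold=8.0, diff_threshold=500):
    """Return True if an adjacent frame is suitable for super-resolution:
    the average motion-vector magnitude and the accumulated prediction
    error must both stay at or below their (illustrative) thresholds."""
    avg_motion = sum(abs(dy) + abs(dx)
                     for dy, dx in motion_vectors) / len(motion_vectors)
    total_diff = sum(pixel_diffs)
    return avg_motion <= mv_threshold and total_diff <= diff_threshold

# Small motion and low residual: keep the frame for super-resolution.
ok = suitable_for_sr([(0, 1), (1, 0), (0, 0)], [40, 55, 30])
# Large motion: fall back to spatial interpolation instead.
bad = suitable_for_sr([(9, 12), (10, 11), (8, 14)], [40, 55, 30])
```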
  • The control unit 40 may determine whether an adjacent frame image is suitable for super-resolution processing during moving image capturing, or after moving image capturing has ended.
  • The control unit 40 deletes from the holding unit 30 any adjacent frame image determined to be unsuitable for super-resolution processing; when the determination can be made before registration in the holding unit 30, it performs control so that the adjacent frame image is not registered there in the first place.
  • The first threshold and the second threshold are set to values obtained by the designer through experiments or simulations.
  • As described above, according to the second embodiment, when the adjacent frame images are unsuitable, the target frame image is increased in resolution by spatial interpolation, so the generation of a still picture containing a lot of noise can be suppressed.
  • In super-resolution processing, if an image that does not satisfy the premise of a slight positional deviation is used, the resulting image will contain a lot of noise.
  • In such cases, the resolution is increased by spatial interpolation as the next-best measure.
  • FIG. 6 is a diagram illustrating a configuration of an imaging apparatus 500 equipped with the image processing apparatus 200 according to the third embodiment.
  • The image processing apparatus 200 according to the third embodiment has a configuration in which a thinning processing unit 70 is added to the image processing apparatus 200 according to the first embodiment illustrated in FIG. 1. Hereinafter, processing different from the first embodiment will be described.
  • The thinning processing unit 70 thins out the pixel data of the frame images continuously captured for the moving image and supplies the thinned frame images to the encoding unit 20. This thinning process is executed in the moving image shooting mode and not in the normal still image shooting mode. Since moving image compression encoding is a high-load process, the thinning processing unit 70 reduces the resolution of the frame images in the moving image shooting mode to lighten the load on the encoding unit 20. Whereas the approach of FIG. 3 reduces the number of pixels read from the pixel area of the image sensor 10 in the moving image shooting mode, the thinning processing unit 70 reduces the number of pixels by thinning out the pixel data of the frame images after they are read from the image sensor 10. Of course, the two methods can be used together.
  • The control unit 40 controls the thinning processing unit 70 so that it does not thin out the pixel data of the target frame image and the adjacent frame image.
  • FIG. 7 is a diagram for explaining frame image thinning processing by the thinning processing unit 70 according to the third embodiment.
  • A frame image captured in the central pixel region 11c illustrated in FIG. 3 is input to the thinning processing unit 70.
  • The thinning processing unit 70 thins out the pixel data of this frame image in a staggered (checkerboard) pattern; the pixel data of the pixels indicated by diagonal lines in FIG. 7 is thinned out.
  • Thereby, the number of pixels of the frame image can be reduced to half.
  • The pixel data of the frame image may instead be thinned out every other row or every other column rather than in a staggered pattern.
  • In this example, the number of pixels of the frame image after thinning is 1/4 of the number of pixels in the effective pixel region 11b shown in FIG. 3. Since the pixel data of the target frame image and the adjacent frame image is not thinned out by the thinning processing unit 70, their number of pixels is 1/2 of the number of pixels in the effective pixel region 11b.
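The staggered thinning pattern can be sketched as follows (our illustration of the checkerboard decimation described above; here the kept pixels are those where row + column is even):

```python
def thin_staggered(img):
    """Checkerboard ("staggered") thinning: keep the pixel at (y, x)
    only when (y + x) is even, halving the pixel count."""
    return [[v for x, v in enumerate(row) if (y + x) % 2 == 0]
            for y, row in enumerate(img)]

img = [[ 1,  2,  3,  4],
       [ 5,  6,  7,  8],
       [ 9, 10, 11, 12],
       [13, 14, 15, 16]]
thinned = thin_staggered(img)
# Each row keeps 2 of its 4 pixels: [[1, 3], [6, 8], [9, 11], [14, 16]]
```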
  • As described above, according to the third embodiment, the load on the encoding unit can be reduced by thinning out the pixel data of the frame images for the moving image before compression encoding. At the same time, by controlling so that the pixel data of the target frame image and the adjacent frame image is not thinned out, a decrease in the resolution of a still image captured during moving image capturing can be avoided. That is, reducing the load on the encoding unit and avoiding a resolution decrease of the still image can both be achieved.
  • In the description of FIG. 2, the example in which the target frame image and the adjacent frame image are intra-picture prediction encoded by the intra-picture prediction encoding unit 24 has been described.
  • In this regard, the target frame image and the adjacent frame image may instead be registered in the holding unit 30 as they are, without compression encoding.
  • In that case, although the data amount of the frame images registered in the holding unit 30 is not reduced, distortion due to compression encoding can be avoided.
  • Further, when the intra-picture prediction encoding unit 24 intra-picture prediction encodes the target frame image and the adjacent frame image, the quantization processing by the preceding quantization unit 23 may be skipped; in this case, irreversibly removing part of the high-frequency components can be avoided.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Processing (AREA)
  • Television Signal Processing For Recording (AREA)
  • Studio Devices (AREA)

Abstract

An encoding unit (20) encodes, as a moving image, frame images continuously captured by an imaging element (10). A holding unit (30) temporarily holds the frame images captured by the imaging element (10). When an instruction to capture a still image is received while a moving image is being captured, a control unit (40) performs control to register the target frame image corresponding to the capture instruction, together with at least one frame image adjacent to it in the time direction, as frame images to be used in super-resolution processing. A super-resolution processing unit (50) performs super-resolution processing using the target frame image and the adjacent frame image registered in the holding unit (30) to create a still image of higher resolution.
PCT/JP2009/001913 2008-05-15 2009-04-27 Image processor and imaging device using the image processor WO2009139123A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2008128917A JP2011151430A (ja) 2008-05-15 2008-05-15 Image processing apparatus and imaging apparatus equipped with the same
JP2008-128917 2008-05-15

Publications (1)

Publication Number Publication Date
WO2009139123A1 (fr) 2009-11-19

Family

ID=41318500

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2009/001913 WO2009139123A1 (fr) 2008-05-15 2009-04-27 Image processor and imaging device using the image processor

Country Status (2)

Country Link
JP (1) JP2011151430A (fr)
WO (1) WO2009139123A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011160299A (ja) * 2010-02-02 2011-08-18 Konica Minolta Holdings Inc Three-dimensional image capturing system and camera for three-dimensional image capturing system

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8554579B2 (en) 2008-10-13 2013-10-08 Fht, Inc. Management, reporting and benchmarking of medication preparation
AU2013335277B2 (en) * 2012-10-26 2015-07-16 Baxter Corporation Englewood Improved image acquisition for medical dose preparation system
CA2889352C (fr) 2012-10-26 2021-12-07 Baxter Corporation Englewood Improved workstation for medication dose preparation system
JP6253338B2 (ja) * 2013-10-17 2017-12-27 キヤノン株式会社 (Canon Inc.) Video processing apparatus and control method for video processing apparatus
CA2953392A1 (fr) 2014-06-30 2016-01-07 Baxter Corporation Englewood Managed medical information exchange
US11575673B2 (en) 2014-09-30 2023-02-07 Baxter Corporation Englewood Central user management in a distributed healthcare information management system
US11107574B2 (en) 2014-09-30 2021-08-31 Baxter Corporation Englewood Management of medication preparation with formulary management
EP3937116A1 (fr) 2014-12-05 2022-01-12 Baxter Corporation Englewood Dose preparation data analysis
JP2018507487A (ja) 2015-03-03 2018-03-15 Baxter Corporation Englewood Pharmacy workflow management with alert integration
JP2019212138A (ja) 2018-06-07 2019-12-12 コニカミノルタ株式会社 (Konica Minolta, Inc.) Image processing apparatus, image processing method, and program

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6489870A (en) * 1987-09-30 1989-04-05 Nippon Denki Home Electronics Television receiver
JP2002135793A (ja) * 2000-10-20 2002-05-10 Victor Co Of Japan Ltd Color imaging device
JP2006119843A (ja) * 2004-10-20 2006-05-11 Olympus Corp Image generation method and apparatus
JP2007151080A (ja) * 2005-10-27 2007-06-14 Canon Inc Image processing apparatus and image processing method


Also Published As

Publication number Publication date
JP2011151430A (ja) 2011-08-04

Similar Documents

Publication Publication Date Title
WO2009139123A1 (fr) Image processor and imaging device using the image processor
US20100215104A1 Method and System for Motion Estimation
JP4804107B2 (ja) Image encoding apparatus, image encoding method, and program therefor
WO2008153619A1 (fr) Shutter duration compensation
JP2006157481A (ja) Image encoding apparatus and method
WO2009130886A1 (fr) Moving image encoding device, imaging device, and moving image encoding method
US20110255597A1 Method and System for Reducing Flicker Artifacts
US8705628B2 Method and device for compressing moving image
JP2010057166A (ja) Image encoding apparatus, image encoding method, integrated circuit, and camera
JP2010183181A (ja) Image processing apparatus and imaging apparatus equipped with the same
JP2012175424A (ja) Encoding processing apparatus and encoding processing method
JP2008244993A (ja) Apparatus and method for transcoding
JP2009218965A (ja) Image processing apparatus, imaging apparatus equipped with the same, and image reproduction apparatus
JP4911625B2 (ja) Image processing apparatus and imaging apparatus equipped with the same
JP2007214886A (ja) Image processing apparatus
JP6313614B2 (ja) Moving image encoding apparatus and control method therefor
JP6152642B2 (ja) Moving image compression apparatus, moving image decoding apparatus, and program
JP4700992B2 (ja) Image processing apparatus
JP2018032909A (ja) Image encoding apparatus, control method therefor, imaging apparatus, and program
JP5507702B2 (ja) Moving image encoding method and moving image encoding apparatus
JP2008289105A (ja) Image processing apparatus and imaging apparatus equipped with the same
JP2011055023A (ja) Image encoding apparatus and image decoding apparatus
JP2009278473A (ja) Image processing apparatus, imaging apparatus equipped with the same, and image reproduction apparatus
JP4880868B2 (ja) Moving image compression method and apparatus for improved performance
JP5171675B2 (ja) Image processing apparatus and imaging apparatus equipped with the same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09746330

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09746330

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP