WO2019065917A1 - Moving image compression device, electronic apparatus, and moving image compression program - Google Patents

Moving image compression device, electronic apparatus, and moving image compression program

Info

Publication number
WO2019065917A1
Authority
WO
WIPO (PCT)
Prior art keywords
prediction
unit
resolution
imaging
image area
Prior art date
Application number
PCT/JP2018/036131
Other languages
English (en)
Japanese (ja)
Inventor
大作 小宮
直樹 關口
Original Assignee
Nikon Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nikon Corporation
Publication of WO2019065917A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 - Selection of coding mode or of prediction mode
    • H04N19/11 - Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/119 - Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules

Definitions

  • The present invention relates to a moving image compression apparatus, an electronic device, and a moving image compression program.
  • An electronic device is known that includes an imaging element (hereinafter referred to as a stacked imaging element) in which a back-side illuminated imaging chip and a signal processing chip are stacked (see Patent Document 1).
  • The stacked imaging element is stacked such that the back-side illuminated imaging chip and the signal processing chip are connected via microbumps in each predetermined area.
  • However, although frames captured at a plurality of resolutions are output from such an imaging element, moving image compression of such frames has not conventionally been considered.
  • A moving image compression apparatus, which is one aspect of the technology disclosed in the present application, is a moving image compression apparatus that compresses moving image data including a plurality of frames generated from the output of an imaging element having a plurality of imaging areas in which different resolutions can be set.
  • The apparatus includes: a setting unit configured to set, based on the resolution of a prediction target image area among a plurality of image areas corresponding to the plurality of imaging areas in a prediction target frame among the plurality of frames, a prediction processing unit for predicting the prediction target image area; a prediction unit configured to predict the prediction target image area based on the prediction processing unit set by the setting unit; and an encoding unit configured to encode the prediction target frame using a prediction result of the prediction unit.
  • An electronic device, which is one aspect of the technology disclosed in the present application, includes: an imaging element having a plurality of imaging areas in which different resolutions can be set; a setting unit configured to set, based on the resolution of a prediction target image area among a plurality of image areas corresponding to the plurality of imaging areas in a prediction target frame among a plurality of frames generated from the output of the imaging element, a prediction processing unit for predicting the prediction target image area; a prediction unit that predicts the prediction target image area based on the prediction processing unit set by the setting unit; and an encoding unit that encodes the prediction target frame using a prediction result of the prediction unit.
  • A moving image compression program, which is one aspect of the technology disclosed in the present application, causes a processor to compress moving image data including a plurality of frames generated from the output of an imaging element having a plurality of imaging areas in which different resolutions can be set.
  • The moving image compression program causes the processor to: set, based on the resolution of a prediction target image area among a plurality of image areas corresponding to the plurality of imaging areas in a prediction target frame among the plurality of frames, a prediction processing unit for predicting the prediction target image area; predict the prediction target image area based on the set prediction processing unit; and encode the prediction target frame using the prediction result.
  • FIG. 1 is a cross-sectional view of a stacked imaging device.
  • FIG. 2 is a diagram for explaining the pixel array of the imaging chip.
  • FIG. 3 is a circuit diagram of the imaging chip.
  • FIG. 4 is a block diagram showing an example of the functional configuration of the imaging device.
  • FIG. 5 is an explanatory view showing an example of the block configuration of the electronic device.
  • FIG. 6 is an explanatory view showing a configuration example of a moving image file.
  • FIG. 7 is an explanatory view showing the relationship between the imaging plane and the subject image.
  • FIG. 8 is an explanatory view showing a specific configuration example of a moving image file.
  • FIG. 9 is an explanatory view showing an example of imaging on an imaging plane in which different resolutions are set.
  • FIG. 10 is an explanatory view showing a prediction example of 16 ⁇ 16 prediction.
  • FIG. 11 is an explanatory view showing a prediction example of 4 ⁇ 4 prediction.
  • FIG. 12 is a block diagram showing a configuration example of the control unit shown in FIG. 5.
  • FIG. 13 is a block diagram showing a configuration example of the compression unit.
  • FIG. 14 is a flowchart illustrating an example of a preprocessing procedure by the preprocessing unit.
  • FIG. 15 is a flowchart illustrating an example of an image processing procedure by the image processing unit.
  • FIG. 16 is a flowchart of an example of the intra-frame prediction processing procedure by the intra-frame prediction processing unit.
  • The stacked imaging element is described in Japanese Patent Application No. 2012-139026, filed by the applicant of the present application.
  • the electronic device is, for example, an imaging device such as a digital camera or a digital video camera.
  • FIG. 1 is a cross-sectional view of a stacked imaging device 100.
  • A stacked imaging element (hereinafter simply referred to as "imaging element") 100 includes a back-side illuminated imaging chip (hereinafter simply referred to as "imaging chip") 113 that outputs pixel signals corresponding to incident light, a signal processing chip 111 that processes the pixel signals, and a memory chip 112 that stores the pixel signals.
  • the imaging chip 113, the signal processing chip 111, and the memory chip 112 are stacked and electrically connected to each other by the bump 109 having conductivity such as Cu.
  • incident light is mainly incident in the Z-axis plus direction indicated by a white arrow.
  • the surface on which incident light is incident is referred to as the back surface.
  • The left direction in the drawing orthogonal to the Z axis is taken as the plus direction of the X axis, and the near direction in the drawing orthogonal to the Z axis and the X axis is taken as the plus direction of the Y axis.
  • In some of the subsequent figures, coordinate axes are displayed so that the orientation of each figure can be understood with reference to the coordinate axes in FIG. 1.
  • the imaging chip 113 is a backside illuminated MOS (Metal Oxide Semiconductor) image sensor.
  • the PD (photodiode) layer 106 is disposed on the back side of the wiring layer 108.
  • The PD layer 106 includes a plurality of photodiodes (PDs) 104, arranged two-dimensionally, which store charge corresponding to incident light, and transistors 105 provided corresponding to the PDs 104.
  • a color filter 102 is provided on the incident side of incident light in the PD layer 106 via a passivation film 103.
  • the color filter 102 has a plurality of types that transmit different wavelength regions, and has a specific arrangement corresponding to each of the PDs 104. The arrangement of the color filters 102 will be described later.
  • the combination of the color filter 102, the PD 104, and the transistor 105 forms one pixel.
  • a microlens 101 is provided on the color filter 102 on the incident side of the incident light corresponding to each pixel.
  • the microlenses 101 condense incident light toward the corresponding PDs 104.
  • the wiring layer 108 has a wiring 107 for transmitting the pixel signal from the PD layer 106 to the signal processing chip 111.
  • The wiring 107 may be multilayered, and passive elements and active elements may also be provided.
  • a plurality of bumps 109 are disposed on the surface of the wiring layer 108.
  • These bumps 109 are aligned with a plurality of bumps 109 provided on the facing surface of the signal processing chip 111, and by pressurizing the imaging chip 113 and the signal processing chip 111 or the like, the aligned bumps 109 are bonded and electrically connected.
  • a plurality of bumps 109 are disposed on the surfaces facing each other of the signal processing chip 111 and the memory chip 112. These bumps 109 are aligned with each other, and the signal processing chip 111 and the memory chip 112 are pressurized or the like, whereby the aligned bumps 109 are joined and electrically connected.
  • the bonding between the bumps 109 is not limited to Cu bump bonding by solid phase diffusion, and micro bump bonding by solder melting may be employed. Also, for example, about one bump 109 may be provided for one block described later. Therefore, the size of the bumps 109 may be larger than the pitch of the PDs 104. Further, in the peripheral area other than the pixel area in which the pixels are arranged, bumps larger than the bumps 109 corresponding to the pixel area may be provided.
  • The signal processing chip 111 has through-silicon vias (TSVs) 110 that mutually connect circuits provided on its front and back surfaces.
  • the TSVs 110 are preferably provided in the peripheral area.
  • the TSV 110 may also be provided in the peripheral area of the imaging chip 113 and the memory chip 112.
  • FIG. 2 is a diagram for explaining the pixel arrangement of the imaging chip 113. In particular, it shows the imaging chip 113 as observed from the back surface side.
  • Part (a) is a plan view schematically showing the imaging surface 200, which is the back surface of the imaging chip 113, and part (b) is an enlarged plan view of a partial area 200a of the imaging surface 200.
  • Each of the pixels 201 has a color filter (not shown).
  • The color filters are of three types, red (R), green (G), and blue (B), and the notations "R", "G", and "B" in (b) represent the type of color filter that each pixel 201 has. As shown in (b), on the imaging surface 200 of the imaging element 100, the pixels 201 provided with these color filters are arranged according to a so-called Bayer arrangement.
  • the pixel 201 having a red filter photoelectrically converts light in the red wavelength band of incident light and outputs a light reception signal (photoelectric conversion signal).
  • the pixel 201 having a green filter photoelectrically converts light in the green wavelength band among incident light and outputs a light reception signal.
  • the pixel 201 having a blue filter photoelectrically converts light in the blue wavelength band among incident light and outputs a light reception signal.
  • The imaging element 100 can be controlled individually for each unit group 202, each consisting of four adjacent pixels 201 (2 pixels × 2 pixels). For example, when charge accumulation is started simultaneously in two mutually different unit groups 202, charge may be read out (that is, the light reception signals read) 1/30 second after the start of accumulation in one unit group 202, and 1/15 second after the start of accumulation in the other unit group 202. In other words, the imaging element 100 can set a different exposure time (charge accumulation time, so-called shutter speed) for each unit group 202 in a single imaging operation.
  • the imaging device 100 can make the amplification factor (so-called ISO sensitivity) of an imaging signal different for each unit group 202 besides the above-described exposure time.
  • the imaging device 100 can change the timing to start the charge accumulation and the timing to read out the light reception signal for each unit group 202. That is, the imaging element 100 can change the frame rate at the time of moving image capturing for each unit group 202.
  • the imaging device 100 is configured to be able to make the imaging conditions such as the exposure time, the amplification factor, the frame rate, and the resolution different for each unit group 202.
  • a reading line (not shown) for reading an imaging signal from a photoelectric conversion unit (not shown) of the pixel 201 is provided for each unit group 202, and the imaging signal can be read independently for each unit group 202.
  • the exposure time (shutter speed) can be made different for each unit group 202.
  • an amplification circuit (not shown) for amplifying an imaging signal generated by the photoelectrically converted charge is provided independently for each unit group 202, and the amplification factor by the amplification circuit can be controlled independently for each amplification circuit.
  • the amplification factor (ISO sensitivity) of the signal can be made different for each unit group 202.
  • The imaging conditions that can be varied for each unit group 202 include the frame rate, the gain, the resolution (thinning rate), the number of added rows or added columns for adding pixel signals, the charge accumulation time or accumulation count, the number of bits for digitization, and the like.
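  • As a rough illustration of this per-unit-group control, the following sketch models the imaging surface as a grid of unit groups, each carrying its own imaging conditions; the class and field names are illustrative assumptions, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class ImagingConditions:
    """Hypothetical per-unit-group imaging conditions (names are assumptions)."""
    exposure_time_s: float  # charge accumulation time (so-called shutter speed)
    iso_gain: int           # amplification factor (so-called ISO sensitivity)
    frame_rate_fps: float   # frame rate at the time of moving image capturing
    resolution: str         # e.g. "A" (high) or "B" (low / thinned)

# A small grid of unit groups, each independently configurable.
grid = [[ImagingConditions(1 / 60, 100, 30.0, "B") for _ in range(6)]
        for _ in range(4)]

# Example: a faster shutter speed, higher gain, and resolution A for one
# unit group, as might be set for a unit group covering the main subject.
grid[1][2] = ImagingConditions(1 / 500, 400, 60.0, "A")

for y, row in enumerate(grid):
    for x, cond in enumerate(row):
        if cond.resolution == "A":
            print(f"unit group ({x}, {y}):", cond)
```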
  • the control parameter may be a parameter in image processing after acquisition of an image signal from a pixel.
  • Furthermore, if the imaging element 100 is provided with a liquid crystal panel having sections that can be controlled independently for each unit group 202 (one section corresponding to one unit group 202) and this panel is used as a neutral density filter that can be turned on and off, the brightness (aperture value) can be controlled for each unit group 202.
  • the number of pixels 201 constituting the unit group 202 may not be the 2 ⁇ 2 four pixels described above.
  • the unit group 202 may have at least one pixel 201, and conversely, may have more than four pixels 201.
  • FIG. 3 is a circuit diagram of the imaging chip 113. In FIG. 3, a rectangle surrounded by a dotted line representatively represents the circuit corresponding to one pixel 201, and a rectangle surrounded by an alternate long and short dash line corresponds to one unit group 202 (202-1 to 202-4). Note that at least some of the transistors described below correspond to the transistors 105 in FIG. 1.
  • the reset transistor 303 of the pixel 201 is turned on / off in unit group 202 units.
  • the transfer transistor 302 of the pixel 201 is also turned on / off in unit group 202 units.
  • A reset wiring 300-1 for turning on/off the four reset transistors 303 corresponding to the upper-left unit group 202-1 is provided, and a TX wiring 307-1 for supplying transfer pulses to the four transfer transistors 302 corresponding to the same unit group 202-1 is also provided.
  • a reset wiring 300-3 for turning on / off the four reset transistors 303 corresponding to the lower left unit group 202-3 is provided separately from the reset wiring 300-1.
  • a TX wiring 307-3 for supplying transfer pulses to the four transfer transistors 302 corresponding to the unit group 202-3 is provided separately from the TX wiring 307-1.
  • Similarly, a reset wiring 300-2 and a TX wiring 307-2, and a reset wiring 300-4 and a TX wiring 307-4, are provided separately for the respective corresponding unit groups 202.
  • the 16 PDs 104 corresponding to each pixel 201 are connected to the corresponding transfer transistors 302, respectively.
  • a transfer pulse is supplied to the gate of each transfer transistor 302 via the TX wiring of each unit group 202.
  • The drain of each transfer transistor 302 is connected to the source of the corresponding reset transistor 303, and the so-called floating diffusion FD between the drain of the transfer transistor 302 and the source of the reset transistor 303 is connected to the gate of the corresponding amplification transistor 304.
  • the drains of the reset transistors 303 are commonly connected to a Vdd wiring 310 to which a power supply voltage is supplied.
  • a reset pulse is supplied to the gate of each reset transistor 303 via the reset wiring of each unit group 202.
  • the drains of the respective amplification transistors 304 are commonly connected to a Vdd wiring 310 to which a power supply voltage is supplied.
  • the source of each amplification transistor 304 is connected to the drain of the corresponding selection transistor 305.
  • the gate of each selection transistor 305 is connected to a decoder wiring 308 to which a selection pulse is supplied.
  • the decoder wiring 308 is provided independently for each of the 16 selection transistors 305.
  • the source of each selection transistor 305 is connected to the common output wiring 309.
  • The load current source 311 supplies a current to the output wiring 309. That is, the output wiring 309 for the selection transistors 305 is formed as a source follower.
  • the load current source 311 may be provided on the imaging chip 113 side or may be provided on the signal processing chip 111 side.
  • Each PD 104 converts received incident light into charge and accumulates it. Thereafter, when a transfer pulse is applied while no reset pulse is being applied, the accumulated charge is transferred to the floating diffusion FD, and the potential of the floating diffusion FD changes from the reset potential to the signal potential after charge accumulation.
  • Within each unit group 202, the reset wiring and the TX wiring are common. That is, the reset pulse and the transfer pulse are each applied simultaneously to the four pixels in the unit group 202. Therefore, all the pixels 201 forming a certain unit group 202 start charge accumulation at the same timing and end charge accumulation at the same timing. However, the pixel signals corresponding to the accumulated charges are selectively output to the output wiring 309 by sequentially applying selection pulses to the respective selection transistors 305.
  • the charge accumulation start timing can be controlled for each unit group 202.
  • different unit groups 202 can be imaged at different timings.
  • FIG. 4 is a block diagram showing a functional configuration example of the imaging device 100.
  • the analog multiplexer 411 selects 16 PDs 104 forming the unit group 202 in order, and outputs the respective pixel signals to the output wiring 309 provided corresponding to the unit group 202.
  • the multiplexer 411 is formed on the imaging chip 113 together with the PD 104.
  • The pixel signals output via the multiplexer 411 are subjected to correlated double sampling (CDS) and analog/digital (A/D) conversion by the signal processing circuit 412, which is formed in the signal processing chip 111.
  • the A / D converted pixel signals are delivered to the demultiplexer 413 and stored in the pixel memory 414 corresponding to each pixel.
  • the demultiplexer 413 and the pixel memory 414 are formed in the memory chip 112.
  • the arithmetic circuit 415 processes the pixel signal stored in the pixel memory 414 and delivers it to the image processing unit in the subsequent stage.
  • the arithmetic circuit 415 may be provided in the signal processing chip 111 or in the memory chip 112.
  • Although FIG. 4 shows the connections for four unit groups 202, in reality these circuits exist for every four unit groups 202 and operate in parallel.
  • However, the arithmetic circuit 415 need not exist for every four unit groups 202; for example, a single arithmetic circuit 415 may perform processing sequentially while referring to the values in the pixel memories 414 corresponding to each of the four unit groups 202.
  • The output wirings 309 are provided corresponding to each of the unit groups 202. Since the imaging element 100 has the imaging chip 113, the signal processing chip 111, and the memory chip 112 stacked, using the inter-chip electrical connections formed by the bumps 109 for the output wirings 309 allows the wiring to be routed without enlarging each chip in the surface direction.
  • FIG. 5 is an explanatory view showing an example of the block configuration of the electronic device.
  • the electronic device 500 is, for example, a lens-integrated camera.
  • the electronic device 500 includes an imaging optical system 501, an imaging element 100, a control unit 502, a liquid crystal monitor 503, a memory card 504, an operation unit 505, a DRAM 506, a flash memory 507, and a recording unit 508.
  • the control unit 502 includes a compression unit that compresses moving image data as described later. Therefore, the configuration including at least the control unit 502 in the electronic device 500 is a moving image compression apparatus.
  • the imaging optical system 501 is composed of a plurality of lenses, and forms an object image on the imaging surface 200 of the imaging element 100.
  • the imaging optical system 501 is illustrated as a single lens for the sake of convenience.
  • the imaging device 100 is, for example, an imaging device such as a complementary metal oxide semiconductor (CMOS) or a charge coupled device (CCD), captures an object image formed by the imaging optical system 501, and outputs an imaging signal.
  • the control unit 502 is an electronic circuit that controls each unit of the electronic device 500, and includes a processor and its peripheral circuits.
  • a predetermined control program is written in advance in the flash memory 507, which is a non-volatile storage medium.
  • the control unit 502 controls each unit by reading a control program from the flash memory 507 and executing it.
  • This control program uses a DRAM 506 which is a volatile storage medium as a work area.
  • the liquid crystal monitor 503 is a display device using a liquid crystal panel.
  • the control unit 502 causes the imaging device 100 to repeatedly capture a subject image at predetermined intervals (for example, 1/60 second). Then, the image pickup signal output from the image pickup element 100 is subjected to various image processing to create a so-called through image, which is displayed on the liquid crystal monitor 503. In addition to the above-described through image, a setting screen for setting an imaging condition is displayed on the liquid crystal monitor 503, for example.
  • the control unit 502 creates an image file to be described later based on the imaging signal output from the imaging element 100, and records the image file on a memory card 504, which is a portable recording medium.
  • the operation unit 505 includes various operation members such as a push button, and outputs an operation signal to the control unit 502 in response to the operation of the operation members.
  • the recording unit 508 is, for example, a microphone, converts environmental sound into an audio signal, and inputs the audio signal to the control unit 502.
  • the control unit 502 may record the moving image file in a recording medium (not shown) built in the electronic device 500 such as a hard disk instead of recording the moving image file in the memory card 504 which is a portable recording medium.
  • FIG. 6 is an explanatory view showing a configuration example of a moving image file.
  • the moving image file 600 is generated during compression processing in a compression unit 902 described later in the control unit 502, and is stored in the memory card 504, the DRAM 506, or the flash memory 507.
  • the moving image file 600 is composed of two blocks of a header portion 601 and a data portion 602.
  • the header unit 601 is a block located at the beginning of the moving image file 600.
  • In the header portion 601, a file basic information area 611, a mask area 612, and an imaging information area 613 are stored in that order.
  • In the file basic information area 611, the size and offset of each part (the header portion 601, the data portion 602, the mask area 612, the imaging information area 613, and so on) in the moving image file 600 are recorded.
  • In the mask area 612, imaging condition information, mask information, and the like, described later, are recorded.
  • In the imaging information area 613, information related to imaging, such as the model name of the electronic device 500 and information on the imaging optical system 501 (for example, information on optical characteristics such as aberration), is recorded.
  • the data unit 602 is a block located behind the header unit 601, and stores image information, audio information, and the like.
  • FIG. 7 is an explanatory view showing the relationship between the imaging plane and the subject image.
  • (A) schematically shows an imaging surface 200 (imaging range) of the imaging element 100 and a subject image 701.
  • In (a), the control unit 502 captures the subject image 701 once before the imaging of (c).
  • the imaging of (a) may also be performed, for example, for creating a live view image (so-called through image).
  • the control unit 502 executes predetermined image analysis processing on the subject image 701 obtained by the imaging in (a).
  • the image analysis process is a process of detecting the main subject area and the background area by, for example, a known subject detection technology (a technology for calculating a feature amount and detecting a range in which a predetermined subject is present).
  • the imaging surface 200 is divided into a main subject region 702 in which a main subject is present and a background region 703 in which a background is present.
  • the main subject area 702 may have a shape along the outer shape of the subject image 701. That is, the main subject region 702 may be set so as to include as little as possible other than the subject image 701.
  • the control unit 502 sets different imaging conditions for each unit group 202 in the main subject region 702 and each unit group 202 in the background region 703. For example, in the former unit group 202, a shutter speed faster than that of the latter unit group 202 is set. In this way, in the imaging of (c) taken after the imaging of (a), image blurring is less likely to occur in the main subject region 702.
  • Further, the control unit 502 may set a relatively high ISO sensitivity or a slow shutter speed for each unit group 202 in the main subject region 702, and set a relatively low ISO sensitivity or a fast shutter speed for each unit group 202 in the background region 703. In this way, in the imaging of (c), it is possible to prevent blackout of the main subject region 702 in a backlit state and overexposure of the background region 703, which receives a large amount of light.
  • the image analysis process may be different from the process of detecting the main subject area 702 and the background area 703 described above. For example, processing may be performed to detect a portion where the brightness is equal to or more than a predetermined level (a portion that is too bright) or a portion where the brightness is less than a predetermined level (a too dark portion).
  • In that case, the control unit 502 sets the shutter speed and the ISO sensitivity so that the exposure value (Ev value) of the unit groups 202 included in the too-bright portion is lower than that of the unit groups 202 included in the other regions.
  • Likewise, the control unit 502 sets the shutter speed and the ISO sensitivity so that the exposure value (Ev value) of the unit groups 202 included in the too-dark portion is higher than that of the unit groups 202 included in the other regions. By doing this, the dynamic range of the image obtained by the imaging of (c) can be expanded beyond the original dynamic range of the imaging element 100.
  • Part (b) of FIG. 7 shows an example of the mask information 704 corresponding to the imaging surface 200 shown in (a). "1" is stored at the positions of the unit groups 202 belonging to the main subject region 702, and "2" is stored at the positions of the unit groups 202 belonging to the background region 703.
  • the control unit 502 executes an image analysis process on the image data of the first frame to detect the main subject area 702 and the background area 703.
  • the frame obtained by the imaging in (a) is divided into the main subject area 702 and the background area 703 as shown in (c).
  • the control unit 502 sets different imaging conditions for each unit group 202 in the main subject area 702 and each unit group 202 in the background area 703, performs imaging in (c), and creates image data. .
  • An example of the mask information 704 at this time is shown in (d).
  • Since the mask information 704 of (b), corresponding to the imaging result of (a), and the mask information 704 of (d), corresponding to the imaging result of (c), are obtained by imaging performed at different times (with a time difference between them), the two pieces of mask information 704 have different contents when, for example, the subject is moving or the user moves the electronic device 500.
  • the mask information 704 is dynamic information that changes as time passes. Therefore, in a certain unit group 202, different imaging conditions are set for each frame.
  • FIG. 8 is an explanatory view showing a specific configuration example of the moving image file 600. In the mask area 612, identification information 801, imaging condition information 802, and mask information 704 are recorded in that order.
  • the identification information 801 indicates that the moving image file 600 is created by the multi-imaging condition moving image pickup function.
  • the multi-imaging condition moving image imaging function is a function of shooting a moving image with the imaging element 100 in which a plurality of imaging conditions are set.
  • The imaging condition information 802 is information indicating what uses (purposes, roles) exist for the unit groups 202. For example, as described above, when the imaging surface 200 (FIG. 7A) is divided into the main subject region 702 and the background region 703, each unit group 202 belongs either to the main subject region 702 or to the background region 703.
  • That is, the imaging condition information 802 is information representing the uses of the unit groups 202, for example "capture a moving image of the main subject region at resolution A" and "capture a moving image of the background region at resolution B", together with a unique number assigned to each of these uses. For example, the number 1 is assigned to "capture a moving image of the main subject region at resolution A" and the number 2 is assigned to "capture a moving image of the background region at resolution B".
  • the mask information 704 is information representing the use (purpose, role) of each unit group 202.
  • the mask information 704 is “information represented by a number assigned to the imaging condition information 802 in the form of a two-dimensional map in accordance with the position of the unit group 202”. That is, when the two-dimensional array of unit groups 202 is specified by two integers x and y at two-dimensional coordinates (x, y), the use of the unit group 202 at the position of (x, y) is It is expressed by the number existing at the position (x, y) of the mask information 704.
  • For example, it can be seen that the unit group 202 located at the coordinates (3, 5) has been given the use "capture a moving image of the main subject region"; in other words, the unit group 202 located at the coordinates (3, 5) belongs to the main subject region 702.
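  • A minimal sketch of this two-dimensional lookup follows, assuming a small grid whose contents are invented for illustration; 1 denotes the main subject region and 2 the background region, as in the imaging condition information 802.

```python
# Hypothetical mask information 704: a 2-D map of use numbers, one per unit
# group, where 1 = "main subject region at resolution A" and
# 2 = "background region at resolution B" (values are illustrative).
mask_info = [
    [2, 2, 2, 2, 2, 2],
    [2, 2, 2, 2, 2, 2],
    [2, 2, 2, 2, 2, 2],
    [2, 2, 2, 1, 1, 2],
    [2, 2, 2, 1, 1, 2],
    [2, 2, 2, 1, 2, 2],
]

def use_of_unit_group(x: int, y: int) -> int:
    """Return the use number of the unit group at coordinates (x, y)."""
    return mask_info[y][x]

# The unit group at coordinates (3, 5) belongs to the main subject region.
assert use_of_unit_group(3, 5) == 1
```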
  • the mask information 704 is dynamic information that changes for each frame, it is recorded during compression processing for each frame, that is, for each data block Bi described later (not shown).
  • Data blocks B1 to Bn are stored as moving image data in the order of imaging for each frame F (F1 to Fn).
  • Each data block Bi (where i is an integer satisfying 1 ≤ i ≤ n) includes mask information 704, image information 811, a Tv value map 812, an Sv value map 813, a Bv value map 814, Av value information 815, audio information 816, and additional information 817.
  • the image information 811 is information obtained by recording an image pickup signal output from the image pickup element 100 by the image pickup of FIG. 7C in a form before performing various image processing, and is so-called RAW image data.
  • the Tv value map 812 is information in which a Tv value representing a shutter speed set for each unit group 202 is represented in the form of a two-dimensional map in accordance with the position of the unit group 202.
  • the shutter speed set to the unit group 202 located at the coordinates (x, y) can be determined by examining the Tv value stored at the coordinates (x, y) of the Tv value map 812.
  • the Sv value map 813 is information in which the Sv value representing the ISO sensitivity set for each unit group 202 is expressed in the form of a two-dimensional map, similarly to the Tv value map 812.
  • The Bv value map 814 is information in which the Bv value, representing the subject luminance measured for each unit group 202 at the time of the imaging of FIG. 7C, that is, the luminance of the subject light incident on each unit group 202, is expressed in the form of a two-dimensional map, similarly to the Tv value map 812.
  • The Av value information 815 is information representing the aperture value at the time of the imaging of FIG. 7C. Unlike the Tv value, the Sv value, and the Bv value, the Av value does not exist for each unit group 202. Therefore, only a single Av value is stored, and it is not information obtained by mapping a plurality of values two-dimensionally.
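  • Under the same caveat that the names are assumptions, the sketch below mirrors this structure: the Tv, Sv, and Bv values are two-dimensional maps aligned with the unit-group grid, while the Av value is a single scalar per frame.

```python
import numpy as np

GRID_H, GRID_W = 4, 6  # unit-group grid size (illustrative)

# Per-unit-group 2-D maps, aligned with the positions of the unit groups.
tv_map = np.full((GRID_H, GRID_W), 6.0)  # Tv values (shutter speed)
sv_map = np.full((GRID_H, GRID_W), 5.0)  # Sv values (ISO sensitivity)
bv_map = np.zeros((GRID_H, GRID_W))      # Bv values (measured subject luminance)

# The Av value exists only once per frame, not per unit group.
av_value = 4.0

# Shutter speed set for the unit group at coordinates (x, y) = (2, 1):
x, y = 2, 1
print("Tv at (2, 1):", tv_map[y, x])
```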
  • The audio information 816 is divided into pieces each corresponding to one frame, multiplexed with the data blocks Bi, and stored in the data portion 602 so as to facilitate moving image reproduction.
  • the audio information 816 may be multiplexed not for one frame but for a predetermined number of frames. Note that the voice information 816 does not necessarily have to be included.
  • the additional information 817 is information representing, in the form of a two-dimensional map, the resolution set for each unit group 202 at the time of imaging in (c) of FIG. 7.
  • the additional information 817 may be held in the frame F, but may be held in a cache memory of the processor 1001 described later. In particular, when performing compression processing in real time, it is preferable to use a cache memory from the viewpoint of high-speed processing.
  • By performing imaging with such a moving image imaging function, the control unit 502 records in the memory card 504 a moving image file 600 in which the image information 811 generated by the imaging element 100, whose imaging conditions can be set for each unit group 202, is associated with the data relating to those imaging conditions (the imaging condition information 802, the mask information 704, the Tv value map 812, the Sv value map 813, the Bv value map 814, and so on).
  • the moving picture compression apparatus of the present embodiment compresses moving picture data including a plurality of frames generated from the output of the imaging device 100.
  • the imaging element 100 has a plurality of imaging areas in which different resolutions can be set. Specifically, for example, according to the above setting, the imaging element 100 includes a first imaging area for imaging an object at a first resolution and a second imaging area for imaging an object at a second resolution different from the first resolution. Have.
  • the video compression apparatus applies different intra-frame prediction for each resolution to compress a frame.
  • Thereby, the low-resolution image areas in a frame can be compressed much more than the high-resolution image areas, and the load of the compression processing can be reduced.
  • FIG. 9 is an explanatory view showing an imaging example on the imaging plane 200 in which different resolutions are set.
  • two types of resolutions A and B are set on the imaging surface 200 as an example.
  • Resolution A is higher than resolution B.
  • the imaging device outputs 16 ⁇ 16 pixels of the imaging region 901A of resolution A in an image region 910A of 16 ⁇ 16 pixels.
  • the imaging device 100 thins out the 16 ⁇ 16 pixels of the imaging region 901B of the resolution B and outputs the thinned image in the image region 910b of 1 ⁇ 1 pixel.
  • The resolutions A and B are not limited to the above; it suffices that the resolution A is higher than the resolution B.
  • For the image area 910A of resolution A in the frame F output from the imaging element 100, the moving image compression apparatus divides the block of 16 × 16 pixels into 16 blocks of 4 × 4 pixels and performs so-called 4 × 4 prediction. Since each of the 16 blocks is 4 × 4 pixels, the prediction processing unit in 4 × 4 prediction is 4 × 4 pixels.
  • For the image area 910b of resolution B in the frame F output from the imaging element 100, the moving image compression apparatus copies the one-pixel image area 910b into the missing area 910c to generate an image area 910B that becomes one block, and then performs so-called 16 × 16 prediction. Since the block generated by this copying is a single block of 16 × 16 pixels, the prediction processing unit in 16 × 16 prediction is 16 × 16 pixels.
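  • A minimal sketch of this copy step, assuming the single resolution-B output pixel is simply replicated across the missing 16 × 16 area (the function and variable names are illustrative):

```python
import numpy as np

BLOCK = 16  # block size used for 16 x 16 prediction

def expand_resolution_b_block(pixel_value: float) -> np.ndarray:
    """Replicate the single pixel output by a resolution-B imaging area
    (image area 910b) into a full 16 x 16 block (image area 910B) so that
    16 x 16 prediction can be applied to it."""
    return np.full((BLOCK, BLOCK), pixel_value)

block_910B = expand_resolution_b_block(128.0)
assert block_910B.shape == (16, 16) and (block_910B == 128.0).all()
```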
  • In both predictions, scanning proceeds rightward (thick white arrow) from the upper-left block of the frame F, and upon reaching the right-end block, shifts down by one block and scans again from the left-end block to the right-end block (raster scan).
  • In this way, the moving image compression apparatus can improve the compression rate of the image areas of resolution B compared with the image areas of resolution A. That is, compared with applying 4 × 4 prediction to the entire image area of the frame F, the compression rate can be improved and the processing load of the compression processing can be reduced.
  • the image area 910A and the image area 910B may be hereinafter referred to as a block 910A and a block 910B, respectively.
  • FIG. 10 is an explanatory view showing a prediction example of 16 ⁇ 16 prediction.
  • Part (a) shows mode 0 (vertical prediction), (b) shows mode 1 (horizontal prediction), (c) shows mode 2 (average value prediction), and (d) shows mode 3 (planar prediction).
  • a block of 16 ⁇ 16 pixels to be predicted is referred to as a target block 1000.
  • (a) Mode 0 is applied when there is a predicted block of the same resolution adjacent above the target block 1000 and no predicted block of the same resolution adjacent to the left.
  • (b) Mode 1 is applied when there is a predicted block of the same resolution adjacent to the left of the target block 1000 and there is no predicted block of the same resolution adjacent above.
  • (c) Mode 2 is applied when there are predicted blocks of the same resolution adjacent above and to the left of the target block 1000.
  • (d) Mode 3 is also applied when there are predicted blocks of the same resolution adjacent above and to the left of the target block 1000. Which of mode 2 and mode 3 is applied may be set in advance, or may be set by the user operating the operation unit 505.
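  • As a rough sketch of these modes, the following implements vertical, horizontal, and average value prediction from the pixel row above and the pixel column to the left of the target block, in the style of H.264 intra prediction; planar prediction is omitted for brevity, and the interface is an assumption.

```python
import numpy as np
from typing import Optional

N = 16  # prediction processing unit for 16 x 16 prediction

def predict_16x16(mode: int,
                  top: Optional[np.ndarray],
                  left: Optional[np.ndarray]) -> np.ndarray:
    """Mode 0: vertical, mode 1: horizontal, mode 2: average value.
    `top` is the pixel row above the target block and `left` the pixel
    column to its left; None means no predicted block of the same
    resolution is adjacent on that side."""
    if mode == 0 and top is not None:      # vertical prediction
        return np.tile(top, (N, 1))
    if mode == 1 and left is not None:     # horizontal prediction
        return np.tile(left[:, None], (1, N))
    if mode == 2:                          # average value prediction
        refs = np.concatenate([a for a in (top, left) if a is not None])
        return np.full((N, N), refs.mean())
    raise ValueError("required neighbors are unavailable for this mode")

top_row = np.arange(N, dtype=float)
print(predict_16x16(0, top_row, None)[0])  # every row repeats the top row
```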
  • FIG. 11 is an explanatory view showing a prediction example of 4 ⁇ 4 prediction.
  • Part (a) shows mode 0 (vertical prediction), (b) mode 1 (horizontal prediction), (c) mode 2 (average value prediction), (d) mode 3 (diagonal down-left prediction), (e) mode 4 (diagonal down-right prediction), (f) mode 5 (vertical-right prediction), (g) mode 6 (horizontal-down prediction), (h) mode 7 (vertical-left prediction), and (i) mode 8 (horizontal-up prediction).
  • a block of 4 ⁇ 4 pixels to be predicted is referred to as a target block 1100.
  • (a) Mode 0 is applied when there is a predicted block of the same resolution adjacent above the target block 1100 and no predicted block of the same resolution adjacent to the left.
  • (b) Mode 1 and (i) mode 8 are applied when there is a predicted block of the same resolution adjacent to the left of the target block 1100 and there is no predicted block of the same resolution adjacent above. Which of mode 1 and mode 8 is applied may be set in advance, or may be set by the user operating the operation unit 505.
  • (c) Mode 2, (e) mode 4, (f) mode 5, and (g) mode 6 are applied when there are predicted blocks of the same resolution adjacent above and to the left of the target block 1100.
  • Which one of mode 2, mode 4, mode 5 and mode 6 is to be applied may be set in advance, or may be set by the user operating the operation unit 505.
  • (d) Mode 3 and (h) mode 7 are applied when there are predicted blocks of the same resolution adjacent above and to the upper right of the target block 1100. Which of mode 3 and mode 7 is applied may be set in advance, or may be set by the user operating the operation unit 505.
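  • The mode choices above reduce to a check of which same-resolution predicted neighbors exist; a sketch of that selection logic follows, in which the returned candidate stands in for the preset or user-selected choice and is an assumption.

```python
def select_4x4_mode(has_top: bool, has_left: bool, has_top_right: bool) -> int:
    """Pick a 4 x 4 intra prediction mode from neighbor availability.
    Where several modes are candidates, the first candidate is returned
    here as an arbitrary stand-in for the preset or user-selected mode."""
    if has_top and has_left:
        return 2            # candidates: modes 2, 4, 5, 6
    if has_top and has_top_right:
        return 3            # candidates: modes 3, 7
    if has_left:
        return 1            # candidates: modes 1, 8
    if has_top:
        return 0            # mode 0 (vertical prediction)
    raise ValueError("no predicted neighbor of the same resolution")

print(select_4x4_mode(has_top=True, has_left=False, has_top_right=False))  # 0
```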
  • FIG. 12 is a block diagram showing a configuration example of the control unit 502 shown in FIG.
  • The control unit 502 includes a preprocessing unit 1210, an image processing unit 1220, an acquisition unit 1230, and a compression unit 1240, and is implemented by a processor 1201, a memory 1202, an integrated circuit 1203, and a bus 1204 connecting them.
  • The preprocessing unit 1210, the image processing unit 1220, the acquisition unit 1230, and the compression unit 1240 may be realized by causing the processor 1201 to execute a program stored in the memory 1202, or may be realized by the integrated circuit 1203, such as an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA).
  • the processor 1201 may use the memory 1202 as a work area.
  • the integrated circuit 1203 may use the memory 1202 as a buffer that temporarily holds various data including image data.
  • The preprocessing unit 1210 executes preprocessing for the image processing by the image processing unit 1220 on moving image data including a plurality of frames F from the imaging element 100. Specifically, for example, when moving image data (here, a set of RAW image data) is input from the imaging element 100, the preprocessing unit 1210 detects a specific subject such as the main subject using a known subject detection technology.
  • The preprocessing unit 1210 instructs the imaging element 100 to set the imaging area of the imaging element 100 that images the specific subject to resolution A.
  • Thereby, the imaging area of the specific subject is set to resolution A, and the other imaging areas are set to resolution B.
  • Further, the preprocessing unit 1210 can, for example, detect the motion vector of the specific subject from the difference between the imaging area in which the specific subject is detected in the input frame and the imaging area in which the specific subject was detected in an already-input frame, and thereby identify the imaging area of the specific subject in the next input frame.
  • The preprocessing unit 1210 outputs, to the imaging element 100, an instruction to change the identified imaging area to resolution A.
  • Thereby, the imaging area of the specific subject is set to resolution A, and the other imaging areas are set to resolution B.
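  • A simplified sketch of this feedback loop, assuming the subject's imaging area is tracked by shifting its bounding box with the detected motion vector (all names here are illustrative, not the patent's API):

```python
from dataclasses import dataclass

@dataclass
class Region:
    x: int  # upper-left corner, in unit-group coordinates
    y: int
    w: int  # width and height, in unit groups
    h: int

def predict_next_region(prev: Region, curr: Region) -> Region:
    """Estimate the subject's imaging area in the next input frame by
    applying the motion vector observed between two detected areas."""
    dx, dy = curr.x - prev.x, curr.y - prev.y
    return Region(curr.x + dx, curr.y + dy, curr.w, curr.h)

prev_region = Region(2, 3, 2, 2)  # subject area in an already-input frame
curr_region = Region(3, 3, 2, 2)  # subject area in the input frame
next_region = predict_next_region(prev_region, curr_region)
# next_region would be instructed to resolution A in the imaging element;
# the remaining unit groups stay at resolution B.
print(next_region)  # Region(x=4, y=3, w=2, h=2)
```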
  • The image processing unit 1220 performs image processing such as demosaicing, white balance adjustment, noise reduction, and debayering on the moving image data input from the imaging element 100. Specifically, for example, the image processing unit 1220 executes known image processing such as demosaicing and white balance adjustment. Further, as described with reference to FIG. 9, the image processing unit 1220 copies the image data of the image area 910b output from the pixels of resolution B to generate the image area 910B of resolution B.
  • the acquisition unit 1230 holds the moving image data output from the image processing unit 1220 in the memory 1202, and outputs a plurality of frames F included in the moving image data one frame at a time in chronological order to the compression unit 1240 at a predetermined timing.
  • The compression unit 1240 compresses the moving image data input from the acquisition unit 1230. Specifically, for example, the compression unit 1240 compresses each frame F by inter-frame prediction and intra-frame prediction. In inter-frame prediction, the compression unit 1240 compresses the frame F by hybrid coding that combines motion-compensated inter-frame prediction (MC) and the discrete cosine transform (DCT) with entropy coding. In intra-frame prediction, as shown in FIGS. 9 to 11, the compression unit 1240 compresses the image areas 910A and 910B separately for each resolution.
  • The control unit 502 may execute the compression processing of the moving image data from the imaging element 100 in real time, or may execute it as batch processing. For example, the control unit 502 may temporarily store the moving image data from the imaging element 100, the preprocessing unit 1210, or the image processing unit 1220 in the memory card 504, the DRAM 506, or the flash memory 507, and the compression unit 1240 may then read out the moving image data, automatically or in response to a user operation, and execute the compression processing.
  • FIG. 13 is a block diagram showing a configuration example of the compression unit 1240.
  • the compression unit 1240 compresses the frame F by, for example, inter-frame prediction and intra-frame prediction.
  • The compression unit 1240 includes a subtraction unit 1301, a DCT unit 1302, a quantization unit 1303, an entropy coding unit 1304, a code amount control unit 1305, an inverse quantization unit 1306, an inverse DCT unit 1307, a generation unit 1308, a frame memory 1309, a motion detection unit 1310, a motion compensation unit 1311, a determination unit 1320, and an intra-frame prediction processing unit 1330.
  • The subtraction unit 1301 through the motion compensation unit 1311 and the determination unit 1320 have the same configuration as an existing compressor.
  • the DCT unit 1302, the quantization unit 1303, the entropy coding unit 1304, and the code amount control unit 1305 are referred to as a coding unit 1340.
  • The subtraction unit 1301 subtracts from the input frame the prediction frame produced by the motion compensation unit 1311, which predicts the input frame, and outputs difference data.
  • the DCT unit 1302 performs discrete cosine transform on the difference data from the subtracting unit 1301.
  • the quantization unit 1303 quantizes the discrete cosine transformed difference data.
  • the entropy coding unit 1304 entropy codes the quantized difference data, and also entropy codes the motion vector from the motion detection unit 1310.
  • the code amount control unit 1305 controls the quantization by the quantization unit 1303.
  • the inverse quantization unit 1306 inversely quantizes the difference data quantized by the quantization unit 1303 to obtain discrete cosine transformed difference data.
  • the inverse DCT unit 1307 inverse discrete cosine transforms the dequantized difference data.
  • The generation unit 1308 adds the inverse-discrete-cosine-transformed difference data and the prediction frame from the motion compensation unit 1311 to generate a reference frame to be referred to by frames input temporally after the input frame.
  • the frame memory 1309 holds the reference frame obtained from the generation unit 1308.
  • the motion detection unit 1310 detects a motion vector by block matching, for example, using the input frame and the reference frame.
  • the motion compensation unit 1311 generates a predicted frame using the reference frame and the motion vector. Specifically, for example, the motion compensation unit 1311 performs motion compensation using a specific reference frame and a motion vector among the plurality of reference frames stored in the frame memory 1309.
  • By limiting the reference frame to a specific reference frame, high-load motion compensation using reference frames other than the specific reference frame can be suppressed. Further, by setting the specific reference frame to the single reference frame obtained from the temporally preceding frame of the input frame, heavy motion compensation processing is avoided and the processing load of motion compensation can be reduced.
  • the inter-frame prediction is realized by the subtraction unit 1301, the inverse quantization unit 1306, the inverse DCT unit 1307, the generation unit 1308, the frame memory 1309, the motion detection unit 1310, and the motion compensation unit 1311 described above.
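  • For orientation, here is a minimal round trip through the transform stages named above (DCT, quantization, inverse quantization, inverse DCT), using scipy's DCT as a stand-in for the units 1302, 1303, 1306, and 1307; the flat quantization step is an assumption, as real codecs use quantization matrices.

```python
import numpy as np
from scipy.fft import dctn, idctn

Q_STEP = 16.0  # flat quantization step (illustrative)

def forward(diff_block: np.ndarray) -> np.ndarray:
    """Sketch of the DCT unit 1302 followed by the quantization unit 1303."""
    return np.round(dctn(diff_block, norm="ortho") / Q_STEP)

def inverse(coeffs: np.ndarray) -> np.ndarray:
    """Sketch of the inverse quantization unit 1306 followed by the
    inverse DCT unit 1307."""
    return idctn(coeffs * Q_STEP, norm="ortho")

diff = np.random.default_rng(0).normal(0.0, 10.0, (16, 16))
recon = inverse(forward(diff))
print("max reconstruction error:", np.abs(recon - diff).max())
```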
  • The determination unit 1320 uses the input frame and the difference data from the subtraction unit 1301 to determine which of intra-frame prediction and inter-frame prediction is the more efficient choice, and selects one of them. When intra-frame prediction is selected, the determination unit 1320 outputs the input frame to the intra-frame prediction processing unit 1330. The determination unit 1320 may also select intra-frame prediction at the insertion timing of an I picture. On the other hand, when inter-frame prediction is selected, the determination unit 1320 outputs the difference data to the DCT unit 1302.
  • the intraframe prediction processing unit 1330 performs intraframe prediction of an input frame.
  • the in-frame prediction processing unit 1330 includes a setting unit 1331 and a prediction unit 1332.
  • The setting unit 1331 sets the prediction processing unit for predicting a prediction target image area based on the resolution of the prediction target image area among the plurality of image areas corresponding to the plurality of imaging areas in the prediction target frame among the plurality of frames.
  • the prediction target frame is an input frame which is input to the compression unit 1240 and is a target of compression processing.
  • the imaging area is an area of pixels having a predetermined number of pixels in the imaging device 100. For example, in the example of FIG. 9, 16 ⁇ 16 pixels are taken as one imaging area. The size of the imaging area is not limited to 16 ⁇ 16 pixels, and may be an integral multiple of the unit group 202 (in this example, 4 ⁇ 4 pixels as an example). In the example of FIG. 9, the number of imaging areas 901A of resolution A is four, and the number of imaging areas 901B of resolution B is twenty-one.
  • the image area is an area of pixel data in the frame F corresponding to the imaging area. That is, the subject imaged in the imaging area is expressed as image data (set of pixel data) in the image area.
  • the image area 910A of resolution A corresponds to the imaging area 901A
  • the image area 910B of resolution B corresponds to the imaging area 901B.
  • the number of image areas 910A of resolution A is four
  • the number of image areas 910B of resolution B is twenty-one.
  • the prediction target image area is an image area which has not been predicted yet and is to be currently predicted among the plurality of image areas in the frame F.
  • In both 4 × 4 prediction and 16 × 16 prediction, a raster scan is applied: scanning starts from the upper-left block of the frame F and proceeds rightward, and upon reaching the right-end block, shifts down by one block and scans again from the left-end block to the right-end block.
  • Accordingly, an image area of the same resolution located to the left of the prediction target image area, or an image area of the same resolution in a row above the prediction target image area, has already been predicted (this is the predicted block described above). Since the prediction is performed within a frame, the prediction target image area and the predicted image area are preferably close to each other; for example, the most preferable predicted image area is one adjacent to the prediction target image area.
  • The prediction processing unit is the unit of processing for predicting a prediction target image area, and corresponds to the target blocks 1000 and 1100 shown in FIG. 10 and FIG. 11.
  • In 4 × 4 prediction, a 16 × 16 pixel prediction target image area is divided into 16 blocks. Since each of the 16 blocks is 4 × 4 pixels, the prediction processing unit in 4 × 4 prediction is 4 × 4 pixels.
  • In 16 × 16 prediction, a 16 × 16 pixel prediction target image area is one block. Since this block is 16 × 16 pixels, the prediction processing unit in 16 × 16 prediction is 16 × 16 pixels. That is, the higher the resolution, the smaller the prediction processing unit, and the lower the resolution, the larger the prediction processing unit.
  • The prediction unit 1332 predicts the prediction target image area based on the prediction processing unit set by the setting unit 1331. Specifically, for example, when the prediction processing unit is 16 × 16 pixels, the prediction unit 1332 performs 16 × 16 prediction as shown in FIG. 10, and when the prediction processing unit is 4 × 4 pixels, it performs 4 × 4 prediction as shown in FIG. 11.
  • The prediction unit 1332 outputs the prediction result to the DCT unit 1302 of the coding unit 1340; the prediction result may be output as it is.
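  • Putting the setting unit 1331 and the prediction unit 1332 together, a minimal sketch of the dispatch: the prediction processing unit follows from the resolution of the image area, and the corresponding prediction is then applied. The two-resolution mapping mirrors FIG. 9, and the function names are assumptions.

```python
def set_prediction_unit(resolution: str) -> int:
    """Setting unit 1331 (sketch): the higher the resolution, the smaller
    the prediction processing unit."""
    return 4 if resolution == "A" else 16

def predict_image_area(resolution: str) -> str:
    """Prediction unit 1332 (sketch): apply 4 x 4 or 16 x 16 prediction
    according to the prediction processing unit set for the image area."""
    unit = set_prediction_unit(resolution)
    return f"{unit} x {unit} prediction"

assert predict_image_area("A") == "4 x 4 prediction"
assert predict_image_area("B") == "16 x 16 prediction"
```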
  • FIG. 14 is a flowchart illustrating an example of a preprocessing procedure by the preprocessing unit 1210.
  • Here, an example is described in which resolution B is set in advance in the imaging element 100, and the image area of resolution A is tracked by the subject detection technology of the preprocessing unit 1210 and fed back to the imaging element 100.
  • the image areas of resolutions A and B may be fixed at all times.
  • the preprocessing unit 1210 waits for input of a frame F constituting the moving image data (step S1401: No). When a frame F is input (step S1401: Yes), the preprocessing unit 1210 determines whether the detection unit has detected a specific subject such as the main subject (step S1402). If no specific subject is detected (step S1402: No), the process returns to step S1401.
  • if a specific subject is detected (step S1402: Yes), the preprocessing unit 1210 compares the temporally previous frame (for example, a reference frame) with the input frame to detect a motion vector, predicts the image area of resolution A in the next input frame, and outputs it to the imaging device 100 (step S1403); the process then returns to step S1401. The imaging device 100 thereby sets the resolution of the unit groups 202 constituting the imaging area corresponding to the predicted image area to resolution A, sets the resolution of the remaining unit groups 202 to resolution B, and images the subject.
  • when no frame F is input (step S1401: No) and input of all the frames constituting the moving image data has been completed, the series of processing ends.
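  • The loop of FIG. 14 could be sketched as follows; detect_subject, find_motion_vector, shift_area, and the imaging-device interface are hypothetical stand-ins for the detection and feedback steps described above, not names from the specification:

      def preprocess(frames, imaging_device, detect_subject, find_motion_vector, shift_area):
          prev = None
          for frame in frames:                         # S1401: frame F is input
              subject = detect_subject(frame)          # S1402: detect a specific subject
              if subject is not None and prev is not None:
                  mv = find_motion_vector(prev, frame)        # compare with the previous frame
                  area = shift_area(subject.area, mv)         # S1403: predict the next resolution-A area
                  imaging_device.set_resolution(area, "A")    # tracked area imaged at resolution A
                  imaging_device.set_default_resolution("B")  # remaining unit groups at resolution B
              prev = frame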
  • FIG. 15 is a flowchart showing an example of the image processing procedure by the image processing unit 1220.
  • here, the process of duplicating the image data of the image area 910b of resolution B described above will be explained.
  • the image processing unit 1220 determines whether there is an unselected block in the frame (step S1502).
  • the block is, as an example, an image area of 16×16 pixels. Unselected blocks are blocks that have not yet been selected in step S1503.
  • the image processing unit 1220 selects one unselected block (step S1503).
  • the selected block is referred to as a selected block.
  • the image processing unit 1220 determines whether the resolution of the selected block is resolution B (step S1504). Specifically, for example, the image processing unit 1220 identifies the resolution of the selected block by referring to the resolution information that the pre-processing unit 1210 set for each unit group 202 of the imaging device 100.
  • if the resolution of the selected block is not resolution B (step S1504: No), the image processing unit 1220 returns to step S1502. On the other hand, if the resolution of the selected block is resolution B (step S1504: Yes), the image processing unit 1220 duplicates the image data of the image area 910b within the selected block to generate the block 910B (step S1505), and returns to step S1502.
  • if there is no unselected block (step S1502: No), the process returns to step S1501.
  • when no frame F is input (step S1501: No) and input of all the frames constituting the moving image data has been completed, the series of processing ends.
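  • A sketch of the duplication step of FIG. 15, assuming purely for illustration that a resolution-B block holds valid data only in its top-left quarter (the image area 910b) and that the remainder is the missing area 910c; array shapes and function names are assumptions:

      import numpy as np

      # S1505: duplicate the image data of area 910b over the missing area 910c
      # to produce the filled 16x16 block 910B.
      def fill_block_from_910b(block16):
          src = block16[:8, :8]            # the image data actually output (910b)
          return np.tile(src, (2, 2))      # tile it over the whole 16x16 block

      # S1502-S1504: visit each block and fill only the resolution-B blocks.
      def image_process(blocks, resolution_of_block):
          return [fill_block_from_910b(b) if resolution_of_block(i) == "B" else b
                  for i, b in enumerate(blocks)]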
  • FIG. 16 is a flowchart of an example of the intra-frame prediction processing procedure by the intra-frame prediction processing unit 1330. If the frame F is input (step S1601: YES), the intra-frame prediction processing unit 1330 determines, by the setting unit 1331, whether there is an unselected block in the frame (step S1602).
  • the block is, as an example, an image area of 16×16 pixels. If there is an unselected block (step S1602: Yes), the intra-frame prediction processing unit 1330 selects one unselected block using the setting unit 1331 (step S1603) and determines the resolution of the selected block (step S1604). Specifically, for example, the resolution of the selected block is identified by referring to the resolution information that the pre-processing unit 1210 set for each unit group 202 of the imaging device 100.
  • if the resolution of the selected block is resolution A, the intra-frame prediction processing unit 1330 sets the prediction processing unit of the selected block to 4×4 pixels using the setting unit 1331 (step S1605).
  • the intra-frame prediction processing unit 1330 causes the prediction unit 1332 to divide the selected block in units of the set prediction processing unit (step S1606).
  • the selected block of 16×16 pixels is divided into 16 blocks of 4×4 pixels (hereinafter referred to as divided blocks).
  • the intra-frame prediction processing unit 1330 determines, with the prediction unit 1332, whether there is an unselected divided block (step S1607). If there is an unselected divided block (step S1607: Yes), the intra-frame prediction processing unit 1330 causes the prediction unit 1332 to select one unselected divided block (step S1608). Then, the intra-frame prediction processing unit 1330 causes the prediction unit 1332 to determine the prediction mode of the selected divided block (step S1609). Specifically, for example, as shown in FIG. 11, the prediction unit 1332 determines an applicable prediction mode from the plurality of prediction modes 0 to 9.
  • the intra-frame prediction processing unit 1330 causes the prediction unit 1332 to generate a prediction block that predicts the selected divided block in the determined prediction mode (step S1610).
  • the generated prediction block is the prediction result of the prediction unit 1332.
  • thereafter, the process returns to step S1607. If there is no unselected divided block (step S1607: No), the process returns to step S1602.
  • if the resolution of the selected block is resolution B, the intra-frame prediction processing unit 1330 sets the prediction processing unit of the selected block to 16×16 pixels using the setting unit 1331 (step S1611).
  • the intra-frame prediction processing unit 1330 causes the prediction unit 1332 to determine the prediction mode of the selected block (step S1612). Specifically, for example, as shown in FIG. 10, the prediction unit 1332 determines an applicable prediction mode from the plurality of prediction modes 0 to 3.
  • the intra-frame prediction processing unit 1330 causes the prediction unit 1332 to generate a prediction block that predicts the selected block in the determined prediction mode (step S1613).
  • the generated prediction block is the prediction result of the prediction unit 1332.
  • thereafter, the process returns to step S1602.
  • if there is no unselected block (step S1602: No), the process returns to step S1601. If no frame F is input in step S1601 (step S1601: No) and input of all the frames constituting the moving image data has been completed, the series of processing ends.
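  • Putting FIG. 16 together, a condensed sketch of the dispatch between 4×4 and 16×16 prediction; the mode search and prediction-block generation are abstracted into a caller-supplied predict function, and all names are illustrative assumptions:

      def split_4x4(block16):
          # S1606: divide a 16x16 selected block into sixteen 4x4 divided blocks
          return [block16[y:y + 4, x:x + 4]
                  for y in range(0, 16, 4) for x in range(0, 16, 4)]

      def intra_predict_frame(blocks, resolution_of_block, predict):
          # predict(area, unit, modes) returns a prediction block for the area.
          results = []
          for i, block in enumerate(blocks):                 # S1602-S1603
              if resolution_of_block(i) == "A":              # S1604
                  # 4x4 prediction: prediction modes 0-9 (S1605-S1610)
                  results.append([predict(sub, 4, range(10)) for sub in split_4x4(block)])
              else:
                  # 16x16 prediction: prediction modes 0-3 (S1611-S1613)
                  results.append([predict(block, 16, range(4))])
          return results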
  • the frame predicted by the intraframe prediction processing unit 1330 is output to the coding unit 1340.
  • as described above, the moving picture compression apparatus compresses moving picture data including a plurality of frames generated from the output of the imaging device 100 having a plurality of imaging areas in which different resolutions can be set.
  • the video compression apparatus includes a setting unit 1331, a prediction unit 1332 and an encoding unit 1340.
  • the setting unit 1331 sets the prediction processing unit (for example, 4×4 pixels or 16×16 pixels) for predicting a prediction target image area (for example, block 910A or 910B), based on the resolution of the prediction target image area among the plurality of image areas corresponding to the plurality of imaging areas in the prediction target frame among the plurality of frames.
  • the prediction unit 1332 predicts a prediction target image region based on the prediction processing unit set by the setting unit 1331.
  • the encoding unit 1340 encodes a prediction target frame using the prediction result of the prediction unit 1332.
  • in this way, when the resolution (for example, resolution A) of the prediction target image area (for example, block 910A) is higher than the resolution (for example, resolution B) of another image area (for example, block 910B), the setting unit 1331 sets the prediction processing unit for predicting the prediction target image area to a smaller unit (for example, 4×4 pixels) than the prediction processing unit for predicting the other image area (for example, 16×16 pixels).
  • conversely, when the resolution (for example, resolution B) of the prediction target image area (for example, block 910B) is lower than the resolution (for example, resolution A) of another image area (for example, block 910A), the setting unit 1331 sets the prediction processing unit for predicting the prediction target image area to a larger unit (for example, 16×16 pixels).
  • based on the position of the prediction processing unit in the prediction target frame and on the image areas already predicted by the prediction unit 1332 in that frame, the setting unit 1331 sets a specific prediction mode to be applied to the prediction processing unit, and the prediction unit 1332 predicts the prediction target image area by applying the specific prediction mode to the prediction processing unit.
  • the setting unit 1331 sets a specific prediction mode to be applied to the prediction processing unit based on the resolution of the predicted image area.
  • for example, the resolution of the predicted image area is the same as the resolution of the image area predicted in the prediction processing unit.
  • this enables intra-frame prediction between image areas of the same resolution, realizing consistent compression processing. For example, if the predicted image area and the prediction target image area both have resolution A, 4×4 prediction is executed; if both have resolution B, 16×16 prediction is executed.
  • if the prediction target image area has resolution A while the predicted image area has resolution B, prediction refers to an image area of coarser resolution, which lowers prediction accuracy; avoiding such reference therefore improves prediction accuracy. Likewise, if the prediction target image area has resolution B while the predicted image area has resolution A, prediction refers to an image area of finer resolution, which also lowers prediction accuracy; avoiding this reference likewise improves prediction accuracy.
  • the setting unit 1331 uses the adjacent area of the prediction processing unit as the predicted image area.
  • in each of the plurality of frames, the image processing unit 1220 generates image data for the missing area 910c, from which image data was not output by the corresponding imaging area, by duplicating image data that was output (for example, the image data of the image area 910b), and outputs the resulting plurality of frames. The setting unit 1331 then sets the prediction processing unit for predicting a prediction target image area based on the resolution of the prediction target image area among the plurality of image areas corresponding to the plurality of imaging areas in the prediction target frame among the plurality of frames output from the image processing unit 1220.
  • the electronic device described above includes the imaging device 100 having a plurality of imaging regions in which different resolutions can be set, a setting unit 1331, a prediction unit 1332, and an encoding unit 1340.
  • the setting unit 1331 sets the prediction processing unit for predicting the prediction target image area based on the resolution of the prediction target image area among the plurality of image areas corresponding to the plurality of imaging areas in the prediction target frame among the plurality of frames.
  • the prediction unit 1332 predicts a prediction target image region based on the prediction processing unit set by the setting unit 1331.
  • the encoding unit 1340 encodes a prediction target frame using the prediction result of the prediction unit 1332.
  • the electronic device 500 capable of optimizing compression processing according to the resolution can be realized.
  • Examples of the electronic device 500 described above include a digital camera, a digital video camera, a smartphone, a tablet, a surveillance camera, a drive recorder, and a drone.
  • the above-described moving picture compression program is a program that causes the processor 1201 to compress moving picture data including a plurality of frames generated from the output of the imaging device 100 having a plurality of imaging areas in which different resolutions can be set.
  • the moving picture compression program causes the processor 1201 to execute a setting process for setting a prediction processing unit for predicting a prediction target image area based on the resolution of the prediction target image area among the plurality of image areas corresponding to the plurality of imaging areas in the prediction target frame among the plurality of frames, a prediction process for predicting the prediction target image area based on the prediction processing unit set by the setting process, and an encoding process for encoding the prediction target frame using the result of the prediction process.
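  • As a summary, the three processes could be chained as in this sketch; predict and encode are hypothetical placeholders for the prediction and encoding processes described above:

      def compress_frame(blocks, resolution_of_block, predict, encode):
          units = [4 if resolution_of_block(i) == "A" else 16   # setting process
                   for i in range(len(blocks))]
          predictions = [predict(block, unit)                   # prediction process
                         for block, unit in zip(blocks, units)]
          return encode(blocks, predictions)                    # encoding process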
  • the moving picture compression program may be recorded on a portable recording medium such as a CD-ROM, a DVD-ROM, a flash memory, or the memory card 504. The moving picture compression program may also be stored in the moving picture compression apparatus or on a server from which it can be downloaded to the electronic device 500.
  • Reference Signs List 100 imaging device, 200 imaging plane, 202 unit group, 500 electronic device, 502 control unit, 600 moving image file, 1210 pre-processing unit, 1220 image processing unit, 1230 acquisition unit, 1240 compression unit, 1310 motion detection unit, 1311 motion compensation unit, 1320 determination unit, 1330 intraframe prediction processing unit, 1331 setting unit, 1332 prediction unit, 1340 encoding unit

Abstract

The invention relates to a moving image compression device for compressing moving image data comprising a plurality of frames generated from the output of an imaging element having a plurality of imaging regions in which different resolutions can be set. The moving image compression device comprises: a setting unit for setting, based on the resolution of an image region to be predicted among a plurality of image regions corresponding to the plurality of imaging regions in a frame to be predicted among the plurality of frames, a prediction processing unit in which the image region to be predicted is predicted; a prediction unit for predicting the image region to be predicted on the basis of the prediction processing unit set by the setting unit; and an encoding unit for encoding the frame to be predicted using the result of the prediction by the prediction unit.
PCT/JP2018/036131 2017-09-29 2018-09-27 Dispositif de compression d'image animée, appareil électronique, et programme de compression d'image animée WO2019065917A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017192109 2017-09-29
JP2017-192109 2017-09-29

Publications (1)

Publication Number Publication Date
WO2019065917A1 true WO2019065917A1 (fr) 2019-04-04

Family

ID=65901648

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/036131 WO2019065917A1 (fr) 2017-09-29 2018-09-27 Dispositif de compression d'image animée, appareil électronique, et programme de compression d'image animée

Country Status (1)

Country Link
WO (1) WO2019065917A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011129163A1 (fr) * 2010-04-16 2011-10-20 コニカミノルタホールディングス株式会社 Procédé de traitement de prédiction intra et programme de traitement de prédiction intra
WO2013164915A1 (fr) * 2012-05-02 2013-11-07 株式会社ニコン Dispositif de formation d'image

Similar Documents

Publication Publication Date Title
JPWO2017013806A1 (ja) Solid-state imaging device
US11589059B2 (en) Video compression apparatus, electronic apparatus, and video compression program
JPWO2014133076A1 (ja) Imaging element and electronic device
JP6282303B2 (ja) Imaging element and imaging device
US11785345B2 (en) Electronic device, imaging device, and imaging element for obtaining exposure of each area of image
JP6561428B2 (ja) Electronic device, control method, and control program
US20240015406A1 (en) Image sensor and imaging device
JP6488545B2 (ja) Electronic device
WO2019065919A1 (fr) Imaging device, image processing device, moving image compression device, setting program, image processing program, and moving image compression program
JP2016225972A (ja) Imaging element and imaging device
JP6733159B2 (ja) Imaging element and imaging device
WO2019065917A1 (fr) Moving image compression device, electronic apparatus, and moving image compression program
JP7167928B2 (ja) Moving image compression device, electronic device, and moving image compression program
US20230164329A1 (en) Video compression apparatus, electronic apparatus, and video compression program
JP7156367B2 (ja) Moving image compression device, decompression device, electronic device, moving image compression program, and decompression program
JP7247975B2 (ja) Imaging element and imaging device
WO2019065918A1 (fr) Image processing device, moving image compression device, image processing program, and moving image compression program
JP2020057877A (ja) Electronic device and setting program
JP2019092220A (ja) Electronic device
JP2019092219A (ja) Imaging device, imaging device control method, and control program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18863700

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18863700

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP