WO2019065919A1 - Imaging device, image processing device, moving image compression device, setting program, image processing program, and moving image compression program


Info

Publication number
WO2019065919A1
Authority
WO
WIPO (PCT)
Prior art keywords
frame
imaging
area
image
imaging area
Prior art date
Application number
PCT/JP2018/036134
Other languages
English (en)
Japanese (ja)
Inventor
Daisaku Komiya
Original Assignee
Nikon Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nikon Corporation
Publication of WO2019065919A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/85 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/40 Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N 25/70 SSIS architectures; Circuits associated therewith
    • H04N 25/76 Addressed sensors, e.g. MOS or CMOS sensors
    • H04N 25/77 Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components

Definitions

  • The present invention relates to an imaging device, an image processing device, a moving image compression device, a setting program, an image processing program, and a moving image compression program.
  • There is known an electronic device provided with an imaging element (hereinafter referred to as a stacked imaging element) in which a back-side illumination imaging chip and a signal processing chip are stacked (see Patent Document 1).
  • The stacked imaging element is stacked such that the back-side illumination imaging chip and the signal processing chip are connected via microbumps in each predetermined area.
  • However, moving image compression of frames generated by such an imaging element has not conventionally been considered.
  • An imaging apparatus, which is one aspect of the technology disclosed in the present application, includes: an imaging element that has a first imaging area for imaging a subject and a second imaging area for imaging a subject, and in which a first frame rate can be set in the first imaging area and a second frame rate faster than the first frame rate can be set in the second imaging area; a detection unit that detects the imaging area of a specific subject in the imaging element based on the image area of the specific subject included in a frame generated by an output from the imaging element; and a setting unit that sets, to the second frame rate, the frame rate of a specific imaging area including the imaging area of the specific subject used for generating the frame and the imaging area detected by the detection unit.
  • An image processing apparatus, which is one aspect of the technology disclosed in the present application, performs image processing on frames generated by an output from an imaging element that has a first imaging area for imaging a subject and a second imaging area for imaging a subject, and in which a first frame rate can be set in the first imaging area and a second frame rate faster than the first frame rate can be set in the second imaging area. The image processing apparatus includes: a detection unit that detects the imaging area of a specific subject in the imaging element based on a second image area, corresponding to the second imaging area, of the specific subject included in a first frame generated by outputs from the first imaging area, in which the first frame rate is set, and the second imaging area, in which the second frame rate is set; a setting unit that sets, to the second frame rate, the frame rate of a specific imaging area including the second imaging area of the imaging element used for generating the first frame and the imaging area detected by the detection unit; and a combining unit that combines image data of a first image area, corresponding to the first imaging area, included in the first frame with image data of the specific imaging area included in a second frame generated by imaging at the second frame rate set by the setting unit.
  • A moving image compression apparatus, which is one aspect of the technology disclosed in the present application, compresses moving image data including a plurality of frames generated by an output from an imaging element that has a first imaging area for imaging a subject and a second imaging area for imaging a subject, and in which a first frame rate can be set in the first imaging area and a second frame rate faster than the first frame rate can be set in the second imaging area. The moving image compression apparatus includes: a detection unit that detects the imaging area of a specific subject in the imaging element; a setting unit that sets, to the second frame rate, the frame rate of a specific imaging area including the second imaging area of the specific subject used for the imaging and the imaging area detected by the detection unit; a combining unit that combines image data of a first image area, corresponding to the first imaging area, included in a first frame with image data of the specific imaging area included in a second frame generated by imaging at the second frame rate set by the setting unit; and a compression unit that compresses the first frame and the combined second frame obtained by the combining unit.
  • A setting program, which is one aspect of the technology disclosed in the present application, causes a processor to execute control of an imaging element that has a first imaging area for imaging a subject and a second imaging area for imaging a subject, and in which a first frame rate can be set in the first imaging area and a second frame rate faster than the first frame rate can be set in the second imaging area. The setting program causes the processor to execute: detection processing for detecting the imaging area of a specific subject in the imaging element based on the image area of the specific subject included in a frame generated by an output from the imaging element; and setting processing for setting, to the second frame rate, the frame rate of a specific imaging area including the imaging area of the specific subject used for generating the frame and the imaging area detected by the detection processing.
  • An image processing program, which is one aspect of the technology disclosed in the present application, causes a processor to execute image processing on frames generated by an output from an imaging element that has a first imaging area for imaging a subject and a second imaging area for imaging a subject, and in which a first frame rate can be set in the first imaging area and a second frame rate faster than the first frame rate can be set in the second imaging area. The image processing program causes the processor to execute: detection processing for detecting the imaging area of a specific subject in the imaging element based on a second image area, corresponding to the second imaging area, of the specific subject included in a first frame generated by outputs from the first imaging area, in which the first frame rate is set, and the second imaging area, in which the second frame rate is set; setting processing for setting, to the second frame rate, the frame rate of a specific imaging area including the second imaging area of the specific subject used for generating the first frame and the imaging area detected by the detection processing; and combining processing for combining image data of a first image area, corresponding to the first imaging area, included in the first frame with image data of the specific imaging area included in a second frame generated by imaging at the second frame rate set by the setting processing.
  • A moving image compression program, which is one aspect of the technology disclosed in the present application, causes a processor to execute compression of moving image data including a plurality of frames generated by an output from an imaging element that has a first imaging area for imaging a subject and a second imaging area for imaging a subject, and in which a first frame rate can be set in the first imaging area and a second frame rate faster than the first frame rate can be set in the second imaging area. The moving image compression program causes the processor to execute: detection processing for detecting the imaging area of a specific subject in the imaging element based on a second image area, corresponding to the second imaging area, of the specific subject included in a first frame; setting processing for setting, to the second frame rate, the frame rate of a specific imaging area including the second imaging area of the specific subject used for generating the first frame and the imaging area detected by the detection processing; combining processing for combining image data of a first image area, corresponding to the first imaging area, included in the first frame with image data of the specific imaging area included in a second frame generated by imaging at the second frame rate set by the setting processing; and compression processing for compressing the first frame and the combined second frame obtained by the combining processing.
  • FIG. 1 is a cross-sectional view of a stacked imaging device.
  • FIG. 2 is a diagram for explaining the pixel array of the imaging chip.
  • FIG. 3 is a circuit diagram of the imaging chip.
  • FIG. 4 is a block diagram showing an example of the functional configuration of the imaging device.
  • FIG. 5 is an explanatory view showing an example of the block configuration of the electronic device.
  • FIG. 6 is an explanatory view showing a configuration example of a moving image file.
  • FIG. 7 is an explanatory view showing the relationship between the imaging plane and the subject image.
  • FIG. 8 is an explanatory view showing a specific configuration example of a moving image file.
  • FIG. 9 is an explanatory diagram of an example of moving image compression according to the first embodiment.
  • FIG. 10 is an explanatory view of an image processing example 1 in the moving image compression shown in FIG.
  • FIG. 11 is an explanatory diagram of an image processing example 2 in the moving image compression illustrated in FIG.
  • FIG. 12 is a block diagram showing a configuration example of the control unit shown in FIG.
  • FIG. 13 is a block diagram showing a configuration example of the compression unit.
  • FIG. 14 is a sequence diagram illustrating an operation processing procedure example of the control unit.
  • FIG. 15 is a flowchart showing a detailed processing procedure example of the setting process (steps S1404 and S1410) shown in FIG. 14.
  • FIG. 16 is a flowchart showing a detailed processing procedure example of the additional information setting process (step S1505) shown in FIG. 15.
  • FIG. 17 is a flowchart illustrating an example of a detailed processing procedure of the moving image file generation processing.
  • FIG. 18 is a flowchart showing a detailed processing procedure example of the image processing (steps S1413 and S1415) shown in FIG. 14.
  • FIG. 19 is a flowchart of an example of a compression control process procedure of the first compression control method by the compression control unit.
  • FIG. 20 is a flowchart illustrating an example of a motion detection processing procedure of the first compression control method by the motion detection unit.
  • FIG. 21 is a flowchart illustrating an example of a motion compensation processing procedure of the first compression control method by the motion compensation unit.
  • FIG. 22 is a flowchart of an example of a compression control process procedure of the second compression control method by the compression control unit.
  • FIG. 23 is a flowchart illustrating an example of a motion detection processing procedure of the second compression control method by the motion detection unit.
  • FIG. 24 is a flowchart illustrating an example of a motion compensation processing procedure of the second compression control method by the motion compensation unit.
  • FIG. 25 is an explanatory diagram of a specific processing flow of the moving image processing example 1 shown in FIG.
  • FIG. 26 is an explanatory diagram of a synthesis example 1 of the frame of 60 fps according to the second embodiment.
  • FIG. 27 is an explanatory diagram of a synthesis example 2 of the frame of 60 fps according to the second embodiment.
  • FIG. 28 is an explanatory diagram of a synthesis example 4 of the frame of 60 fps according to the second embodiment.
  • FIG. 29 is a flowchart of procedure example 1 of the combining process, corresponding to combining example 1 of frames by the image processing unit.
  • FIG. 30 is a flowchart of procedure example 2 of the combining process, corresponding to combining example 2 of frames by the image processing unit.
  • FIG. 31 is a flowchart of procedure example 3 of the combining process, corresponding to combining example 3 of frames by the image processing unit.
  • FIG. 32 is a flowchart of procedure example 4 of the combining process, corresponding to combining example 4 of frames by the image processing unit.
  • FIG. 33 is an explanatory view of a synthesis example of the 60 [fps] frame according to the third embodiment.
  • FIG. 34 is an explanatory view showing the correspondence between the setting of the imaging area and the image area of the frame.
  • The stacked imaging element is described in Japanese Patent Application No. 2012-139026, filed earlier by the applicant of the present application.
  • the electronic device is, for example, an imaging device such as a digital camera or a digital video camera.
  • FIG. 1 is a cross-sectional view of a stacked imaging element 100. The stacked imaging element (hereinafter simply referred to as the "imaging element") 100 includes a back-illuminated imaging chip (hereinafter simply referred to as the "imaging chip") 113 that outputs pixel signals corresponding to incident light, a signal processing chip 111 that processes the pixel signals, and a memory chip 112 that stores the pixel signals.
  • the imaging chip 113, the signal processing chip 111, and the memory chip 112 are stacked and electrically connected to each other by the bump 109 having conductivity such as Cu.
  • incident light is mainly incident in the Z-axis plus direction indicated by a white arrow.
  • the surface on which incident light is incident is referred to as the back surface.
  • The left direction in the drawing, orthogonal to the Z axis, is taken as the plus direction of the X axis, and the near direction in the drawing, orthogonal to the Z and X axes, is taken as the plus direction of the Y axis.
  • In some of the following drawings, coordinate axes are displayed so that the orientation of each figure can be understood with reference to the coordinate axes of FIG. 1.
  • the imaging chip 113 is a backside illuminated MOS (Metal Oxide Semiconductor) image sensor.
  • the PD (photodiode) layer 106 is disposed on the back side of the wiring layer 108.
  • The PD layer 106 includes a plurality of two-dimensionally arranged PDs 104, which store charge corresponding to incident light, and transistors 105 provided corresponding to the PDs 104.
  • a color filter 102 is provided on the incident side of incident light in the PD layer 106 via a passivation film 103.
  • the color filter 102 has a plurality of types that transmit different wavelength regions, and has a specific arrangement corresponding to each of the PDs 104. The arrangement of the color filters 102 will be described later.
  • the combination of the color filter 102, the PD 104, and the transistor 105 forms one pixel.
  • a microlens 101 is provided on the color filter 102 on the incident side of the incident light corresponding to each pixel.
  • the microlenses 101 condense incident light toward the corresponding PDs 104.
  • the wiring layer 108 has a wiring 107 for transmitting the pixel signal from the PD layer 106 to the signal processing chip 111.
  • the wiring 107 may be a multilayer, and passive elements and active elements may be provided.
  • a plurality of bumps 109 are disposed on the surface of the wiring layer 108.
  • The bumps 109 are aligned with the bumps 109 provided on the facing surface of the signal processing chip 111, and when the imaging chip 113 and the signal processing chip 111 are pressed together or the like, the aligned bumps 109 are joined and electrically connected.
  • a plurality of bumps 109 are disposed on the surfaces facing each other of the signal processing chip 111 and the memory chip 112. These bumps 109 are aligned with each other, and the signal processing chip 111 and the memory chip 112 are pressurized or the like, whereby the aligned bumps 109 are joined and electrically connected.
  • the bonding between the bumps 109 is not limited to Cu bump bonding by solid phase diffusion, and micro bump bonding by solder melting may be employed. Also, for example, about one bump 109 may be provided for one block described later. Therefore, the size of the bumps 109 may be larger than the pitch of the PDs 104. Further, in the peripheral area other than the pixel area in which the pixels are arranged, bumps larger than the bumps 109 corresponding to the pixel area may be provided.
  • The signal processing chip 111 has TSVs (through-silicon vias) 110 that mutually connect circuits provided on its front and back surfaces.
  • the TSVs 110 are preferably provided in the peripheral area.
  • the TSV 110 may also be provided in the peripheral area of the imaging chip 113 and the memory chip 112.
  • FIG. 2 is a diagram for explaining the pixel arrangement of the imaging chip 113; in particular, it shows the imaging chip 113 observed from the back surface side.
  • Part (a) is a plan view schematically showing the imaging surface 200, which is the back surface of the imaging chip 113, and part (b) is an enlarged plan view of a partial area 200a of the imaging surface 200.
  • Each of the pixels 201 has a color filter (not shown).
  • The color filters are of three types, red (R), green (G), and blue (B), and the notations "R", "G", and "B" in (b) represent the type of color filter each pixel 201 has. As shown in (b), pixels 201 provided with such color filters are arranged on the imaging surface 200 of the imaging element 100 according to a so-called Bayer arrangement.
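  • As an illustration only (the patent does not specify the phase of the array), the color filter over a pixel in a Bayer arrangement can be derived from the parity of its row and column indices. The following Python sketch assumes a phase with green in the top-left corner; the function name and phase are assumptions, not taken from the patent.

```python
def bayer_color(row: int, col: int) -> str:
    """Return the assumed color filter ('R', 'G', or 'B') of the pixel 201
    at (row, col) in a Bayer arrangement with an assumed G-R / B-G phase."""
    if row % 2 == 0:
        return "G" if col % 2 == 0 else "R"
    return "B" if col % 2 == 0 else "G"

# Top-left 2 x 2 repeating unit of the assumed phase:
#   G R
#   B G
assert [bayer_color(0, 0), bayer_color(0, 1)] == ["G", "R"]
assert [bayer_color(1, 0), bayer_color(1, 1)] == ["B", "G"]
```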
  • the pixel 201 having a red filter photoelectrically converts light in the red wavelength band of incident light and outputs a light reception signal (photoelectric conversion signal).
  • the pixel 201 having a green filter photoelectrically converts light in the green wavelength band among incident light and outputs a light reception signal.
  • the pixel 201 having a blue filter photoelectrically converts light in the blue wavelength band among incident light and outputs a light reception signal.
  • The imaging element 100 is configured to be individually controllable for each unit group 202 of four adjacent pixels 201 (2 pixels × 2 pixels). For example, when charge accumulation is started simultaneously for two mutually different unit groups 202, charge readout, that is, reading of the light reception signals, may be performed 1/30 second after the start of charge accumulation in one unit group 202 and 1/15 second after the start of charge accumulation in the other unit group 202. In other words, the imaging element 100 can set a different exposure time (charge accumulation time, so-called shutter speed) for each unit group 202 in one imaging operation.
  • the imaging device 100 can make the amplification factor (so-called ISO sensitivity) of an imaging signal different for each unit group 202 besides the above-described exposure time.
  • the imaging device 100 can change the timing to start the charge accumulation and the timing to read out the light reception signal for each unit group 202. That is, the imaging element 100 can change the frame rate at the time of moving image capturing for each unit group 202.
  • the imaging device 100 is configured to be able to make imaging conditions such as exposure time, amplification factor, and frame rate different for each unit group 202.
  • a reading line (not shown) for reading an imaging signal from a photoelectric conversion unit (not shown) of the pixel 201 is provided for each unit group 202, and the imaging signal can be read independently for each unit group 202.
  • the exposure time (shutter speed) can be made different for each unit group 202.
  • an amplification circuit (not shown) for amplifying an imaging signal generated by the photoelectrically converted charge is provided independently for each unit group 202, and the amplification factor by the amplification circuit can be controlled independently for each amplification circuit.
  • the amplification factor (ISO sensitivity) of the signal can be made different for each unit group 202.
  • The imaging conditions that can be varied for each unit group 202 include the frame rate, the gain, the resolution (thinning rate), the number of added rows or added columns over which pixel signals are summed, the charge accumulation time or number of accumulations, the number of bits for digitization, and the like.
  • the control parameter may be a parameter in image processing after acquisition of an image signal from a pixel.
  • Furthermore, if the imaging element 100 is provided with a liquid crystal panel having sections that can be controlled independently for each unit group 202 (one section corresponding to one unit group 202) and usable as a light reduction filter that can be turned on and off, the brightness (aperture value) can be controlled for each unit group 202.
  • the number of pixels 201 constituting the unit group 202 may not be the 2 ⁇ 2 four pixels described above.
  • the unit group 202 may have at least one pixel 201, and conversely, may have more than four pixels 201.
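  • As a minimal sketch (not from the patent) of how such per-unit-group imaging conditions could be held as data, assuming an 8 × 8 grid of unit groups; all names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class ImagingCondition:
    exposure_time_s: float  # charge accumulation time (so-called shutter speed)
    iso_gain: float         # amplification factor (so-called ISO sensitivity)
    frame_rate_fps: int     # readout rate of this unit group

# One independently controllable condition per unit group 202,
# keyed by the group's (x, y) position on the imaging surface.
conditions = {
    (x, y): ImagingCondition(exposure_time_s=1 / 60, iso_gain=1.0, frame_rate_fps=30)
    for x in range(8)
    for y in range(8)
}

# Example: one unit group gets a faster shutter and a higher frame rate.
conditions[(3, 5)] = ImagingCondition(exposure_time_s=1 / 250, iso_gain=4.0,
                                      frame_rate_fps=60)
```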
  • FIG. 3 is a circuit diagram of the imaging chip 113. In FIG. 3, a rectangle surrounded by a dotted line representatively represents the circuit corresponding to one pixel 201, and a rectangle surrounded by an alternate long and short dash line corresponds to one unit group 202 (202-1 to 202-4). Note that at least some of the transistors described below correspond to the transistors 105 in FIG. 1.
  • the reset transistor 303 of the pixel 201 is turned on / off in unit group 202 units.
  • the transfer transistor 302 of the pixel 201 is also turned on / off in unit group 202 units.
  • A reset wiring 300-1 for turning on/off the four reset transistors 303 corresponding to the upper-left unit group 202-1 is provided, and a TX wiring 307-1 for supplying transfer pulses to the four transfer transistors 302 corresponding to the unit group 202-1 is also provided.
  • a reset wiring 300-3 for turning on / off the four reset transistors 303 corresponding to the lower left unit group 202-3 is provided separately from the reset wiring 300-1.
  • a TX wiring 307-3 for supplying transfer pulses to the four transfer transistors 302 corresponding to the unit group 202-3 is provided separately from the TX wiring 307-1.
  • Similarly, a reset wiring 300-2 and a TX wiring 307-2, and a reset wiring 300-4 and a TX wiring 307-4, are provided for the respective remaining unit groups 202.
  • the 16 PDs 104 corresponding to each pixel 201 are connected to the corresponding transfer transistors 302, respectively.
  • a transfer pulse is supplied to the gate of each transfer transistor 302 via the TX wiring of each unit group 202.
  • The drain of each transfer transistor 302 is connected to the source of the corresponding reset transistor 303, and the so-called floating diffusion FD between the drain of the transfer transistor 302 and the source of the reset transistor 303 is connected to the gate of the corresponding amplification transistor 304.
  • the drains of the reset transistors 303 are commonly connected to a Vdd wiring 310 to which a power supply voltage is supplied.
  • a reset pulse is supplied to the gate of each reset transistor 303 via the reset wiring of each unit group 202.
  • the drains of the respective amplification transistors 304 are commonly connected to a Vdd wiring 310 to which a power supply voltage is supplied.
  • the source of each amplification transistor 304 is connected to the drain of the corresponding selection transistor 305.
  • the gate of each selection transistor 305 is connected to a decoder wiring 308 to which a selection pulse is supplied.
  • the decoder wiring 308 is provided independently for each of the 16 selection transistors 305.
  • the source of each selection transistor 305 is connected to the common output wiring 309.
  • The load current source 311 supplies a current to the output wiring 309; that is, the output wiring 309 is driven via the selection transistors 305 by a source follower.
  • the load current source 311 may be provided on the imaging chip 113 side or may be provided on the signal processing chip 111 side.
  • Each PD 104 converts received incident light into charge and accumulates it. Thereafter, when the transfer pulse is applied in a state where the reset pulse is not applied, the accumulated charge is transferred to the floating diffusion FD, and the potential of the floating diffusion FD changes from the reset potential to the signal potential after charge accumulation.
  • Within each unit group 202, the reset wiring and the TX wiring are common. That is, the reset pulse and the transfer pulse are each applied simultaneously to the four pixels of the unit group 202. Therefore, all the pixels 201 forming a certain unit group 202 start charge accumulation at the same timing and end charge accumulation at the same timing. However, the pixel signals corresponding to the accumulated charges are selectively output to the output wiring 309 by sequentially applying selection pulses to the respective selection transistors 305.
  • the charge accumulation start timing can be controlled for each unit group 202.
  • different unit groups 202 can be imaged at different timings.
  • FIG. 4 is a block diagram showing a functional configuration example of the imaging device 100.
  • the analog multiplexer 411 selects 16 PDs 104 forming the unit group 202 in order, and outputs the respective pixel signals to the output wiring 309 provided corresponding to the unit group 202.
  • the multiplexer 411 is formed on the imaging chip 113 together with the PD 104.
  • The pixel signals output via the multiplexer 411 are subjected to CDS and A/D conversion by a signal processing circuit 412, formed in the signal processing chip 111, that performs correlated double sampling (CDS) and analog/digital (A/D) conversion.
  • the A / D converted pixel signals are delivered to the demultiplexer 413 and stored in the pixel memory 414 corresponding to each pixel.
  • the demultiplexer 413 and the pixel memory 414 are formed in the memory chip 112.
  • the arithmetic circuit 415 processes the pixel signal stored in the pixel memory 414 and delivers it to the image processing unit in the subsequent stage.
  • the arithmetic circuit 415 may be provided in the signal processing chip 111 or in the memory chip 112.
  • Although FIG. 4 shows the connections for four unit groups 202, in reality these circuits exist for every four unit groups 202 and operate in parallel.
  • However, the arithmetic circuit 415 need not be present for every four unit groups 202; for example, a single arithmetic circuit 415 may perform processing while sequentially referring to the values of the pixel memories 414 corresponding to the four unit groups 202.
  • As described above, the output wirings 309 are provided corresponding to the respective unit groups 202. Since the imaging element 100 is formed by stacking the imaging chip 113, the signal processing chip 111, and the memory chip 112, using the inter-chip electrical connections by the bumps 109 for the output wirings 309 allows the wirings to be routed without enlarging each chip in the surface direction.
  • FIG. 5 is an explanatory view showing an example of the block configuration of the electronic device.
  • the electronic device 500 is, for example, a lens-integrated camera.
  • the electronic device 500 includes an imaging optical system 501, an imaging element 100, a control unit 502, a liquid crystal monitor 503, a memory card 504, an operation unit 505, a DRAM 506, a flash memory 507, and a recording unit 508.
  • the control unit 502 includes a compression unit that compresses moving image data as described later. Therefore, the configuration including at least the control unit 502 in the electronic device 500 is a moving image compression apparatus.
  • the imaging optical system 501 is composed of a plurality of lenses, and forms an object image on the imaging surface 200 of the imaging element 100.
  • the imaging optical system 501 is illustrated as a single lens for the sake of convenience.
  • the imaging device 100 is, for example, an imaging device such as a complementary metal oxide semiconductor (CMOS) or a charge coupled device (CCD), captures an object image formed by the imaging optical system 501, and outputs an imaging signal.
  • the control unit 502 is an electronic circuit that controls each unit of the electronic device 500, and includes a processor and its peripheral circuits.
  • a predetermined control program is written in advance in the flash memory 507, which is a non-volatile storage medium.
  • the control unit 502 controls each unit by reading a control program from the flash memory 507 and executing it.
  • This control program uses a DRAM 506 which is a volatile storage medium as a work area.
  • the liquid crystal monitor 503 is a display device using a liquid crystal panel.
  • the control unit 502 causes the imaging device 100 to repeatedly capture a subject image at predetermined intervals (for example, 1/60 second). Then, the image pickup signal output from the image pickup element 100 is subjected to various image processing to create a so-called through image, which is displayed on the liquid crystal monitor 503. In addition to the above-described through image, a setting screen for setting an imaging condition is displayed on the liquid crystal monitor 503, for example.
  • the control unit 502 creates an image file to be described later based on the imaging signal output from the imaging element 100, and records the image file on a memory card 504, which is a portable recording medium.
  • the operation unit 505 includes various operation members such as a push button, and outputs an operation signal to the control unit 502 in response to the operation of the operation members.
  • the recording unit 508 is, for example, a microphone, converts environmental sound into an audio signal, and inputs the audio signal to the control unit 502.
  • the control unit 502 may record the moving image file in a recording medium (not shown) built in the electronic device 500 such as a hard disk instead of recording the moving image file in the memory card 504 which is a portable recording medium.
  • FIG. 6 is an explanatory view showing a configuration example of a moving image file.
  • the moving image file 600 is generated during compression processing in a compression unit 902 described later in the control unit 502, and is stored in the memory card 504, the DRAM 506, or the flash memory 507.
  • the moving image file 600 is composed of two blocks of a header portion 601 and a data portion 602.
  • the header unit 601 is a block located at the beginning of the moving image file 600.
  • In the header portion 601, a file basic information area 611, a mask area 612, and an imaging information area 613 are stored in this order.
  • In the file basic information area 611, the size and offset of each part of the moving image file 600 (the header portion 601, the data portion 602, the mask area 612, the imaging information area 613, and the like) are recorded.
  • In the mask area 612, imaging condition information, mask information, and the like, described later, are recorded.
  • In the imaging information area 613, information related to imaging, such as the model name of the electronic device 500 and information on the imaging optical system 501 (for example, information on optical characteristics such as aberration), is recorded.
  • the data unit 602 is a block located behind the header unit 601, and stores image information, audio information, and the like.
  • FIG. 7 is an explanatory view showing the relationship between the imaging plane and the subject image.
  • (A) schematically shows an imaging surface 200 (imaging range) of the imaging element 100 and a subject image 701.
  • the control unit 502 captures a subject image 701.
  • the imaging of (a) may also be performed, for example, for creating a live view image (so-called through image).
  • the control unit 502 executes predetermined image analysis processing on the subject image 701 obtained by the imaging in (a).
  • the image analysis process is a process of detecting the main subject area and the background area by, for example, a known subject detection technology (a technology for calculating a feature amount and detecting a range in which a predetermined subject is present).
  • the imaging surface 200 is divided into a main subject region 702 in which a main subject is present and a background region 703 in which a background is present.
  • the main subject area 702 may have a shape along the outer shape of the subject image 701. That is, the main subject region 702 may be set so as to include as little as possible other than the subject image 701.
  • the control unit 502 sets different imaging conditions for each unit group 202 in the main subject region 702 and each unit group 202 in the background region 703. For example, in the former unit group 202, a shutter speed faster than that of the latter unit group 202 is set. In this way, in the imaging of (c) taken after the imaging of (a), image blurring is less likely to occur in the main subject region 702.
  • Further, for example, the control unit 502 sets a relatively high ISO sensitivity or a slow shutter speed for each of the former unit groups 202, and sets a relatively low ISO sensitivity or a fast shutter speed for each of the latter unit groups 202. In this way, in the imaging of (c), it is possible to prevent blackout of the main subject region 702 in a backlit state and overexposure of the background region 703 receiving a large amount of light.
  • the image analysis process may be different from the process of detecting the main subject area 702 and the background area 703 described above. For example, processing may be performed to detect a portion where the brightness is equal to or more than a predetermined level (a portion that is too bright) or a portion where the brightness is less than a predetermined level (a too dark portion).
  • In that case, the control unit 502 sets the shutter speed and the ISO sensitivity so that the exposure value (Ev value) of the unit groups 202 included in the former region is lower than that of the unit groups 202 included in the other regions, and so that the exposure value (Ev value) of the unit groups 202 included in the latter region is higher than that of the unit groups 202 included in the other regions. By doing this, the dynamic range of the image obtained by the imaging in (b) can be expanded beyond the original dynamic range of the imaging element 100.
  • Part (b) of FIG. 7 shows an example of the mask information 704 corresponding to the imaging surface 200 shown in (a). "1" is stored at the positions of the unit groups 202 belonging to the main subject area 702, and "2" is stored at the positions of the unit groups 202 belonging to the background area 703.
  • the control unit 502 executes an image analysis process on the image data of the first frame to detect the main subject area 702 and the background area 703.
  • the frame obtained by the imaging in (a) is divided into the main subject area 702 and the background area 703 as shown in (c).
  • The control unit 502 sets different imaging conditions for each unit group 202 in the main subject area 702 and each unit group 202 in the background area 703, performs the imaging of (c), and creates image data.
  • An example of the mask information 704 at this time is shown in (d).
  • The mask information 704 of (b), corresponding to the imaging result of (a), and the mask information 704 of (d), corresponding to the later imaging result, are based on imaging performed at different times (with a time difference). Therefore, for example, when the subject is moving or the user moves the electronic device 500, the two pieces of mask information 704 have different contents.
  • the mask information 704 is dynamic information that changes as time passes. Therefore, in a certain unit group 202, different imaging conditions are set for each frame.
  • FIG. 8 is an explanatory view showing a specific configuration example of the moving image file 600. In the mask area 612, identification information 801, imaging condition information 802, and mask information 704 are recorded in this order.
  • the identification information 801 indicates that the moving image file 600 is created by the multi-frame rate moving image capturing function.
  • the multi-frame rate moving image capturing function is a function of capturing a moving image with the imaging device 100 in which a plurality of frame rates are set.
  • The imaging condition information 802 is information indicating what uses (purposes, roles) the unit groups 202 have. For example, as described above, when the imaging surface 200 (FIG. 7A) is divided into the main subject area 702 and the background area 703, each unit group 202 belongs to either the main subject area 702 or the background area 703.
  • That is, the imaging condition information 802 is information representing, for example, that the unit groups 202 have two types of uses, "moving image capture of the main subject area at 60 fps" and "moving image capture of the background area at 30 fps", together with a unique number assigned to each of these uses. For example, the number 1 is assigned to "moving image capture of the main subject area at 60 fps", and the number 2 is assigned to "moving image capture of the background area at 30 fps".
  • the mask information 704 is information representing the use (purpose, role) of each unit group 202.
  • Specifically, the mask information 704 is information in which the numbers assigned in the imaging condition information 802 are expressed in the form of a two-dimensional map in accordance with the positions of the unit groups 202. That is, when the position of a unit group 202 in the two-dimensional array is specified by coordinates (x, y) with two integers x and y, the use of the unit group 202 at position (x, y) is expressed by the number existing at position (x, y) of the mask information 704.
  • For example, if the number 1 is stored at position (3, 5) of the mask information 704, it can be seen that the use of the unit group 202 located at coordinates (3, 5) is "moving image capture of the main subject area at 60 fps"; in other words, the unit group 202 located at coordinates (3, 5) belongs to the main subject region 702.
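  • A small sketch of this lookup, assuming an 8 × 8 grid of unit groups and the use numbers of the example above (1 = main subject at 60 fps, 2 = background at 30 fps); the variable names are illustrative:

```python
import numpy as np

USE_MAIN_SUBJECT_60FPS = 1  # numbers assigned in the imaging condition information 802
USE_BACKGROUND_30FPS = 2

# Mask information 704 as a two-dimensional map over unit-group positions.
mask_info = np.full((8, 8), USE_BACKGROUND_30FPS, dtype=np.uint8)
mask_info[4:7, 2:6] = USE_MAIN_SUBJECT_60FPS  # groups covering the main subject

# The use of the unit group at coordinates (x, y) = (3, 5) is the number
# stored at that position of the map (row index y, column index x).
x, y = 3, 5
print(mask_info[y, x])  # -> 1: the group belongs to the main subject region 702
```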
  • Since the mask information 704 is dynamic information that changes for each frame, it is recorded for each frame during compression processing, that is, for each data block Bi described later (not shown).
  • Data blocks B1 to Bn are stored as moving image data in the order of imaging for each frame F (F1 to Fn).
  • A data block Bi (where i is an integer satisfying 1 ≤ i ≤ n) includes mask information 704, image information 811, a Tv value map 812, an Sv value map 813, a Bv value map 814, Av value information 815, audio information 816, and additional information 817.
  • the image information 811 is information obtained by recording an image pickup signal output from the image pickup element 100 by the image pickup of FIG. 7C in a form before performing various image processing, and is so-called RAW image data.
  • the Tv value map 812 is information in which a Tv value representing a shutter speed set for each unit group 202 is represented in the form of a two-dimensional map in accordance with the position of the unit group 202.
  • the shutter speed set to the unit group 202 located at the coordinates (x, y) can be determined by examining the Tv value stored at the coordinates (x, y) of the Tv value map 812.
  • the Sv value map 813 is information in which the Sv value representing the ISO sensitivity set for each unit group 202 is expressed in the form of a two-dimensional map, similarly to the Tv value map 812.
  • The Bv value map 814 is information in which the Bv value, which represents the subject luminance measured for each unit group 202 at the time of the imaging in FIG. 7C, that is, the luminance of the subject light incident on each unit group 202, is expressed in the form of a two-dimensional map, similarly to the Tv value map 812.
  • the Av value information 815 is information representing the aperture value at the time of imaging in (c) of FIG. 7. Unlike the Tv value, the Sv value, and the Bv value, the Av value is not a value that exists for each unit group 202. Therefore, unlike the Tv value, the Sv value, and the Bv value, only a single value of the Av value is stored, and the information is not information obtained by mapping a plurality of values in a two-dimensional manner.
  • The audio information 816 is divided into units of one frame and multiplexed with the data blocks Bi, and is stored in the data portion 602 so as to facilitate moving image reproduction.
  • The audio information 816 may be multiplexed not per single frame but per predetermined number of frames. Note that the audio information 816 does not necessarily have to be included.
  • the additional information 817 is information in which a frame rate set for each unit group 202 at the time of imaging in (c) of FIG. 7 is expressed in the form of a two-dimensional map.
  • the setting of the additional information 817 will be described later with reference to FIGS. 14 and 15.
  • The additional information 817 may be held in the frame F, or it may be held in a cache memory of the processor 1201 described later. In particular, when performing compression processing in real time, using a cache memory is preferable from the viewpoint of high-speed processing.
  • By performing imaging with the multi-frame-rate moving image capturing function as described above, the control unit 502 records in the memory card 504 a moving image file 600 in which the image information 811 generated by the imaging element 100, whose imaging conditions can be set for each unit group 202, is associated with data relating to those imaging conditions (the imaging condition information 802, mask information 704, Tv value map 812, Sv value map 813, Bv value map 814, and the like).
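  • The per-frame data block Bi described above could be modeled as follows. This is a sketch of the described layout only, not the actual file format; the field names and types are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

import numpy as np

@dataclass
class DataBlock:
    """One data block Bi per frame Fi, holding the pieces listed above."""
    mask_info: np.ndarray        # use number per unit group (2-D map)
    image_info: np.ndarray       # RAW image data output by the imaging element
    tv_map: np.ndarray           # Tv value (shutter speed) per unit group (2-D map)
    sv_map: np.ndarray           # Sv value (ISO sensitivity) per unit group (2-D map)
    bv_map: np.ndarray           # Bv value (subject luminance) per unit group (2-D map)
    av_value: float              # single aperture value for the whole frame
    audio: Optional[bytes]       # audio information for one frame; may be omitted
    additional_info: np.ndarray  # frame rate per unit group (2-D map)
```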
  • FIG. 9 is an explanatory diagram of an example of moving image compression according to the first embodiment.
  • the electronic device 500 includes the imaging device 100 described above and a control unit 502.
  • the control unit 502 includes an image processing unit 901 and a compression unit 902.
  • the imaging element 100 has a plurality of imaging areas for imaging a subject.
  • An imaging area is a set of one or more pixels, for example, one or more of the unit groups 202 described above.
  • a frame rate can be set for each unit group 202 in the imaging region.
  • It is assumed that the first frame rate (for example, 30 [fps]) is set in the first imaging area among the imaging areas, and that the second frame rate (for example, 60 [fps]), which is faster than the first frame rate, is set in the second imaging area other than the first imaging area.
  • the values of the first frame rate and the second frame rate are an example, and any other value may be used as long as the second frame rate is faster than the first frame rate.
  • the imaging element 100 captures an image of a subject, and outputs an image signal (in FIG. 9, for convenience, referred to as first moving image data 910 including a plurality of frames) to the image processing unit 901.
  • An area of image data captured in an imaging area of the imaging element 100 in a frame is referred to as an image area.
  • The area of the frame in which the landscape as a whole is imaged at the first frame rate (30 [fps]) set in the first imaging area becomes the first image area a1.
  • a frame of only the second image area a2 imaged at the second frame rate (60 [fps]) set in the second imaging area is referred to as a second frame.
  • For example, when an imaging area is a single unit group 202, the size of the corresponding image area is the size of one unit group 202; when an imaging area is 2 × 2 unit groups 202, the size of the corresponding image area is likewise the size of 2 × 2 unit groups 202.
  • A frame including the first image area a1 is referred to as a first frame, and a frame including only the specific subject image in the second image area a2 is referred to as a second frame.
  • There may be three or more imaging areas; in that case, frame rates different from the first and second frame rates can be set for the third and subsequent imaging areas.
  • The image processing unit 901 performs image processing on the moving image data (hereinafter referred to as first moving image data) 910 input from the imaging element 100. Specifically, for example, the image processing unit 901 refers to the first frame temporally preceding a second frame, and copies, that is, combines, that first frame into the referring second frame.
  • the combined frame is referred to as the third frame.
  • the third frame is a frame in which the specific subject image in the second frame is superimposed on the subject image in the first frame.
  • the image processing unit 901 outputs, to the compression unit 902, moving image data (hereinafter, second moving image data) 920 including the first frame imaged at 30 fps and the third frame which is a combined frame.
  • A general-purpose compression unit 902 cannot compress the first moving image data 910 as it is. Therefore, when first frames and second frames are mixed in the frame sequence, the image processing unit 901 generates the second moving image data 920 on which the compression unit 902 can operate. As a result, the compression unit 902 can compress the second moving image data 920 in the same manner as normal moving image data. A sketch of the combination step follows.
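  • A minimal sketch of this combination step, assuming frames are numpy arrays and that a boolean mask marks the second image area a2 (both assumptions for illustration):

```python
import numpy as np

def synthesize_third_frame(prev_first_frame: np.ndarray,
                           second_frame: np.ndarray,
                           a2_mask: np.ndarray) -> np.ndarray:
    """Complete a 60-fps frame that holds valid data only in the second image
    area a2 by copying in the temporally preceding 30-fps first frame."""
    third = prev_first_frame.copy()         # a1 data taken from the first frame
    third[a2_mask] = second_frame[a2_mask]  # overwrite a2 with the fast-rate data
    return third
```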
  • the compression unit 902 compresses the second moving image data 920 input from the image processing unit 901.
  • The compression unit 902 compresses the data by, for example, hybrid coding that combines motion-compensated inter-frame prediction (MC), the discrete cosine transform (DCT), and entropy coding.
  • The compression unit 902 executes compression processing that does not require motion detection or motion compensation for the first image area a1 (indicated by hatching) of the first and third frames constituting the second moving image data 920, and compresses the second image area a2 of the specific subject image (shown blackened) by the hybrid coding described above.
  • Since motion detection and motion compensation are not performed for the first image area a1, which does not contain the specific subject image, the processing load of moving image compression can be reduced. A per-block sketch of this mode decision follows.
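  • A sketch of this per-region compression control as a hypothetical per-macroblock mode decision (the mode names are invented for illustration):

```python
import numpy as np

def choose_block_modes(a2_block_mask: np.ndarray) -> np.ndarray:
    """For each macroblock, pick full motion-compensated hybrid coding when it
    overlaps the second image area a2, and a skip/zero-motion mode (no motion
    detection or compensation) for blocks lying entirely in the first image
    area a1."""
    return np.where(a2_block_mask, "mc_dct_entropy", "skip")
```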
  • FIG. 10 is an explanatory view of an image processing example 1 in the moving image compression shown in FIG.
  • the electronic device 500 shoots a running train as a specific subject during fixed-point shooting of a landscape including rice fields, mountains, and the sky.
  • The train, which is the specific subject, is identified by the known subject detection technology described above.
  • the captured frames are assumed to be frames F1, F2-60, F3, F4-60, and F5 in chronological order. Here, it is assumed that the train travels from right to left in the frames F1, F2-60, F3, F4-60, and F5.
  • The frames F1, F3, and F5 are first frames that include the image data of the first image area a1, captured at the first frame rate of 30 fps in the first imaging area, and the image data of the second image area a2, captured at the second frame rate of 60 fps in the second imaging area.
  • Frames F2-60 and F4-60 are second frames including only the image data of the second image area a2, in which the second imaging area is imaged at the second frame rate of 60 [fps].
  • Frames F2-60 and F4-60 are frames in which the train is captured in the second image area a2. That is, in the frames F1, F2-60, F3, F4-60, and F5, the image data of the second image area a2, in which the train is captured, is image data captured in the second imaging area (60 fps). In the frames F1, F3, and F5, the image data of the first image area a1, in which the landscape is captured, is image data captured in the first imaging area (30 [fps]). Since the first imaging area is imaged at the first frame rate, nothing is imaged in the first image area a1 of the frames F2-60 and F4-60, which are imaged at the second frame rate.
  • The image processing unit 901 copies the image data of the first image area a1 of the frame F1 (the landscape excluding the train), which immediately precedes the frame F2-60, into the first image area a1 of the frame F2-60, which holds only the image data (the train) of the second image area a2. The image processing unit 901 thereby generates a frame F2, which is a third frame.
  • Similarly, the image processing unit 901 copies the image data of the first image area a1 of the frame F3 (the landscape excluding the train), which immediately precedes the frame F4-60, into the frame F4-60, which holds only the image data (the train) of the second image area a2. The image processing unit 901 thereby generates a frame F4, which is a third frame.
  • the image processing unit 901 outputs the second moving image data 920 including the frames F1 to F5.
  • In this way, the image data of the first image area a1 of the frames F2-60 and F4-60 is interpolated from the immediately preceding frames F1 and F3 of the first frame rate.
  • Consequently, in the first image area a1, the difference between the frames F1 and F2 can be made approximately zero, and the difference between the frames F3 and F4 can be made approximately zero. Therefore, a frame sequence in which first frames and second frames are mixed can be compressed by the conventional compression unit 902. In addition, the processing load of the compression processing can be reduced. A sketch of this per-sequence interpolation follows.
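  • Putting the per-frame copy together over the whole sequence F1, F2-60, F3, F4-60, F5 might look like the following sketch. A Frame type is assumed here for illustration; a2_mask is None for first frames, which contain both areas, and the sequence is assumed to start with a first frame.

```python
from dataclasses import dataclass
from typing import List, Optional

import numpy as np

@dataclass
class Frame:
    data: np.ndarray               # pixel data of the frame
    a2_mask: Optional[np.ndarray]  # None for a first frame (a1 and a2 both valid)

def build_second_moving_image_data(frames: List[Frame]) -> List[np.ndarray]:
    """Interpolate the empty first image area a1 of each second frame from the
    most recent first frame, yielding the frames of the second moving image
    data 920 (e.g. F1, F2, F3, F4, F5)."""
    out: List[np.ndarray] = []
    last_first: Optional[np.ndarray] = None
    for f in frames:
        if f.a2_mask is None:      # first frame (F1, F3, F5): pass through
            last_first = f.data
            out.append(f.data)
        else:                      # second frame (F2-60, F4-60): complete it
            third = last_first.copy()
            third[f.a2_mask] = f.data[f.a2_mask]
            out.append(third)      # third frame (F2, F4)
    return out
```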
  • In the frame F2, the image data of the first image area a1 of the frame F1 (the landscape excluding the train) has been copied. The portion of the frame F1 occupied by the second image area a2 (the end of the train) is therefore not copied to the frame F2. For this reason, the frame F2 has a range Da1 in which nothing is output.
  • Similarly, the frame F4 has a range Da3 in which nothing is output.
  • The image processing unit 901 may fill the ranges Da1 and Da3 with a specific color (for example, white, black, or gray), or may interpolate them using peripheral pixels (demosaicing). This makes it possible to generate frames F2, F4, ... that can be compressed as a moving image and cause little visual discomfort; a sketch follows.
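  • A sketch of the simplest of these options, assuming the uncovered range is available as a boolean mask (names are illustrative):

```python
import numpy as np

def fill_uncovered_range(frame: np.ndarray, uncovered: np.ndarray,
                         gray_level: int = 128) -> None:
    """Fill a range such as Da1, where neither the copied a1 data nor the a2
    data provides pixels, with a flat specific color; interpolation from
    peripheral pixels (demosaicing) would be the alternative mentioned above."""
    frame[uncovered] = gray_level
```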
  • FIG. 11 is an explanatory diagram of an image processing example 2 in the moving image compression illustrated in FIG.
  • the electronic device 500 is, for example, a drive recorder, and captures a car traveling ahead (preceding car) and a landscape.
  • The preceding vehicle is the specific subject to be tracked, and the landscape changes as the vehicle itself travels.
  • the photographed frames are assumed to be frames F6, F7-60, F8, F9-60, and F10 in chronological order.
  • The frames F6, F8, and F10 are first frames that include the image data of the first image area a1, captured at the first frame rate of 30 fps in the first imaging area, and the image data of the second image area a2, captured at the second frame rate of 60 fps in the second imaging area.
  • The frames F7-60 and F9-60 are second frames including only the image data of the second image area a2, in which the second imaging area is imaged at the second frame rate of 60 [fps].
  • The frames F6, F8, and F10 are first frames in which the preceding vehicle is captured in the first image area a1 and the changing landscape is captured in the second image area a2.
  • Frames F7-60 and F9-60 are frames in which the landscape is captured in the second image area a2. That is, in the frames F6, F7-60, F8, F9-60, and F10, the image data of the second image area a2, in which the landscape is captured, is image data captured in the second imaging area (60 fps).
  • In the frames F6, F8, and F10, the image data of the first image area a1, in which the preceding vehicle is captured, is image data captured in the first imaging area (30 [fps]). Since the first imaging area is imaged at the first frame rate, nothing is imaged in the first image area a1 of the frames F7-60 and F9-60, which are imaged at the second frame rate.
  • The image processing unit 901 copies the image data of the first image area a1 of the frame F6 (the preceding vehicle, excluding the landscape), which immediately precedes the frame F7-60, into the frame F7-60, which holds only the image data (the landscape) of the second image area a2. The image processing unit 901 thereby generates a frame F7, which is a third frame.
  • Similarly, the image processing unit 901 copies the image data of the first image area a1 of the frame F8 (the preceding vehicle, excluding the landscape), which immediately precedes the frame F9-60, into the frame F9-60, which holds only the image data (the landscape) of the second image area a2, thereby generating a frame F9, which is a third frame. The image processing unit 901 then outputs the second moving image data 920 including the frames F6 to F10.
  • In this way, the image data of the first image area a1 of the frames F7-60 and F9-60 is temporally interpolated from the immediately preceding frames F6 and F8 of the first frame rate.
  • Consequently, in the first image area a1, the difference between the frames F6 and F7 can be made zero, and the difference between the frames F8 and F9 can be made zero. Therefore, a frame sequence in which first frames and second frames are mixed can be compressed by the conventional compression unit 902. In addition, the processing load of the compression processing can be reduced.
  • the control unit 502 may execute compression processing of the second moving image data 920 in real time processing, or may execute it in batch processing.
  • For example, the control unit 502 may temporarily store the first moving image data 910 or the second moving image data 920 from the imaging element 100, the preprocessing unit 900, or the image processing unit 901 in the memory card 504, the DRAM 506, or the flash memory 507, and then read the data out automatically or when triggered by a user operation (in the case of the first moving image data 910, after the image processing unit 901 converts it into the second moving image data 920) and have the compression unit 902 execute the compression processing.
  • FIG. 12 is a block diagram showing a configuration example of the control unit 502 shown in FIG.
  • The control unit 502 includes a preprocessing unit 1210, an image processing unit 901, an acquisition unit 1220, and a compression unit 902, and is configured by a processor 1201, a memory 1202, an integrated circuit 1203, and a bus 1204 connecting these.
  • The preprocessing unit 1210, the image processing unit 901, the acquisition unit 1220, and the compression unit 902 may be realized by causing the processor 1201 to execute a program stored in the memory 1202, or may be realized by an integrated circuit 1203 such as an application specific integrated circuit (ASIC) or a field-programmable gate array (FPGA). Also, the processor 1201 may use the memory 1202 as a work area. The integrated circuit 1203 may use the memory 1202 as a buffer that temporarily holds various data including image data.
  • the preprocessing unit 1210 performs preprocessing of image processing by the image processing unit 901 on the first moving image data 910 from the imaging element 100.
  • the preprocessing unit 1210 includes a detection unit 1211 and a setting unit 1212.
  • the detection unit 1211 detects a specific subject by the known subject detection technology described above.
  • The setting unit 1212 adds the additional information 817 to each frame constituting the first moving image data 910 from the imaging element 100. Further, the setting unit 1212 changes the imaging area in which the specific subject is detected in the imaging surface 200 of the imaging element 100 from the first frame rate (for example, 30 [fps]) to the second frame rate (for example, 60 [fps]).
  • The setting unit 1212 detects the motion vector of the specific subject from the difference between the imaging area in which the specific subject is detected in the input frame and the imaging area in which the specific subject was detected in a previously input frame, and predicts the imaging area of the specific subject in the next input frame. The setting unit 1212 outputs, to the imaging element 100, an instruction to change the predicted imaging area to the second frame rate.
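  • The prediction by the setting unit 1212 can be pictured with the following sketch (Python; the bounding-box representation of an imaging area and the function name are illustrative assumptions):

```python
from typing import Tuple

Box = Tuple[int, int, int, int]  # (top, left, height, width) of an imaging area

def predict_next_area(prev_box: Box, curr_box: Box) -> Box:
    """Estimate the subject's motion vector from two successive detections
    and shift the current area by that vector to predict the imaging area
    of the specific subject in the next input frame."""
    dy = curr_box[0] - prev_box[0]  # vertical motion between detections
    dx = curr_box[1] - prev_box[1]  # horizontal motion between detections
    return (curr_box[0] + dy, curr_box[1] + dx, curr_box[2], curr_box[3])

# The predicted box would then be sent to the imaging element 100 as the
# area to be switched to the second frame rate.
```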
  • the image processing unit 901 performs image processing on each frame of the first moving image data 910 output from the preprocessing unit 1210. Specifically, for example, the image processing unit 901 includes a specifying unit 1213 and a combining unit 1214.
  • The specifying unit 1213 specifies, between the first frame (for example, frame F1 in FIG. 10) and the second frame (for example, frame F2-60 in FIG. 10), the difference area between the second image area a2 corresponding to the second imaging area in the first frame and the second image area a2 corresponding to the second imaging area in the second frame.
  • For example, the difference area between the frame F1 and the frame F2-60 is the area behind the train in the frame F2-60 that has become part of the first image area a1.
  • As shown in FIGS. 9 to 11, the synthesizing unit 1214 copies and synthesizes, into the second frame (for example, the frame F2-60 in FIG. 10) containing only the image data of the second image area a2, the image data of the first image area a1 of the temporally preceding first frame (for example, frame F1 in FIG. 10) to generate a third frame (for example, frame F2 in FIG. 10).
  • Further, the synthesizing unit 1214 copies, into the difference area (range Da1) specified by the specifying unit 1213, the image data (the end portion of the train) of the second image area a2 of the first frame at the same position as the difference area (see the dotted circle in frame F2-60 of FIG. 10). As a result, the difference between the temporally consecutive first and third frames can be made substantially zero.
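  • The operation of the specifying unit 1213 and the synthesizing unit 1214 can be sketched as follows (Python with NumPy; the boolean-mask representation of the second image area a2 is an assumption for illustration):

```python
import numpy as np

def fill_difference_area(first_frame: np.ndarray,
                         third_frame: np.ndarray,
                         a2_mask_first: np.ndarray,
                         a2_mask_second: np.ndarray) -> None:
    """The difference area (range Da1) is where a2 of the first frame had
    image data but a2 of the second frame no longer does (e.g., behind the
    end of the train). Copying the first frame's pixels at the same position
    makes temporally consecutive frames differ by almost zero there."""
    diff_area = a2_mask_first & ~a2_mask_second
    third_frame[diff_area] = first_frame[diff_area]
```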
  • The acquisition unit 1220 holds the second moving image data 920 output from the image processing unit 901 in the memory 1202, and outputs the plurality of frames included in the second moving image data 920 to the compression unit 902 one frame at a time in chronological order at a predetermined timing.
  • The compression unit 902 compresses the input second moving image data 920 as shown in FIG. Specifically, for example, the compression unit 902 executes, on the image data of the first image area a1 of the first frames and the third frames constituting the second moving image data 920, compression processing that does not require motion detection or motion compensation, and compresses the image data of the second image area a2 in which the specific subject is captured by the above-described hybrid coding. Since motion detection and motion compensation are not performed for the areas other than the specific subject image, the processing load of the moving image compression is reduced.
  • FIG. 13 is a block diagram showing a configuration example of the compression unit 902. As described above, the compression unit 902 compresses each frame of the second moving image data 920 by hybrid coding in which motion compensation interframe prediction (MC) and discrete cosine transform (DCT) are combined with entropy coding.
  • The compression unit 902 includes a subtraction unit 1301, a DCT unit 1302, a quantization unit 1303, an entropy coding unit 1304, a code amount control unit 1305, an inverse quantization unit 1306, an inverse DCT unit 1307, a generation unit 1308, a frame memory 1309, a motion detection unit 1310, a motion compensation unit 1311, and a compression control unit 1312.
  • The subtraction unit 1301 to the motion compensation unit 1311 have the same configuration as an existing compressor.
  • The subtraction unit 1301 subtracts, from the input frame, the prediction frame that the motion compensation unit 1311 predicted for the input frame, and outputs difference data.
  • the DCT unit 1302 performs discrete cosine transform on the difference data from the subtracting unit 1301.
  • the quantization unit 1303 quantizes the discrete cosine transformed difference data.
  • the entropy coding unit 1304 entropy codes the quantized difference data, and also entropy codes the motion vector from the motion detection unit 1310.
  • the code amount control unit 1305 controls the quantization by the quantization unit 1303.
  • the inverse quantization unit 1306 inversely quantizes the difference data quantized by the quantization unit 1303 to obtain discrete cosine transformed difference data.
  • the inverse DCT unit 1307 inverse discrete cosine transforms the dequantized difference data.
  • The generation unit 1308 adds the inverse discrete cosine transformed difference data and the prediction frame from the motion compensation unit 1311 to generate a reference frame to be referred to by a frame input temporally after the input frame.
  • the frame memory 1309 holds the reference frame obtained from the generation unit 1308.
  • the motion detection unit 1310 detects a motion vector using the input frame and the reference frame.
  • the motion compensation unit 1311 generates a predicted frame using the reference frame and the motion vector.
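  • Taken together, the units of FIG. 13 form the usual hybrid coding loop, which can be sketched as follows (Python; whole-frame operations stand in for per-macroblock processing, and the DCT, quantization, entropy coding, and motion routines are passed in as assumed callables rather than real library APIs):

```python
def encode_frame(input_frame, frame_memory,
                 dct, quantize, entropy_code, dequantize, idct,
                 detect_motion, compensate):
    """One simplified pass of the hybrid coding loop of FIG. 13."""
    reference = frame_memory[-1]                    # frame memory 1309
    mv = detect_motion(input_frame, reference)      # motion detection unit 1310
    predicted = compensate(reference, mv)           # motion compensation unit 1311
    residual = input_frame - predicted              # subtraction unit 1301
    coeffs = quantize(dct(residual))                # DCT unit 1302 / quantization unit 1303
    bitstream = entropy_code(coeffs, mv)            # entropy coding unit 1304
    # Local decoding: rebuild the reference frame exactly as a decoder would.
    rebuilt = idct(dequantize(coeffs)) + predicted  # inverse units 1306/1307 + generation unit 1308
    frame_memory.append(rebuilt)
    return bitstream
```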
  • The motion compensation unit 1311 performs motion compensation of a frame captured at the second frame rate by using, for example, a specific reference frame among the plurality of reference frames stored in the frame memory 1309 and the motion vector.
  • By limiting the reference to a specific reference frame, high-load motion compensation that uses reference frames other than the specific reference frame can be suppressed.
  • Further, by setting the specific reference frame to the one reference frame obtained from the frame temporally immediately preceding the input frame, heavy motion compensation processing is avoided and the processing load of motion compensation is reduced.
  • The compression control unit 1312 controls the motion detection unit 1310 and the motion compensation unit 1311. Specifically, for example, the compression control unit 1312 executes a first compression control method, in which the motion detection unit 1310 is made to set a specific motion vector indicating that there is no motion, or a second compression control method, in which the motion detection itself is skipped.
  • In the first compression control method, the compression control unit 1312 controls the motion detection unit 1310 so that, for the first image area a1 captured at the first frame rate (for example, 30 [fps]), no motion vector is detected but the specific motion vector indicating no motion is set and output to the motion compensation unit 1311, and so that, for the second image area a2 captured at the second frame rate (for example, 60 [fps]), a motion vector is detected and output to the motion compensation unit 1311.
  • the specific motion vector is a motion vector in which the direction is not defined and the amount of motion is zero.
  • The compression control unit 1312 also controls the motion compensation unit 1311 so that motion compensation is performed on the image data of the first image area a1 based on the specific motion vector and the reference frame, and so that motion compensation is performed on the image data of the second image area a2 based on the motion vector detected by the motion detection unit 1310.
  • In the second compression control method, the compression control unit 1312 controls the motion detection unit 1310 so that no motion vector detection is performed for the first image area a1 captured at the first frame rate (for example, 30 [fps]), while a motion vector is detected for the second image area a2 captured at the second frame rate (for example, 60 [fps]).
  • The compression control unit 1312 also controls the motion compensation unit 1311 so that motion compensation is performed on the image data of the first image area a1 based only on the reference frame. That is, since there is no motion vector, the compression control unit 1312 controls the motion compensation unit 1311 so that the image data of the first image area a1 in the reference frame is determined as the prediction for the temporally following frame. Also, the compression control unit 1312 controls the motion compensation unit 1311 so that motion compensation is performed on the image data of the second image area a2 based on the reference frame and the motion vector detected by the motion detection unit 1310.
  • In the first compression control method, since the motion vector is always the specific motion vector, motion detection for the first image area a1 is simplified, and the processing load of moving image compression is reduced.
  • In the second compression control method, the motion detection itself is not executed for the first image area a1, so the processing load of moving image compression is reduced further than in the first compression control method.
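  • The difference between the two compression control methods can be summarized in a sketch (Python; the dispatch function and its parameters are illustrative assumptions):

```python
ZERO_MV = (0, 0)  # specific motion vector: direction undefined, motion amount zero

def control_motion_detection(area_rate_fps: int, first_rate_fps: int,
                             method: int, detect_motion, area):
    """Per-image-area dispatch mirroring the two methods: for a1, method 1
    substitutes the specific zero vector, method 2 skips detection entirely
    (returning None also implies a motion compensation stop instruction)."""
    if area_rate_fps != first_rate_fps:   # a2: normal hybrid coding path
        return detect_motion(area)
    if method == 1:                       # first compression control method
        return ZERO_MV
    return None                           # second compression control method
```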
  • FIG. 14 is a sequence diagram showing an operation processing procedure example of the control unit 502.
  • the acquisition unit 1220 is omitted for the convenience of description.
  • First, the preprocessing unit 1210 sets the imaging condition of the entire imaging surface 200 of the imaging element 100 to the first frame rate (for example, 30 [fps]) (step S1401).
  • the preprocessing unit 1210 transmits a first frame rate setting instruction including the setting contents of step S1401 to the imaging element 100 (step S1402).
  • The imaging element 100 sets the imaging condition of the entire imaging surface 200 to the first frame rate, images the subject at the first frame rate, and outputs the first moving image data 910 to the preprocessing unit 1210 (step S1403).
  • In the setting process (step S1404), the additional information 817 is set in each frame of the first moving image data 910.
  • The additional information 817 is, as described above, the frame rate set for each image area of the frame captured in the corresponding imaging area.
  • An image area to which the first frame rate (for example, 30 [fps]) is added as the additional information 817 is recognized as the first image area a1, and an image area to which the second frame rate (for example, 60 [fps]) is added as the additional information 817 is recognized as the second image area a2.
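  • A minimal representation of a frame carrying the additional information 817 might look like this (Python; the dataclass and field names are assumptions, not the actual data layout):

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class TaggedFrame:
    pixels: object                # image data of the frame
    area_rates: Dict[int, int]    # additional information 817: area id -> fps

def classify_areas(frame: TaggedFrame,
                   first_rate_fps: int = 30) -> Tuple[List[int], List[int]]:
    """Split area ids into the first image area a1 (tagged with the first
    frame rate) and the second image area a2 (tagged with a faster rate)."""
    a1 = [i for i, r in frame.area_rates.items() if r == first_rate_fps]
    a2 = [i for i, r in frame.area_rates.items() if r != first_rate_fps]
    return a1, a2
```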
  • the preprocessing unit 1210 outputs, to the image processing unit 901, the first moving image data 910 in which the additional information 817 is added to each frame (step S1405).
  • When the preprocessing unit 1210 does not detect an image area of the second frame rate for the next input frame in the setting process (step S1404) (step S1406: No), it waits for the input of the first moving image data 910 in step S1403. On the other hand, when the preprocessing unit 1210 detects an image area of the second frame rate for the next input frame in the setting process (step S1404) (step S1406: Yes), the preprocessing unit 1210 changes the setting of that imaging area to the second frame rate (for example, 60 [fps]) (step S1407).
  • the preprocessing unit 1210 transmits, to the imaging element 100, a second frame rate setting instruction including the setting change content of step S1407 (step S1408).
  • The imaging element 100 sets the imaging condition of the second imaging area in the imaging surface 200 to the second frame rate, images the subject at the first frame rate in the first imaging area and at the second frame rate in the second imaging area, and outputs the first moving image data 910 to the preprocessing unit 1210 (step S1409).
  • the preprocessing unit 1210 executes additional information setting processing (step S1410).
  • The additional information setting process (step S1410) is the same as the setting process (step S1404). Details will be described later with reference to FIG. 15.
  • the preprocessing unit 1210 outputs, to the image processing unit 901, the first moving image data 910 in which the additional information 817 is added to each frame (step S1411).
  • If the specific subject is no longer detected (step S1412: Yes), the preprocessing unit 1210 returns to step S1401 and changes the setting of the entire imaging surface 200 to the first frame rate (step S1401). On the other hand, if the specific subject continues to be detected (step S1412: No), the process returns to step S1407 and changes the second imaging area corresponding to the detection position of the specific subject to the second frame rate (step S1407). In this case, the preprocessing unit 1210 changes the setting back to the first frame rate for any image area in which the specific subject is no longer detected.
  • the image processing unit 901 executes image processing with reference to the additional information 817 (step S1413).
  • Specifically, the image processing unit 901 refers to the additional information 817 of each frame and specifies that the frames of the first moving image data 910 are only first frames. In this case, since the specific subject is not captured, the image processing unit 901 does not generate a third frame. Details of the image processing (step S1413) will be described later with reference to FIG. 18.
  • the image processing unit 901 outputs the first moving image data 910 to the compression unit 902 (step S1414).
  • the image processing unit 901 executes image processing with reference to the additional information 817 (step S1415).
  • Specifically, the image processing unit 901 refers to the additional information 817 of each frame and specifies that the frames of the first moving image data 910 include first frames and second frames. In this case, since the specific subject is captured in the first frames and the second frames, the image processing unit 901 generates third frames. Details of the image processing (step S1415) will be described later with reference to FIG. 18.
  • the image processing unit 901 outputs the second moving image data 920 including the first frame and the third frame to the compression unit 902 (step S1416).
  • The compression unit 902 executes compression processing of the first moving image data 910 (step S1417). Since the first moving image data 910 includes only first frames, the compression unit 902 executes compression encoding that does not require motion detection or motion compensation in the compression processing (step S1417). Details of the compression process (step S1417) will be described later with reference to FIGS.
  • The compression unit 902 also executes compression processing of the second moving image data 920 (step S1418). Since the second moving image data 920 includes first frames and third frames, in the compression processing (step S1418) the compression unit 902 applies compression encoding that does not require motion detection or motion compensation to the first image area a1, and compresses the second image area a2 by the normal hybrid coding. Details of the compression process (step S1418) will be described later with reference to FIGS.
  • FIG. 15 is a flowchart showing a detailed processing procedure example of the setting process (steps S1404 and S1410) shown in FIG.
  • In the setting process, the image areas of the first frame rate (for example, 30 [fps]) and the second frame rate (for example, 60 [fps]) are tracked and fed back to the imaging element 100. Alternatively, the image areas of the first frame rate and the second frame rate may always be fixed.
  • The preprocessing unit 1210 waits for input of a frame constituting the first moving image data 910 (step S1501: No). When a frame is input (step S1501: Yes), the preprocessing unit 1210 determines whether the detection unit 1211 has detected a specific subject such as a main subject (step S1502). When the specific subject is not detected (step S1502: No), the process proceeds to step S1504.
  • When the specific subject is detected (step S1502: Yes), the preprocessing unit 1210 causes the detection unit 1211 to compare the temporally previous frame (for example, the reference frame) with the input frame to detect a motion vector, predicts the image area of the second frame rate in the next input frame, outputs it to the imaging element 100, and proceeds to step S1504 (step S1503).
  • the imaging element 100 sets the imaging condition of the unit group 202 constituting the imaging area corresponding to the predicted image area to the second frame rate, and sets the imaging condition of the remaining unit group 202 to the first frame rate. Set and image the subject.
  • Step S1505 is the process of setting the above-described additional information 817, which will be described in detail with reference to FIG. 16.
  • If no frame is input because the input of the first moving image data 910 has ended (step S1501: No), the preprocessing unit 1210 ends the setting process (steps S1404 and S1410).
  • FIG. 16 is a flowchart showing a detailed processing procedure example of the additional information setting process (step S1505) shown in FIG.
  • The preprocessing unit 1210 determines whether there is an unselected image area in the input frame (step S1602). If there is an unselected image area (step S1602: Yes), the preprocessing unit 1210 selects one unselected image area (step S1603) and determines whether the detection flag of the specific subject is ON (step S1604).
  • the detection flag is information indicating the presence or absence of detection of a specific subject, and the default is OFF (non-detection).
  • If the specific subject is detected in step S1406 of FIG. 14 (step S1406: Yes), the preprocessing unit 1210 changes the detection flag from OFF to ON (during detection). If the specific subject is no longer detected in step S1412 (step S1412: Yes), the preprocessing unit 1210 changes the detection flag from ON to OFF.
  • If the detection flag is ON (step S1604: Yes), the preprocessing unit 1210 determines whether the selected image area is an image area in which the specific subject image is present (step S1606).
  • If the specific subject image does not exist (step S1606: No), the process returns to step S1602. On the other hand, if the specific subject image exists (step S1606: Yes), the preprocessing unit 1210 sets information indicating the second frame rate for the selected image area in the additional information 817 (step S1607), and returns to step S1602.
  • If there is no unselected image area in step S1602 (step S1602: No), the preprocessing unit 1210 ends the additional information setting process. Thereafter, the preprocessing unit 1210 transmits a frame rate setting instruction to the imaging element 100 (steps S1402 and S1408).
  • In this way, the preprocessing unit 1210 can specify to which frame rate the imaging area of the imaging element 100 corresponding to each image area should be set.
  • the image processing unit 901 and the compression unit 902 can specify the frame rate of each image area of the input frame from the additional information 817.
  • FIG. 17 is a flowchart showing an example of a moving image file generation processing procedure.
  • the moving image file generation process is performed, for example, during the compression process of the compression unit 902, but may be performed after the image processing unit 901 when generating the moving image file 600 without compression.
  • control unit 502 generates identification information 801, imaging condition information 802, and mask information 704, and stores them in the mask area 612 in that order (step S1701).
  • the preprocessing unit 1210 stores the imaging information in the imaging information area 613 (step S1702).
  • control unit 502 generates the Tv value map 812, the Sv value map 813, the Bv value map 814, and the Av value information 815 (step S1703).
  • the preprocessing unit 1210 stores the mask information 704, the image information 811, the Tv value map 812, the Sv value map 813, the Bv value map 814, and the Av value information 815 in the data area in this order (step S1704).
  • control unit 502 generates file basic information, and stores the file basic information in the file basic information area 611 which is the head of the header section 601 (step S1705). Thus, the control unit 502 can generate the moving image file 600.
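  • The section ordering of FIG. 17 can be sketched as a simple writer (Python; every block is assumed to be already serialized to bytes, and the 4-byte length used as file basic information is purely illustrative):

```python
import io

def write_moving_image_file(ident: bytes, cond: bytes, mask: bytes,
                            imaging_info: bytes, image_info: bytes,
                            tv_map: bytes, sv_map: bytes, bv_map: bytes,
                            av_info: bytes) -> bytes:
    """Assemble the sections in the order of steps S1701 to S1705."""
    header = io.BytesIO()
    header.write(ident + cond + mask)   # mask area 612 (step S1701)
    header.write(imaging_info)          # imaging information area 613 (step S1702)
    data = io.BytesIO()
    for block in (mask, image_info, tv_map, sv_map, bv_map, av_info):
        data.write(block)               # data area, in this order (step S1704)
    payload = header.getvalue() + data.getvalue()
    basic = len(payload).to_bytes(4, "big")  # file basic information (step S1705)
    return basic + payload              # basic info leads the header section 601
```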
  • FIG. 18 is a flowchart showing a detailed processing procedure example of the image processing (steps S1413 and S1415) shown in FIG.
  • The image processing unit 901 inputs a frame (step S1801) and refers to the additional information 817 (step S1802).
  • the image processing unit 901 determines whether the frame rate in the additional information 817 is only the second frame rate (step S1803).
  • If not (step S1803: No), the additional information 817 includes only the first frame rate, or both the first frame rate and the second frame rate. In this case, the image processing unit 901 holds the image information of the input frame as a storage target by overwriting the buffer (step S1804), and proceeds to step S1806.
  • On the other hand, if the frame rate in the additional information 817 is only the second frame rate (step S1803: Yes), the image processing unit 901 synthesizes the image information held in the buffer in step S1804 with the image information of the input frame to generate a storage target (step S1805), and proceeds to step S1806.
  • In this way, the image processing unit 901 can interpolate a frame consisting only of the second image area a2 of the second frame rate with the first image area a1 of the temporally preceding frame of the first frame rate, and synthesize it into a frame including both the first image area a1 and the second image area a2. Therefore, the difference in frame rates within one frame can be absorbed.
  • FIG. 19 is a flowchart illustrating an example of a compression control process procedure of the first compression control method by the compression control unit 1312.
  • the compression control unit 1312 acquires an input frame (step S1901), and selects an unselected image area from the acquired input frame (step S1902). Then, the compression control unit 1312 refers to the frame rate of the selected image area from the additional information 817 (step S1903).
  • If the frame rate of the selected image area is the second frame rate (step S1903: second FR), the compression control unit 1312 outputs the image data of the selected image area to the motion detection unit 1310 (step S1904).
  • the motion detection unit 1310 detects a motion vector using the reference frame as usual for the selected image region of the second frame rate.
  • On the other hand, when the frame rate of the selected image area is the first frame rate (step S1903: first FR), the compression control unit 1312 sets the skip flag in the selected image area of the first frame rate and outputs it to the motion detection unit 1310 (step S1905).
  • the motion detection unit 1310 sets a specific motion vector indicating that there is no motion for the selected image area of the first frame rate.
  • After step S1904 or S1905, the compression control unit 1312 determines whether there is an unselected image area in the acquired input frame (step S1906). If there is an unselected image area (step S1906: Yes), the process returns to step S1902. On the other hand, if there is no unselected image area (step S1906: No), the compression control unit 1312 ends the series of processes.
  • FIG. 20 is a flowchart illustrating an example of a motion detection processing procedure of the first compression control method by the motion detection unit 1310.
  • The motion detection unit 1310 acquires the reference frame temporally one frame before the input frame from the frame memory 1309 (step S2001), and waits for the input of the selected image area output in step S1904 or S1905 of FIG. 19 (step S2002: No).
  • If the selected image area is input (step S2002: Yes), the motion detection unit 1310 acquires, from the reference frame, the image data of the image area at the same location as the selected image area (step S2003). Then, the motion detection unit 1310 determines whether the skip flag is set in the selected image area (step S2004). When there is no skip flag (step S2004: No), the frame rate of the selected image area is the second frame rate; therefore, the motion detection unit 1310 detects a motion vector using the image data of the selected image area and the image data of the image area of the reference frame acquired in step S2003 (step S2005).
  • On the other hand, when there is a skip flag (step S2004: Yes), the motion detection unit 1310 sets the specific motion vector indicating that there is no motion (step S2006). As a result, the motion detection process always uses the specific motion vector for the selected image area of the first frame rate, so the processing load of motion detection is reduced. Then, the motion detection unit 1310 outputs the motion vector obtained in step S2005 or S2006 to the motion compensation unit 1311 (step S2007), and ends the series of processing.
  • FIG. 21 is a flowchart illustrating an example of a motion compensation processing procedure of the first compression control method by the motion compensation unit 1311.
  • the motion compensation unit 1311 obtains a reference frame from the frame memory 1309 (step S2101).
  • the motion compensation unit 1311 obtains an image area at the same place as the selected image area from the reference frame (step S2102).
  • the motion compensation unit 1311 performs motion compensation using the motion vector for the selected image area from the motion detection unit 1310 and the image area of the reference frame acquired in step S2102 (step S2103). Thereby, the motion compensation unit 1311 can generate predicted image data in the selected image region.
  • The motion compensation unit 1311 determines whether or not the motion compensation of all the selected image areas has ended (step S2104). Specifically, for example, when the compression control unit 1312 determines in step S1906 that there is an unselected image area (step S1906: Yes), the motion compensation unit 1311 determines that the motion compensation of all the selected image areas has not ended (step S2104: No), and returns to step S2102.
  • On the other hand, when the compression control unit 1312 determines in step S1906 that there is no unselected image area (step S1906: No), the motion compensation unit 1311 determines that the motion compensation of all the selected image areas has ended (step S2104: Yes). Then, the motion compensation unit 1311 outputs the prediction frame obtained by combining the predicted image data of all the selected image areas to the subtraction unit 1301 and the generation unit 1308 (step S2105), and ends the series of processing.
  • FIG. 22 is a flowchart illustrating an example of a compression control process procedure of the second compression control method by the compression control unit 1312.
  • the compression control unit 1312 acquires an input frame (step S2201), and selects an unselected image region from the acquired input frame (step S2202). Then, the compression control unit 1312 refers to the frame rate of the selected image area from the additional information 817 (step S2203).
  • If the frame rate of the selected image area is the second frame rate (step S2203: second FR), the compression control unit 1312 outputs the selected image area to the motion detection unit 1310 (step S2204). Thereby, the motion detection unit 1310 detects a motion vector using the reference frame as usual for the selected image area of the second frame rate.
  • On the other hand, if the frame rate of the selected image area is the first frame rate (step S2203: first FR), the compression control unit 1312 sets the skip flag in the selected image area of the first frame rate and outputs it to the motion detection unit 1310 (step S2205). As a result, the motion detection unit 1310 does not execute motion detection for the selected image area of the first frame rate. Then, the compression control unit 1312 issues a motion compensation stop instruction for the selected image area and outputs it to the motion compensation unit 1311 (step S2206). Thereby, the execution of motion compensation can be stopped for the selected image area.
  • After step S2204 or S2206, the compression control unit 1312 determines whether there is an unselected image area in the acquired input frame (step S2207). If there is an unselected image area (step S2207: Yes), the process returns to step S2202. On the other hand, if there is no unselected image area (step S2207: No), the compression control unit 1312 ends the series of processes.
  • FIG. 23 is a flowchart illustrating an example of a motion detection processing procedure of the second compression control method by the motion detection unit 1310.
  • The motion detection unit 1310 acquires the reference frame temporally immediately preceding the input frame from the frame memory 1309 (step S2301), and waits for the input of the selected image area output in step S2204 or S2205 of FIG. 22 (step S2302: No).
  • If the selected image area is input (step S2302: Yes), the motion detection unit 1310 acquires, from the reference frame, the image data of the image area at the same location as the selected image area (step S2303). Then, the motion detection unit 1310 determines whether or not the skip flag is set in the selected image area (step S2304). When there is no skip flag (step S2304: No), the frame rate of the selected image area is the second frame rate; therefore, the motion detection unit 1310 detects a motion vector using the image data of the selected image area and the image data of the image area of the reference frame acquired in step S2303 (step S2305).
  • the motion detection unit 1310 outputs the motion vector obtained in step S2305 to the motion compensation unit 1311 (step S2306), and ends the series of processing. On the other hand, if there is a skip flag (step S2304: YES), the motion detection unit 1310 ends the series of processing without executing motion detection.
  • FIG. 24 is a flowchart showing an example of a motion compensation processing procedure of the second compression control method by the motion compensation unit 1311.
  • the motion compensation unit 1311 obtains a reference frame from the frame memory 1309 (step S2401).
  • the motion compensation unit 1311 acquires an image area at the same location as the selected image area from the reference frame (step S2402).
  • the motion compensation unit 1311 determines whether the trigger input for motion compensation for the selected image area is either a motion vector or a motion compensation stop instruction (step S2403). If the trigger input is a motion vector (step S2403: motion vector), the motion compensation unit 1311 combines the motion vector for the selected image region from the motion detection unit 1310 and the image region of the reference frame acquired in step S2402. Using this, motion compensation is performed (step S2404). Thereby, the motion compensation unit 1311 can generate predicted image data in the selected image region.
  • On the other hand, if the trigger input is a motion compensation stop instruction (step S2403: motion compensation stop instruction), the motion compensation unit 1311 determines the image data of the acquired image area as the image data of the predicted image area (predicted image data) (step S2405).
  • After step S2404 or S2405, the motion compensation unit 1311 determines whether or not the motion compensation of all the selected image areas has ended (step S2406). Specifically, for example, when the compression control unit 1312 determines in step S2207 that there is an unselected image area (step S2207: Yes), the motion compensation unit 1311 determines that the motion compensation of all the selected image areas has not ended (step S2406: No), and returns to step S2402.
  • On the other hand, when the compression control unit 1312 determines in step S2207 that there is no unselected image area (step S2207: No), the motion compensation unit 1311 determines that the motion compensation of all the selected image areas has ended (step S2406: Yes). Then, the motion compensation unit 1311 outputs the prediction frame obtained by combining the predicted image data of all the selected image areas to the subtraction unit 1301 and the generation unit 1308 (step S2407), and ends the series of processing.
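  • The branch at step S2403 can be sketched as follows (Python with NumPy; a wrap-around translation via np.roll stands in for real block-based motion compensation, purely for illustration):

```python
import numpy as np

def compensate_area(ref_area: np.ndarray, trigger) -> np.ndarray:
    """A motion vector triggers normal compensation (step S2404); a motion
    compensation stop instruction means the reference pixels are used as
    the prediction unchanged (step S2405)."""
    if trigger == "stop":          # motion compensation stop instruction
        return ref_area.copy()     # predicted data = reference image data
    dy, dx = trigger               # motion vector from the motion detection unit
    return np.roll(ref_area, shift=(dy, dx), axis=(0, 1))
```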
  • the above-described moving picture compression apparatus includes the acquisition unit 1220 and the compression unit 902.
  • The acquisition unit 1220 acquires moving image data including a plurality of frames output from the imaging element 100, which has a first imaging area for imaging a subject and a second imaging area for imaging a subject, and in which a first frame rate (for example, 30 [fps]) can be set for the first imaging area and a second frame rate (for example, 60 [fps]) faster than the first frame rate can be set for the second imaging area.
  • the compression unit 902 compresses the moving image data (second moving image data 920) acquired by the acquisition unit 1220 based on the first frame rate and the second frame rate.
  • Thereby, compression according to the frame rate can be realized by the general-purpose compression unit 902, and the processing load can be reduced compared to compression in which a single frame rate is set.
  • Specifically, the image data of the first image area a1 in a frame captured at the first frame rate is compressed based on the first frame rate, and the image data of the second image area a2 in a frame captured at the second frame rate is compressed based on the second frame rate.
  • the moving picture compression apparatus of (1-2) includes the motion detection unit 1310 and the motion compensation unit 1311.
  • The motion detection unit 1310 sets, for the image data of the first image area a1, the specific motion vector indicating that the object in the image data of the first image area a1 has no motion, and performs motion vector detection for the image data of the second image area a2.
  • The motion compensation unit 1311 performs motion compensation on the image data of the first image area a1 based on the specific motion vector, and performs motion compensation on the image data of the second image area a2 based on the motion vector detected by the motion detection unit 1310.
  • the moving picture compression apparatus of (1-3) includes the generation unit 1308.
  • The generation unit 1308 generates, for each of the plurality of frames, a reference frame to be referred to by a frame input temporally after that frame, based on the difference data between the frame and its prediction frame and on the prediction frame.
  • In this case, the motion compensation unit 1311 performs motion compensation on the image data of the first image area a1 based on the specific motion vector and the reference frame, and performs motion compensation on the image data of the second image area a2 based on the detected motion vector and the reference frame.
  • the moving picture compression apparatus of (1-2) includes the generation unit 1308, the motion detection unit 1310, and the motion compensation unit 1311.
  • The generation unit 1308 generates, for each of the plurality of frames, a reference frame to be referred to by a frame input temporally after that frame, based on the difference data between the frame and its prediction frame and on the prediction frame. The motion detection unit 1310 does not execute motion vector detection for the image data of the first image area a1, but performs motion vector detection for the image data of the second image area a2.
  • The motion compensation unit 1311 performs motion compensation on the image data of the first image area a1 based on the reference frame, and performs motion compensation on the image data of the second image area a2 based on the reference frame and the motion vector detected by the motion detection unit 1310.
  • the load on the compression processing can be reduced by not performing the motion detection on the image data of the first image area a1.
  • In this case, the motion compensation unit 1311 determines the reference frame for the image data of the first image area a1 as the prediction frame for the frame temporally one frame later.
  • the moving picture compression apparatus of (1-1) includes an image processing unit 901.
  • the image processing unit 901 updates the second frame to the third frame based on the first frame and the second frame among the plurality of frames.
  • the first frame is a frame captured by at least the first imaging region among the first imaging region in which the first frame rate is set and the second imaging region in which the second frame rate is set.
  • the second frame is a frame captured temporally after the first frame by the second imaging region.
  • the third frame is a frame obtained by combining the image data of the first image area a1 of the first frame with the image data of the second image area a2 of the second frame.
  • The compression unit 902 compresses the image data of the first image area a1 of the third frame updated by the image processing unit 901 based on the first frame rate, and compresses the image data of the second image area a2 based on the second frame rate.
  • Further, the image processing unit 901 updates the second frame to the third frame by preferentially applying, to the area where the image data of the second image area a2 of the second frame overlaps the first image area a1 of the first frame, the image data of the second image area a2 of the second frame. For example, in the frame F2-60 that is the second frame, the image processing unit 901 preferentially applies the image data of the head portion of the train. Therefore, an image with less discomfort (the frame F2, which is the third frame) can be obtained.
  • Further, the image processing unit 901 updates the second frame to the third frame by applying, to the area of the second frame that belongs to neither the second image area a2 of the second frame nor the first image area a1 of the first frame, the image data of the second image area a2 of the first frame. For example, to the image area between the end of the train in the frame F2-60, which is the second frame, and the background area of the frame F1, which is the first frame, the image data (the end portion of the train) of the second image area a2 of the frame F1 is preferentially applied. Therefore, an image with less discomfort (the frame F2, which is the third frame) can be obtained.
  • Further, the above-described other moving picture compression apparatus compresses moving image data including a plurality of frames output from the imaging element 100, which has a first imaging area for imaging a subject and a second imaging area for imaging a subject, and in which a first frame rate can be set for the first imaging area and a second frame rate faster than the first frame rate can be set for the second imaging area.
  • the video compression apparatus includes a generation unit 1308 and a motion compensation unit 1311.
  • The generation unit 1308 generates, for each of the plurality of frames, a reference frame to be referred to by a frame input temporally after that frame, based on the difference data between the frame and its prediction frame and on the prediction frame. The motion compensation unit 1311 performs motion compensation of a frame captured at the second frame rate among the plurality of frames using a specific reference frame among the plurality of reference frames generated by the generation unit 1308.
  • the reference frame can be fixed to a specific reference frame, and the motion compensation can be made more efficient.
  • Each of the plurality of frames includes the image data of the first image area a1 corresponding to the first imaging area, for which at least the first frame rate is set out of the first frame rate and the second frame rate faster than the first frame rate.
  • The motion compensation unit 1311 performs motion compensation on the image data of the first image area a1 using, as the specific reference frame, the reference frame generated from the temporally preceding frame.
  • By making the temporally preceding reference frame the specific reference frame, the frame closest to the current frame can be referred to, so the accuracy of motion compensation can be improved while also improving its efficiency.
  • the electronic device 500 described above includes the imaging device 100 and the compression unit 902.
  • The imaging element 100 has a first imaging area for imaging a subject and a second imaging area for imaging a subject; a first frame rate can be set for the first imaging area, and a second frame rate faster than the first frame rate can be set for the second imaging area. The imaging element 100 images the subject at the frame rate set for each imaging area and outputs a plurality of frames as moving image data.
  • the compression unit 902 compresses each of the plurality of frames imaged by the imaging device 100 based on the first frame rate and the second frame rate.
  • Thereby, the electronic device 500 capable of compression according to the frame rate can be realized even for a single frame in which different frame rates are set, and the processing load is reduced compared to compression in which a single frame rate is set.
  • Further, the above-described other electronic device 500 compresses moving image data including a plurality of frames output from the imaging element 100, which has a first imaging area for imaging a subject and a second imaging area for imaging a subject, and in which a first frame rate can be set for the first imaging area and a second frame rate faster than the first frame rate can be set for the second imaging area.
  • the electronic device 500 includes a generation unit 1308 and a motion compensation unit 1311.
  • The generation unit 1308 generates, for each of the plurality of frames, a reference frame to be referred to by a frame input temporally after that frame, based on the difference data between the frame and its prediction frame and on the prediction frame. The motion compensation unit 1311 performs motion compensation of a frame captured at the second frame rate among the plurality of frames using a specific reference frame among the plurality of reference frames generated by the generation unit 1308.
  • Thereby, the reference frame can be fixed to a specific reference frame, and the electronic device 500 that makes motion compensation more efficient can be realized.
  • Examples of the electronic devices 500 of (1-12) and (1-13) described above include digital cameras, digital video cameras, smartphones, tablets, surveillance cameras, drive recorders, drones, and the like.
  • Further, the above-described moving picture compression program causes the processor 1201 to execute compression of moving image data including a plurality of frames output from the imaging element 100, which has a first imaging area for imaging a subject and a second imaging area for imaging a subject, and in which the first frame rate can be set for the first imaging area and the second frame rate faster than the first frame rate can be set for the second imaging area.
  • the moving image compression program causes the processor 1201 to execute acquisition processing for acquiring moving image data, and compression processing for compressing moving image data acquired by the acquisition processing based on the first frame rate and the second frame rate.
  • Thereby, compression according to the frame rate can be realized by software even for a single frame in which different frame rates are set, and the processing load can be reduced compared to compression in which a single frame rate is set.
  • Further, the above-described other moving picture compression program causes the processor 1201 to execute compression of moving image data including a plurality of frames output from the imaging element 100, which has a first imaging area for imaging a subject and a second imaging area for imaging a subject, and in which the first frame rate can be set for the first imaging area and the second frame rate faster than the first frame rate can be set for the second imaging area.
  • This moving picture compression program causes the processor 1201 to execute, for each of the plurality of frames, generation processing for generating a reference frame to be referred to by a frame input temporally later, based on the difference data between the frame and its prediction frame and on the prediction frame, and motion compensation processing for performing motion compensation of a frame captured at the second frame rate using a specific reference frame among the generated reference frames.
  • the reference frame can be fixed to a specific reference frame, and efficient motion compensation can be realized by software.
  • the above-described moving picture compression programs (1-14) and (1-15) may be recorded on a portable recording medium such as a CD-ROM, a DVD-ROM, a flash memory, and a memory card 504. Also, the moving picture compression programs of (1-14) and (1-15) described above may be recorded in a moving picture compression apparatus or a server that can be downloaded to the electronic device 500.
  • Example 2 will be described.
  • In the first embodiment, the image processing unit 901 fills the range Da1 with a specific color or performs demosaicing. In the second embodiment, the image processing unit 901 generates the frames F2, F4, ... with less discomfort without executing such image processing. In the second embodiment, the compression unit 902 compresses the frames subjected to the image processing of the image processing apparatus (image processing unit 901); however, the compression unit 902 need not necessarily compress them, and the uncompressed images may be output to the liquid crystal monitor 503.
  • the same reference numerals as in the first embodiment denote the same parts as in the first embodiment, and a description thereof will be omitted.
  • In the first embodiment, the moving image processing example 1 was described, in which the electronic device 500 shoots a traveling train as the specific subject during fixed-point shooting of a landscape including rice fields, mountains, and the sky.
  • the flow of the process of the moving image processing example 1 will be specifically described.
  • FIG. 25 is an explanatory diagram of a specific processing flow of the moving image processing example 1 shown in FIG.
  • the imaging element 100 outputs frames F1, F2-60, F3,... In chronological order.
  • the train travels from right to left in frames F1, F2-60, and F3.
  • the branch numbers of the frames F1 to F3 indicate the frame rates of the frames F1 to F3.
  • For example, the frame F1-30 indicates the image data of the first image area r1-30 of the frame F1 captured at the frame rate of 30 [fps], and the frame F1-60 indicates the image data of the second image area r1-60 of the frame F1 captured at the frame rate of 60 [fps].
  • the second image area r1-60 captured at a frame rate of 60 fps of the frame F1-60 has train image data, but there is no second image area r1-60 in the frame F1-30.
  • Such an area in the frame F1-30 is referred to as a non-image area n1-60.
  • Similarly, the first image area r1-30 of the frame F1-30, captured at the frame rate of 30 [fps], has the image data of the landscape, but there is no first image area r1-30 in the frame F1-60.
  • Such an area in the frame F1-60 is referred to as a non-image area n1-30.
  • Similarly, the frame F3-30 includes the first image area r3-30, in which the image data of the landscape is output, and the non-image area n3-60, in which nothing is output; the frame F3-60 includes the second image area r3-60, in which the image data of the train is output, and the non-image area n3-30, in which nothing is output. The same applies to subsequent odd-numbered frames (not shown) after the frames F3-30 and F3-60.
  • The frame F2-60 is a frame composed of the image data (train) of the second image area r2-60, captured at the frame rate of 60 [fps], and the non-image area n2-30, in which nothing is output. The same applies to subsequent even-numbered frames (not shown).
  • The image processing unit 901 synthesizes the image data (train) of the second image area r2-60 of the frame F2-60 with the image data (landscape) of the first image area r1-30 of the frame F1-30 to generate the frame F2, which is composite image data.
  • Here, the frame F2 has the range Da1, in which the non-image area n1-60 of the frame F1-30 and the non-image area n2-30 of the frame F2-60 overlap.
  • In the first embodiment, the image processing unit 901 fills the range Da1 with a specific color or performs demosaicing; in the second embodiment, the image processing unit 901 does not execute such image processing, and instead copies, into the range Da1, the image data of the same range from another image area. Thereby, the image processing unit 901 generates the frame F2 with less discomfort.
  • FIG. 26 is an explanatory diagram of a synthesis example 1 of the 60 fps frame F2 according to the second embodiment.
  • Synthesis example 1 is an example that uses, as the other image area to be copied to the range Da1, the range Db1 at the same position as the range Da1 in the first image area r3-30 of the frame F3, which is temporally one frame later than the frame F2-60.
  • the image data of the range Db1 is part of a landscape.
  • Specifically, the image processing unit 901 specifies the range Da1 in which the non-image area n1-60 of the frame F1-30 and the non-image area n2-30 of the frame F2-60 overlap, and then specifies, from the frame F3, the range Db1 at the same position as the specified range Da1.
  • the image processing unit 901 copies the image data of the range Db1 to the range Da1 of the frame F2.
  • the image processing unit 901 can generate the frame F2 with less discomfort.
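  • Synthesis example 1 can be sketched as follows (Python with NumPy; the boolean masks for the non-image areas are assumptions for illustration):

```python
import numpy as np

def synthesize_example1(f1_30: np.ndarray, f2_60: np.ndarray, f3: np.ndarray,
                        n1_60_mask: np.ndarray, n2_30_mask: np.ndarray) -> np.ndarray:
    """Start from the 60 fps train pixels (F2-60), fill the landscape from
    F1-30, then fill the remaining overlap Da1 with the pixels of the same
    range (Db1) taken from the temporally later frame F3."""
    f2 = f2_60.copy()
    f2[n2_30_mask] = f1_30[n2_30_mask]   # landscape copied from F1-30
    da1 = n1_60_mask & n2_30_mask        # overlap of the two non-image areas
    f2[da1] = f3[da1]                    # range Db1 copied from frame F3
    return f2
```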
  • FIG. 27 is an explanatory diagram of a synthesis example 2 of the frame F2 of 60 fps according to the second embodiment.
  • In synthesis example 1, the image data of the first image area r1-30 of the frame F1-30 is the copy source for the first image area of the frame F2, and the image data of the range Db1 of the frame F3 is the copy source for the range Da1. In contrast, in synthesis example 2, the image data of the first image area r3-30 of the frame F3-30 is the copy source for the first image area of the frame F2, and the image data of the range Db2 of the frame F1 is the copy source for the range Da2.
  • the range Da2 is a range in which the non-image area n3-60 of the frame F3-30 and the non-image area n2-30 of the frame F2-60 overlap.
  • the range Db2 of the frame F1 is a range at the same position as the range Da2.
  • Specifically, the image processing unit 901 specifies the range Da2 in which the non-image area n3-60 of the frame F3-30 and the non-image area n2-30 of the frame F2-60 overlap, and then specifies, from the frame F1, the range Db2 at the same position as the specified range Da2.
  • the image processing unit 901 copies the image data of the range Db2 to the range Da2 of the frame F2.
  • the image processing unit 901 can generate the frame F2 with less discomfort.
  • Synthesis Example 3 is an example in which any one of Synthesis Example 1 and Synthesis Example 2 is selected and synthesized.
  • the image processing unit 901 specifies the range Da1 in Synthesis Example 1 and the range Da2 in Synthesis Example 2.
  • The image processing unit 901 selects one of the ranges Da1 and Da2, and applies the synthesis example in which the selected range was specified.
  • the image processing unit 901 applies the synthesis example 1 when the range Da1 is selected, and applies the synthesis example 2 when the range Da2 is selected.
  • As the selection criterion for selecting one of the ranges Da1 and Da2, the image processing unit 901 uses, for example, the narrowness of the range, as sketched below.
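  • The selection of synthesis example 3 can be sketched in a few lines (Python with NumPy; the mask-based area measure is an illustrative assumption):

```python
import numpy as np

def choose_synthesis(da1_mask: np.ndarray, da2_mask: np.ndarray) -> int:
    """Pick whichever leftover range is narrower (fewer pixels) so that the
    copied patch, and hence any visible seam, stays as small as possible.
    Returns 1 for synthesis example 1, 2 for synthesis example 2."""
    return 1 if np.count_nonzero(da1_mask) <= np.count_nonzero(da2_mask) else 2
```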
  • FIG. 28 is an explanatory diagram of a synthesis example 4 of the 60 fps frame F2 according to the second embodiment.
  • Synthesis example 4 uses, as the copy source for the range Da1, not the image data (a part of the landscape) of the range Db1 in the first image area r3-30 of the frame F3 but the image data (the end portion of the train) of the range Db3 in the second image area r1-60 of the frame F1.
  • In this case, the image data of the range Db3 is added to the image data (train) of the second image area r2-60 in the frame F2 on the side opposite to the traveling direction of the train, so that when the user looks at the image, the added portion appears as an afterimage of the running train. Therefore, also in this case, the frames F2, F4, ... with less discomfort can be generated.
  • Here, the second frame is a frame to be synthesized that is captured only at the second frame rate (for example, 60 [fps]), such as the frame F2-60. The third frame is a frame that is captured temporally one frame later than the second frame and includes an image area of at least one of the first frame rate and the second frame rate; it is, for example, the frame F3 in the above-described figures.
  • FIG. 29 is a flowchart of a procedure example 1 of the combining processing according to the combining example 1 of the frame F2 by the image processing unit 901.
  • the input frames are assumed to be sequentially stored in the buffer.
  • the image processing unit 901 determines whether the second frame is present in the buffer (step S2901). If the second frame is in the buffer (step S2901: YES), the image processing unit 901 identifies a range which is a non-image area of the first frame and a non-image area of the second frame (step S2902). Specifically, for example, the image processing unit 901 specifies a range Da1 in which the non-image area n1-60 of the frame F1-30 and the non-image area n2-30 of the frame F2-60 overlap.
  • the image processing unit 901 duplicates the image data of the first image area a1 of the first frame (step S2903). Specifically, for example, the image processing unit 901 duplicates the image data (landscape) of the first image region r1-30 of the frame F1.
  • Next, the image processing unit 901 duplicates, from the third frame, the image data of the range specified in step S2902 (step S2904). Specifically, for example, the image processing unit 901 duplicates, from the frame F3, the image data of the range Db1 at the same position as the range Da1 specified in step S2902.
  • Then, the image processing unit 901 updates the second frame (step S2905). Specifically, for example, the image processing unit 901 updates the frame F2-60 to the frame F2 by synthesizing the image data (train) of the second image area r2-60 of the frame F2-60, the duplicated image data of the first image area r1-30, and the duplicated image data of the range Db1.
  • If the second frame is not in the buffer (step S2901: No), the image processing unit 901 ends the image processing (steps S1413 and S1415). Thus, the image processing unit 901 can generate the frame F2 with less discomfort.
  • FIG. 30 is a flowchart illustrating a procedure example 2 of the combining processing according to the combining example 2 of the frame F2 by the image processing unit 901.
  • the input frames are assumed to be sequentially stored in the buffer.
  • First, the image processing unit 901 determines whether the second frame is present in the buffer (step S3001). If the second frame is in the buffer (step S3001: YES), the image processing unit 901 identifies a range which is both a non-image area of the third frame and a non-image area of the second frame (step S3002). Specifically, for example, the image processing unit 901 specifies the range Da2 in which the non-image area n3-60 of the frame F3-30 and the non-image area n2-30 of the frame F2-60 overlap.
  • the image processing unit 901 duplicates the image data of the first image area a1 of the third frame (step S3003). Specifically, for example, the image processing unit 901 duplicates the image data (landscape) of the first image region r3-30 of the frame F3.
  • Next, the image processing unit 901 duplicates the image data of the range specified in step S3002 from the first frame (step S3004). Specifically, for example, the image processing unit 901 duplicates, from the frame F1, the image data of the range Db2 located at the same position as the range Da2 specified in step S3002.
  • Then, the image processing unit 901 updates the second frame (step S3005). Specifically, for example, the image processing unit 901 updates the frame F2-60 to the frame F2 by synthesizing the image data of the second image area r2-60 of the frame F2-60, the duplicated image data (landscape) of the first image area r3-30, and the duplicated image data of the range Db2.
  • If the second frame is not in the buffer (step S3001: NO), the image processing unit 901 ends the image processing (steps S1413 and S1415). Thus, the image processing unit 901 can generate the frame F2 with less discomfort.
  • FIG. 31 is a flowchart of a procedure example 3 of the combining processing according to the combining example 3 of the frame F2 by the image processing unit 901.
  • the input frames are assumed to be sequentially stored in the buffer.
  • First, the image processing unit 901 determines whether the second frame is present in the buffer (step S3101). If the second frame is in the buffer (step S3101: YES), the image processing unit 901 identifies a first range which is both a non-image area of the first frame and a non-image area of the second frame (step S3102). Specifically, for example, the image processing unit 901 specifies the range Da1 in which the non-image area n1-60 of the frame F1-30 and the non-image area n2-30 of the frame F2-60 overlap.
  • Next, the image processing unit 901 identifies a second range which is both a non-image area of the third frame and a non-image area of the second frame (step S3103). Specifically, for example, the image processing unit 901 specifies the range Da2 in which the non-image area n3-60 of the frame F3-30 and the non-image area n2-30 of the frame F2-60 overlap.
  • Then, the image processing unit 901 selects one of the specified first range and second range (step S3104). Specifically, for example, the image processing unit 901 selects the narrower (smaller-area) of the first range and the second range; the selected range is referred to as the selection range. In the case of the ranges Da1 and Da2, the image processing unit 901 selects the range Da1. As a result, the range used for synthesis can be minimized, and discomfort can be further suppressed.
  • the image processing unit 901 duplicates the image data of the first image area a1 of the selected frame (step S3105).
  • The selected frame is the frame that is the specification source of the selected range. For example, when the first range (range Da1) is selected, the selected frame is the first frame (frame F1); when the second range (range Da2) is selected, the selected frame is the third frame (frame F3). Therefore, the image data of the first image area a1 of the selected frame is the image data (landscape) of the first image area r1-30 of the frame F1 if the selected frame is the frame F1, and the image data (landscape) of the first image area r3-30 of the frame F3 if the selected frame is the frame F3.
  • the image processing unit 901 duplicates the image data of the selected range of step S3104 from the non-selected frame (step S3106).
  • The non-selected frame is the frame serving as the specification source of the non-selected range. For example, when the first range (range Da1) is not selected, the non-selected frame is the first frame (frame F1); when the second range (range Da2) is not selected, the non-selected frame is the third frame (frame F3). Therefore, if the selection range is the range Da1, the image processing unit 901 duplicates the image data of the range Db1 at the same position as the range Da1 from the frame F3; if the selection range is the range Da2, the image processing unit 901 duplicates the image data of the range Db2 at the same position as the range Da2 from the frame F1.
  • Then, the image processing unit 901 updates the second frame (step S3107). Specifically, for example, when the selection range is the first range (range Da1), the image processing unit 901 updates the frame F2-60 to the frame F2 by combining the image data of the second image region r2-60 of the frame F2-60, the duplicated image data (landscape) of the first image region r1-30, and the duplicated image data of the range Db1.
  • When the selection range is the second range (range Da2), the image processing unit 901 updates the frame F2-60 to the frame F2 by combining the image data of the second image region r2-60 of the frame F2-60, the duplicated image data (landscape) of the first image region r3-30, and the duplicated image data of the range Db2.
  • If the second frame is not in the buffer (step S3101: NO), the image processing unit 901 ends the image processing (steps S1413 and S1415). Thus, the image processing unit 901 can minimize the sense of discomfort due to copying by selecting the narrower range.
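  • A sketch of procedure example 3 under the same hypothetical frame representation as above: both candidate ranges are computed, and the narrower one determines from which frame the first image area and the overlap range are duplicated.

    import numpy as np

    def combine_proc3(f1, f2_60, f3):
        da1 = f1["non_image"] & f2_60["non_image"]  # step S3102: first range
        da2 = f3["non_image"] & f2_60["non_image"]  # step S3103: second range
        # Step S3104: select the range with the smaller area, to minimize
        # the duplicated region and hence the visible discomfort.
        if np.count_nonzero(da1) <= np.count_nonzero(da2):
            sel, sel_frame, other_frame = da1, f1, f3
        else:
            sel, sel_frame, other_frame = da2, f3, f1
        out = f2_60["img"].copy()
        # Step S3105: first image area a1 of the selected frame.
        fill = f2_60["non_image"] & ~sel_frame["non_image"]
        out[fill] = sel_frame["img"][fill]
        # Step S3106: the selected range, copied from the non-selected frame.
        out[sel] = other_frame["img"][sel]
        # Step S3107: frame F2-60 updated to frame F2.
        return out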
  • FIG. 32 is a flowchart of a procedure example 4 of the combining processing according to the combining example 4 of the frame F2 by the image processing unit 901.
  • the input frames are assumed to be sequentially stored in the buffer.
  • First, the image processing unit 901 determines whether the second frame is present in the buffer (step S3201). If the second frame is in the buffer (step S3201: YES), the image processing unit 901 identifies a range which is both a non-image area of the first frame and a non-image area of the second frame (step S3202). Specifically, for example, the image processing unit 901 specifies the range Da1 in which the non-image area n1-60 of the frame F1-30 and the non-image area n2-30 of the frame F2-60 overlap.
  • the image processing unit 901 duplicates the image data of the first image area a1 of the first frame (step S3203). Specifically, for example, the image processing unit 901 duplicates the image data (landscape) of the first image region r1-30 of the frame F1.
  • Next, the image processing unit 901 duplicates the image data of the range specified in step S3202 from the first frame (step S3204). Specifically, for example, the image processing unit 901 duplicates, from the frame F1, the image data of the range Db3 corresponding to the range Da1 specified in step S3202.
  • Then, the image processing unit 901 updates the second frame (step S3205). Specifically, for example, the image processing unit 901 updates the frame F2-60 to the frame F2 by synthesizing the image data of the second image area r2-60 of the frame F2-60, the duplicated image data (landscape) of the first image area r1-30, and the duplicated image data of the range Db3.
  • If the second frame is not in the buffer (step S3201: NO), the image processing unit 901 ends the image processing (steps S1413 and S1415). Thus, the image processing unit 901 can generate the frame F2 with less discomfort.
  • As described above, the image processing apparatus according to the second embodiment performs image processing on a plurality of frames generated by output from the imaging element 100, which has a first imaging area for imaging a subject and a second imaging area for imaging the subject, and in which a first frame rate (for example, 30 fps) can be set for the first imaging area and a second frame rate (for example, 60 fps) faster than the first frame rate can be set for the second imaging area.
  • the image processing apparatus includes a specifying unit 1213 and a combining unit 1214.
  • The identifying unit 1213 identifies, based on a first frame generated by the output from the first imaging region and the second imaging region and on a second frame generated by the output from the second imaging region (for example, the frame F2-60), the range Da1 which is both the non-image area n1-60 corresponding to the second imaging area in the first frame and the non-image area n2-30 corresponding to the first imaging area in the second frame.
  • The combining unit 1214 combines the second frame, the image data of the first image region r1-30 corresponding to the first imaging region in the first frame, and the specific image data of the range Da1 identified by the identifying unit 1213 in another image area of a frame other than the second frame.
  • In one aspect, the first frame is a frame (for example, the frame F1) generated temporally before the second frame, and the specific image data is the image data of the range (Da1) in the first image area a1 (r3-30) of a frame (for example, the frame F3) generated by the output from the first imaging area and the second imaging area temporally after the second frame (that is, the image data of the range Db1).
  • In another aspect, the first frame is a frame (for example, the frame F3) generated temporally after the second frame, and the specific image data is the image data of the range (Da2) in the first image area a1 (r1-30) of a frame (for example, the frame F1) generated by the output from the first imaging area and the second imaging area temporally before the second frame (that is, the image data of the range Db2).
  • Further, the identification unit 1213 identifies the range to be used by the combining unit 1214 based on the first range (Da1) and the second range (Da2).
  • The combining unit 1214 combines the second frame with the image data of the first image area a1 (r1-30/r3-30) of the one frame (F1/F3), among the first frame and the third frame, that is the specification source of the one range (Da1/Da2) identified by the identifying unit 1213, and with the image data (Db1/Db2) of that range (Da1/Da2) in the first image area a1 (r3-30/r1-30) of the other frame (F3/F1) that is the specification source of the other range (Da2/Da1) not identified by the identifying unit 1213.
  • the image processing unit 901 can minimize the sense of discomfort due to copying by selecting the narrower range.
  • Alternatively, the first frame may be a frame generated temporally before the second frame, and the specific image data may be the image data of the range (Da1) in the second image area a2 of the first frame (that is, the image data of the range Db3).
  • The moving picture compression apparatus according to the second embodiment compresses moving image data including a plurality of frames generated by output from the imaging element 100, which has a first imaging area for imaging a subject and a second imaging area for imaging the subject, and in which a first frame rate (for example, 30 fps) can be set for the first imaging area and a second frame rate (for example, 60 fps) faster than the first frame rate can be set for the second imaging area.
  • the video compression apparatus includes a specifying unit 1213 and a combining unit 1214.
  • The identifying unit 1213 identifies, based on a first frame generated by the output from the first imaging region and the second imaging region and on a second frame generated by the output from the second imaging region (for example, the frame F2-60), the range Da1 which is both the non-image area n1-60 corresponding to the second imaging area in the first frame and the non-image area n2-30 corresponding to the first imaging area in the second frame.
  • The combining unit 1214 combines the second frame, the image data of the first image region r1-30 corresponding to the first imaging region in the first frame, and the specific image data of the range Da1 identified by the identifying unit 1213 in another image area of a frame other than the second frame.
  • the compression unit 902 compresses the first frame and the frame combined by the combining unit 1214.
  • The image processing program according to the second embodiment causes the processor 1201 to execute image processing of a plurality of frames generated by output from the imaging element 100, which has a first imaging area for imaging a subject and a second imaging area for imaging the subject, and in which a first frame rate (for example, 30 fps) can be set for the first imaging area and a second frame rate (for example, 60 fps) faster than the first frame rate can be set for the second imaging area.
  • the image processing program causes the processor 1201 to execute specific processing and composition processing.
  • In the specific processing, the image processing program causes the processor 1201 to specify, from among the plurality of frames, based on the first frame generated by the output from the first imaging region and the second imaging region and on the second frame generated by the output from the second imaging region (for example, the frame F2-60), the range Da1 which is both the non-image area n1-60 corresponding to the second imaging area in the first frame and the non-image area n2-30 corresponding to the first imaging area in the second frame.
  • In the combining process, the image processing program causes the processor 1201 to synthesize the second frame, the image data of the first image area r1-30 corresponding to the first imaging area in the first frame, and the specific image data of the range Da1 specified by the specific processing in another image area of a frame other than the second frame.
  • Thus, the non-image area n2-30 of the second frame can be interpolated by software using frames that are temporally close to the second frame. Therefore, a combined frame with less discomfort than the second frame can be obtained by software.
  • The moving picture compression program according to the second embodiment causes the processor 1201 to execute specific processing, combining processing, and compression processing.
  • In the specific processing, the moving picture compression program causes the processor 1201 to specify, from among the plurality of frames, based on the first frame generated by the output from the first imaging region and the second imaging region and on the second frame generated by the output from the second imaging region (for example, the frame F2-60), the range Da1 which is both the non-image area n1-60 corresponding to the second imaging area in the first frame and the non-image area n2-30 corresponding to the first imaging area in the second frame.
  • In the combining process, the moving image compression program causes the processor 1201 to synthesize the second frame, the image data of the first image region r1-30 corresponding to the first imaging region in the first frame, and the specific image data of the range Da1 specified by the specific processing in another image area of a frame other than the second frame.
  • In the compression processing, the moving image compression program causes the processor 1201 to compress the first frame and the frame synthesized by the combining process.
  • The image processing program of (2-7) and the moving picture compression program of (2-8) described above may be stored in a portable recording medium such as a CD-ROM, a DVD-ROM, a flash memory, or the memory card 504. They may also be recorded in a moving image compression apparatus or in a server from which they can be downloaded to the electronic device 500.
  • The third embodiment will be described.
  • In the embodiments described above, the image processing unit 901 fills the non-image areas with a specific color or performs demosaicing. In the third embodiment, the image processing unit 901 generates the frames F2, F4, ... with less discomfort without performing such image processing.
  • A configuration that includes the image processing unit 901 but does not include the imaging element 100 or the compression unit 902 is referred to as an image processing apparatus. Further, a configuration including the imaging element 100 and the preprocessing unit 1210 is referred to as an imaging device.
  • The compression unit 902 compresses a frame subjected to the image processing of the image processing apparatus (image processing unit 901). However, the compression unit 902 need not necessarily compress the frame; the uncompressed image may be output to the liquid crystal monitor 503.
  • the same reference numerals are used for the portions common to the first and second embodiments and the description thereof is omitted.
  • FIG. 33 is an explanatory diagram of a synthesis example of the 60 fps frame F2 according to the third embodiment.
  • Before capturing the frame F2-60, the preprocessing unit 1210 detects a specific subject such as a train from the frame F1 and earlier frames, and detects a motion vector of the specific subject in the immediately preceding frame F1.
  • the preprocessing unit 1210 can obtain an image area R12-60 of 60 [fps] in the next frame F2-60 from the image area of the specific subject of the frame F1 and the motion vector.
  • As in the first embodiment, the image processing unit 901 duplicates the image data (landscape) of the first image region r1-30 of the immediately preceding frame F1, and the frame F2 can be obtained by combining the duplicated image data (landscape) of the first image region r1-30 with the image data (the train and a part of the landscape) of the image region R12-60.
  • FIG. 34 is an explanatory drawing showing the correspondence between the setting of the imaging region and the image region of the frame F2-60.
  • (A) shows an example of motion vector detection
  • (B) shows the correspondence between the setting of the imaging area and the image area of the frame F2-60.
  • The imaging area p1-60 is the imaging area of the specific subject that was detected after the generation of the frame F0-60 temporally preceding the frame F1 and before the generation of the frame F1. Therefore, in the frame F1, the image data o1 of the specific subject (train) exists in the second image area r1-60 corresponding to the imaging area p1-60.
  • The preprocessing unit 1210 causes the detection unit 1211 to detect the motion vector mv of the specific subject based on the image data o1 of the specific subject in the frame F0 and the image data o1 of the specific subject in the frame F1. Then, based on the second image area r1-60 of the specific subject in the frame F1 and the motion vector mv, the preprocessing unit 1210 detects the second image area r2-60 where the specific subject will appear in the next frame F2-60, and detects the detection imaging area p2-60 of the imaging surface 200 of the imaging element 100 corresponding to the detected second image area r2-60.
  • The preprocessing unit 1210 causes the setting unit 1212 to set the frame rate of the specific imaging area P12-60, which includes the imaging area p1-60 specified at the time of generation of the frame F1 and the detection imaging area p2-60, to the second frame rate, and outputs the setting instruction to the imaging element 100.
  • the imaging element 100 sets a specific imaging area P12-60 to the second frame rate and performs imaging to generate a frame F2-60.
  • The image processing unit 901 causes the combining unit 1214 to combine the image data of the first image region r1-30 included in the frame F1 with the image data of the specific imaging region P12-60 included in the frame F2-60 generated by imaging at the second frame rate set by the setting unit 1212. Thus, the frame F2-60 is updated to the frame F2.
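  • A sketch of the region prediction described above, assuming the specific subject is tracked as an axis-aligned bounding box and the motion vector mv is measured in pixels per frame; Box, shift, and union are hypothetical helpers, not names from the patent.

    from dataclasses import dataclass

    @dataclass
    class Box:
        x: int
        y: int
        w: int
        h: int

    def shift(b, mv):
        # Detection imaging area p2-60: the subject area of frame F1
        # translated by the motion vector mv.
        return Box(b.x + mv[0], b.y + mv[1], b.w, b.h)

    def union(a, b):
        # Specific imaging area P12-60 = bounding union of p1-60 and p2-60,
        # so the 60 fps region also covers where the subject was, and no
        # uncaptured overlap such as the range Da1 arises.
        x0, y0 = min(a.x, b.x), min(a.y, b.y)
        x1 = max(a.x + a.w, b.x + b.w)
        y1 = max(a.y + a.h, b.y + b.h)
        return Box(x0, y0, x1 - x0, y1 - y0)

    # Usage: p1-60 detected in frame F1, mv detected from frames F0-60 and F1.
    p1_60 = Box(100, 40, 320, 180)
    p2_60 = shift(p1_60, (24, 0))   # predicted subject position in frame F2-60
    P12_60 = union(p1_60, p2_60)    # set this region to the second frame rate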
  • Thereafter, the preprocessing unit 1210 sets the frame rate of the detection imaging region p2-60 to the second frame rate and sets the frame rate of the imaging area of the imaging surface 200 other than the detection imaging area p2-60 to the first frame rate, so that the second imaging region in which the second frame rate is set is only the detection imaging region p2-60.
  • the frame F2-60 includes the image data o1 of the specific subject (train) and the image data o2 of a part of the landscape in the image region R12-60.
  • The image area R12-60 is expanded, relative to the second image area r2-60, on the side opposite to the moving direction of the specific subject. Therefore, it is not necessary, as in the second embodiment, to specify the ranges Da1 and Da2 and to duplicate and combine the image data of the ranges Db1 and Db2 from other frames.
  • The combining process of the third embodiment is executed, for example, in step S1805. Further, this combining process is applied to the case of combining the frames F2-60, F4-60, ... of only the second frame rate, and is not performed for the frames F1, F3, ....
  • In the third embodiment, the image data of the combination source is the image region R12-60 and the first image region r1-30 of the frame F1, and the process of specifying the ranges Da1 and Da2 or of selecting the optimum range from the ranges Da1 and Da2 is unnecessary, so that the load of the synthesis processing of the frame F2 can be reduced.
  • the imaging apparatus includes the imaging element 100, the detection unit 1211 and the setting unit 1212.
  • The imaging element 100 has a first imaging area for imaging a subject and a second imaging area for imaging the subject; a first frame rate (for example, 30 [fps]) can be set in the first imaging area, and a second frame rate (for example, 60 [fps]) faster than the first frame rate can be set in the second imaging area.
  • the detection unit 1211 detects a detection imaging area p2-60 of the specific subject in the imaging element 100 based on the second image area r1-60 of the specific subject included in the frame F1 generated by the output from the imaging element 100. .
  • The setting unit 1212 sets the frame rate of the specific imaging area P12-60, which includes the imaging region p1-60 of the specific subject used to generate the frame F1 and the imaging region (hereinafter, detection imaging region) p2-60 detected by the detection unit 1211, to the second frame rate.
  • Thus, the imaging region at the second frame rate can be extended and the specific subject can be imaged at the second frame rate so that the range Da1, in which the non-image regions of the frames F1 and F2 overlap, does not occur. It is therefore possible to suppress the image loss of the frame F2-60 captured at the second frame rate.
  • The detection unit 1211 detects the detection imaging area p2-60 of the specific subject based on the second image region r1-60 of the specific subject included in the frame F1 and on the motion vector mv of the specific subject between the frame F1 and the frame F0-60 temporally preceding the frame F1.
  • When the frame is the first frame F1, the setting unit 1212 sets the frame rate of the specific imaging area to the second frame rate; when the frame is the second frame F2-60 generated by the output from the specific imaging area after the first frame F1, the setting unit 1212 sets the frame rate of the detection imaging area p2-60 to the second frame rate and sets the frame rate of the other imaging area (the part of the imaging surface 200 excluding the detection imaging area p2-60) to the first frame rate.
  • The image processing apparatus according to the third embodiment performs image processing of the frame generated by output from the imaging element 100, which has a first imaging area for imaging a subject and a second imaging area for imaging the subject, and in which a first frame rate (for example, 30 fps) can be set for the first imaging area and a second frame rate (for example, 60 fps) faster than the first frame rate can be set for the second imaging area.
  • the image processing apparatus includes a detection unit 1211, a setting unit 1212, and a combining unit 1214.
  • the detection unit 1211 detects an imaging area p2-60 of the specific subject in the imaging element 100 based on the second image area r1-60 of the specific subject included in the frame F1 generated by the output from the imaging element 100.
  • The setting unit 1212 sets the frame rate of the specific imaging area P12-60, which includes the imaging area p1-60 of the specific subject used to generate the frame F1 and the detection imaging area p2-60 detected by the detection unit 1211, to the second frame rate.
  • The combining unit 1214 combines the image data of the first image region r1-30 included in the first frame F1 with the image data of the specific imaging region P12-60 included in the second frame F2-60 generated by imaging at the second frame rate set by the setting unit 1212.
  • Thus, the imaging region at the second frame rate can be extended and the specific subject can be imaged at the second frame rate so that the range Da1, in which the non-image regions of the frames F1 and F2 overlap, does not occur, and the image loss of the frame F2-60 captured at the second frame rate can be suppressed. In addition, since it is not necessary to interpolate the overlapping range Da1 at the time of combining, an image with less discomfort can be obtained and the load of the combining processing can be reduced.
  • The moving picture compression apparatus according to the third embodiment compresses moving image data including a plurality of frames generated by output from the imaging element 100, which has a first imaging area for imaging a subject and a second imaging area for imaging the subject, and in which a first frame rate (for example, 30 fps) can be set for the first imaging area and a second frame rate (for example, 60 fps) faster than the first frame rate can be set for the second imaging area.
  • the video compression apparatus includes a detection unit 1211, a setting unit 1212, a combining unit 1214, and a compression unit 902.
  • the detection unit 1211 detects an imaging area p2-60 of the specific subject in the imaging element 100 based on the second image area r1-60 of the specific subject included in the frame F1 generated by the output from the imaging element 100.
  • The setting unit 1212 sets the frame rate of the specific imaging region P12-60, which includes the imaging region p1-60 of the specific subject used to generate the frame F1 and the imaging region p2-60 detected by the detection unit 1211, to the second frame rate.
  • The combining unit 1214 combines the image data of the first image region r1-30 included in the first frame F1 with the image data of the specific imaging region P12-60 included in the second frame F2-60 generated by imaging at the second frame rate set by the setting unit 1212.
  • the compression unit 902 compresses the first frame F1 and the combined second frame F2 obtained by the combining unit 1214.
  • Thus, the imaging region at the second frame rate can be extended and the specific subject can be imaged at the second frame rate so that the range Da1, in which the non-image regions of the frames F1 and F2 overlap, does not occur, and the image loss of the frame F2-60 captured at the second frame rate can be suppressed. In addition, since it is not necessary to interpolate the overlapping range Da1 at the time of combining, an image with less discomfort can be obtained and the load of the combining processing can be reduced. Further, since the frame F2-60 is compressed after being updated to the frame F2, the difference between the frames F1 and F2 can be minimized, and the compression processing load can be reduced.
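  • A tiny numeric illustration of the compression benefit, on hypothetical 1-D "frames": inter-frame prediction encodes the residual between consecutive frames, and updating F2-60 to F2 before compression drives that residual toward zero outside the subject's motion. The pixel values below are invented for the example.

    import numpy as np

    f1    = np.array([10, 10, 10, 200, 10], dtype=np.int16)   # landscape plus one train pixel
    f2_60 = np.array([ 0,  0,  0,   0, 200], dtype=np.int16)  # train moved; rest uncaptured
    f2 = f2_60.copy()
    f2[:4] = f1[:4]                  # combined frame F2 reuses F1's image data

    print(np.abs(f2_60 - f1).sum())  # 420: large residual against F1
    print(np.abs(f2 - f1).sum())     # 190: residual only where the train moved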
  • The setting program according to the third embodiment is executed by the processor 1201 that controls the imaging element 100, which has a first imaging area for imaging a subject and a second imaging area for imaging the subject, and in which a first frame rate (for example, 30 [fps]) can be set in the first imaging area and a second frame rate (for example, 60 [fps]) faster than the first frame rate can be set in the second imaging area.
  • the setting program causes the processor 1201 to execute detection processing and setting processing.
  • In the detection processing, the setting program causes the processor 1201 to detect the imaging area p2-60 of the specific subject in the imaging element 100 based on the second image area r1-60 of the specific subject included in the frame F1 generated by the output from the imaging element 100.
  • In the setting processing, the setting program causes the processor 1201 to set the frame rate of the specific imaging area P12-60, which includes the imaging area p1-60 of the specific subject used to generate the frame F1 and the detection imaging area p2-60 detected by the detection processing, to the second frame rate.
  • Thus, the imaging region at the second frame rate can be extended and the specific subject can be imaged at the second frame rate so that the range Da1, in which the non-image regions of the frames F1 and F2 overlap, does not occur, and suppression of the image loss of the frame F2-60 captured at the second frame rate can be realized by software.
  • The image processing program according to the third embodiment causes the processor 1201 to execute image processing of the frame generated by output from the imaging element 100, which has a first imaging area for imaging a subject and a second imaging area for imaging the subject, and in which a first frame rate (for example, 30 fps) can be set for the first imaging area and a second frame rate (for example, 60 fps) faster than the first frame rate can be set for the second imaging area.
  • the image processing program causes the processor 1201 to execute detection processing, setting processing, and combining processing.
  • In the detection processing, the image processing program causes the processor 1201 to detect the imaging area p2-60 of the specific subject in the imaging element 100 based on the second image area r1-60 of the specific subject included in the frame F1 generated by the output from the imaging element 100.
  • In the setting processing, the image processing program causes the processor 1201 to set the frame rate of the specific imaging area P12-60, which includes the imaging area p1-60 of the specific subject used to generate the frame F1 and the detection imaging area p2-60 detected by the detection processing, to the second frame rate.
  • In the combining processing, the image processing program causes the processor 1201 to combine the image data of the first image region r1-30 included in the first frame F1 with the image data of the specific imaging area P12-60 included in the second frame F2-60 generated by imaging at the second frame rate set by the setting processing.
  • Thus, the imaging region at the second frame rate can be extended and the specific subject can be imaged at the second frame rate so that the range Da1, in which the non-image regions of the frames F1 and F2 overlap, does not occur; suppression of the image loss of the frame F2-60 captured at the second frame rate can be realized by software, and the load of the combining processing can also be reduced by software. Further, since the frame F2-60 is updated to the frame F2, the difference between the frames F1 and F2 can be minimized, and reduction of the compression processing load can be realized by software.
  • The moving picture compression program according to the third embodiment causes the processor 1201 to execute compression of moving image data including a plurality of frames generated by output from the imaging element 100, which has a first imaging area for imaging a subject and a second imaging area for imaging the subject, and in which a first frame rate (for example, 30 fps) can be set for the first imaging area and a second frame rate (for example, 60 fps) faster than the first frame rate can be set for the second imaging area.
  • the moving image compression program causes the processor 1201 to execute detection processing, setting processing, combining processing, and compression processing.
  • In the detection processing, the moving image compression program causes the processor 1201 to detect the imaging area p2-60 of the specific subject in the imaging element 100 based on the second image area r1-60 of the specific subject included in the frame F1 generated by the output from the imaging element 100.
  • In the setting processing, the moving image compression program causes the processor 1201 to set the frame rate of the specific imaging area P12-60, which includes the imaging area p1-60 of the specific subject used to generate the frame F1 and the detection imaging area p2-60 detected by the detection processing, to the second frame rate.
  • In the combining processing, the moving image compression program causes the processor 1201 to combine the image data of the first image region r1-30 included in the first frame F1 with the image data of the specific imaging area P12-60 included in the second frame F2-60 generated by imaging at the second frame rate set by the setting processing.
  • In the compression processing, the moving image compression program causes the processor 1201 to compress the first frame F1 and the combined second frame F2 obtained by the combining process.
  • Thus, the imaging region at the second frame rate can be extended and the specific subject can be imaged at the second frame rate so that the range Da1, in which the non-image regions of the frames F1 and F2 overlap, does not occur; suppression of the image loss of the frame F2-60 captured at the second frame rate can be realized by software, and the load of the combining processing can also be reduced by software. Further, since the frame F2-60 is compressed after being updated to the frame F2, the difference between the frames F1 and F2 can be minimized, and reduction of the compression processing load can be realized by software.
  • DESCRIPTION OF SYMBOLS: 100 imaging element, 200 imaging surface, 500 electronic device, 501 imaging optical system, 502 control unit, 503 liquid crystal monitor, 504 memory card, 505 operation unit, 507 flash memory, 508 recording unit, 600 moving image file, B1 to Bn data blocks, 817 additional information, 901 image processing unit, 902 compression unit, 1201 processor, 1202 memory, 1203 integrated circuit, 1204 bus, 1210 preprocessing unit, 1211 detection unit, 1212 setting unit, 1213 identification unit, 1214 combining unit, 1220 acquisition unit, 1301 subtractor, 1302 DCT unit, 1303 quantizer, 1304 entropy encoder, 1305 code amount controller, 1306 inverse quantizer, 1307 inverse DCT unit, 1308 generator, 1309 frame memory, 1310 motion detection unit, 1311 motion compensator, 1312 compression controller

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention relates to an imaging device comprising: an imaging element having a first imaging region for imaging a subject and a second imaging region for imaging a subject, the imaging element allowing a first frame rate to be set for the first imaging region and a second frame rate faster than the first frame rate to be set for the second imaging region; a detection unit for detecting, on the basis of the imaging region of a specific subject included in a frame generated by output from the imaging element, the imaging region of the specific subject in the imaging element; and a setting unit for setting, to the second frame rate, the frame rate of a specific imaging region that includes the imaging region of the specific subject used to generate the frame and the imaging region detected by the detection unit.
PCT/JP2018/036134 2017-09-29 2018-09-27 Dispositif d'imagerie, dispositif de traitement d'image, dispositif de compression d'image animée, programme de réglage, programme de traitement d'image et programme de compression d'image animée WO2019065919A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017192111 2017-09-29
JP2017-192111 2017-09-29

Publications (1)

Publication Number Publication Date
WO2019065919A1 true WO2019065919A1 (fr) 2019-04-04

Family

ID=65902898

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/036134 WO2019065919A1 (fr) 2017-09-29 2018-09-27 Dispositif d'imagerie, dispositif de traitement d'image, dispositif de compression d'image animée, programme de réglage, programme de traitement d'image et programme de compression d'image animée

Country Status (1)

Country Link
WO (1) WO2019065919A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020080141A1 (fr) * 2018-10-19 2020-04-23 ソニー株式会社 Sensor device and signal processing method
US11470268B2 (en) 2018-10-19 2022-10-11 Sony Group Corporation Sensor device and signal processing method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006324834A (ja) * 2005-05-18 2006-11-30 Hitachi Ltd 撮像装置及び撮像方法
WO2013164915A1 (fr) * 2012-05-02 2013-11-07 株式会社ニコン Dispositif de formation d'image
JP2014165855A (ja) * 2013-02-27 2014-09-08 Nikon Corp 電子機器

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18861646

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18861646

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP