CN112640437B - Imaging element, imaging device, image data processing method, and storage medium - Google Patents


Info

Publication number
CN112640437B
CN112640437B CN201980056436.3A
Authority
CN
China
Prior art keywords
image data
captured image
bit
imaging element
compressed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201980056436.3A
Other languages
Chinese (zh)
Other versions
CN112640437A (en)
Inventor
河合智行
长谷川亮
樱武仁史
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujifilm Corp
Original Assignee
Fujifilm Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujifilm Corp filed Critical Fujifilm Corp
Publication of CN112640437A publication Critical patent/CN112640437A/en
Application granted granted Critical
Publication of CN112640437B publication Critical patent/CN112640437B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/651: Control of camera operation in relation to power supply for reducing power consumption by affecting camera operations, e.g. sleep mode, hibernation mode or power off of selective parts of the camera
    • H04N1/4172: Progressive encoding, i.e. by decomposition into high and low resolution components
    • H04N1/41: Bandwidth or redundancy reduction
    • H04N19/103: Selection of coding mode or of prediction mode
    • H04N19/136: Incoming video signal characteristics or properties
    • H04N19/184: Adaptive coding characterised by the coding unit, the unit being bits, e.g. of the compressed video stream
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/73: Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H04N23/80: Camera processing pipelines; Components thereof
    • H04N25/40: Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N25/71: Charge-coupled device [CCD] sensors; Charge-transfer registers specially adapted for CCD sensors
    • H04N25/75: Circuitry for providing, modifying or processing image signals from the pixel array
    • H04N23/53: Constructional details of electronic viewfinders, e.g. rotatable or detachable
    • H04N23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N23/667: Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Transforming Light Signals Into Electric Signals (AREA)

Abstract

The present technology relates to an imaging element, an imaging device, an image data processing method, and a storage medium. The imaging element includes: a storage section, built into the imaging element, that stores captured image data obtained by imaging a subject at a 1st frame rate; a processing section that performs processing on the captured image data; and an output section, built into the imaging element, that outputs at least one of the processed image data and the captured image data to the outside of the imaging element. The processing section generates compressed image data by dividing 1st captured image data into a plurality of bit ranges according to the degree of difference between the 1st captured image data obtained by imaging and 2nd captured image data stored in the storage section, and the output section outputs the compressed image data to the outside as processed image data at a 2nd frame rate.

Description

Imaging element, imaging device, image data processing method, and storage medium
Technical Field
The present technology relates to an imaging element, an imaging device, an image data processing method, and a storage medium.
Background
Japanese patent application laid-open No. 6-204891 discloses a data compression method using a run-length code. In that method, two consecutive words of sequentially input L-bit image data are first compared bit by bit, yielding differential data in which matching bits are set to "0" and non-matching bits to "1". Next, the number of consecutive "0"s from the most significant bit of the differential data is counted and converted into a run code of W = [log2 L] + 1 bits (where [·] is the Gauss symbol, i.e., rounding down to an integer). Finally, the low-order bits of the differential data below the leading run of "0"s plus one bit are appended to the run code, and the result is output as compressed data.
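As a rough illustration, the prior-art scheme above can be sketched in Python. The payload layout, appending the difference bits below an implied leading "1", is one plausible reading of the translated description, not a verified reconstruction of the original patent:

```python
def compress_word(prev: int, curr: int, nbits: int = 12) -> str:
    """Run-length differential compression of one L-bit word (L = nbits).

    XOR the two consecutive words, count the consecutive "0"s from the
    most significant bit of the difference, emit that count as a
    W = floor(log2 L) + 1 bit run code, then append the difference bits
    below the leading "1" (the "1" itself is implied by the run length).
    """
    w = nbits.bit_length()                    # W = floor(log2 L) + 1
    diff = format(prev ^ curr, f"0{nbits}b")  # "0" = match, "1" = mismatch
    run = len(diff) - len(diff.lstrip("0"))   # leading-zero count (nbits if equal)
    code = format(run, f"0{w}b")
    return code + diff[run + 1:] if run < nbits else code


def decompress_word(prev: int, packet: str, nbits: int = 12) -> int:
    """Invert compress_word, given the previous word as context."""
    w = nbits.bit_length()
    run = int(packet[:w], 2)
    if run == nbits:                          # words were identical
        return prev
    diff = "0" * run + "1" + packet[w:]       # re-insert the implied "1"
    return prev ^ int(diff, 2)
```

With these assumptions, two 12-bit words differing only near the low end compress to a packet shorter than 12 bits (a 4-bit run code plus the residual bits), while identical words compress to the 4-bit run code alone.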
Disclosure of Invention
Technical problem to be solved by the invention
However, suppose that image data obtained by photographing a subject with a photoelectric conversion element included in an imaging element is first output to the outside of the imaging element and only then compressed by the data compression method of Japanese patent application laid-open No. 6-204891. In that case, the data leaving the imaging element is the uncompressed image data, so there is a concern that power consumption increases with the output of the image data to the outside of the imaging element.
An embodiment of the present invention provides an imaging element, an imaging device, an image data processing method, and a storage medium, which can reduce power consumption caused by outputting image data to the outside of the imaging element, compared to a case where image data obtained by photographing is directly output to the outside of the imaging element.
Means for solving the technical problems
A 1st aspect of the technology of the present invention is an imaging element including: a storage section, built into the imaging element, that stores captured image data obtained by imaging a subject at a 1st frame rate; a processing section, built into the imaging element, that performs processing on the captured image data; and an output section, built into the imaging element, that outputs at least one of processed image data, obtained by performing the processing on the captured image data, and the captured image data to the outside of the imaging element. The processing section generates compressed image data by dividing 1st captured image data into a plurality of bit ranges according to the degree of difference between the 1st captured image data obtained by imaging and 2nd captured image data stored in the storage section, and the output section outputs the compressed image data generated by the processing section to the outside as the processed image data at a 2nd frame rate.
Thus, the imaging element according to aspect 1 of the present invention can reduce power consumption associated with the output of image data to the outside of the imaging element, as compared with the case where image data obtained by photographing is directly output to the outside of the imaging element.
A 2nd aspect of the technology of the present invention is the imaging element according to the 1st aspect, wherein the 1st frame rate is higher than the 2nd frame rate.
Thus, the imaging element of the 2nd aspect can generate compressed image data more quickly than when imaging is performed at the same frame rate as that used by the output section to output the compressed image data.
A 3rd aspect of the technology of the present invention is the imaging element according to the 1st or 2nd aspect, wherein the 2nd captured image data is image data obtained one or more frames before the 1st captured image data.
Thus, the imaging element of the 3rd aspect can obtain a larger degree of difference than when the 1st captured image data and the 2nd captured image data are obtained by imaging at the same time.
A 4th aspect of the technology of the present invention is the imaging element according to any one of the 1st to 3rd aspects, wherein the degree of difference is determined between the 1st captured image data and the 2nd captured image data in line units, each time a line of the 1st captured image data is read.
Thus, the imaging element of the 4th aspect can output compressed image data more promptly than when the degree of difference between the 1st captured image data and the 2nd captured image data is determined only after the 1st captured image data has been read in frame units.
A 5th aspect of the technology of the present invention is the imaging element according to any one of the 1st to 4th aspects, wherein the degree of difference is the degree of difference between predetermined upper bits of the 1st captured image data and of the 2nd captured image data.
Thus, the imaging element of the 5th aspect can determine the degree of difference between the 1st captured image data and the 2nd captured image data more quickly than when all bits of the two are compared.
A 6th aspect of the technology of the present invention is the imaging element according to the 5th aspect, wherein the 1st captured image data and the 2nd captured image data have the same number of bits, the compressed image data has a 2nd bit number smaller than a 1st bit number that is the number of bits of the 1st captured image data, and the predetermined upper bits are the bits corresponding to the value obtained by subtracting the 2nd bit number from the 1st bit number.
Thus, the imaging element of the 6th aspect can determine the degree of difference between the 1st captured image data and the 2nd captured image data with higher accuracy than when the comparison uses bits unrelated to the bit number of the compressed image data.
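To make the bit arithmetic of the 5th and 6th aspects concrete, the sketch below compares only the upper n bits of two pixel values, where n is the 1st bit number minus the 2nd bit number. The 12-bit and 10-bit widths are illustrative assumptions, not values fixed by the claims:

```python
def upper_bit_difference(curr: int, prev: int,
                         bits_1st: int = 12, bits_2nd: int = 10) -> int:
    """Degree of difference over the predetermined upper bits only.

    The number of compared upper bits is (1st bit number - 2nd bit
    number), e.g. 12 - 10 = 2, so only the top 2 bits of each pixel
    value take part in the comparison.
    """
    n_upper = bits_1st - bits_2nd    # how many upper bits to compare
    shift = bits_1st - n_upper       # discard everything below them
    return abs((curr >> shift) - (prev >> shift))
```

With these widths, two pixels whose values differ only in the lower 10 bits yield a degree of difference of 0 and are treated as unchanged.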
A 7th aspect of the technology of the present invention is the imaging element according to any one of the 1st to 6th aspects, wherein the compressed image data is data based on one piece of bit image data determined, according to the degree of difference, from among a plurality of pieces of bit image data obtained by dividing the 1st captured image data into a plurality of bit ranges.
Thus, the imaging element of the 7th aspect can reduce power consumption associated with outputting image data compared with outputting all bits of the 1st captured image data.
An 8th aspect of the technology of the present invention is the imaging element according to the 7th aspect, wherein the plurality of pieces of bit image data are high bit image data and low bit image data, and the compressed image data is based on the high bit image data when the degree of difference satisfies a predetermined condition and on the low bit image data when it does not.
Thus, the imaging element of the 8th aspect can adjust the balance between suppressing image quality degradation and reducing power consumption according to the movement of the subject, compared with outputting all bits of the 1st captured image data regardless of that movement.
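A minimal sketch of the selection in the 8th aspect, assuming a 12-bit pixel split into overlapping 10-bit high and low bit images. This split is an illustrative layout chosen here for the sketch; the patent text does not fix these widths:

```python
def select_bit_image(pixel_1st: int, condition_met: bool) -> int:
    """Choose between the high and low bit images of one 12-bit pixel.

    High bit image = upper 10 bits (coarse values, robust to motion);
    low bit image = lower 10 bits (fine gradation). The predetermined
    condition on the degree of difference picks between them.
    """
    if condition_met:                # scene changed: keep the upper range
        return pixel_1st >> 2
    return pixel_1st & 0x3FF         # scene static: keep the fine lower range
```

Either way, only 10 of the 12 captured bits leave the imaging element, which is where the output power saving comes from.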
A 9th aspect of the technology of the present invention is the imaging element according to the 7th or 8th aspect, wherein a part of the bits of the compressed image data carries bit image specifying information that can specify which of the plurality of pieces of bit image data the compressed image data is based on.
Thus, the imaging element of the 9th aspect can determine which piece of bit image data the compressed image data is based on more quickly than when the bit image specifying information is output at a time different from the output time of the compressed image data.
A 10th aspect of the technology of the present invention is the imaging element according to any one of the 1st to 9th aspects, wherein the compressed image data is image data in line units and has divided image specifying information that can specify which of a plurality of pieces of divided image data, obtained by dividing the 1st captured image data into a plurality of bit ranges, the compressed image data is based on.
Thus, the imaging element of the 10th aspect can determine, in line units, which piece of divided image data the compressed image data is based on.
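The 9th and 10th aspects can be illustrated by reserving one bit of each output word as the specifying information. The 11-bit word layout with the flag in the most significant bit is an assumed format for illustration only:

```python
def pack_with_flag(payload_10bit: int, from_high: bool) -> int:
    """Attach 1-bit specifying information to a 10-bit compressed value.

    Flag in the MSB of an 11-bit word: 1 = payload came from the high
    bit image, 0 = payload came from the low bit image.
    """
    return (int(from_high) << 10) | (payload_10bit & 0x3FF)


def unpack_with_flag(word_11bit: int):
    """Recover (payload, came_from_high_bit_image) from one word."""
    return word_11bit & 0x3FF, bool(word_11bit >> 10)
```

Because the flag travels inside the same word as the payload, the receiver can interpret each value (or each line, in the line-unit variant) without waiting for separate side information.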
An 11th aspect of the technology of the present invention is the imaging element according to any one of the 1st to 10th aspects, wherein, when imaging for a moving image is started, the output section outputs the 1st captured image data to the outside before the 2nd captured image data is stored in the storage section.
Thus, the imaging element of the 11th aspect can avoid a delay in the output of image data by the output section even before the 2nd captured image data is stored in the storage section.
A 12th aspect of the technology of the present invention is the imaging element according to any one of the 1st to 11th aspects, wherein, when imaging for a moving image is started, the output section outputs, to the outside, data based on image data belonging to a specific bit range of the 1st captured image data before the 2nd captured image data is stored in the storage section.
Thus, the imaging element of the 12th aspect can reduce power consumption associated with outputting image data compared with outputting the entire 1st captured image data before the 2nd captured image data is stored in the storage section.
A 13th aspect of the technology of the present invention is the imaging element according to any one of the 1st to 12th aspects, wherein, when imaging for a moving image is started and before the 2nd captured image data is stored in the storage section, the output section outputs, to the outside, substitute compressed image data obtained by dividing the 1st captured image data into a plurality of bit ranges according to the degree of difference between the 1st captured image data and image data predetermined as a substitute for the 2nd captured image data.
Thus, the imaging element of the 13th aspect can reduce power consumption associated with outputting image data compared with outputting the entire 1st captured image data before the 2nd captured image data is stored in the storage section.
In a 14th aspect of the technology of the present invention, when the imaging element according to any one of the 1st to 13th aspects captures still images continuously at predetermined time intervals, the output section outputs the 1st captured image data, or image data belonging to a predetermined bit range of the 1st captured image data, to the outside before the 2nd captured image data is stored in the storage section, and outputs the compressed image data to the outside on condition that the 2nd captured image data has been stored in the storage section.
Thus, the imaging element of the 14th aspect can reduce power consumption associated with outputting image data compared with always outputting the 1st captured image data before the 2nd captured image data is stored in the storage section.
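The fallback behavior of the 11th to 14th aspects amounts to a simple branch on whether a reference frame has been stored yet. The function and its arguments below are illustrative stand-ins, not names from the patent:

```python
def output_frame(curr_frame, stored_ref, compress, passthrough):
    """Emit one frame according to whether a reference frame exists.

    Before the 2nd captured image data is stored, fall back to the
    uncompressed path (full data or a fixed bit range); once a
    reference exists, emit difference-based compressed image data.
    """
    if stored_ref is None:               # start of capture: no reference yet
        return passthrough(curr_frame)
    return compress(curr_frame, stored_ref)
```

The same branch covers moving-image start-up and continuous still capture; only the choice of `passthrough` (full frame versus a fixed bit range) differs between the aspects.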
A 15th aspect of the technology of the present invention is the imaging element according to any one of the 1st to 14th aspects, wherein the imaging element is a stacked imaging element having a photoelectric conversion element on which the storage section is stacked.
Thus, the imaging element of the 15th aspect can determine the degree of difference between the 1st captured image data and the 2nd captured image data more quickly than an imaging element in which no storage section is stacked on the photoelectric conversion element.
A 16th aspect of the technology of the present invention is an imaging device including: the imaging element according to any one of the 1st to 15th aspects; and a control section that performs control to display, on a display section, an image based on the compressed image data output by the output section included in the imaging element.
Thus, the imaging device of the 16th aspect can reduce power consumption associated with the output of image data to the outside of the imaging element compared with directly outputting image data obtained by imaging to the outside of the imaging element.
A 17th aspect of the technology of the present invention is an image data processing method for an imaging element incorporating a storage section, a processing section, and an output section, the method including: causing the storage section to store captured image data obtained by imaging a subject at a 1st frame rate; causing the processing section to perform processing on the captured image data; causing the output section to output at least one of processed image data, obtained by performing the processing on the captured image data, and the captured image data to the outside of the imaging element; causing the processing section to generate compressed image data by dividing 1st captured image data into a plurality of bit ranges according to the degree of difference between the 1st captured image data obtained by imaging and 2nd captured image data stored in the storage section; and causing the output section to output the compressed image data generated by the processing section to the outside as the processed image data at a 2nd frame rate.
Thus, the image data processing method of the 17th aspect can reduce power consumption associated with the output of image data to the outside of the imaging element compared with directly outputting image data obtained by imaging to the outside of the imaging element.
An 18th aspect of the technology of the present invention is a program for causing a computer to function as the processing section and the output section included in an imaging element incorporating a storage section, a processing section, and an output section, wherein: the storage section stores captured image data obtained by imaging a subject at a 1st frame rate; the processing section performs processing on the captured image data; the output section outputs at least one of processed image data, obtained by performing the processing on the captured image data, and the captured image data to the outside of the imaging element; the processing section generates compressed image data by dividing 1st captured image data into a plurality of bit ranges according to the degree of difference between the 1st captured image data obtained by imaging with a photoelectric conversion element and 2nd captured image data stored in the storage section; and the output section outputs the compressed image data generated by the processing section to the outside as the processed image data at a 2nd frame rate.
Thus, the program of the 18th aspect can reduce power consumption associated with the output of image data to the outside of the imaging element compared with directly outputting image data obtained by imaging to the outside of the imaging element.
A 19th aspect of the technology of the present invention is an imaging element including: a memory, built into the imaging element, that stores captured image data obtained by imaging a subject at a 1st frame rate; and a processor, built into the imaging element, that performs processing on the captured image data and outputs at least one of processed image data, obtained by performing the processing on the captured image data, and the captured image data to the outside of the imaging element. The processor generates compressed image data by dividing 1st captured image data into a plurality of bit ranges according to the degree of difference between the 1st captured image data obtained by imaging and 2nd captured image data stored in the memory, and outputs the generated compressed image data to the outside as the processed image data at a 2nd frame rate.
According to one embodiment of the technology of the present invention, the following effect can be obtained: power consumption associated with the output of image data to the outside of the imaging element can be reduced compared with directly outputting image data obtained by imaging to the outside of the imaging element.
Drawings
Fig. 1 is a perspective view showing an external appearance of an imaging device as an interchangeable lens camera according to embodiments 1 to 4.
Fig. 2 is a rear view showing the back side of the imaging device according to embodiments 1 to 4.
Fig. 3 is a block diagram showing an example of a hardware configuration of the imaging device according to embodiments 1 to 4.
Fig. 4 is a schematic configuration diagram showing an example of the configuration of a hybrid viewfinder in the imaging device according to embodiments 1 to 4.
Fig. 5 is a schematic configuration diagram showing an example of a schematic configuration of an imaging element included in the imaging device according to embodiments 1 to 4.
Fig. 6 is a block diagram showing an example of the configuration of the main part of an imaging element included in the imaging device according to embodiments 1 to 4.
Fig. 7 is a flowchart showing an example of the compression processing flow according to embodiment 1.
Fig. 8 is a flowchart showing an example of the flow of image data output processing according to embodiments 1 to 4.
Fig. 9 is a flowchart showing an example of the flow of display control processing according to embodiments 1 to 4.
Fig. 10 is an explanatory diagram for explaining an example of a method of generating compressed pixel data.
Fig. 11 is an explanatory diagram for explaining an example of the method of determining the upper n bits.
Fig. 12 is a flowchart showing an example of the compression processing flow according to embodiment 2.
Fig. 13 is a schematic configuration diagram showing an example of the configuration of compressed image data of one line size.
Fig. 14 is a schematic block diagram showing a modification of the structure of compressed image data of one line size.
Fig. 15 is a flowchart showing an example of the compression processing flow according to embodiment 3.
Fig. 16 is a flowchart showing an example of the compression processing flow according to embodiment 4.
Fig. 17 is a schematic configuration diagram showing an example of a method for assigning a bit range determination flag to compressed pixel data.
Fig. 18 is a state transition diagram showing example 1 of an image data flow when compression processing and image data output processing are performed.
Fig. 19 is a state transition diagram showing example 2 of the image data flow when the compression process and the image data output process are performed.
Fig. 20 is a state transition diagram showing example 3 of the image data flow when the compression process and the image data output process are performed.
Fig. 21 is a state transition diagram showing the 4 th example of the image data flow when the compression process and the image data output process are performed.
Fig. 22 is a state transition diagram showing the 5 th example of the image data flow when the compression process and the image data output process are performed.
Fig. 23 is a conceptual diagram illustrating an example of a mode in which a program according to the embodiment is attached to an imaging element from a storage medium storing the program according to the embodiment.
Fig. 24 is a block diagram showing an example of a schematic configuration of a smart device in which an imaging element according to the embodiment is incorporated.
Detailed Description
An example of an embodiment of an imaging device according to the technology of the present invention will be described below with reference to the drawings.
[ Embodiment 1]
As an example, as shown in fig. 1, the image pickup apparatus 10 is a lens-interchangeable digital camera from which a reflex mirror is omitted, and includes an image pickup apparatus main body 12 and an interchangeable lens 14 interchangeably mounted to the image pickup apparatus main body 12. The interchangeable lens 14 includes an imaging lens 18 having a focusing lens 16 movable in the optical axis direction by manual operation.
A hybrid viewfinder (registered trademark) 21 is provided on the imaging device main body 12. Here, the hybrid viewfinder 21 refers to, for example, a viewfinder in which an optical viewfinder (hereinafter referred to as "OVF") and an electronic viewfinder (hereinafter referred to as "EVF") are selectively used.
The interchangeable lens 14 is interchangeably attached to the image pickup apparatus main body 12. Further, a focus ring 22 used in the manual focus mode is provided on the barrel of the interchangeable lens 14. The focus lens 16 moves in the optical axis direction in response to a manual rotation operation of the focus ring 22, and the subject light is imaged on an imaging element 20 (see fig. 3) described later at a focus position corresponding to the subject distance.
On the front surface of the image pickup apparatus main body 12, an OVF viewfinder window 24 included in the hybrid viewfinder 21 is provided. A viewfinder switching lever (viewfinder switching section) 23 is provided on the front surface of the imaging apparatus main body 12. When the viewfinder switching lever 23 is turned in the direction of arrow SW, switching is performed between an optical image visually recognizable by OVF and an electronic image visually recognizable by EVF (through image).
The optical axis L2 of the OVF is different from the optical axis L1 of the interchangeable lens 14. A release button 25 and a setting dial 28 for setting an imaging system mode, a playback system mode, and the like are provided on the upper surface of the imaging apparatus main body 12.
The release button 25 functions as an imaging preparation instruction unit and an imaging instruction unit, and can detect a pressing operation in two stages, i.e., an imaging preparation instruction state and an imaging instruction state. The imaging preparation instruction state is, for example, a state of being pressed from the standby position to the intermediate position (half-pressed position), and the imaging instruction state is a state of being pressed to the final pressed position (full-pressed position) beyond the intermediate position. Hereinafter, the "state of being pressed from the standby position to the half-pressed position" is referred to as a "half-pressed state", and the "state of being pressed from the standby position to the full-pressed position" is referred to as a "full-pressed state".
In the imaging device 10 according to embodiment 1, the imaging mode and the playback mode are selectively set as the operation mode in accordance with an instruction from the user. In the imaging mode, the manual focus mode and the auto-focus mode are selectively set in accordance with an instruction from the user. In the auto-focus mode, the release button 25 is set to the half-pressed state to adjust the imaging conditions, and exposure is then performed when the full-pressed state immediately follows. That is, when the release button 25 is set to the half-pressed state, the AE (Automatic Exposure) function operates to set the exposure state, the AF (Auto-Focus) function then operates to control focusing, and when the release button 25 is set to the full-pressed state, shooting is performed.
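For illustration only, the two-stage press behavior described above can be sketched as a simple state-to-action mapping; the function name and the action labels are assumptions made for this sketch and do not appear in the patent text.

```python
def on_release_button(state):
    """Illustrative model of the two-stage release button (names hypothetical).

    state: "standby", "half-pressed", or "full-pressed".
    """
    actions = []
    if state in ("half-pressed", "full-pressed"):
        actions.append("AE: set exposure state")   # AE starts at half-press
        actions.append("AF: control focusing")     # AF follows AE
    if state == "full-pressed":
        actions.append("capture")                  # exposure performed at full press
    return actions
```

In this model a full press implies that the half-press actions have already run, matching the description that AE and AF precede shooting.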
As an example, as shown in fig. 2, a touch panel/display 30, a cross key 32, a menu key 34, an instruction button 36, and a viewfinder eyepiece 38 are provided on the back surface of the imaging apparatus main body 12.
The touch panel/display 30 includes a liquid crystal display (hereinafter referred to as "1 st display") 40 and a touch panel 42 (see fig. 3).
The 1 st display 40 displays an image, character information, and the like. The 1 st display 40 displays a live view image (live view image), which is an example of a continuous frame image obtained by capturing continuous frames in the imaging mode. The 1 st display 40 is also configured to display a still image, which is an example of a single frame image that is captured in a single frame when an instruction to capture the still image is given. Further, the 1 st display 40 is also used to display a playback image in the playback mode and/or to display a menu screen or the like.
The touch panel 42 is a transmissive touch panel and is superimposed on the surface of the display area of the 1 st display 40. The touch panel 42 detects contact by an indicator such as a finger or a stylus. The touch panel 42 outputs detection result information indicating the detection result (whether or not the indicator is in contact with the touch panel 42) to a predetermined output destination (for example, a CPU52 (see fig. 3) described later) at a predetermined period (for example, 100 ms). When the touch panel 42 detects contact by the indicator, the detection result information includes two-dimensional coordinates (hereinafter referred to as "coordinates") from which the contact position of the indicator on the touch panel 42 can be determined; when the touch panel 42 does not detect contact by the indicator, the detection result information does not include the coordinates.
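As an illustrative sketch only (the names `DetectionResult` and `poll_touch_panel` are hypothetical, not from the text), the detection result information described above, which carries coordinates only while contact is detected, could be modeled as:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class DetectionResult:
    """Hypothetical model of the periodic detection result information."""
    touching: bool
    coordinates: Optional[Tuple[int, int]] = None  # present only while touching

def poll_touch_panel(contact):
    # contact: (x, y) while the indicator touches the panel, else None
    if contact is None:
        return DetectionResult(touching=False)     # no coordinates included
    return DetectionResult(touching=True, coordinates=contact)
```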
The cross key 32 functions as a multifunction key that outputs various instruction signals such as selection of one or more menus, zooming, and/or frame advance. The menu key 34 is an operation key having the following two functions: a function as a menu button for issuing an instruction to display one or more menus on the screen of the 1 st display 40; and a function as an instruction button for issuing an instruction to confirm and execute the selected content or the like. The instruction button 36 is operated to delete a desired object such as a selection item, cancel specified content, return to the previous operation state, and the like.
The image pickup apparatus 10 has a still image pickup mode and a moving image pickup mode as operation modes of the image pickup system. The still image capturing mode is an operation mode in which a still image obtained by capturing an object by the image capturing apparatus 10 is recorded, and the moving image capturing mode is an operation mode in which a moving image obtained by capturing an object by the image capturing apparatus 10 is recorded.
As an example, as shown in fig. 3, the imaging device 10 includes a bayonet 46 (see also fig. 1) provided on the imaging device body 12 and a bayonet 44 on the interchangeable lens 14 side corresponding to the bayonet 46. The interchangeable lens 14 is interchangeably attached to the imaging apparatus main body 12 by coupling the bayonet 44 to the bayonet 46.
The imaging lens 18 includes a slide mechanism 48 and a motor 50. The focus lens 16 is slidably mounted on the slide mechanism 48 along the optical axis L1, and the slide mechanism 48 moves the focus lens 16 along the optical axis L1 in response to operation of the focus ring 22. The motor 50 is connected to the slide mechanism 48, and the slide mechanism 48 receives power from the motor 50 to slide the focus lens 16 along the optical axis L1.
The motor 50 is connected to the imaging device body 12 via the bayonets 44 and 46, and its driving is controlled in accordance with commands from the imaging device body 12. In embodiment 1, a stepping motor is applied as an example of the motor 50. Accordingly, the motor 50 operates in synchronization with pulse power in response to commands from the imaging apparatus main body 12. In the example shown in fig. 3, the motor 50 is provided in the imaging lens 18, but the present invention is not limited to this, and the motor 50 may be provided in the imaging device main body 12.
The image pickup apparatus 10 is a digital camera that records still images and moving images obtained by capturing a subject. The imaging apparatus main body 12 includes an operation unit 54, an external interface (I/F) 63, and a post-stage circuit 90. The post-stage circuit 90 is a circuit that receives data sent from the imaging element 20. In embodiment 1, an IC (Integrated Circuit) is used as the post-stage circuit 90. An LSI (Large Scale Integration) is exemplified as an example of the IC.
The image pickup device 10 operates in either a low-speed mode or a high-speed mode. The low-speed mode is an operation mode in which the post-stage circuit 90 performs processing at a low frame rate. In embodiment 1, 60 fps (frames per second) is used as the low frame rate.

In contrast, the high-speed mode is an operation mode in which the post-stage circuit 90 performs processing at a high frame rate. In embodiment 1, 240 fps is used as the high frame rate.

In embodiment 1, 60 fps is illustrated as the low frame rate and 240 fps as the high frame rate, but the technique of the present invention is not limited to these values. For example, the low frame rate may be 30 fps and the high frame rate may be 120 fps. In short, it suffices that the high frame rate be higher than the low frame rate.
The post-stage circuit 90 includes a CPU (Central Processing Unit) 52, an I/F56, a main storage portion 58, a sub-storage portion 60, an image processing portion 62, a 1 st display control portion 64, a 2 nd display control portion 66, a position detection portion 70, and a device control portion 74. In embodiment 1, a single CPU is illustrated as the CPU52, but the technique of the present invention is not limited to this, and a plurality of CPUs may be used instead of the CPU52. That is, the various processes performed by the CPU52 may be performed by one processor or by a plurality of physically separate processors.

In embodiment 1, the image processing unit 62, the 1 st display control unit 64, the 2 nd display control unit 66, the position detecting unit 70, and the device control unit 74 are each implemented by an ASIC (Application Specific Integrated Circuit), but the technique of the present invention is not limited thereto. For example, at least one of a PLD (Programmable Logic Device) and an FPGA (Field-Programmable Gate Array) may be used instead of the ASIC. Alternatively, at least one of an ASIC, a PLD, and an FPGA may be employed. A computer including a CPU, a ROM (Read Only Memory), and a RAM (Random Access Memory) may also be employed. The CPU may be single or plural. At least one of the image processing unit 62, the 1 st display control unit 64, the 2 nd display control unit 66, the position detection unit 70, and the device control unit 74 may be realized by a combination of hardware and software configurations.
The CPU52, the I/F56, the main storage 58, the sub storage 60, the image processing section 62, the 1 st display control section 64, the 2 nd display control section 66, the operation section 54, the external I/F63, and the touch panel 42 are connected to each other via a bus 68.
The CPU52 controls the entire image pickup apparatus 10. In the imaging apparatus 10 according to embodiment 1, in the auto-focus mode, the CPU52 performs focusing control by controlling the driving of the motor 50 so as to maximize the contrast value of the image obtained by imaging. In the auto-focus mode, the CPU52 also calculates AE information, which is a physical quantity indicating the brightness of the image obtained by shooting. When the release button 25 is set to the half-pressed state, the CPU52 derives a shutter speed and an F value corresponding to the brightness of the image indicated by the AE information. The CPU52 then controls the relevant portions so as to achieve the derived shutter speed and F value, thereby setting the exposure state.

The main storage 58 is a volatile memory, for example, a RAM. The sub-storage unit 60 is a nonvolatile memory, for example, a flash memory or an HDD (Hard Disk Drive).

The operation unit 54 is a user interface operated by the user when issuing various instructions to the post-stage circuit 90. The operation section 54 includes the release button 25, the setting dial 28, the viewfinder switching lever 23, the cross key 32, the menu key 34, and the instruction button 36. The various instructions received through the operation section 54 are output as operation signals to the CPU52, and the CPU52 executes processing corresponding to the operation signals input from the operation section 54.
The position detecting unit 70 is connected to the CPU52. The position detecting unit 70 is connected to the focus ring 22 via the bayonets 44, 46, detects the rotation angle of the focus ring 22, and outputs rotation angle information indicating the rotation angle as the detection result to the CPU52. The CPU52 executes processing corresponding to the rotation angle information input from the position detecting section 70.
When the imaging mode is set, image light representing the subject is imaged on the light receiving surface of the color imaging element 20 via the imaging lens 18, which includes the focus lens 16 movable by manual operation, and the mechanical shutter 72.
The device control section 74 is connected to the CPU52. The device control unit 74 is connected to the imaging element 20 and the mechanical shutter 72. The device control unit 74 is connected to the motor 50 of the imaging lens 18 via the bayonets 44 and 46.
The device control section 74 controls the imaging element 20, the mechanical shutter 72, and the motor 50 under the control of the CPU 52.
As an example, as shown in fig. 4, the hybrid viewfinder 21 includes an OVF76 and an EVF78. The OVF76 is an inverse galilean viewfinder having an objective lens 81 and an eyepiece lens 86, and the EVF78 has a2 nd display 80, a prism 84, and an eyepiece lens 86.
A liquid crystal shutter 88 is disposed in front of the objective lens 81; when the EVF78 is used, the liquid crystal shutter 88 shields light so that the optical image is not incident on the objective lens 81.

The prism 84 reflects the electronic image and/or various information displayed on the 2 nd display 80 and guides them to the eyepiece lens 86, thereby combining the optical image with the electronic image and/or various information displayed on the 2 nd display 80.
Here, each time the viewfinder switching lever 23 is turned in the direction of arrow SW shown in fig. 1, the mode is switched alternately between the OVF mode, in which the optical image is visually recognized through the OVF76, and the EVF mode, in which the electronic image is visually recognized through the EVF78.
In the OVF mode, the 2 nd display control unit 66 controls the liquid crystal shutter 88 to be in a non-light-shielding state so that the optical image can be visually recognized from the eyepiece portion. In the EVF mode, the 2 nd display control unit 66 controls the liquid crystal shutter 88 to be in a light-shielding state so that only the electronic image displayed on the 2 nd display 80 can be visually recognized from the eyepiece unit.
The imaging element 20 is an example of the "stacked imaging element" according to the technology of the present invention. The imaging element 20 is, for example, a CMOS (Complementary Metal Oxide Semiconductor) image sensor. As an example, as shown in fig. 5, the imaging element 20 incorporates a photoelectric conversion element 92, a processing circuit 94, and a memory 96. In the imaging element 20, the processing circuit 94 and the memory 96 are stacked on the photoelectric conversion element 92. The memory 96 is an example of the "storage unit" according to the technology of the present invention.

The processing circuit 94 is, for example, an LSI, and the memory 96 is, for example, a RAM. In embodiment 1, a DRAM is used as an example of the memory 96, but the technique of the present invention is not limited to this, and an SRAM (Static Random Access Memory) may be used.

In embodiment 1, the processing circuit 94 is implemented by an ASIC, but the technique of the present invention is not limited thereto. For example, at least one of a PLD and an FPGA may be used instead of the ASIC. Alternatively, at least one of an ASIC, a PLD, and an FPGA may be employed. A computer including a CPU, a ROM, and a RAM may also be employed. The CPU may be single or plural. The processing circuit 94 may also be realized by a combination of hardware and software configurations.

The photoelectric conversion element 92 has a plurality of photosensors arranged in a matrix. In embodiment 1, photodiodes are adopted as an example of the photosensors. As an example of the plurality of photosensors, photodiodes corresponding to 4896×3265 pixels are used.
The photoelectric conversion element 92 includes color filters: a G filter corresponding to G (green), which contributes most to obtaining a luminance signal, an R filter corresponding to R (red), and a B filter corresponding to B (blue). In embodiment 1, the G filters, R filters, and B filters are arranged over the plurality of photodiodes of the photoelectric conversion element 92 with a predetermined periodicity in the row direction (horizontal direction) and the column direction (vertical direction). Therefore, the image pickup apparatus 10 can perform processing in accordance with the repeating pattern when performing the synchronization processing of the R, G, and B signals and the like. The synchronization processing is processing for calculating all color information for each pixel from a mosaic image corresponding to the color filter array of a single-plate color imaging element. For example, in the case of an imaging element having RGB three-color filters, the synchronization processing calculates all of the RGB color information for each pixel from a mosaic image composed of RGB.
Although a CMOS image sensor is illustrated as the imaging element 20, the technique of the present invention is not limited thereto; it is also applicable even if the photoelectric conversion element 92 is, for example, a CCD (Charge Coupled Device) image sensor.
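A minimal sketch of the synchronization processing described above, assuming an RGGB arrangement and a naive per-block calculation (real implementations interpolate between neighboring blocks; the function name and the RGGB assumption are not from the text):

```python
import numpy as np

def synchronize_rggb(mosaic):
    """Toy 'synchronization' (demosaicing) sketch for an RGGB Bayer mosaic.

    Averages the two G samples in each 2x2 block and replicates R, G, B
    across the block, so every pixel carries full RGB information.
    Assumes even dimensions (an assumption for this sketch).
    """
    h, w = mosaic.shape
    assert h % 2 == 0 and w % 2 == 0, "even dimensions assumed"
    r = mosaic[0::2, 0::2].astype(np.float64)
    g = (mosaic[0::2, 1::2] + mosaic[1::2, 0::2]).astype(np.float64) / 2.0
    b = mosaic[1::2, 1::2].astype(np.float64)
    rgb = np.empty((h, w, 3), dtype=np.float64)
    for ch, plane in enumerate((r, g, b)):
        # expand each per-block value to the full 2x2 block
        rgb[:, :, ch] = np.repeat(np.repeat(plane, 2, axis=0), 2, axis=1)
    return rgb
```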
The imaging element 20 has a so-called electronic shutter function; by activating the electronic shutter function under the control of the device control section 74, the charge accumulation time of each photodiode in the photoelectric conversion element 92 is controlled. The charge accumulation time corresponds to the so-called shutter speed.
The processing circuit 94 is controlled by the device control section 74. The processing circuit 94 reads captured image data obtained by capturing the subject with the photoelectric conversion element 92. Here, "captured image data" refers to image data representing the subject. The captured image data is based on the signal charges accumulated in the photoelectric conversion element 92. As described in detail later, the captured image data is roughly divided into 1 st captured image data and 2 nd captured image data.

In embodiment 1, image data obtained by capturing the subject with a plurality of photodiodes included in a designated partial area is used as an example of the captured image data, but the technique of the present invention is not limited to this. For example, image data obtained by capturing the subject with all the photodiodes in the photoelectric conversion element 92 may be employed.
The processing circuit 94 performs a/D (Analog/Digital: analog/Digital) conversion on the captured image data read from the photoelectric conversion element 92. The processing circuit 94 stores captured image data obtained by a/D converting the captured image data in the memory 96. The processing circuit 94 acquires captured image data from the memory 96. The processing circuit 94 outputs processed image data obtained by performing processing on the acquired captured image data to the I/F56 of the post-stage circuit 90.
In the following, for convenience of explanation, an example of a mode in which the processing circuit 94 outputs the processed image data to the I/F56 of the subsequent stage circuit 90 will be described, but the technique of the present invention is not limited thereto. For example, the processing circuit 94 may output the acquired captured image data to the I/F56 of the subsequent stage circuit 90, or may output both the captured image data and the processed image data to the I/F56 of the subsequent stage circuit 90. The processing circuit 94 may selectively output the captured image data and the processed image data to the I/F56 of the subsequent circuit 90 according to an instruction issued by the user or the imaging environment.
As an example, as shown in fig. 6, the processing circuit 94 includes a photoelectric conversion element drive circuit 94A, an AD (Analog-to-Digital) conversion circuit 94B, an image processing circuit 94C, and an output circuit 94D, and operates under the control of the CPU52. The photoelectric conversion element drive circuit 94A is connected to the photoelectric conversion element 92 and the AD conversion circuit 94B. The memory 96 is connected to the AD conversion circuit 94B and the image processing circuit 94C. The image processing circuit 94C is connected to the output circuit 94D. The output circuit 94D is connected to the I/F56 of the post-stage circuit 90.
The image processing circuit 94C is an example of a "processing unit" according to the technology of the present disclosure. The output circuit 94D is an example of an "output unit" according to the technique of the present invention.
The photoelectric conversion element drive circuit 94A controls the photoelectric conversion element 92 under the control of the device control section 74, and reads analog captured image data from the photoelectric conversion element 92. The AD conversion circuit 94B digitizes the captured image data read by the photoelectric conversion element drive circuit 94A, and stores the digitized captured image data in the memory 96.
In the image pickup apparatus 10, various processes may be performed at a plurality of frame rates including the 1 st frame rate and the 2 nd frame rate. The 1 st frame rate and the 2 nd frame rate are both variable frame rates, and the 1 st frame rate is a frame rate higher than the 2 nd frame rate.
The image processing circuit 94C performs processing on captured image data stored in the memory 96. The output circuit 94D outputs the processed image data obtained by the processing performed by the image processing circuit 94C to the outside of the imaging element 20 at the 2 nd frame rate. The term "outside of the imaging element 20" as used herein refers to the I/F56 of the subsequent stage circuit 90.
In addition, in the imaging element 20, the subject is photographed at the 1 st frame rate. In the imaging element 20, the reading by the photoelectric conversion element drive circuit 94A, the storing of the captured image data in the memory 96 by the AD conversion circuit 94B, and the processing by the image processing circuit 94C are all performed at the 1 st frame rate, but the technique of the present invention is not limited to this. Among these three processes, it suffices that at least the reading by the photoelectric conversion element drive circuit 94A and the storing of the captured image data in the memory 96 by the AD conversion circuit 94B be performed at the 1 st frame rate. For example, an embodiment can be exemplified in which only the reading by the photoelectric conversion element drive circuit 94A and the storing of the captured image data in the memory 96 by the AD conversion circuit 94B are performed at the 1 st frame rate.
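A minimal scheduling sketch of the relationship between the two frame rates, assuming the 1 st frame rate is an integer multiple of the 2 nd frame rate (e.g. 240 fps and 60 fps): frames are read, stored, and processed at the 1 st frame rate, while only every fourth processed frame is handed to the output at the 2 nd frame rate. The function name and this decimation scheme are assumptions for the sketch, not taken from the text.

```python
def frames_output(frame_indices, rate1=240, rate2=60):
    """Return the indices of frames handed to the output circuit,
    given frames processed at rate1 but output at rate2 (illustrative)."""
    step = rate1 // rate2          # 240/60 -> hand over every 4th frame
    return [i for i in frame_indices if i % step == 0]
```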
The image processing circuit 94C generates compressed image data in which the 1 st captured image data is compressed using a plurality of bit ranges that are selected according to the degree of difference between the 1 st captured image data and the 2 nd captured image data.

Here, the 1 st captured image data is captured image data obtained by capturing the subject with the photoelectric conversion element 92, and the 2 nd captured image data is captured image data stored in the memory 96. In other words, of a pair of captured image data whose capture times are earlier and later, the captured image data obtained by the earlier capture is the 2 nd captured image data, and the captured image data obtained by the later capture is the 1 st captured image data. In embodiment 1, the 1 st captured image data is the latest captured image data obtained by capturing the subject with the photoelectric conversion element 92, and the 2 nd captured image data is the captured image data obtained one frame before the 1 st captured image data.
The term "plurality of bit ranges" as used herein refers to, for example, a high-order bit range and a low-order bit range. When the captured image data of one frame amount is 12 bits, the upper bit range means the upper 6 bits out of the 12 bits, and the lower bit range means the lower 6 bits out of the 12 bits. The term "12 bits of captured image data of one frame amount" as used herein means that each pixel has a pixel value of 12 bits. Thus, the upper bit range refers to the upper 6 bits of the 12 bits of the pixel value for each pixel, and the lower bit range refers to the lower 6 bits of the 12 bits of the pixel value for each pixel. Hereinafter, the pixel value is also referred to as "pixel data".
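The bit-range split described above (a 12-bit pixel value divided into an upper 6-bit range and a lower 6-bit range) can be sketched as follows; the helper name is hypothetical.

```python
def bit_ranges(pixel12):
    """Split a 12-bit pixel value into its upper and lower 6-bit ranges."""
    assert 0 <= pixel12 < 4096               # 12-bit pixel data
    upper6 = (pixel12 >> 6) & 0x3F           # upper bit range
    lower6 = pixel12 & 0x3F                  # lower bit range
    return upper6, lower6
```

Concatenating the two ranges recovers the original 12-bit pixel value, which is what makes the split lossless at this stage.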
Next, the operation of the imaging device 10 according to the technique of the present invention will be described.
In the following description, for convenience, when the 1 st display 40 and the 2 nd display 80 need not be distinguished, they are referred to without reference numerals as the "display device". The display device is an example of the "display unit" according to the technology of the present invention. Similarly, when the 1 st display control unit 64 and the 2 nd display control unit 66 need not be distinguished, they are referred to without reference numerals as the "display control unit". The display control unit is an example of the "control unit" according to the technology of the present invention.

In the following, for convenience of explanation, a case where a live view image is displayed on the display device will be described. For convenience of explanation, the memory 96 is assumed to be a memory capable of storing two or more frames of captured image data in a FIFO manner, and it is assumed that two or more frames of captured image data are stored in the memory 96. Of two temporally consecutive frames of captured image data stored in the memory 96, the captured image data stored first in the memory 96 is the 2 nd captured image data, and the captured image data stored in the memory 96 after it is the 1 st captured image data. Hereinafter, the image represented by the 1 st captured image data is referred to as the "1 st captured image", and the image represented by the 2 nd captured image data is referred to as the "2 nd captured image".
First, with reference to fig. 7, a compression process performed by the image processing circuit 94C when the image processing circuit 94C of the processing circuit 94 generates compressed image data of one frame amount will be described.
In the compression process shown in fig. 7, the processing circuit 94 performs the compression process at the 1 st frame rate. For convenience of explanation, in the compression process shown in fig. 7, the number of bits of captured image data of one frame amount stored in the memory 96 is set to 12 bits, and the 1 st captured image data is compressed to 7-bit image data. The number of original bits of the 1 st captured image data and the 2 nd captured image data is 12. The "original number of bits" here refers to the number of bits of the 1 st captured image data and the 2 nd captured image data stored in the memory 96 before the compression process shown in fig. 7 is performed. In the compression process shown in fig. 7, 12 bits are an example of "1 st bit" according to the technique of the present invention, and 7 bits are an example of "2 nd bit" according to the technique of the present invention.
In the compression process shown in fig. 7, first, in step S100, the image processing circuit 94C reads the current pixel data Dn of each of all the pixels of the 1 st captured image from the memory 96 with respect to the line of interest, and the compression process proceeds to step S102. Here, the line of interest refers to a horizontal line that has not yet been used, among the 1 st to N th horizontal lines of the captured image data stored in the memory 96. "Not yet been used" here means not yet used in the processing of step S106 or step S108 described later. The current pixel data Dn refers to pixel data of a pixel included in the 1 st captured image data. The number of bits of the current pixel data Dn read from the memory 96 is 12 bits.
In step S102, the image processing circuit 94C reads the respective front pixel data Dp of all the pixels of the 2 nd captured image from the memory 96 with respect to the line of interest, and the compression process proceeds to step S104. The front pixel data Dp refers to pixel data of pixels included in the 2 nd captured image data. The number of bits of the previous pixel data Dp read from the memory 96 is 12 bits.
In step S104, the image processing circuit 94C compares, with respect to the pixel of interest, the upper n bits of the current pixel data Dn read in step S100 with the upper n bits of the previous pixel data Dp read in step S102. The image processing circuit 94C then determines whether the upper n bits of the current pixel data Dn and those of the previous pixel data Dp are different. Here, the "pixel of interest" refers to an unprocessed pixel among all the pixels of the line of interest. An "unprocessed pixel" is a pixel that has not yet been set as a processing target of step S106 or step S108.
The "upper n bits" referred to herein are examples of the predetermined upper bits according to the technique of the present invention. In this step S104, the upper n bits are the upper 5 bits. Here, the upper 5 bits are bits corresponding to a value obtained by subtracting 7 bits, which is the number of bits of compressed image data obtained by compressing the 1 st captured image data, from 12 bits, which is the number of bits of the 1 st captured image data.
In step S104, if the upper n bits of the current pixel data Dn and those of the previous pixel data Dp are the same, the determination is negative, and the compression process proceeds to step S106. In step S104, if the upper n bits of the current pixel data Dn and those of the previous pixel data Dp are different, the determination is affirmative, and the compression process proceeds to step S108.
Here, the case where the upper n bits of the current pixel data Dn and those of the previous pixel data Dp are the same corresponds to the case where the subject has not changed. In contrast, the case where the upper n bits differ corresponds to the case where the subject has changed.
In step S106, the image processing circuit 94C generates compressed pixel data Do of the lower b bits of the current pixel data Dn with respect to the pixel of interest, and the compression process shifts to step S109. In this step S106, the lower b bits are the lower 7 bits. Here, the compressed pixel data Do of the lower b bits is generated in order to transfer noise information to the post-stage circuit 90.
In addition, in this step S106, specific image processing may be further performed on the generated compressed pixel data Do of the lower b bits.
In step S108, the image processing circuit 94C generates compressed pixel data Do of the upper b bits of the current pixel data Dn with respect to the pixel of interest, and the compression process shifts to step S109. In this step S108, the upper b bits are the upper 7 bits.
In addition, in this step S108, a specific image process may be further performed on the generated compressed pixel data Do of the high-order b bits.
If the processing of step S106 or step S108 is performed, for example, as shown in fig. 10, the current pixel data Dn is compressed into compressed pixel data Do.
In the example of fig. 10, first, the exclusive OR of the 12-bit current pixel data Dn and the 12-bit previous pixel data Dp is calculated.

Next, the logical product (AND) of that exclusive OR and 12-bit data whose upper 5 bits are "1" and whose lower 7 bits are "0" is calculated. The "upper 5 bits" are illustrated here because the upper 5 bits are used as the upper n bits in step S104. If, instead, the upper 7 bits were used as the upper n bits in step S104, the logical product of the exclusive OR and 12-bit data whose upper 7 bits are "1" and whose lower 5 bits are "0" would be calculated.
Then, in the case where the calculated logical product is not "0", the high-order [11:5] bits of the current pixel data Dn are set to [6:0] bits of the compressed pixel data Do. In the case where the calculated logical product is "0", the lower [6:0] bits of the current pixel data Dn are set to the [6:0] bits of the compressed pixel data Do.
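The bit operations above can be sketched as follows in Python. This is an illustrative sketch, not the patent's implementation; the function name and default parameters are assumptions.

```python
def compress_pixel(dn: int, dp: int, total_bits: int = 12, b: int = 7) -> int:
    """Sketch of the per-pixel compression of fig. 10.

    dn: current pixel data Dn, dp: previous pixel data Dp (both total_bits wide).
    Returns the b-bit compressed pixel data Do.
    """
    n = total_bits - b                  # upper n bits compared in step S104 (5 when 12 -> 7)
    xor = dn ^ dp                       # exclusive OR of Dn and Dp
    mask = ((1 << n) - 1) << b          # mask: upper n bits "1", lower b bits "0"
    if xor & mask:                      # logical product not "0": subject changed
        return (dn >> (total_bits - b)) & ((1 << b) - 1)  # upper b bits, e.g. [11:5]
    else:                               # logical product "0": subject unchanged
        return dn & ((1 << b) - 1)      # lower b bits [6:0] (noise information)
```

For example, two identical 12-bit values yield the lower 7 bits, while values differing in the upper 5 bits yield the upper 7 bits.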
In step S109, it is determined whether or not the processing of steps S104 to S108 has been completed for one line. In step S109, when the processing of steps S104 to S108 has not been completed for one line, the determination is no, and the compression processing proceeds to step S104. In step S109, when the processing of steps S104 to S108 has been completed for one line, the determination is yes, and the compression processing proceeds to step S110.
In step S110, it is determined whether the line of interest reaches the final line in the vertical direction of the 1 st captured image data stored in the memory 96. In step S110, if the line of interest does not reach the final line in the vertical direction of the 1 st captured image data stored in the memory 96, the determination is no, and the compression process proceeds to step S112. In step S110, when the line of interest reaches the final line in the vertical direction of the 1 st captured image data stored in the memory 96, it is determined as yes, and the compression process proceeds to step S114.
In step S112, the image processing circuit 94C shifts the line of interest by one line in the vertical direction of the 1st captured image data stored in the memory 96 by increasing the address of the line of interest by 1, and then the compression processing proceeds to step S100.
In step S114, the image processing circuit 94C sets the compressed pixel data Do for all pixels obtained by performing the processing of step S106 or step S108 as compressed image data of one frame amount, and outputs the compressed image data of one frame amount to the output circuit 94D, and the image processing circuit 94C ends the compression processing.
The compressed image data outputted by executing the processing of step S114 is an example of "processed image data", "data based on one-bit image data", and "data based on divided image data" according to the technique of the present invention.
In step S114, the image processing circuit 94C may perform a specific image process on the compressed image data. In this case, the processed compressed image data to which the specific image processing is applied is output to the output circuit 94D. The term "processed compressed image data" as used herein refers to an example of "processed image data", "data based on one-bit image data" and "data based on divided image data" according to the technique of the present invention.
Next, an image data output process performed by the output circuit 94D of the processing circuit 94 will be described with reference to fig. 8.
In the image data output process shown in fig. 8, first, in step S130, the output circuit 94D determines whether compressed image data is input from the image processing circuit 94C. In this step S130, the compressed image data input from the image processing circuit 94C is the compressed image data output in step S114 included in the compression processing shown in fig. 7.
In step S130, when the compressed image data is input from the image processing circuit 94C, it is determined as yes, and the image data output process proceeds to step S132. In step S130, when compressed image data is not input from the image processing circuit 94C, it is determined as no, and the image data output process proceeds to step S134.
In step S132, the output circuit 94D outputs the compressed image data input in step S130 to the I/F56 of the subsequent circuit 90 at the 2 nd frame rate, and the image data output process shifts to step S134.
In step S134, the output circuit 94D determines whether or not an image data output process end condition, which is a condition for ending the image data output process, is satisfied. As the image data output process end condition, for example, a condition in which an instruction to end the image data output process is received by the touch panel 42 and/or the operation unit 54 may be mentioned. The image data output process end condition may be, for example, the following condition: after the image data output process is started, a predetermined time is exceeded without pressing the release button 25. The term "predetermined time" here means, for example, 5 minutes. The predetermined time may be a fixed value or a variable value that can be changed according to an instruction given by the user.
In step S134, if the image data output process end condition is not satisfied, the determination is no, and the image data output process proceeds to step S130. In step S134, when the image data output process end condition is satisfied, the output circuit 94D determines yes, and ends the image data output process.
By executing the compression processing shown in fig. 7 and the image data output processing shown in fig. 8 by the processing circuit 94, as an example, the image data is converted as shown in fig. 18.
In the example shown in fig. 18, the reading of the captured image data of one frame amount starts in synchronization with the vertical drive signal, and the pixel data is read from the photoelectric conversion element 92 from the 1 st horizontal line to the N (> 1) th horizontal line. The pixel data of one frame amount read from the photoelectric conversion element 92 is first stored in the memory 96 as the current pixel data Dn.
Next, when pixel data of the next frame is read from the photoelectric conversion element 92 and stored in the memory 96, the pixel data of the immediately preceding frame, which had been stored in the memory 96 as the current pixel data Dn, becomes the previous pixel data Dp, and the newly stored pixel data becomes the current pixel data Dn. For example, when the pixel data of the n-th frame starts to be stored in the memory 96, the pixel data of the (n-1)-th frame changes from the current pixel data Dn to the previous pixel data Dp, and the pixel data of the n-th frame stored in the memory 96 becomes the current pixel data Dn. Then, the 2nd captured image of the previous frame and the 1st captured image of the current frame are compared, and the 1st captured image is bit-compressed. The bit compression is, for example, the processing of steps S106 and S108 shown in fig. 7, that is, the processing of generating compressed pixel data Do from the 1st captured image.
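The promotion of the stored frame from current pixel data Dn to previous pixel data Dp can be sketched as follows. This is an illustrative sketch only; the class and field names are assumptions, not the patent's implementation of the memory 96.

```python
class FrameMemory:
    """Illustrative model of the memory 96 holding two frames of pixel data."""

    def __init__(self):
        self.dn = None   # current pixel data Dn (latest stored frame)
        self.dp = None   # previous pixel data Dp (frame stored one before)

    def store(self, frame):
        # When frame n is stored, frame n-1 changes from Dn to Dp,
        # and frame n becomes the new current pixel data Dn.
        self.dp = self.dn
        self.dn = frame
```

After storing two successive frames, `dn` holds the latest frame and `dp` holds the one before it, matching the roles compared in step S104.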
Next, a display control process performed by the display control unit of the subsequent circuit 90 will be described with reference to fig. 9. For convenience of explanation, the following description is given on the premise that: by performing the image data output processing shown in fig. 8, compressed image data is output from the output circuit 94D to the post-stage circuit 90, and the compressed image data is input to the CPU52 and the image processing section 62.
In the display control process shown in fig. 9, in step S150, the display control unit determines whether or not compressed image data is input from the image processing unit 62. In step S150, when compressed image data is not input from the image processing unit 62, the determination is no, and the display control process proceeds to step S154. In step S150, when compressed image data is input from the image processing unit 62, it is determined as yes, and the display control process proceeds to step S152.
In step S152, the display control section outputs the compressed image data as graphics data to the display device, and the display control process proceeds to step S154. When the compressed image data is output to the display device by executing the processing of step S152, the display device displays an image represented by the compressed image data.
In step S154, the display control unit determines whether or not a condition for ending the display control process, that is, a display control process ending condition is satisfied. The display control process end condition is, for example, the same condition as the image data output process end condition described above.
In step S154, if the display control process end condition is not satisfied, the determination is no, and the display control process proceeds to step S150. In step S154, when the display control process end condition is satisfied, the display control unit determines yes, and ends the display control process.
As described above, in the image pickup apparatus 10 according to embodiment 1, the degree of difference between the 1st captured image data obtained by imaging the subject with the photoelectric conversion element 92 and the 2nd captured image data stored in the memory 96 is determined by the image processing circuit 94C. The "degree of difference" here is, for example, the determination result in step S104. The image processing circuit 94C then generates compressed image data compressed by dividing the 1st captured image data into a plurality of bit ranges according to the degree of difference. The compressed image data generated in the image processing circuit 94C is then output to the post-stage circuit 90 at the 2nd frame rate by the output circuit 94D.
Thus, the imaging device 10 according to embodiment 1 can reduce power consumption associated with the output of the image data to the outside of the imaging element 20, as compared with the case where the 1 st captured image data obtained by capturing is directly output to the outside of the imaging element 20.
In the imaging device 10 according to embodiment 1, a frame rate higher than the 2 nd frame rate is used as the 1 st frame rate.
Thus, the image pickup apparatus 10 according to embodiment 1 can quickly generate compressed image data, compared with the case where the image processing circuit 94C performs processing at the same frame rate as that for outputting compressed image data by the output circuit 94D.
In the imaging device 10 according to embodiment 1, the 2 nd captured image data is image data obtained one frame before the 1 st captured image data.
Thus, the imaging device 10 according to embodiment 1 can increase the degree of difference between the 1 st captured image data and the 2 nd captured image data, compared to the case where the 1 st captured image data and the 2 nd captured image data are captured simultaneously.
In the imaging device 10 according to embodiment 1, the degree of difference between the 1st captured image data and the 2nd captured image data is the degree of difference in the upper n bits between the 1st captured image data and the 2nd captured image data.
Thus, the imaging device 10 according to embodiment 1 can quickly determine the degree of difference between the 1 st captured image data and the 2 nd captured image data, compared with the case where all bits of the 1 st captured image data and the 2 nd captured image data are compared.
In the imaging device 10 according to embodiment 1, the 1st captured image data and the 2nd captured image data are image data having the same number of bits, the compressed image data has 7 bits, which is smaller than 12 bits, and the upper n bits are the 5 bits corresponding to the value obtained by subtracting 7 bits from 12 bits.
Thus, the imaging device 10 according to embodiment 1 can determine the degree of difference between the 1st captured image data and the 2nd captured image data with high accuracy, compared with the case where the two are compared using bits unrelated to the bits of the compressed image data.
In the imaging device 10 according to embodiment 1, a stacked CMOS image sensor is used as the imaging element 20.
Thus, the imaging device 10 according to embodiment 1 can quickly determine the degree of difference between the 1st captured image data and the 2nd captured image data, compared with the case of using an imaging element of a type in which the photoelectric conversion element 92 is not stacked on the memory 96.
In embodiment 1, the description has been made taking an example in which the number of bits of 1 st captured image data of one frame amount stored in the memory 96 is 12 bits and compressed image data of 7 bits is generated, but the technique of the present invention is not limited to this. For example, as shown in fig. 11, 9-bit compressed image data may be generated from a 1 st captured image of 16 bits. In this case, the upper n bits used in the processing of step S104 shown in fig. 7 are 7 bits obtained by subtracting 9 bits of compressed image data from 16 bits of the 1 st captured image. And, 7-bit compressed image data can be generated from the 1 st captured image of 14 bits. In this case, the upper n bits used in the processing of step S104 shown in fig. 7 are 7 bits obtained by subtracting the 7 bits of the compressed image data from the 14 bits of the 1 st captured image. And, compressed image data of 7 bits can be generated from the 1 st captured image of 12 bits. In this case, the upper n bits used in the processing of step S104 shown in fig. 7 are 5 bits obtained by subtracting 7 bits of the compressed image data from 12 bits of the 1 st captured image. Further, 6-bit compressed image data can be generated from the 1 st captured image of 10 bits. In this case, the upper n bits used in the processing of step S104 shown in fig. 7 are 4 bits obtained by subtracting 6 bits of compressed image data from 10 bits of the 1 st captured image.
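The relation running through all of these examples is that the upper n bits used in step S104 equal the bit count of the 1st captured image minus the bit count of the compressed image data. This can be checked with a trivial helper (illustrative only; the function name is an assumption):

```python
def upper_n_bits(source_bits: int, compressed_bits: int) -> int:
    """n = (bits of the 1st captured image) - (bits of compressed image data)."""
    return source_bits - compressed_bits

# The combinations given in the text: (source bits, compressed bits) -> n
assert upper_n_bits(16, 9) == 7   # 16-bit image, 9-bit compressed data
assert upper_n_bits(14, 7) == 7   # 14-bit image, 7-bit compressed data
assert upper_n_bits(12, 7) == 5   # 12-bit image, 7-bit compressed data
assert upper_n_bits(10, 6) == 4   # 10-bit image, 6-bit compressed data
```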
In embodiment 1, the line of interest is one row, but the technique of the present invention is not limited to this. For example, the line of interest may be set to a plurality of rows.
In embodiment 1, the embodiment has been described by taking an example in which the 1 st and 2 nd captured image data of one frame amount are stored in the memory 96 and then the image processing circuit 94C compares the 1 st captured image data and the 2 nd captured image data in units of lines, but the technique of the present invention is not limited thereto. That is, the comparison of the 1 st captured image data and the 2 nd captured image data in step S104 shown in fig. 7 may be performed before the 1 st captured image data of one frame amount is stored in the memory 96.
In this case, for example, each time the 1 st captured image data is read from the photoelectric conversion element 92 in line units by the processing circuit 94, which is an example of the "reading unit" according to the technique of the present invention, the 1 st captured image data is compressed according to the degree of difference between the 1 st captured image data and the 2 nd captured image data in line units. Thus, the image pickup device 10 can quickly output compressed image data, compared with a case where the degree of difference between the 1 st captured image data and the 2 nd captured image data is determined after waiting for the 1 st captured image data to be read in frame units. The "degree of difference" referred to herein corresponds to, for example, the determination result in step S104 shown in fig. 7. The "line unit" here may be one line or a plurality of lines.
In embodiment 1, the captured image data one frame before the 1 st captured image data is used as the 2 nd captured image data, but the technique of the present invention is not limited to this. For example, the 2 nd captured image data may be captured image data preceding a plurality of frames of the 1 st captured image data.
[ Embodiment 2]
In embodiment 1, a mode of generating compressed pixel data Do for each line of interest was described as an example; in embodiment 2, a mode in which it can be specified, for each line, whether the compressed image data is high-order bit compressed image data or low-order bit compressed image data is described. In embodiment 2, the same components as those in embodiment 1 are denoted by the same reference numerals, and description thereof is omitted.
The imaging device 10 according to embodiment 2 differs from the imaging device 10 according to embodiment 1 in that the compression process shown in fig. 12 is performed by the image processing circuit 94C instead of the compression process shown in fig. 7.
Therefore, the compression process according to embodiment 2 executed by the image processing circuit 94C will be described with reference to fig. 12.
The compression process shown in fig. 12 is different from the compression process shown in fig. 7 in that the process of step S200 is provided, and the process of step S202 is provided instead of the process of step S114.
In the compression process shown in fig. 12, when it is determined to be yes in step S109, the compression process proceeds to step S200.
In step S200, the image processing circuit 94C assigns a bit range determination flag to the most significant 2 bits of compressed pixel data Do of all pixels of the line of interest, that is, compressed image data of one line amount, and then the compression process proceeds to step S110. The bit range specifying flag is an example of "bit image specifying information" and "divided image specifying information" according to the technique of the present invention.
The compressed image data to which the bit range determination flag is given is generated by the image processing circuit 94C executing the processing of this step S200. The compressed image data generated here is an example of "data based on one-bit image data" according to the technology of the present invention, and the one-bit image data is determined based on the degree of difference among a plurality of bits obtained by dividing the 1 st captured image data in a plurality of bit ranges. The compressed image data generated here is also an example of "data based on any of a plurality of divided image data obtained by dividing the 1 st captured image data in a plurality of bit ranges".
The bit range determination flag is roughly classified into a high-order bit determination flag and a low-order bit determination flag. The high-order bit determination flag is a flag by which high-order bit compressed image data can be identified. The high-order bit compressed image data is, for example, compressed image data of one line in which the compressed pixel data Do of half of the pixels represents the upper b bits of the pixel data. The low-order bit determination flag is a flag by which low-order bit compressed image data can be identified. The low-order bit compressed image data is, for example, compressed image data of one line in which the compressed pixel data Do of half of the pixels represents the lower b bits of the pixel data. Hereinafter, for convenience of explanation, high-order bit compressed image data and low-order bit compressed image data are referred to simply as "bit compressed image data" when it is not necessary to distinguish between them.
The high-order bit compressed image data and the low-order bit compressed image data are examples of "a plurality of bit image data" and "a plurality of divided image data" according to the technique of the present invention. The high-order bit compressed image data is an example of "high-order bit image data" according to the technique of the present invention. The low-order bit compressed image data is an example of "low-order bit image data" according to the technique of the present invention.
In the example illustrated in fig. 13, the bit range determination flag is given to the most significant 2 bits of the leading 2 bytes of the compressed image data of one line. As an example, in the compressed image data of one line shown in fig. 13, the bit range determination flag is given to the most significant 2 bits, followed by a synchronization code such as a blank or dummy code and the compressed pixel data Do for each pixel. The high-order bit determination flag given to the most significant 2 bits of the compressed image data of one line shown in fig. 13 is, for example, "00", and the low-order bit determination flag is, for example, "01". That is, compressed image data of one line whose most significant 2 bits are "00" is treated as high-order bit compressed image data, and compressed image data of one line whose most significant 2 bits are "01" is treated as low-order bit compressed image data.
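A minimal sketch of placing the bit range determination flag in the most significant 2 bits of a line's leading byte might look as follows. The byte layout, flag values, and names here are assumptions for illustration; the actual format is defined by fig. 13.

```python
HIGH_BIT_FLAG = 0b00   # identifies high-order bit compressed image data
LOW_BIT_FLAG = 0b01    # identifies low-order bit compressed image data

def tag_line(flag: int, payload: bytes) -> bytes:
    """Place the 2-bit bit range determination flag in the most significant
    2 bits of the line's leading byte, followed by the line payload."""
    header = (flag & 0b11) << 6
    return bytes([header]) + payload

def line_flag(line: bytes) -> int:
    """Recover the bit range determination flag from a tagged line."""
    return line[0] >> 6
```

The post-stage circuit 90 can then read the flag from the same transfer that carries the compressed pixel data, rather than receiving it separately.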
In step S202, the image processing circuit 94C sets the compressed image data of the plurality of lines to which the bit range determination flag is respectively given as compressed image data of one frame amount, and outputs the compressed image data of one frame amount to the output circuit 94D, and the image processing circuit 94C ends the compression processing.
The compressed image data outputted by executing the processing in step S202 is an example of "processed image data", "data based on bit image data", and "data based on divided image data" according to the technique of the present invention.
In step S202, the image processing circuit 94C may perform a specific image process on the compressed image data. In this case, the processed compressed image data to which the specific image processing is applied is output to the output circuit 94D. The term "processed compressed image data" as used herein refers to an example of "processed image data", "data based on one-bit image data" and "data based on divided image data" according to the technique of the present invention.
As described above, in the image pickup apparatus 10 according to embodiment 2, the compressed image data is one-bit compressed image data determined based on the degree of difference between the high-bit compressed image data and the low-bit compressed image data obtained by dividing the 1 st captured image data in a plurality of bit ranges. The "degree of difference" here is, for example, the determination result of step S104 described above.
Thus, the image pickup apparatus 10 according to embodiment 2 can reduce power consumption associated with outputting image data, as compared with a case where all bits of the 1 st captured image data are output.
In the imaging device 10 according to embodiment 2, the most significant 2 bits of the compressed image data are used as the bits to which the bit range determination flag is given.
Accordingly, compared with the case where the bit range determination flag is output to the post-stage circuit 90 at a timing different from the output timing of the compressed image data, the image pickup apparatus 10 according to embodiment 2 allows the post-stage circuit 90 to quickly determine on which bit range the compressed image data is based.
In the imaging device 10 according to embodiment 2, the processing of steps S104 to S109 is performed to generate compressed image data for each line. The compressed image data is then given a bit range determination flag.
Thus, the imaging device 10 according to embodiment 2 can determine, for each line, on which bit range the compressed image data is based.
In embodiment 2, the description has been given of the embodiment in which the bit range determination flag is given to the 2 most significant bits of the compressed image data, but the technique of the present invention is not limited to this. For example, the bit range determination flag may be given to the 2 least significant bits of the compressed image data, or to any 2 bits of the compressed image data that can be identified. Also, for example, a specific 1 bit in the compressed image data may be used as the bit to which the bit range determination flag is given. In this way, a part of the bits of the compressed image data may be used as the bits to which the bit range determination flag is given.
In embodiment 2, it is determined in step S104 whether or not the upper n bits of the current pixel data Dn and the previous pixel data Dp are different, but the technique of the present invention is not limited thereto. In step S104, it may be determined whether or not the degree of difference between the upper n bits of the current pixel data Dn and the previous pixel data Dp satisfies a prescribed condition. In this case, the compressed pixel data Do generated by executing the processing of step S106 is an example of "data based on low-order bit image data" according to the technique of the present invention. The compressed pixel data Do generated by executing the processing of step S108 is an example of "data based on high-order bit image data" according to the technique of the present invention.
For example, when the absolute value of the upper n-bit difference between the current pixel data Dn and the previous pixel data Dp is equal to or greater than the threshold value, the process of step S108 is executed, and when the absolute value of the upper n-bit difference between the current pixel data Dn and the previous pixel data Dp is less than the threshold value, the process of step S106 is executed. Thus, compared with a case where all bits of the 1 st captured image data are output irrespective of the movement of the subject, the degree of suppression of image quality degradation and the degree of suppression of power consumption can be adjusted in accordance with the movement of the subject. The term "difference" as used herein is an example of the "degree of difference" according to the technique of the present invention. The term "above threshold" as used herein is an example of the term "satisfying the predetermined condition" in the technique of the present invention, and the term "below threshold" is an example of the term "not satisfying the predetermined condition" in the technique of the present invention. The threshold value may be a fixed value or a variable value that can be changed according to an instruction received by the touch panel 42 and/or the operation unit 54.
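This thresholded variant of the determination in step S104 can be sketched as follows. This is illustrative Python; the function name, return values, and defaults are assumptions.

```python
def choose_bit_range(dn: int, dp: int, n: int = 5, total_bits: int = 12,
                     threshold: int = 1) -> str:
    """Variant of step S104: compare the upper n bits of Dn and Dp by the
    absolute value of their difference against a threshold, instead of by
    exact inequality."""
    def upper(v: int) -> int:
        return v >> (total_bits - n)   # extract the upper n bits

    if abs(upper(dn) - upper(dp)) >= threshold:
        return "upper"   # step S108: output the upper b bits
    return "lower"       # step S106: output the lower b bits
```

Raising the threshold makes the output favor the lower bits (less sensitivity to subject movement); with `threshold=1` the behavior reduces to the exact comparison of embodiment 1.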
In embodiment 2 described above, the compressed pixel data is generated by comparing the current pixel data Dn of the latest frame with the previous pixel data Dp of one frame for each of all the pixels of the line of interest, but the technique of the present invention is not limited thereto. For example, if the average value of the upper n bits of the current pixel data Dn of the latest frame matches the average value of the upper n bits of the previous pixel data Dp of one frame, the process of step S106 may be performed for each of all the pixels of the line of interest. In this case, if the average value of the upper n bits of the current pixel data Dn of the latest frame is different from the average value of the upper n bits of the previous pixel data Dp of one frame, the processing of step S108 is performed for each of all the pixels of the line of interest.
In the present embodiment, "match" means not only a complete match but also a match within an error predetermined as an allowable error. As the "predetermined error" here, for example, a value derived in advance by a sensory test using an actual device and/or computer simulation, such that a change in the subject is not visually perceived, is employed.
For example, if the representative pixel data of the upper n bits of the current pixel data Dn of the latest frame matches the representative pixel data of the upper n bits of the previous pixel data Dp of one frame in the attention line, the process of step S106 may be performed for each of all the pixels of the attention line. In this case, if the representative pixel data of the upper n bits of the current pixel data Dn of the latest frame is different from the representative pixel data of the upper n bits of the previous pixel data Dp of one frame in the attention line, the processing of step S108 is performed for each of all the pixels of the attention line.
Further, for example, in the attention line, if the sum of the pixel data of the upper n bits of the current pixel data Dn of the latest frame coincides with the sum of the pixel data of the upper n bits of the previous pixel data Dp of one frame, the processing of step S106 may be performed for each of all the pixels of the attention line. In this case, if the sum of the upper n bits of the current pixel data Dn of the latest frame is different from the sum of the upper n bits of the previous pixel data Dp of one frame in the attention line, the process of step S108 is performed for each of all the pixels of the attention line.
In embodiment 2, compressed image data of one line in which the compressed pixel data Do of half of the pixels is b bits is used, but the technique of the present invention is not limited to this. For example, in the line of interest, in the case where the average value of the upper n bits of the previous pixel data Dp coincides with the average value of the upper n bits of the current pixel data Dn, the compressed image data of the line of interest may be set as the high-order bit compressed image data. In the line of interest, if the average value of the upper n bits of the previous pixel data Dp differs from the average value of the upper n bits of the current pixel data Dn, the compressed image data of the line of interest may be set as the low-order bit compressed image data.
For example, when the representative pixel data of the upper n bits of the previous pixel data Dp in the line of interest matches the representative pixel data of the upper n bits of the current pixel data Dn, the compressed image data of the line of interest may be set to be the upper bit compressed image data. In addition, when the representative pixel data of the upper n bits of the previous pixel data Dp in the attention line is different from the representative pixel data of the upper n bits of the current pixel data Dn, the compressed image data of the attention line may be set to be the lower bit compressed image data.
Further, for example, in a case where the sum of the pixel data of the upper n bits of the preceding pixel data Dp in the line of interest coincides with the sum of the pixel data of the upper n bits of the current pixel data Dn, the compressed image data of the line of interest may be set to the upper bit compressed image data. Further, in a case where the sum of the pixel data of the upper n bits of the previous pixel data Dp in the attention line is different from the sum of the pixel data of the upper n bits of the current pixel data Dn, the compressed image data of the attention line may be set to the lower bit compressed image data.
In embodiment 2, the high-order bit compressed image data and the low-order bit compressed image data are illustrated as examples of the "plurality of bit image data" and the "plurality of divided image data" according to the technique of the present invention, but the technique of the present invention is not limited thereto. As the "plural-bit image data" and the "plural-divided image data" according to the technique of the present invention, three or more bits of compressed image data may be used.
In this case, for example, high-order bit compressed image data, middle-order bit compressed image data, and low-order bit compressed image data can be exemplified. The high-order bit compressed image data, the middle-order bit compressed image data, and the low-order bit compressed image data are obtained by dividing the 1st captured image data into three bit ranges: the high-order bits, the middle-order bits, and the low-order bits. In this case, for example, as shown in fig. 14, a high-order bit determination flag, a middle-order bit determination flag, or a low-order bit determination flag is given to the most significant 2 bits of the compressed image data of the line of interest. The high-order bit compressed image data is obtained by giving the high-order bit determination flag to the most significant 2 bits of the compressed image data of the line of interest. The middle-order bit compressed image data is obtained by giving the middle-order bit determination flag to the most significant 2 bits of the compressed image data of the line of interest. The low-order bit compressed image data is obtained by giving the low-order bit determination flag to the most significant 2 bits of the compressed image data of the line of interest. In addition, as described above, when the high-order bit determination flag is "00" and the low-order bit determination flag is "01", the middle-order bit determination flag may be "10" or "11".
In embodiment 2, the compressed image data shown in step S202 is compressed image data of a plurality of lines, but the compressed image data of each line need not be used directly. The compressed image data shown in the above step S202 may include processed compressed image data obtained by, for example, performing a specific image process on compressed image data of at least one line of compressed image data of a plurality of lines.
[Embodiment 3]
In embodiment 1 and embodiment 2, the embodiments in which pixel data is compared in line units are described as examples, but in embodiment 3, a case in which pixel data is compared for each pixel is described. In embodiment 3, the same components as those described in embodiment 1 are denoted by the same reference numerals, and description thereof is omitted.
The imaging device 10 according to embodiment 3 differs from the imaging device 10 according to embodiment 1 in that the compression process shown in fig. 15 is performed by the image processing circuit 94C instead of the compression process shown in fig. 7.
Therefore, the compression process according to embodiment 3 executed by the image processing circuit 94C will be described with reference to fig. 15.
In the compression process shown in fig. 15, in step S250, the image processing circuit 94C reads unprocessed current pixel data Dn in all pixels of the 1 st captured image from the memory 96, and the compression process shifts to step S252. The "unprocessed current pixel data Dn" herein means current pixel data Dn that has not been used in the processing of step S254 described later.
In step S252, the image processing circuit 94C reads the unprocessed pre-pixel data Dp among all the pixels of the 2 nd captured image from the memory 96, and the compression process shifts to step S254.
In step S254, the image processing circuit 94C compares the current pixel data Dn read in step S250 with the upper n bits of the previous pixel data Dp read in step S252. Then, the image processing circuit 94C determines whether the upper n bits of the current pixel data Dn and the previous pixel data Dp are different.
In step S254, if the upper n bits of the current pixel data Dn are the same as those of the previous pixel data Dp, the determination is no, and the compression process proceeds to step S256. In step S254, if the upper n bits of the current pixel data Dn are different from those of the previous pixel data Dp, the determination is yes, and the compression process proceeds to step S258.
In step S256, the image processing circuit 94C generates compressed pixel data Do of the lower b bits of the current pixel data Dn, and the compression process shifts to step S260.
In step S258, the image processing circuit 94C generates compressed pixel data Do of the higher b bits of the current pixel data Dn, and then the compression process shifts to step S260.
In step S260, the image processing circuit 94C determines whether the processing is ended for all pixels. In step S260, for example, the image processing circuit 94C determines whether or not the pixel data of all the pixels included in the 1 st captured image and the 2 nd captured image are used in the processing of step S254.
In step S260, if the processing is not completed for all the pixels, the determination is no, and the compression processing proceeds to step S250. In step S260, when the processing is completed for all pixels, it is determined as yes, and the compression processing proceeds to step S262.
In step S262, the image processing circuit 94C sets the compressed pixel data Do for all pixels obtained by performing the processing of step S256 or step S258 as compressed image data of one frame amount, and outputs the compressed image data of one frame amount to the output circuit 94D, and the image processing circuit 94C ends the compression processing.
The compressed image data outputted by executing the processing in step S262 is an example of "processed image data", "data based on bit image data", and "data based on divided image data" according to the technique of the present invention.
In step S262, the image processing circuit 94C may perform a specific image process on the compressed image data. In this case, the processed compressed image data to which the specific image processing has been applied is output to the output circuit 94D. The "processed compressed image data" as used herein is an example of "processed image data", "data based on bit image data", and "data based on divided image data" according to the technique of the present invention.
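The per-pixel flow of steps S250 to S262 can be sketched as follows. This is a non-authoritative illustration: the 12-bit pixel depth and the values n = b = 6 are assumptions for the example, not values fixed by the present description.

```python
def compress_frame(current, previous, total_bits=12, n=6, b=6):
    """Per-pixel compression sketch of fig. 15 (steps S250 to S262).
    For each pixel, the upper n bits of the current pixel data Dn are
    compared with those of the previous pixel data Dp; if they match,
    the lower b bits of Dn are kept (S256), otherwise the upper b bits
    are kept (S258)."""
    upper_shift = total_bits - n
    out = []
    for dn, dp in zip(current, previous):
        if (dn >> upper_shift) == (dp >> upper_shift):
            do = dn & ((1 << b) - 1)        # lower b bits (step S256)
        else:
            do = dn >> (total_bits - b)     # upper b bits (step S258)
        out.append(do)
    return out                              # one frame of Do (step S262)
```

Note that decoding requires knowing, for each pixel, which bit range was kept; embodiment 4 addresses this by attaching a determination flag to each pixel.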
As described above, in the image pickup device 10 according to embodiment 3, the processing from step S250 to step S258 shown in fig. 15 is executed for each pixel, and in step S262, compressed image data of one frame amount is generated in the same manner as in embodiment 1.
Accordingly, the image pickup device 10 according to embodiment 3 can reduce power consumption associated with the output of the image data to the outside of the imaging element 20 in the same manner as the image pickup device 10 according to embodiment 1 described above, as compared with the case where the 1 st image data obtained by photographing is directly output to the outside of the imaging element 20.
[Embodiment 4]
In embodiment 3, the embodiment in which only the compressed pixel data Do of the pixel of interest is generated is described as an example, but in embodiment 4, an embodiment in which it can be specified, for each pixel, whether the compressed pixel data Do is high-order bit compressed pixel data or low-order bit compressed pixel data is described as an example. In embodiment 4, the same components as those in embodiment 1 are denoted by the same reference numerals, and the description thereof is omitted.
The imaging device 10 according to embodiment 4 differs from the imaging device 10 according to embodiment 3 in that the compression process shown in fig. 16 is performed by the image processing circuit 94C instead of the compression process shown in fig. 15.
Therefore, the compression process according to embodiment 4 executed by the image processing circuit 94C will be described with reference to fig. 16.
The compression process shown in fig. 16 is different from the compression process shown in fig. 15 in that the processes of step S300, step S302, and step S304 are provided instead of the process of step S262.
In the compression process shown in fig. 16, after the process of step S256 is performed, the compression process shifts to step S300, and after the process of step S258 is performed, the compression process shifts to step S302.
In step S300, the image processing circuit 94C assigns a low-order bit determination flag to the highest-order bit of the compressed pixel data Do generated in step S256, and the compression process proceeds to step S260. The low-order bit determination flag used in this step S300 is, for example, "0" as shown in fig. 17.
In step S302, the image processing circuit 94C assigns a high-order bit determination flag to the highest-order bit of the compressed pixel data Do generated in step S258, and the compression process proceeds to step S260. The high-order bit determination flag used in this step S302 is, for example, "1" as shown in fig. 17.
In the example shown in fig. 17, "0" is illustrated as the low-order bit determination flag, and "1" is illustrated as the high-order bit determination flag, but the technique of the present invention is not limited thereto. For example, "1" may be used as the low-order bit determination flag, and "0" may be used as the high-order bit determination flag.
If the determination is yes in step S260, the compression process proceeds to step S304. In step S304, the image processing circuit 94C sets the compressed pixel data Do of all the pixels, to each of which the bit range determination flag has been given, as compressed image data of one frame amount, outputs the compressed image data of one frame amount to the output circuit 94D, and ends the compression processing.
The compressed image data outputted by executing the processing of step S304 is an example of "processed image data", "data based on bit image data", and "data based on divided image data" according to the technique of the present invention.
In step S304, the image processing circuit 94C may perform a specific image process on the compressed image data. In this case, the processed compressed image data to which the specific image processing has been applied is output to the output circuit 94D. The "processed compressed image data" as used herein is an example of "processed image data", "data based on bit image data", and "data based on divided image data" according to the technique of the present invention.
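The flag assignment of steps S256/S258 followed by S300/S302 can be sketched per pixel as follows. The 12-bit depth, the values n = b = 6, and the single-bit flag position are illustrative assumptions.

```python
def compress_pixel_with_flag(dn, dp, total_bits=12, n=6, b=6):
    """Sketch of fig. 16: compress one pixel and prepend a one-bit
    bit-range determination flag ("0" = lower bits, "1" = upper bits,
    as in fig. 17)."""
    upper_shift = total_bits - n
    if (dn >> upper_shift) == (dp >> upper_shift):
        do, flag = dn & ((1 << b) - 1), 0     # step S256, then S300
    else:
        do, flag = dn >> (total_bits - b), 1  # step S258, then S302
    return (flag << b) | do                   # flag in the highest-order bit
```

The receiver can inspect the highest-order bit of each (b + 1)-bit word to determine which bit range the remaining b bits carry.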
As described above, in the image pickup device 10 according to embodiment 2, the bit range determination flag is given to the compressed image data in line units, whereas in the image pickup device 10 according to embodiment 4, the bit range determination flag is given to the compressed pixel data Do in pixel units.
Thus, the image pickup apparatus 10 according to embodiment 4 can determine which of a plurality of bit ranges the compressed pixel data Do belongs to for each pixel.
In addition, in embodiments 1 and 2, the pixel data is compared in line units, and in embodiments 3 and 4, the pixel data is compared for each pixel, but the technique of the present invention is not limited to this; the 1 st captured image data and the 2 nd captured image data may also be compared in frame units. In this case, for example, compressed image data for each frame can be generated by dividing the 1 st captured image data in a plurality of bit ranges according to the degree of difference between the 1 st captured image data and the 2 nd captured image data for each frame. Further, compressed image data for each frame may be generated based on the degree of difference between the upper n bits of the 1 st captured image data and those of the 2 nd captured image data.
In this case, for example, when the current pixel data Dn and the previous pixel data Dp differ for more than half of all the pixels, high-order bit compressed image data is generated, and when the current pixel data Dn and the previous pixel data Dp match for more than half of all the pixels, low-order bit compressed image data is generated.
In this case, a bit range determination flag may be given to the compressed image data in one frame unit.
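The frame-unit decision described above can be sketched as follows; the majority threshold and bit widths are illustrative assumptions.

```python
def choose_frame_bit_range(current, previous, total_bits=12, n=6):
    """Frame-unit sketch: select high-order bit compressed image data
    when the upper n bits differ for more than half of all pixels,
    and low-order bit compressed image data otherwise."""
    shift = total_bits - n
    differing = sum((dn >> shift) != (dp >> shift)
                    for dn, dp in zip(current, previous))
    return "high" if differing > len(current) / 2 else "low"
```

A single bit range determination flag per frame then suffices to tell the receiver how the whole frame was compressed.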
In the above embodiments, whether or not the current pixel data Dn and the previous pixel data Dp are different is determined based on the difference between the current pixel data Dn and the previous pixel data Dp, but the technique of the present invention is not limited thereto. Whether or not the current pixel data Dn and the previous pixel data Dp are different may also be determined based on the ratio of one of the current pixel data Dn and the previous pixel data Dp to the other, the sum of the current pixel data Dn and the previous pixel data Dp, or the product of the current pixel data Dn and the previous pixel data Dp. This is not limited to the comparison in pixel units, and is also applicable to the comparison of the 1 st captured image and the 2 nd captured image in line units and in frame units.
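The alternative degree-of-difference metrics named above can be sketched as follows. How each metric value is then thresholded to decide "different" is application-specific and left open here as an assumption.

```python
def degree_of_difference(dn, dp, metric="difference"):
    """Candidate degree-of-difference values between Dn and Dp:
    difference, ratio, sum, or product, as named in the text."""
    if metric == "difference":
        return abs(dn - dp)
    if metric == "ratio":
        return dn / dp if dp else float("inf")
    if metric == "sum":
        return dn + dp
    if metric == "product":
        return dn * dp
    raise ValueError(metric)
```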
In each of the above embodiments, when shooting of a live preview image or a moving image for recording is started, the captured image data of the 1 st frame is not output to the I/F56 of the post-stage circuit 90, but the technique of the present invention is not limited thereto. When shooting of a live preview image or a moving image for recording is started, the 1 st captured image data may be output to the I/F56 of the subsequent stage circuit 90 by the output circuit 94D before the 2 nd captured image data is stored in the memory 96. In the example shown in fig. 19, the captured image data of the 1 st frame, that is, the 1 st captured image data, is directly output to the I/F56 of the subsequent stage circuit 90. Thus, even before the 2 nd captured image data is stored in the memory 96, a delay in outputting the image data by the output circuit 94D can be avoided.
When shooting of a live preview image or a moving image for recording is started, the output circuit 94D may output data based on image data belonging to a specific bit range in the 1 st captured image data to the I/F56 of the subsequent circuit 90 before the 2 nd captured image data is stored in the memory 96. For example, as shown in fig. 20, the output circuit 94D may output the upper n bits of the captured image data of the 1 st frame, that is, the 1 st captured image data, as compressed image data to the I/F56 of the subsequent circuit 90. Image data obtained by applying the specific image processing to the upper n bits of the 1 st captured image data may also be output as compressed image data to the I/F56 of the subsequent circuit 90.
Thus, compared with the case where the 1 st captured image data is directly output before the 2 nd captured image data is stored in the memory 96, the power consumption associated with outputting the image data by the output circuit 94D can be reduced.
When shooting of a live preview image or a moving image for recording is started, the output circuit 94D may output the substitute compressed image data in accordance with the degree of difference between the reference image data and the 1 st captured image data before the 2 nd captured image data is stored in the memory 96. Thus, compared with the case where the 1 st captured image data is directly output before the 2 nd captured image data is stored in the memory 96, the power consumption associated with outputting the image data by the output circuit 94D can be reduced.
The reference image data is image data that is predetermined as image data instead of the 2 nd captured image data. As an example of the predetermined image data, image data representing a black level image visually perceived as black may be mentioned. The substitute compressed image data is compressed image data obtained by dividing the 1 st captured image data in the above-described plural bit ranges. The degree of difference between the reference image data and the 1 st captured image data may be a difference between the reference image data and the 1 st captured image data, or may be a difference between the reference image data and the 1 st captured image data in the upper n bits.
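The substitute compression against reference image data can be sketched as follows. The black-level reference (all-zero pixels) is the example the text gives; the bit widths are illustrative assumptions.

```python
def substitute_compressed(first_frame, total_bits=12, n=6, b=6):
    """Sketch: before any 2nd captured image data exists in the memory,
    compress the 1st captured image data against predetermined
    reference image data, here a black-level image of all-zero
    pixels."""
    shift = total_bits - n
    out = []
    for dn in first_frame:
        ref = 0                                 # black-level reference pixel
        if (dn >> shift) == (ref >> shift):
            out.append(dn & ((1 << b) - 1))     # lower b bits
        else:
            out.append(dn >> (total_bits - b))  # upper b bits
    return out
```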
Further, as an example, as shown in fig. 22, in the case where the photographing for a still image is continuously performed by the photoelectric conversion element 92 at predetermined time intervals, the 1 st captured image data may be directly output by the output circuit 94D before the 2 nd captured image data is stored in the memory 96. In this case, on condition that the 2 nd captured image data is stored in the memory 96, the compressed image data is output to the I/F56 of the subsequent stage circuit 90 by the output circuit 94D. For example, in the example shown in fig. 22, in the 1 st frame, the 1 st captured image data is directly output by the output circuit 94D before the 2 nd captured image data is stored in the memory 96, and after the 2 nd frame, the compressed image data is output. Thus, even before the 2 nd captured image data is stored in the memory 96, a delay in outputting the image data by the output circuit 94D can be avoided.
In the case where the photoelectric conversion element 92 continuously captures still images at predetermined time intervals, the output circuit 94D may output image data belonging to a predetermined bit range in the 1 st captured image data before the 2 nd captured image data is stored in the memory 96. As an example of the "image data belonging to the predetermined bit range in the 1 st captured image data", the image data of the upper n bits in the 1 st captured image data can be mentioned. In the example shown in fig. 22, a time interval determined from the vertical drive signal is used as the "predetermined time interval". As an example of the time interval determined based on the vertical drive signal, 16.667ms corresponding to 60fps can be mentioned.
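Extracting the upper-bit data of the 1 st frame can be sketched in one line; the 12-bit depth and n = 6 are illustrative assumptions.

```python
def first_frame_upper_bits(frame, total_bits=12, n=6):
    """Sketch: keep only the image data of the upper n bits of the
    1st captured image data, for output before the 2nd captured
    image data is stored."""
    return [p >> (total_bits - n) for p in frame]
```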
In the above embodiment, the case where the image data is compressed to display the preview image has been described, but the technique of the present invention is not limited to this. For example, the compressed image data may be stored in the sub-storage unit 60 by the CPU52 in the post-stage circuit 90, or may be output to the outside of the image pickup device 10 via the external I/F63.
Although the processing circuit 94 implemented by an ASIC is illustrated in each of the above embodiments, the compression processing and the image data output processing may be implemented by a software configuration based on a computer.
In this case, for example, as shown in fig. 23, a program 600 for causing a computer 20A built in the imaging element 20 to execute the compression processing and the image data output processing described above is stored in a storage medium 700. The computer 20A includes a CPU20A1, a ROM20A2, and a RAM20A3. The program 600 of the storage medium 700 is then installed on the computer 20A, and the CPU20A1 of the computer 20A executes the compression processing and the image data output processing described above in accordance with the program 600. Although a single CPU is illustrated as the CPU20A1, the technique of the present invention is not limited thereto, and a plurality of CPUs may be employed instead of the CPU20A1. That is, the compression processing and/or the image data output processing described above may be executed by one processor or by a plurality of physically separated processors.
Further, as an example of the storage medium 700, an arbitrary portable storage medium such as an SSD (Solid State Drive) or a USB (Universal Serial Bus) memory can be mentioned.
The program 600 may be stored in a storage unit such as another computer or a server device connected to the computer 20A via a communication network (not shown), and the program 600 may be downloaded in response to a request from the image pickup device 10. In this case, the downloaded program 600 is executed by the computer 20A.
Also, the computer 20A may be provided outside the imaging element 20. In this case, the computer 20A may control the processing circuit 94 according to the program 600.
As hardware resources for executing the various processes described in the above embodiments, the various processors shown below can be used. Here, the various processes described in the above embodiments may be, for example, the compression processing, the image data output processing, and the display control processing. As a processor, for example, there is a CPU, which is a general-purpose processor that functions as a hardware resource for executing the various processes according to the technique of the present invention by executing software, that is, a program, as described above. The processor may also be a dedicated circuit, that is, a processor having a circuit configuration designed specifically for executing specific processing, such as an FPGA, a PLD, or an ASIC. A memory is built in or connected to each of these processors, and each processor executes the various processes by using the memory.
The hardware resources for executing the various processes related to the technique of the present invention may be constituted by one of these various processors, or may be constituted by a combination of two or more processors of the same kind or different kinds (for example, a combination of a plurality of FPGAs, or a combination of a CPU and an FPGA). Also, the hardware resource for performing various processes related to the technique of the present invention may be one processor.
As examples constituted by one processor, first, there is the following mode: as represented by a computer such as a client or a server, one processor is constituted by a combination of one or more CPUs and software, and this processor functions as a hardware resource for executing the various processes according to the technique of the present invention. Second, there is the following mode: as represented by an SoC (System-on-a-chip) or the like, a processor is used in which the functions of the entire system including a plurality of hardware resources for executing the various processes according to the technique of the present invention are realized by one IC chip. As described above, the various processes according to the technique of the present invention are realized by using one or more of the various processors described above as hardware resources.
As a hardware configuration of these various processors, more specifically, a circuit in which circuit elements such as semiconductor elements are combined can be used.
In the above embodiments, the lens-interchangeable camera is illustrated as the imaging device 10, but the technique of the present invention is not limited to this. For example, the technique of the present invention may be applied to the smart device 900 shown in fig. 24. The smart device 900 shown in fig. 24 is an example of an imaging device according to the technique of the present invention. The imaging element 20 described in the above embodiments is mounted on the smart device 900. Even with the smart device 900 configured in this way, the same operations and effects as those of the imaging device 10 described in the above embodiments can be obtained. In addition, the technique of the present invention can be applied not only to the smart device 900 but also to a personal computer or a wearable terminal device.
In the above embodiments, the 1 st display 40 and the 2 nd display 80 are illustrated as display devices, but the technology of the present invention is not limited thereto. For example, a separate display attached to the image pickup apparatus main body 12 may be used as the "display portion" related to the technique of the present invention.
In the above embodiments, the embodiment in which the 1 st captured image data is bit-compressed has been described as an example, but within the scope of the technique of the present invention, both the 1 st captured image data and the 2 nd captured image data may be bit-compressed, or only the 2 nd captured image data may be bit-compressed.
The compression process, the image data output process, and the display control process described in the above embodiment are only one example. Accordingly, unnecessary steps may be deleted, new steps may be added, or the processing order may be switched within a range not departing from the gist of the present invention.
The description and the illustrations described above are detailed descriptions of the related parts of the technology of the present invention, and are merely examples of the technology of the present invention. For example, the description about the above-described structure, function, operation, and effect is a description about one example of the structure, function, operation, and effect of the portion related to the technology of the present invention. Accordingly, unnecessary parts may be deleted from the description and illustration shown above, new elements may be added, or replaced, without departing from the spirit of the present invention. In order to avoid complication and to facilitate understanding of the technical aspects of the present invention, descriptions concerning technical common knowledge and the like, which are not particularly described in terms of enabling implementation of the present invention, are omitted from the descriptions and illustrations shown above.
In the present specification, "A and/or B" has the same meaning as "at least one of A and B". That is, "A and/or B" means A alone, B alone, or a combination of A and B. In the present specification, the same concept as "A and/or B" is also applied when three or more items are expressed by connecting them with "and/or".
All documents, patent applications and technical standards described in this specification are incorporated by reference into this specification to the same extent as if each document, patent application and technical standard was specifically and individually indicated to be incorporated by reference.

Claims (17)

1. An imaging element, comprising:
A storage section that stores captured image data obtained by capturing an object at a 1st frame rate, and is built in the imaging element;
a processing section that performs processing on the captured image data, and is built in the imaging element; and
An output section that outputs at least one of processed image data obtained by performing the processing on the captured image data and the captured image data to an outside of the imaging element, and is built in the imaging element,
The processing section generates compressed image data obtained by compressing the 1 st captured image data by dividing the 1 st captured image data into a plurality of bit ranges based on a degree of difference between the 1 st captured image data obtained by capturing and the 2 nd captured image data stored in the storage section, the 2 nd captured image data being image data obtained one frame or more earlier than the 1 st captured image data obtained by capturing,
The output section outputs the compressed image data generated in the processing section to the outside as the processed image data at a2 nd frame rate.
2. The imaging element of claim 1 wherein,
The 1 st frame rate is a higher frame rate than the 2 nd frame rate.
3. The imaging element according to claim 1 or 2, wherein,
The degree of difference is a degree of difference between the 1 st captured image data and the 2 nd captured image data in units of lines every time the reading section reads the 1 st captured image data in units of lines.
4. The imaging element according to claim 1 or 2, wherein,
The degree of difference is a degree of difference between predetermined upper bits of the 1 st captured image data and the 2 nd captured image data.
5. The imaging element of claim 4 wherein,
The 1 st captured image data and the 2 nd captured image data are image data having the same number of bits as each other,
the compressed image data is image data of a 2 nd bit count that is smaller than a 1 st bit count, the 1 st bit count being the number of bits of the 1 st captured image data, and
the predetermined upper bits are bits corresponding to a value obtained by subtracting the 2 nd bit count from the 1 st bit count.
6. The imaging element of any one of claims 1, 2, 5, wherein,
The compressed image data is data based on one bit image data that is determined according to the degree of difference from among a plurality of bit image data obtained by dividing the 1 st captured image data in the plurality of bit ranges.
7. The imaging element of claim 6 wherein,
The plurality of bit image data is high bit image data and low bit image data,
The compressed image data has data based on the higher-order bit image data in a case where the degree of difference satisfies a prescribed condition, and has data based on the lower-order bit image data in a case where the degree of difference does not satisfy the prescribed condition.
8. The imaging element of claim 6 wherein,
A part of the bits of the compressed image data is a bit to which bit image determination information is given, the bit image determination information enabling determination of which bit image data among the plurality of bit image data the compressed image data is based on.
9. The imaging element of any one of claims 1,2, 5, 7, 8, wherein,
The compressed image data is image data in line units, and has divided image determination information that enables determination of which divided image data, among a plurality of divided image data obtained by dividing the 1 st captured image data in the plurality of bit ranges, the compressed image data is based on.
10. The imaging element of any one of claims 1, 2, 5, 7, 8, wherein,
When shooting of a moving image is started, the output unit outputs the 1 st captured image data to the outside before the 2 nd captured image data is stored in the storage unit.
11. The imaging element of any one of claims 1, 2, 5, 7, 8, wherein,
When shooting of a moving image is started, the output unit outputs data based on image data belonging to a specific bit range among the 1 st captured image data to the outside before the 2 nd captured image data is stored in the storage unit.
12. The imaging element of any one of claims 1, 2, 5, 7, 8, wherein,
When shooting of a moving image is started, the output unit outputs substitute compressed image data to the outside before the 2 nd captured image data is stored in the storage unit, the substitute compressed image data being obtained by dividing and compressing the 1 st captured image data in the plurality of bit ranges in accordance with a degree of difference between the 1 st captured image data and image data predetermined as image data used in place of the 2 nd captured image data.
13. The imaging element of any one of claims 1, 2, 5, 7, 8, wherein,
In the case where the photographing of the still image is continuously performed at predetermined time intervals, the output section outputs the 1 st captured image data or the image data belonging to a predetermined bit range among the 1 st captured image data to the outside before the 2 nd captured image data is stored in the storage section, and outputs the compressed image data to the outside on condition that the 2 nd captured image data is stored in the storage section.
14. The imaging element of any one of claims 1, 2, 5, 7, 8, wherein,
The imaging element is a stacked imaging element having a photoelectric conversion element, and the storage portion is stacked on the photoelectric conversion element.
15. An image pickup apparatus, comprising:
The imaging element of any one of claims 1 to 14; and
And a control unit configured to control a display unit to display an image based on the compressed image data, the compressed image data being output from the output unit included in the imaging element.
16. An image data processing method of an imaging element incorporating a storage section, a processing section, and an output section, comprising the steps of:
causing the storage section to store captured image data obtained by capturing an object at a1 st frame rate;
causing the processing unit to perform processing on the captured image data;
causing the output section to output at least one of processed image data obtained by performing the processing on the captured image data and the captured image data to the outside of the imaging element;
The processing unit is configured to generate compressed image data obtained by compressing the 1 st captured image data by dividing the 1 st captured image data into a plurality of bit ranges based on a degree of difference between the 1 st captured image data obtained by capturing and the 2 nd captured image data stored in the storage unit, the 2 nd captured image data being image data obtained one frame or more earlier than the 1 st captured image data obtained by capturing; and
The output section is caused to output the compressed image data generated in the processing section to the outside as the processed image data at a2 nd frame rate.
17. A computer-readable storage medium storing a program for causing a computer to function as a processing section and an output section included in an imaging element having a storage section, the processing section, and the output section built therein, the program being for executing the steps of:
the storage section stores captured image data obtained by capturing an object at a 1st frame rate;
the processing section performs processing on the captured image data;
the output section outputs at least one of processed image data obtained by performing the processing on the captured image data and the captured image data to the outside of the imaging element;
the processing section generates compressed image data obtained by compressing 1st captured image data by dividing the 1st captured image data into a plurality of bit ranges, based on a degree of difference between the 1st captured image data obtained by the capturing and 2nd captured image data stored in the storage section, the 2nd captured image data being image data obtained one or more frames earlier than the 1st captured image data; and
the output section outputs the compressed image data generated by the processing section to the outside as the processed image data at a 2nd frame rate.
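The compression recited in claims 16 and 17 is specified only at the claim level: the current (1st) frame is divided into bit ranges according to its degree of difference from an earlier stored (2nd) frame. As a rough illustration only, and not the patented implementation, the following Python sketch keeps a reduced bit range for pixels that changed little between frames and the full bit depth elsewhere. The function name, the difference threshold, and the bit widths are all hypothetical choices for this sketch.

```python
import numpy as np

def compress_frame(curr, prev, diff_threshold=16, kept_bits=4):
    """Hypothetical sketch: bit-range selection driven by the per-pixel
    difference between the current frame (1st captured image data) and a
    previously stored frame (2nd captured image data), both 8-bit.
    """
    # Degree of difference between the 1st and 2nd captured image data.
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    # Pixels with a small difference are assigned a reduced bit range
    # (upper bits only); pixels with a large difference keep all 8 bits.
    small_change = diff < diff_threshold
    compressed = np.where(small_change, curr >> (8 - kept_bits), curr)
    return compressed.astype(np.uint8), small_change
```

A decoder would use the returned mask to know which pixels carry the reduced bit range; outputting the result at a lower 2nd frame rate than the 1st (capture) frame rate is outside the scope of this sketch.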
CN201980056436.3A 2018-08-31 2019-06-27 Imaging element, imaging device, image data processing method, and storage medium Active CN112640437B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2018-163997 2018-08-31
JP2018163997 2018-08-31
PCT/JP2019/025647 WO2020044764A1 (en) 2018-08-31 2019-06-27 Imaging element, imaging device, image data processing method, and program

Publications (2)

Publication Number Publication Date
CN112640437A CN112640437A (en) 2021-04-09
CN112640437B true CN112640437B (en) 2024-05-14

Family

ID=69644102

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980056436.3A Active CN112640437B (en) 2018-08-31 2019-06-27 Imaging element, imaging device, image data processing method, and storage medium

Country Status (4)

Country Link
US (1) US11184535B2 (en)
JP (1) JP6915166B2 (en)
CN (1) CN112640437B (en)
WO (1) WO2020044764A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005191939A (en) * 2003-12-25 2005-07-14 Nikon Corp Image compression apparatus and program for generating predicted difference compressed data of fixed bit length, and, image expansion apparatus, image expansion program, and electronic camera
JP2007027848A (en) * 2005-07-12 2007-02-01 Oki Electric Ind Co Ltd Reversible coding method and apparatus, and reversible decoding method and apparatus
CN101953153A * 2008-03-31 2011-01-19 Fujifilm Corporation Imaging system, imaging method, and computer-readable medium containing program
WO2014045741A1 * 2012-09-19 2014-03-27 Fujifilm Corporation Image processing device, imaging device, image processing method, and image processing program
JP2015136093A * 2013-12-20 2015-07-27 Sony Corporation Imaging element, imaging device and electronic device

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
JPH06204891A (en) 1993-01-05 1994-07-22 Fujitsu Ltd Data compression method
US5412741A (en) * 1993-01-22 1995-05-02 David Sarnoff Research Center, Inc. Apparatus and method for compressing information
KR101270167B1 (en) * 2006-08-17 2013-05-31 삼성전자주식회사 Method and apparatus of low complexity for compressing image, method and apparatus of low complexity for reconstructing image
US8879858B1 (en) * 2013-10-01 2014-11-04 Gopro, Inc. Multi-channel bit packing engine


Also Published As

Publication number Publication date
US20210176406A1 (en) 2021-06-10
JP6915166B2 (en) 2021-08-04
JPWO2020044764A1 (en) 2021-08-10
US11184535B2 (en) 2021-11-23
WO2020044764A1 (en) 2020-03-05
CN112640437A (en) 2021-04-09

Similar Documents

Publication Publication Date Title
JP2013519301A (en) Select capture conditions from brightness and motion
US11405566B2 (en) Imaging element, imaging apparatus, image data processing method, and program
US11438504B2 (en) Imaging element, imaging apparatus, operation method of imaging element, and program
US20240080410A1 (en) Imaging apparatus, image data processing method of imaging apparatus, and program
US20230388664A1 (en) Imaging element, imaging apparatus, imaging method, and program
CN113316928B (en) Imaging element, imaging apparatus, image data processing method, and computer-readable storage medium
CN112640437B (en) Imaging element, imaging device, image data processing method, and storage medium
US20210185272A1 (en) Imaging element, imaging apparatus, image data processing method, and program
US11785362B2 (en) Imaging element with output circuit that outputs to first and second circuits, imaging apparatus, image data output method, and program
US20220141420A1 (en) Imaging element, imaging apparatus, operation method of imaging element, and program
CN116458168A (en) Detection device, imaging device, detection method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant