US10791340B2 - Method and system to refine coding of P-phase data - Google Patents
Method and system to refine coding of P-phase data
- Publication number
- US10791340B2 (Application US15/351,558; US201615351558A)
- Authority
- US
- United States
- Prior art keywords
- phase data
- refinement
- bit
- plane
- bits
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/65—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using error resilience
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/167—Position within a video image, e.g. region of interest [ROI]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/184—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
- H04N19/34—Scalability techniques involving progressive bit-plane based encoding of the enhancement layer, e.g. fine granular scalability [FGS]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/146—Data rate or code amount at the encoder output
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/91—Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
Definitions
- Various embodiments of the disclosure relate to data compression. More specifically, various embodiments of the disclosure relate to a method and system to refine coding for P-phase data compression.
- Image sensors are widely used in imaging devices, such as digital cameras, medical imaging equipment, thermal imaging devices, radar, sonar, and other electronic devices. Such imaging devices, which include image sensors, may be associated with digital Correlated Double Sampling (CDS) processing.
- The CDS processing may include a noise component and a true signal component.
- The noise component may be referred to as P-phase data.
- The true signal component may be referred to as D-phase data.
- The difference between the P-phase data and the D-phase data may be used to remove noise, such as internal thermal noise (or kTC noise), associated with an image or a sequence of images to be captured by use of an image sensor of an imaging device. It may be desirable to refine the P-phase data for efficient compression of the image or the sequence of images captured by the image sensor.
- In a fixed refinement order, refinement bits may be placed close to each other in every coding block. A similar data pattern in every coding block may provide similar coding bits for a block encoding. A fixed refinement order may therefore provide similar coded and un-coded bits in every coding block, which may not be desirable. As a consequence, the coded bits and un-coded bits may produce a geometrically similar error pattern between an original and a decoded image.
- A method and system are provided to refine coding of P-phase data substantially as shown in, and/or described in connection with, at least one of the figures, as set forth more completely in the claims.
- FIGS. 1A and 1B, collectively, depict a block diagram that illustrates a network environment to refine coding of P-phase data by an imaging device, in accordance with an embodiment of the disclosure.
- FIG. 2 is a block diagram of an imaging device to refine coding of P-phase data, in accordance with an embodiment of the disclosure.
- FIG. 3 illustrates an exemplary scenario to refine coding of P-phase data for P-phase data compression in an imaging device, in accordance with an embodiment of the disclosure.
- FIG. 4 depicts a flow chart that illustrates exemplary operations to refine coding of P-phase data in an imaging device, in accordance with an embodiment of the disclosure.
- Exemplary aspects of the disclosure may include a method to refine coding of P-phase data in an imaging device.
- The imaging device may include one or more circuits configured to receive an input P-phase data block, which may comprise a plurality of entropy coded bits and a plurality of un-coded bits of P-phase data values.
- The plurality of entropy coded bits may be coded by differential pulse code modulation (DPCM) or pulse code modulation (PCM).
- The one or more circuits may be further configured to determine a refinement step size for the received input P-phase data block, based on a count of refinement bits available for coding of the plurality of un-coded bits and a block size of the input P-phase data block.
- The determined refinement step size may correspond to a gap size to be maintained among the refinement bits available for coding of the plurality of un-coded bits in each of the one or more bit-planes.
- The gap size may be maintained for equal distribution of the refinement bits in each of the one or more bit-planes for the refinement.
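The exact relationship between the bit budget, the block size, and the gap size is deferred to the detailed description; as a hedged illustration only, one plausible rule (function name and flooring behavior are assumptions, not the patent's formula) spreads the available refinement bits evenly across a bit-plane:

```python
def refinement_step_size(refinement_bit_count: int, block_size: int) -> int:
    """Hypothetical step-size rule: distribute `refinement_bit_count`
    refinement bits evenly over a bit-plane of `block_size` positions,
    so the gap between consecutive refinement bits is the plane size
    divided by the bit budget (never smaller than 1)."""
    if refinement_bit_count <= 0:
        raise ValueError("no refinement bits available for this block")
    return max(block_size // refinement_bit_count, 1)

# e.g. a budget of 8 refinement bits over a 32-sample block leaves a gap of 4
print(refinement_step_size(8, 32))  # 4
```

A larger budget shrinks the gap toward 1, at which point every position in the plane can be refined.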
- The one or more circuits may be further configured to determine a refinement start position for the received input P-phase data block, based on a number of sample groups of color values of the input P-phase data block and the block size of the input P-phase data block.
- The determined refinement start position may correspond to a position from which the allocation of the refinement bits in the plurality of un-coded bits of the P-phase data values is to be initiated for the refinement.
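The start-position formula is likewise not stated at this point; purely as an assumed sketch, one might derive the offset from the number of interleaved color groups (e.g. four sample groups for a Bayer mosaic) and the block size:

```python
def refinement_start_position(num_sample_groups: int, block_size: int) -> int:
    """Hypothetical start-position rule: if a block of `block_size`
    samples interleaves `num_sample_groups` color groups, begin the
    refinement-bit allocation one group-width into the block."""
    if num_sample_groups <= 0:
        raise ValueError("need at least one sample group")
    return block_size // num_sample_groups

# e.g. 4 color sample groups in a 32-sample block
print(refinement_start_position(4, 32))  # 8
```

Varying the start position per block breaks the fixed refinement order that, as noted above, would otherwise produce geometrically similar error patterns in every block.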
- The one or more circuits may be configured to refine the plurality of un-coded bits of the P-phase data values by allocation of the refinement bits in one or more bit-planes of the input P-phase data block, based on the determined refinement step size and the determined refinement start position.
- The one or more circuits may be further configured to detect whether the count of the refinement bits available for coding of the plurality of un-coded bits is greater than or equal to a bit-plane size of a first bit-plane of the one or more bit-planes.
- Refinement of the first bit-plane of the plurality of un-coded bits may be executed by allocation of a number of the refinement bits equal to the bit-plane size in the first bit-plane, in the event that the count of the refinement bits is greater than or equal to the bit-plane size of the first bit-plane.
- The method may include refinement of the first bit-plane of the plurality of un-coded bits by a bit-by-bit allocation of the refinement bits in the first bit-plane, in the event that the count of the refinement bits is less than the bit-plane size of the first bit-plane.
- The refinement bits may be allocated in the first bit-plane from the determined refinement start position, and the refinement bits may be equally spaced in the first bit-plane based on the determined refinement step size.
- The count of the refinement bits may be updated after each one-bit refinement or one-bit-plane refinement.
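Taken together, the whole-plane branch, the bit-by-bit branch, and the budget update described above can be sketched as follows. This is a minimal model, not the patent's implementation: a "refined" position is simply marked with 1, and all names are illustrative.

```python
def refine_planes(bit_planes, refinement_bit_count, start, step):
    """Walk the un-coded bit-planes in order. If the remaining bit
    budget covers a whole plane, refine every position in it
    (one-bit-plane refinement); otherwise place the remaining bits
    one by one from `start`, spaced `step` apart (bit-by-bit
    refinement), then stop. The budget is updated after each
    one-plane or one-bit refinement."""
    for plane in bit_planes:
        size = len(plane)
        if refinement_bit_count >= size:
            for i in range(size):            # one-bit-plane refinement
                plane[i] = 1
            refinement_bit_count -= size
        else:
            pos = start                      # bit-by-bit refinement
            while refinement_bit_count > 0 and pos < size:
                plane[pos] = 1
                refinement_bit_count -= 1
                pos += step
            break
    return bit_planes

# budget 6 over two 4-bit planes: whole first plane, then bits 0 and 2
planes = [[0] * 4, [0] * 4]
print(refine_planes(planes, 6, 0, 2))  # [[1, 1, 1, 1], [1, 0, 1, 0]]
```

The equal spacing in the partially refined plane is what avoids clustering the refined positions at one end of the block.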
- The received input P-phase data block may be one of a plurality of P-phase data blocks received from an image sensor after entropy coding of the plurality of P-phase data blocks.
- A difference between P-phase data values and D-phase data values may be computed.
- The P-phase data values may correspond to the plurality of P-phase data blocks representative of a plurality of pixels in an image frame.
- The P-phase data values may correspond to digital pixel reset values that represent reference voltages of a plurality of pixels in the image frame.
- The D-phase data values may correspond to light-dependent digital pixel values that represent signal voltages of the plurality of pixels in the image frame.
- The method may include transformation of the image frame to a refined image frame, based on the computed difference between the P-phase data values and the D-phase data values.
- The computed difference may be utilized to obtain the refined image frame by removal of noise from the image frame.
- The image sensor may comprise a plurality of light-sensing elements, such that the computed difference may result in cancellation of the P-phase data values from corresponding D-phase data values for each of the plurality of light-sensing elements. This may be done to generate correlated double sampling (CDS) corrected digital output pixel values in the refined image frame.
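The per-pixel cancellation is a plain subtraction; a minimal, list-based sketch (names assumed) of the CDS correction described above:

```python
def cds_correct(d_phase_values, p_phase_values):
    """Subtract each pixel's reset (P-phase) value from its signal
    (D-phase) value; the kTC noise term common to both samples
    cancels, leaving the CDS-corrected digital output pixel values."""
    return [d - p for d, p in zip(d_phase_values, p_phase_values)]

# signal readings 1003/1025 minus reset readings 3/5
print(cds_correct([1003, 1025], [3, 5]))  # [1000, 1020]
```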
- FIGS. 1A and 1B collectively, depict a block diagram that illustrates a network environment to refine coding of P-phase data by an imaging device, in accordance with an embodiment of the disclosure.
- The network environment 100 may include an imaging device 102, an image sensor 104, a server 106, a communication network 108, and one or more users, such as a user 110.
- The imaging device 102 may be communicatively coupled to the server 106, via the communication network 108.
- The user 110 may be associated with the imaging device 102.
- The imaging device 102 may comprise suitable logic, circuitry, interfaces, and/or code that may be configured to communicate with the server 106.
- The imaging device 102 may include the image sensor 104.
- The imaging device 102 may be configured to refine coding of P-phase data. Examples of the imaging device 102 may include, but are not limited to, a camera, a camcorder, an image- and/or video-processing device, a motion-capture system, a smart phone, and/or a projector.
- The image sensor 104 may comprise suitable logic, circuitry, interfaces, and/or code that may be configured to detect and convey information that constitutes an image or a sequence of image frames of a video.
- The image sensor 104 may convert the variable attenuation of light waves into signals or small bursts of current that convey the information.
- The sequence of image frames may be processed by the imaging device 102. This may be done for compression of the P-phase data values of a plurality of blocks representative of a plurality of pixels in a current image frame.
- Examples of the image sensor 104 may include, but are not limited to, semiconductor charge-coupled devices (CCD), complementary metal-oxide-semiconductor (CMOS) image sensors, digital pixel system (DPS) sensors, and/or digital sensors, such as flat-panel detectors.
- The server 106 may comprise suitable logic, circuitry, interfaces, and/or code that may be configured to communicate with the imaging device 102.
- The server 106 may further include one or more circuits that may be configured for coding P-phase data. Examples of the server 106 may include, but are not limited to, a web server, a database server, a file server, an application server, or a combination thereof.
- The communication network 108 may include a medium through which the imaging device 102 and the server 106 may communicate with each other.
- The communication network 108 may be a wired or wireless communication network.
- Examples of the communication network 108 may include, but are not limited to, a Wireless Fidelity (Wi-Fi) network, a Local Area Network (LAN), a wireless personal area network (WPAN), a Wireless Local Area Network (WLAN), a wireless wide area network (WWAN), a cloud network, a Long Term Evolution (LTE) network, a plain old telephone service (POTS), a Metropolitan Area Network (MAN), and/or the Internet.
- Various devices in the exemplary network environment 100 may be configured to connect to the communication network 108 , in accordance with various wired and wireless communication protocols.
- Such wired and wireless communication protocols may include, but are not limited to, Transmission Control Protocol and Internet Protocol (TCP/IP), User Datagram Protocol (UDP), Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), ZigBee, EDGE, infrared (IR), IEEE 802.11, 802.16, Long Term Evolution (LTE), Light Fidelity (Li-Fi), Internet of Things (IoT) communication protocols, and/or other cellular communication protocols or Bluetooth (BT) communication protocols, including variants thereof.
- The imaging device 102 may be configured to receive an input to capture an image or a sequence of image frames of a video.
- The sequence of image frames may comprise at least a previous image frame and a current image frame.
- The imaging device 102 may be further configured to receive a plurality of blocks of P-phase data values from the image sensor 104.
- The received plurality of blocks may represent a plurality of pixels in the current image frame of the captured sequence of image frames.
- The imaging device 102, which includes the image sensor 104, may be associated with digital Correlated Double Sampling (CDS) processing.
- The CDS processing may include a noise component and a true signal component.
- The noise component may be referred to as P-phase data, such as the received plurality of blocks of P-phase data values.
- The received plurality of blocks of P-phase data values may correspond to digital pixel reset values that represent reference voltages of the plurality of pixels in an image frame.
- The true signal component may be referred to as D-phase data.
- The D-phase data values may also be concurrently received from the image sensor 104 at the time of the capture of the image frame or the sequence of image frames of the video.
- The D-phase data values may correspond to light-dependent digital pixel values that represent signal voltages of the plurality of pixels in the image frame.
- The difference between the received plurality of blocks of P-phase data values and the corresponding D-phase data values may be used to remove noise, such as the kTC noise, associated with the image or the sequence of image frames to be captured by the image sensor 104 of the imaging device 102.
- The received plurality of blocks of P-phase data values may not always need to be stored before the D-phase data values for the CDS.
- The CDS process, in the case of a global shutter type of shutter mechanism of the imaging device 102, requires the noise component, such as the received plurality of blocks of P-phase data values, to be stored before the D-phase data values.
- The P-phase data, such as the received plurality of blocks of P-phase data values, may therefore need to be compressed to save memory or storage space of the imaging device 102.
- The global shutter may refer to a shutter mode that controls incoming light to all light-sensitive elements of the imaging device 102 simultaneously. Thus, in an imaging device 102 that uses the global shutter, every pixel may be exposed at the same instant in time.
- The imaging device 102 may be configured to receive an input P-phase data block.
- The P-phase data block may comprise a plurality of entropy coded bits and a plurality of un-coded bits of P-phase data values.
- The plurality of entropy coded bits may be coded by DPCM or PCM.
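For context, DPCM codes each sample as a difference from a predictor, while PCM codes raw sample values. A toy sketch (first sample kept as PCM, later samples as differences from the previous sample; this is a generic illustration, not the patent's exact coder):

```python
def dpcm_encode(samples):
    """Keep the first sample as-is (PCM) and code each later sample as
    its difference from the previous sample; the small residuals then
    entropy-code more compactly than the raw values."""
    return samples[:1] + [cur - prev for prev, cur in zip(samples, samples[1:])]

def dpcm_decode(codes):
    """Invert the encoder by running-sum reconstruction."""
    out = codes[:1]
    for diff in codes[1:]:
        out.append(out[-1] + diff)
    return out

values = [10, 12, 11, 15]
print(dpcm_encode(values))                         # [10, 2, -1, 4]
print(dpcm_decode(dpcm_encode(values)) == values)  # True
```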
- The imaging device 102 may be configured to determine a refinement step size for the received input P-phase data block, based on a count of refinement bits available for coding of the plurality of un-coded bits and a block size of the received input P-phase data block.
- The determined refinement step size may correspond to a gap size to be maintained among the refinement bits available for coding of the plurality of un-coded bits in each of the one or more bit-planes, for equal distribution of the refinement bits in each of the one or more bit-planes for the refinement.
- The determination of the refinement step size is explained in detail, for example, in FIG. 2.
- The imaging device 102 may further be configured to determine a refinement start position for the received input P-phase data block, based on a number of sample groups of color values of the received input P-phase data block and the block size of the received input P-phase data block.
- The determined refinement start position may correspond to a position from which the allocation of the refinement bits in the plurality of un-coded bits of the P-phase data values is to be initiated for the refinement.
- The determination of the refinement start position is explained in detail, for example, in FIG. 2.
- The imaging device 102 may further be configured to refine the plurality of un-coded bits of the P-phase data values by allocation of the refinement bits in one or more bit-planes of the received input P-phase data block.
- The plurality of un-coded bits of the P-phase data values may be refined based on the determined refinement step size and the determined refinement start position.
- The imaging device 102 may be configured to detect whether the count of the refinement bits available for coding of the plurality of un-coded bits is greater than or equal to a bit-plane size of a first bit-plane of the one or more bit-planes.
- The first bit-plane of the plurality of un-coded bits may be refined by allocation of a number of the refinement bits equal to the bit-plane size in the first bit-plane.
- This refinement may be executed in the event that the count of the refinement bits is greater than or equal to the bit-plane size of the first bit-plane.
- The imaging device 102 may alternatively refine the first bit-plane of the plurality of un-coded bits by a bit-by-bit allocation of the refinement bits in the first bit-plane.
- This refinement may be executed in the event that the count of the refinement bits is less than the bit-plane size of the first bit-plane.
- The refinement bits may be allocated in the first bit-plane from the determined refinement start position.
- The refinement bits may be equally spaced in the first bit-plane, based on the determined refinement step size.
- The count of the refinement bits may be updated after each one-bit refinement or one-bit-plane refinement.
- The input P-phase data block may be one of a plurality of P-phase data blocks received from the image sensor 104 after entropy coding of the plurality of P-phase data blocks.
- The image sensor 104, included in the imaging device 102, may comprise a plurality of light-sensing elements, such as the light-sensing element 104 A.
- The light-sensing element 104 A may comprise a photodiode 114 and a plurality of transistors 116.
- The photodiode 114 may be configured to generate an output signal indicative of an intensity level of light impinging on the photodiode 114.
- The plurality of transistors 116 may be configured to control reset, charge transfer, and row-select operations of the plurality of light-sensing elements.
- The imaging device 102 may be configured to compute a difference between the P-phase data values and D-phase data values.
- The P-phase data values may correspond to the plurality of P-phase data blocks representative of a plurality of pixels in an image frame.
- The imaging device 102 may further be configured to transform the image frame to a refined image frame, based on the computed difference between the P-phase data values and the D-phase data values.
- The computed difference may be utilized for removal of noise from the image frame to obtain the refined image frame.
- The computed difference may result in cancellation of the P-phase data values from the corresponding D-phase data values for each of the plurality of light-sensing elements. This may be done to generate correlated double sampling (CDS) corrected digital output pixel values in the refined image frame.
- The P-phase data values received from the image sensor 104 may be processed prior to processing of the D-phase data values, to enable storage of the received P-phase data values as the generated compressed P-phase data values in a memory unit (not shown) of the imaging device 102.
- The imaging device 102 may be configured to transmit the input P-phase data block to the server 106, via the communication network 108.
- The P-phase data block may comprise a plurality of entropy coded bits and a plurality of un-coded bits of P-phase data values.
- The server 106 may be configured to process the input P-phase data block, received from the imaging device 102, to determine the refinement step size for the received input P-phase data block. This may be based on the count of refinement bits available for coding of the plurality of un-coded bits and the block size of the input P-phase data block.
- The server 106 may further be configured to determine the refinement start position for the received input P-phase data block, based on the number of sample groups of color values of the input P-phase data block and the block size of the input P-phase data block. Based on the determined refinement step size and the determined refinement start position, the server 106 may be configured to refine the plurality of un-coded bits of the P-phase data values by allocation of the refinement bits in one or more bit-planes of the input P-phase data block. The server 106 may be further configured to transmit the refined plurality of un-coded bits to the imaging device 102, via the communication network 108.
- FIG. 2 is a block diagram of an imaging device to refine coding of P-phase data, in accordance with an embodiment of the disclosure.
- FIG. 2 is explained in conjunction with elements from FIG. 1A and FIG. 1B.
- The imaging device 102 may include a processing circuitry section 102 A and an incoming light control section 102 B.
- The processing circuitry section 102 A may include one or more circuits configured to refine coding of P-phase data.
- The one or more circuits may include a processor 202, a memory 204, a user interface (UI) 206, a step-size estimator 208, a start position estimator 210, a refinement unit 212, one or more input/output (I/O) units, such as an I/O unit 214, and a network interface 216.
- The communication network 108 (FIG. 1A) is shown associated with the network interface 216.
- The processing circuitry section 102 A may further include an image transformer 218, an imager 220 controlled by an imager controller 222, and an image sensor, such as the image sensor 104.
- The incoming light control section 102 B may include a plurality of lenses 224, controlled by a lens controller 226 and a lens driver 228.
- The plurality of lenses 224 may include an iris 224 A.
- A shutter 230 is also shown in the incoming light control section 102 B.
- The one or more circuits may be directly or indirectly coupled to each other.
- The output of the step-size estimator 208 and the start position estimator 210 may be provided to the refinement unit 212, in conjunction with the processor 202. Further, the output of the refinement unit 212 may be provided to the image transformer 218. The output of the image transformer 218 may be provided to the I/O unit 214.
- The network interface 216 may be configured to communicate with an exemplary server, such as the server 106, via the communication network 108.
- The imager 220 may be communicatively coupled to the image sensor, such as the image sensor 104.
- The plurality of lenses 224 may be in connection with the lens controller 226 and the lens driver 228.
- The plurality of lenses 224 may be controlled by the lens controller 226, in conjunction with the processor 202.
- The processing circuitry section 102 A of the imaging device 102 may be implemented in an exemplary server, such as the server 106, without deviation from the scope of the disclosure.
- the processor 202 may comprise suitable logic, circuitry, interfaces, and/or code that may be configured to execute a set of instructions stored in the memory 204 .
- the processor 202 may be further configured to refine coding of the P-phase data.
- the processor 202 may receive an input P-phase data block.
- the P-phase data block may comprise a plurality of entropy coded bits and a plurality of un-coded bits of P-phase data values.
- the received input P-phase data block may be one of a plurality of P-phase data blocks received from one or more sensing devices, such as the image sensor 104 , after entropy coding of the plurality of P-phase data blocks.
- the processor 202 may be implemented based on a number of electronic control unit technologies known in the art. Examples of the processor 202 may be a Reduced Instruction Set Computing (RISC) processor, an Application-Specific Integrated Circuit (ASIC) processor, a Complex Instruction Set Computing (CISC) processor, and/or other processors.
- the memory 204 may comprise suitable logic, circuitry, and/or interfaces that may be configured to store a machine code and/or a set of instructions with at least one code section executable by the processor 202 .
- the memory 204 may store the received input P-phase data block.
- the memory 204 may be further configured to store one or more images and the video captured by the imaging device 102 .
- the memory 204 may be further operable to store operating systems and associated applications of the imaging device 102 . Examples of implementation of the memory 204 may include, but are not limited to, Random Access Memory (RAM), Read Only Memory (ROM), Hard Disk Drive (HDD), Flash memory, and/or a Secure Digital (SD) card.
- the UI 206 may comprise suitable interfaces that may be rendered on the I/O unit 214 of the imaging device 102 .
- the UI 206 may further be configured to present refined image frames generated by the imaging device 102 .
- the step-size estimator 208 may comprise suitable logic, circuitry, and/or interfaces that may be configured to determine a refinement step-size for the received input P-phase data block.
- the step-size estimator 208 may be implemented as a coprocessor or a special-purpose circuitry in the imaging device 102 .
- the step-size estimator 208 and the processor 202 may be implemented as an integrated processor or as a cluster of processors that perform the functions of the step-size estimator 208 and the processor 202 .
- the step-size estimator 208 may be implemented as a set of instructions stored in the memory 204 , which upon execution by the processor 202 , may perform the functions and operations of the imaging device 102 .
- the start position estimator 210 may comprise suitable logic, circuitry, and/or interfaces that may be configured to determine a refinement start position for the received input P-phase data block.
- the start position estimator 210 may be implemented as a separate processor or circuitry in the imaging device 102 .
- the start position estimator 210 and the processor 202 may be implemented as an integrated processor or as a cluster of processors that perform the functions of the start position estimator 210 and the processor 202 .
- the start position estimator 210 may be implemented as a set of instructions stored in the memory 204 , which upon execution by the processor 202 , may perform the functions and operations of the imaging device 102 .
- the refinement unit 212 may comprise suitable logic, circuitry, and/or interfaces that may be configured to refine the plurality of un-coded bits of the P-phase data values by allocation of the refinement bits in one or more bit-planes of the received input P-phase data block.
- the refinement unit 212 may be implemented as a separate processor or circuitry in the imaging device 102 .
- the refinement unit 212 and the processor 202 may be implemented as an integrated processor or a cluster of processors that perform the functions of the refinement unit 212 and the processor 202 .
- the refinement unit 212 may be implemented as a set of instructions stored in the memory 204 , which upon execution by the processor 202 , may perform the functions and operations of the imaging device 102 .
- the I/O unit 214 may comprise suitable logic, circuitry, interfaces, and/or code that may be configured to control presentation of the refined images and/or the refined plurality of un-coded bits on a display screen.
- the display screen may be realized through several known technologies, such as, but not limited to, Liquid Crystal Display (LCD) display, Light Emitting Diode (LED) display, and/or Organic LED (OLED) display technology.
- the I/O unit 214 may comprise various input and output devices that may be configured to communicate with the processor 202 .
- Examples of the input devices or input mechanisms may include, but are not limited to, a shutter button, a record button on the imaging device 102 (such as a camera), a software button on the UI 206 of the imaging device 102 , a touch screen, a microphone, a motion and/or gesture sensor, and/or a light sensor.
- Examples of the output devices may include, but are not limited to, the display screen, a projector screen, and/or a speaker.
- the network interface 216 may comprise suitable logic, circuitry, interfaces, and/or code that may be configured to communicate with one or more cloud resources, such as the server 106 (as shown in FIG. 1A ), via the communication network 108 (as shown in FIG. 1A ).
- the network interface 216 may implement known technologies to support wired or wireless communication of the imaging device 102 with the communication network 108 .
- Components of the network interface 216 may include, but are not limited to, an antenna, a radio frequency (RF) transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a coder-decoder (CODEC) chipset, a subscriber identity module (SIM) card, and/or a local buffer.
- the image transformer 218 may comprise suitable logic, circuitry, and/or interfaces that may be configured to transform an image frame to a refined image frame by removal of noise from the image frame.
- the image transformer 218 may be implemented as a coprocessor or a special-purpose circuitry in the imaging device 102 .
- the image transformer 218 and the processor 202 may be implemented as an integrated processor or a cluster of processors that perform the functions of the image transformer 218 and the processor 202 .
- the image transformer 218 may be implemented as a set of instructions stored in the memory 204 , which upon execution by the processor 202 , may perform the functions and operations of the imaging device 102 .
- the imager 220 may comprise suitable circuitry and/or interfaces that may be configured to transform images from analog light signals into a series of digital pixels without any distortion. Examples of implementation of the imager 220 may include, but are not limited to, Charge-Coupled Device (CCD) imagers or Complementary Metal-Oxide-Semiconductor (CMOS) imagers, or a combination thereof.
- the imager controller 222 may comprise suitable logic, circuitry, and/or interfaces that may be configured to control orientation or direction of the imager 220 , based on the instructions received from the processor 202 .
- the imager controller 222 may be implemented by utilizing various technologies that are well known to those skilled in the art.
- the plurality of lenses 224 may correspond to an optical lens or assembly of lenses, used in conjunction with a camera body, such as the body of the imaging device 102 , and mechanism to capture image frames.
- the image frames may be captured either on photographic film or on other media capable of storing an image chemically or electronically.
- the lens controller 226 may comprise suitable logic, circuitry, and/or interfaces that may be configured to control various characteristics, such as zoom, focus, or aperture, of the plurality of lenses 224 .
- the lens controller 226 may be integrated as part of the imaging device 102 , or may be a stand-alone unit, in conjunction with the processor 202 . In case of the stand-alone unit, the lens controller 226 and/or the plurality of lenses 224 , for example, may be implemented as a removable attachment to the imaging device 102 .
- the lens controller 226 may be implemented by use of several technologies that are well known to those skilled in the art.
- the lens driver 228 may comprise suitable logic, circuitry, and/or interfaces that may be configured to perform zoom and focus control and iris control, based on instructions received from the lens controller 226 .
- the lens driver 228 may be implemented by use of several technologies that are well known to those skilled in the art.
- the shutter 230 may allow light to pass for a determined or particular period, exposing the imager 220 to light in order to capture a plurality of image frames.
- the shutter may be of a global shutter type.
- the P-phase data, such as the plurality of blocks of P-phase data values, are received prior to the receipt of the D-phase data values in the case of the global shutter type of shutter 230 . Consequently, the CDS process in the case of the global shutter type of shutter 230 requires the noise component, such as the received plurality of blocks of P-phase data values, to be stored before the D-phase data values are received.
- the processor 202 may be configured to receive an input to capture an image or a sequence of image frames of a video.
- the sequence of image frames may be captured through the plurality of lenses 224 by use of the image sensor 104 .
- the plurality of lenses 224 may be controlled by the lens controller 226 and the lens driver 228 , in conjunction with the processor 202 .
- the plurality of lenses 224 may be controlled based on an input signal received from the user 110 .
- the input signal may be provided by the user 110 , via selection of a graphical button rendered on the UI 206 or a button-press event of a hardware button available at the imaging device 102 .
- the imaging device 102 may retrieve the image and/or the sequence of image frames pre-stored in the memory 204 .
- the processor 202 may be configured to receive the plurality of blocks of P-phase data values from the image sensor 104 .
- the processor 202 may be configured to process an input P-phase data block.
- the P-phase data block may comprise a plurality of entropy coded bits and a plurality of un-coded bits of P-phase data values.
- the input P-phase data block may be one of a plurality of P-phase data blocks received from one or more sensing devices, such as the image sensor 104 (as described in FIG. 1A ), after entropy coding of the plurality of P-phase data blocks.
- the plurality of entropy coded bits may be coded by a DPCM or PCM.
- the memory 204 in conjunction with the processor 202 , may store the received input P-phase data block.
- the step-size estimator 208 may be configured to receive the input P-phase data block from the memory 204 .
- the step-size estimator 208 may determine a refinement step size for the received input P-phase data block, based on a count of refinement bits available for coding of the plurality of un-coded bits and a block size of the received input P-phase data block.
- the determined refinement step size may correspond to a gap size to be maintained among the refinement bits available for coding of the plurality of un-coded bits in each of the one or more bit-planes for equal distribution of the refinement bits in each of the one or more bit-planes for the refinement.
- the refinement step size for the received input P-phase data block may be determined based on the following equation (1):
- StepSize=BlockSize/N RefBit   (1)
- where N RefBit corresponds to the count of refinement bits available for coding of the plurality of un-coded bits, BlockSize corresponds to the block size of the received input P-phase data block, and StepSize corresponds to the determined refinement step size.
- the block size of the received input P-phase data block may be “16” and the count of the refinement bits available for coding of the plurality of un-coded bits may be “2”.
- the refinement step size according to the equation (1) is determined to be “8”.
- the count of the refinement bits available for coding of the plurality of un-coded bits may be “4”.
- the refinement step size according to the equation (1) is determined to be “4”.
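The step-size rule of equation (1) can be sketched as a small helper. The extracted text omits the formula itself; the integer division below is reconstructed from the two worked examples above (16/2 = 8 and 16/4 = 4), and the function name is illustrative, not from the patent.

```python
def refinement_step_size(block_size: int, n_ref_bits: int) -> int:
    """Equation (1): the gap to maintain between refinement bits so that
    they are distributed equally across a bit-plane of the block."""
    if n_ref_bits <= 0:
        raise ValueError("at least one refinement bit is required")
    return block_size // n_ref_bits

# Worked examples from the description.
print(refinement_step_size(16, 2))  # 8
print(refinement_step_size(16, 4))  # 4
```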
- the start position estimator 210 may be configured to receive the input P-phase data block from the memory 204 .
- the start position estimator 210 may further be configured to determine a refinement start position for the received input P-phase data block based on a number of sample groups of color values of the received input P-phase data block and the block size of the received input P-phase data block.
- the determined refinement start position may correspond to a position from which the allocation of the refinement bits in the plurality of un-coded bits of the P-phase data values is to be initiated for the refinement.
- the refinement start position for the received input P-phase data block may be determined based on the following equation (2):
- where BlockSize corresponds to the block size of the received input P-phase data block, N SampleGroup corresponds to the number of sample groups of color values of the received input P-phase data block, and X corresponds to the determined refinement start position.
- the number of sample groups of color values of the received input P-phase data block may be “8” and the block size of the received input P-phase data block may be “4”.
- the number of sample groups of color values of the received input P-phase data block may be “8” and the block size of the received input P-phase data block may be “16”.
- the refinement unit 212 may be configured to receive the determined refinement step size and the determined refinement start position from the step-size estimator 208 and the start position estimator 210 , respectively.
- the refinement unit 212 may further be configured to refine the plurality of un-coded bits of the P-phase data values by allocation of the refinement bits in one or more bit-planes of the received input P-phase data block, based on the refinement step size and the refinement start position as determined by the above described equations (1) and (2).
- the processor 202 may be configured to detect whether the count of the refinement bits available for coding of the plurality of un-coded bits is greater than or equal to a bit-plane size of a first bit-plane of the one or more bit-planes.
- the refinement unit 212 may refine the entire first bit-plane in the event that the count of the refinement bits is greater than or equal to the bit-plane size of the first bit-plane. Such a refinement of the first bit-plane may be referred to as a one-bit-plane refinement.
- the refinement unit 212 may further refine the first bit-plane of the plurality of un-coded bits by a bit-by-bit allocation of the refinement bits in the first bit-plane.
- the bit-by-bit allocation of the refinement bits may be executed in the event that the count of the refinement bits is less than the bit-plane size of the first bit-plane.
- the refinement bits may be allocated in the first bit-plane from the determined refinement start position.
- the refinement bits may be equally spaced in the first bit-plane based on the determined refinement step size. Such a refinement of the first bit-plane may be referred to as a one-bit refinement.
- the count of the refinement bits may be updated after each one-bit refinement or one-bit-plane refinement.
- the processor 202 may be configured to update the count of the refinement bits by reducing the count of the refinement bits by one bit.
- the processor 202 may be configured to update the count of the refinement bits by reducing the count of the refinement bits by one-bit-plane.
- the processor 202 may compute a difference between P-phase data values and D-phase data values.
- the P-phase data values may correspond to the plurality of P-phase data blocks representative of a plurality of pixels in an image frame that may be captured by the imaging device 102 .
- the P-phase data values may correspond to digital pixel reset values that represent reference voltages of a plurality of pixels in the image frame
- the D-phase data values correspond to light-dependent digital pixel values that represent signal voltages of the plurality of pixels in the image frame.
- the image transformer 218 may be configured to transform the image frame to a refined image frame, based on the computed difference between the P-phase data values and the D-phase data values.
- the computed difference may be utilized for removal of noise from the image frame to obtain the refined image frame.
- the computed difference may result in cancellation of the P-phase data values from corresponding D-phase data values for each of the plurality of light-sensing elements, such as the light-sensing element 104 A. This may be done to generate CDS corrected digital output pixel values in the refined image frame.
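The CDS correction described above reduces to a per-pixel subtraction. The following toy sketch uses plain lists with illustrative sample values (the names and numbers are not from the patent):

```python
# Toy correlated double sampling (CDS) step: the P-phase (reset) value is
# subtracted from the corresponding D-phase (signal) value for each pixel,
# cancelling per-pixel reset noise from the output.
p_phase = [12, 15, 11, 14]      # digital pixel reset values (reference voltages)
d_phase = [112, 215, 61, 164]   # light-dependent digital pixel values

cds_output = [d - p for p, d in zip(p_phase, d_phase)]
print(cds_output)  # [100, 200, 50, 150]
```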
- the display screen included in the I/O unit 214 may be configured to display or present the refined image frame on the display screen.
- the processor 202 may be configured to store the refined image frame in the memory 204 .
- the network interface 216 may be configured to transmit or communicate the refined image frame to one or more cloud resources, such as the server 106 ( FIG. 1A ), via the communication network 108 ( FIG. 1A ).
- FIG. 3 illustrates an exemplary scenario to refine coding of P-phase data for P-phase data compression in an imaging device, in accordance with an embodiment of the disclosure.
- FIG. 3 has been described in conjunction with elements from FIG. 1A , FIG. 1B , and FIG. 2 .
- with reference to the exemplary scenario 300 to refine coding of P-phase data in the imaging device 102 , there is shown an input P-phase data block 302 with a block size of “16×1” and a bit-depth of “8” bits, a first bit-plane refinement output 304 , and a second bit-plane refinement output 306 .
- the block size of “16×1” represents “16” pixels, with each pixel having the bit-depth of “8” bits.
- the input P-phase data block 302 may comprise a plurality of coded bits to a bit-depth of “5” (as shown) and a plurality of un-coded bits to a bit-depth of “3” of P-phase data values.
- the bit-plane size of the input P-phase data block 302 is “16”.
- the input P-phase data block 302 may be one of a plurality of P-phase data blocks received from one or more sensing devices, such as the image sensor 104 ( FIG. 1A ), after entropy coding of the plurality of P-phase data blocks.
- “23” refinement bits may be available for coding of the plurality of un-coded bits in the input P-phase data block 302 .
- the processor 202 may be configured to determine whether the count of the refinement bits (23) available for coding of the plurality of un-coded bits is greater than or equal to the bit-plane size (16) of one or more bit-planes of the input P-phase data block 302 .
- the refinement unit 212 may be configured to refine one-bit-plane of the first bit-plane of the plurality of un-coded bits of the input P-phase data block 302 .
- the result of the one-bit-plane refinement (indicated by operation 304 A) of the first bit-plane of the plurality of un-coded bits in the input P-phase data block 302 is shown in the first bit-plane refinement output 304 .
- the processor 202 may be further configured to update the number of refinement bits (23), based on the one-bit-plane refinement.
- the number of refinement bits (23) is reduced by one-bit-plane size (16).
- the processor 202 may then determine whether the updated number of refinement bits (7) is greater than or equal to the bit-plane size (16) of the second bit plane of the one or more bit-planes of the input P-phase data block 302 .
- the step-size estimator 208 may determine a refinement step size for the input P-phase data block 302 based on the equation (1), as described in FIG. 2 .
- the start position estimator 210 may be configured to determine a refinement start position for the input P-phase data block 302 , based on the equation (2), as described in FIG. 2 .
- the refinement unit 212 may then execute refinement of the second bit-plane of the plurality of un-coded bits by a bit-by-bit allocation (indicated by operation 306 A) of the refinement bits (7) in the second bit-plane.
- the refinement bits (7) may be allocated in the second bit-plane from the refinement start position as determined by the start position estimator 210 , and the refinement bits (7) may be equally spaced in the second bit-plane based on the refinement step size as determined by the step-size estimator 208 .
- Such a refinement of the second bit-plane may be referred to as a one-bit refinement.
- the processor 202 may be configured to update the refinement bits (7) by reducing the refinement bits (7) by one bit. This process of one-bit refinement may continue until the processor 202 detects or determines that the total number of refinement bits available for coding of the plurality of un-coded bits is zero. The remaining plurality of un-coded bits that may not have been refined by the available number of refinement bits may correspond to the un-processed bits, as shown in the second bit-plane refinement output 306 .
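The scenario of FIG. 3 can be traced numerically. A start position of 0 and the integer-division step-size rule are assumptions for illustration; the extracted text does not state either.

```python
# Walk-through of the FIG. 3 scenario: "23" refinement bits, bit-plane size "16".
bit_plane_size = 16
refinement_bits = 23

# First bit-plane: 23 >= 16, so the whole plane is refined at once
# (one-bit-plane refinement) and the count is reduced by the plane size.
refinement_bits -= bit_plane_size              # 7 bits remain

# Second bit-plane: 7 < 16, so one-bit refinement is used; the remaining
# bits are allocated from the start position, spaced by the step size.
step_size = bit_plane_size // refinement_bits  # 16 // 7 = 2
positions = [i * step_size for i in range(refinement_bits)]
print(refinement_bits, step_size, positions)   # 7 2 [0, 2, 4, 6, 8, 10, 12]
```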
- FIG. 4 depicts a flow chart that illustrates exemplary operations to refine coding of P-phase data in an imaging device, in accordance with an embodiment of the disclosure.
- with reference to FIG. 4 , there is shown a flowchart 400 .
- the flowchart 400 is described in conjunction with elements from FIG. 1A , FIG. 1B , FIG. 2 , and FIG. 3 .
- the method starts at 402 and proceeds to 404 .
- an input P-phase data block which comprises a plurality of coded bits and a plurality of un-coded bits of P-phase data values, may be received.
- the processor 202 may be configured to receive the input P-phase data block.
- the received input P-phase data block may be one of a plurality of P-phase data blocks received from one or more sensing devices, such as the image sensor 104 ( FIG. 1A ), after entropy coding of the plurality of P-phase data blocks.
- An example of the received input P-phase data block 302 is shown and described, for example, in the FIG. 3 .
- it may be detected whether a count of refinement bits available for coding of the plurality of un-coded bits is greater than or equal to a bit-plane size of a first bit-plane of one or more bit-planes of the received input P-phase data block.
- the processor 202 may be configured to detect whether the count of the refinement bits available for coding of the plurality of un-coded bits is greater than or equal to the bit-plane size. In the event that the count of the refinement bits available for coding of the plurality of un-coded bits is greater than or equal to the bit-plane size of a first bit-plane of the one or more bit-planes of the received input P-phase data block, control passes to 418 .
- a refinement step size for the received input P-phase data block may be determined.
- the refinement step size may be determined based on the count of refinement bits available for coding of the plurality of un-coded bits and a block size of the received input P-phase data block.
- the step-size estimator 208 may be configured to determine the refinement step size for the received input P-phase data block.
- the determined refinement step size may correspond to a gap size to be maintained among the refinement bits available for coding of the plurality of un-coded bits in each of the one or more bit-planes.
- the refinement step size for the received input P-phase data block may be determined based on the equation (1), as described in FIG. 2 .
- a refinement start position for the received input P-phase data block may be determined.
- the start position estimator 210 may be configured to determine the refinement start position for the received input P-phase data block, based on a number of sample groups of color values of the received input P-phase data block and the block size of the received input P-phase data block.
- the determined refinement start position may correspond to a position from which the allocation of the refinement bits in the plurality of un-coded bits of the P-phase data values is to be initiated for the refinement.
- the refinement start position for the received input P-phase data block may be determined based on the equation (2), as described in FIG. 2 .
- one bit of the first bit-plane of the plurality of un-coded bits may be refined.
- the refinement unit 212 may execute refinement of the first bit-plane of the plurality of un-coded bits by a bit-by-bit allocation of the refinement bits in the first bit-plane. This may be done in the event that the count of the refinement bits is less than the bit-plane size of the first bit-plane.
- the refinement bits may be allocated in the first bit-plane from the determined refinement start position, and the refinement bits may be equally spaced in the first bit-plane based on the determined refinement step size.
- Such a refinement of the first bit-plane may be referred to as a one-bit refinement, an example of which is shown in the second bit-plane refinement output 306 in the FIG. 3 .
- the count of the refinement bits may be updated based on the one-bit refinement.
- the count of the refinement bits may be updated after each one-bit refinement.
- the processor 202 may be configured to update the count of the refinement bits by reducing the count of the refinement bits by one bit.
- the processor 202 may be configured to detect whether the count of refinement bits available for coding of the plurality of un-coded bits is equal to zero. In the event that the count of refinement bits available for coding of the plurality of un-coded bits is equal to zero, control passes to end 422 . In the event that the count of refinement bits available for coding of the plurality of un-coded bits is not equal to zero, control passes back to 412 .
- one-bit-plane of the first bit-plane of the plurality of un-coded bits may be refined.
- the refinement unit 212 may be configured to refine one-bit-plane of the first bit-plane in the event that the count of the refinement bits is greater than or equal to the bit-plane size of the first bit-plane.
- Such a refinement of the first bit-plane may be referred to as a one-bit-plane refinement, an example of which is shown in the first bit-plane refinement output 304 in the FIG. 3 .
- the count of the refinement bits may be updated based on one-bit-plane refinement.
- the count of the refinement bits may be updated after each one-bit-plane refinement.
- the processor 202 may be configured to update the count of the refinement bits by reducing the count of the refinement bits by one-bit-plane size. Control passes back to 406 .
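The control flow of flowchart 400 can be sketched as a loop over bit-planes: whole planes are refined while enough refinement bits remain, then the method falls back to one-bit refinement. The function name, the 0/1-flag model of a plane, and the parameter defaults are illustrative assumptions, not the patent's implementation.

```python
def refine(un_coded_planes, refinement_bits, start=0, step=1):
    """Sketch of the FIG. 4 control flow. Each plane is a list of 0/1
    flags; a 1 marks a bit position that has been refined."""
    for plane in un_coded_planes:
        size = len(plane)
        if refinement_bits >= size:
            for i in range(size):           # one-bit-plane refinement
                plane[i] = 1
            refinement_bits -= size         # count reduced by plane size
        else:
            pos = start
            while refinement_bits > 0:      # one-bit refinement
                plane[pos % size] = 1       # allocate from start position
                pos += step                 # equally spaced by step size
                refinement_bits -= 1        # count reduced by one bit
            break                           # count is zero: stop
    return un_coded_planes

# FIG. 3 scenario: 23 bits over 16-wide planes, step size 16 // 7 = 2.
planes = refine([[0] * 16 for _ in range(3)], 23, start=0, step=2)
print([sum(p) for p in planes])  # [16, 7, 0]
```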
- the imaging device 102 may comprise the one or more circuits, such as the processor 202 ( FIG. 2 ), which may be configured to receive an input P-phase data block that comprises a plurality of entropy coded bits and a plurality of un-coded bits of P-phase data values.
- the imaging device 102 may comprise one or more circuits, such as the step-size estimator 208 ( FIG. 2 ), which may be configured to determine a refinement step size for the received input P-phase data block, based on a count of refinement bits available for coding of the plurality of un-coded bits and a block size of the received input P-phase data block.
- the one or more circuits such as the start position estimator 210 ( FIG. 2 ), may be configured to determine a refinement start position for the received input P-phase data block, based on a number of sample groups of color values of the received input P-phase data block and the block size of the received input P-phase data block.
- the one or more circuits, such as the refinement unit 212 ( FIG. 2 ), may be configured to refine the plurality of un-coded bits of the P-phase data values by allocation of the refinement bits in one or more bit-planes of the received input P-phase data block, based on the determined refinement step size and the determined refinement start position.
- a compressed bit stream of the P-phase data values is generated after the refinement of the plurality of un-coded bits of the P-phase data values.
- the compressed P-phase data values that correspond to the received plurality of blocks of P-phase data values may be stored in the memory 204 before the D-phase data is actually received from an image sensor, such as the image sensor 104 , for the CDS process.
- the compressed P-phase data values may be stored prior to the receipt of the D-phase data in case of usage of the global shutter by imaging device 102 for capture of the image or the sequence of images.
- the compression of the P-phase data values saves storage space of the imaging device 102 .
- more image or video data may be captured and stored in the imaging device 102 as a result of the compression of the P-phase data values.
- DPCM may be effective for image compression, after an image is captured and where the captured image has adjacent pixel intensity values that are highly similar to each other.
- the operations performed by the step-size estimator 208 , the start position estimator 210 , and the refinement unit 212 , as described, are advantageous for data that exhibit noise-like characteristics, such as the P-phase data values.
- the DPCM based compression method may not be effective for compression as adjacent P-phase data values may not exhibit high similarity or uniformity.
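The contrast between DPCM on correlated image data and on noise-like P-phase data can be illustrated with a toy residual computation (sample values and the helper name are illustrative only):

```python
import random
random.seed(0)  # fixed seed so the noisy example is reproducible

def dpcm_residuals(samples):
    """Residuals a DPCM coder would entropy-code: the differences
    between adjacent samples."""
    return [b - a for a, b in zip(samples, samples[1:])]

# Adjacent pixels of a captured image tend to be similar, so the
# residuals are small and entropy-code compactly...
smooth = [100 + i for i in range(8)]
# ...while noise-like P-phase reset values show no such similarity,
# leaving large, irregular residuals and little compression gain.
noisy = [random.randint(0, 255) for _ in range(8)]

print(dpcm_residuals(smooth))  # [1, 1, 1, 1, 1, 1, 1]
print(dpcm_residuals(noisy))
```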
- the disclosed method and the imaging device 102 for P-phase data compression also ensures removal of noise from the captured image, so as to generate a refined captured image with an improved picture quality.
- an image or a sequence of image frames may be compressed after generation of an actual image or a sequence of image frames
- the generation of the compressed P-phase data values that correspond to the received plurality of blocks of P-phase data values occurs at the time of generation of an image or a sequence of image frames by use of the image sensor 104 .
- an additional compression ability is provided to the imaging device 102 to save memory space both at the time of generation of the image or the sequence of image frames, and post generation of the image or the sequence of image frames.
- the disclosed method to refine coding of P-phase data ensures equal refinement of the plurality of un-coded bits of the input P-phase data block unlike conventional refinement techniques.
- the P-phase data block may be one of a plurality of P-phase data blocks received from the image sensor 104 .
- the P-phase data block may include various P-phase data values and may further represent a plurality of pixels in an image frame captured by any electronic device, such as the imaging device 102 .
- because the refinement bits are equally spaced in a bit-plane based on the determined refinement step size, the disclosed method to refine coding of P-phase data yields an improved, more desirable structure of the error pattern.
- the coded bits and un-coded bits of P-phase data values included in the processed P-phase data block may be accurately distinguished.
- the imaging device 102 may be a camera.
- all the operations executed by the imaging device 102 as described in the present disclosure may also be executed by the camera.
- raw data is captured that needs to be compressed to save memory space and memory access bandwidth.
- this applies, for example, to high-definition images or video, such as ultra-high-definition video, 4K video, and other digital images or video.
- An example of the operations executed by the camera may be understood, for example, from the flowchart 400 of FIG. 4 . Similar to the camera, all the operations executed by the imaging device 102 as described in the present disclosure, such as in FIGS. 1A, 1B, 2, 3 and 4 , may also be executed by a camcorder or a smart phone, for efficient compression to save memory space both at the time of generation of the image or the sequence of image frames, and also post generation of the image or the sequence of image frames, as described.
- Various embodiments of the disclosure may provide a non-transitory computer readable medium and/or storage medium, wherein there is stored thereon, a machine code and/or a computer program with at least one code section executable by a machine and/or a computer for coding of P-phase data.
- the at least one code section in the imaging device 102 may cause the machine and/or computer to perform the steps that comprise reception of an input P-phase data block that comprises a plurality of entropy coded bits and a plurality of un-coded bits of P-phase data values.
- the imaging device 102 may be configured to determine a refinement step size for the received input P-phase data block based on a count of refinement bits available for coding of the plurality of un-coded bits and a block size of the received input P-phase data block.
- the imaging device 102 may be further configured to determine a refinement start position for the received input P-phase data block based on a number of sample groups of color values of the received input P-phase data block and the block size of the received input P-phase data block.
- the imaging device 102 may be further configured to refine the plurality of un-coded bits of the P-phase data values by allocation of the refinement bits in one or more bit-planes of the received input P-phase data block, based on the determined refinement step size and the determined refinement start position.
- the present disclosure may be realized in hardware, or a combination of hardware and software.
- the present disclosure may be realized in a centralized fashion, in at least one computer system, or in a distributed fashion, where different elements may be spread across several interconnected computer systems.
- a computer system or other apparatus adapted to carry out the methods described herein may be suited.
- a combination of hardware and software may be a general-purpose computer system with a computer program that, when loaded and executed, may control the computer system such that it carries out the methods described herein.
- the present disclosure may be realized in hardware that comprises a portion of an integrated circuit that also performs other functions.
- the present disclosure may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods.
- Computer program in the present context, means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly, or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
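The bit-plane refinement summarized in the steps above can be sketched in a few lines. This is an illustrative sketch only, not the patented implementation; the function and parameter names (`allocate_refinement_bits`, `block_size`, `step_size`, `start_pos`) are assumptions introduced for illustration:

```python
# Illustrative sketch: refinement bits are placed at equally spaced positions
# in a bit-plane (spacing = determined refinement step size), beginning at the
# determined refinement start position. All names here are assumptions.
def allocate_refinement_bits(block_size, step_size, start_pos):
    """Return a bit-plane mask: 1 where an un-coded bit receives a refinement bit."""
    plane = [0] * block_size
    for pos in range(start_pos, block_size, step_size):
        plane[pos] = 1  # this sample's un-coded bit is refined in this plane
    return plane

# Block of 16 samples, step size 8, start position 0:
# refinement bits land at positions 0 and 8, equally spaced.
print(allocate_refinement_bits(16, 8, 0))
```

Because the spacing is uniform, every region of the block receives the same density of refinement bits, which is the "equal refinement" property the description attributes to the disclosed method.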
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
StepSize=BlockSize/NRefBit  (1)
where,
NRefBit corresponds to the count of refinement bits available for coding of the plurality of un-coded bits;
BlockSize corresponds to block size of the received input P-phase data block; and
StepSize corresponds to the determined refinement step size.
For example, the block size of the received input P-phase data block may be "16" and the count of the refinement bits available for coding of the plurality of un-coded bits may be "2". In such an instance, the refinement step size, according to the equation (1), is determined to be "8". In another example, the count of the refinement bits available for coding of the plurality of un-coded bits may be "4". In such an instance, the refinement step size, according to the equation (1), is determined to be "4".
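The worked examples above can be checked with a short sketch, assuming equation (1) takes the form StepSize = BlockSize / NRefBit, which is consistent with both examples; the function name is an assumption:

```python
def refinement_step_size(block_size, n_ref_bits):
    """Refinement step size per equation (1): StepSize = BlockSize / NRefBit."""
    return block_size // n_ref_bits

# Worked examples from the description:
print(refinement_step_size(16, 2))  # 8
print(refinement_step_size(16, 4))  # 4
```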
In equation (2):
BlockSize corresponds to the block size of the received input P-phase data block;
NSampleGroup corresponds to the number of sample groups of color values of the received input P-phase data block; and
X corresponds to the determined refinement start position.
For example, the number of sample groups of color values of the received input P-phase data block may be "8" and the block size of the received input P-phase data block may be "4". In such an instance, the refinement start position, according to the equation (2), is determined to be X=1*n (X=0, 1, 2, 3, 0, 1, 2, 3). In another example, the number of sample groups of color values of the received input P-phase data block may be "8" and the block size of the received input P-phase data block may be "16". In such an instance, the refinement start position, according to the equation (2), is determined to be X=2*n (X=0, 2, 4, 6, 8, 10, 12, 14).
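A hedged sketch of the start-position pattern: equation (2) itself is not reproduced above, so the formula below is only inferred from the two worked examples (a per-sample-group multiplier of BlockSize / NSampleGroup, floored at 1, with wrap-around at the block size). Treat the formula and all names as assumptions:

```python
def refinement_start_positions(block_size, n_sample_groups):
    """Start positions X = mult * n, per an inferred reading of equation (2)."""
    mult = max(1, block_size // n_sample_groups)  # gives X=1*n or X=2*n in the examples
    return [(mult * n) % block_size for n in range(n_sample_groups)]

print(refinement_start_positions(4, 8))   # [0, 1, 2, 3, 0, 1, 2, 3]
print(refinement_start_positions(16, 8))  # [0, 2, 4, 6, 8, 10, 12, 14]
```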
Claims (22)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/351,558 US10791340B2 (en) | 2016-11-15 | 2016-11-15 | Method and system to refine coding of P-phase data |
CN201711067505.XA CN108076343B (en) | 2016-11-15 | 2017-11-03 | Method and system for refining the encoding of P-phase data |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/351,558 US10791340B2 (en) | 2016-11-15 | 2016-11-15 | Method and system to refine coding of P-phase data |
Publications (2)
Publication Number | Publication Date |
---|---|
US20180139471A1 US20180139471A1 (en) | 2018-05-17 |
US10791340B2 true US10791340B2 (en) | 2020-09-29 |
Family
ID=62106973
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/351,558 Active 2038-02-18 US10791340B2 (en) | 2016-11-15 | 2016-11-15 | Method and system to refine coding of P-phase data |
Country Status (2)
Country | Link |
---|---|
US (1) | US10791340B2 (en) |
CN (1) | CN108076343B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9615037B2 (en) * | 2013-11-08 | 2017-04-04 | Drs Network & Imaging Systems, Llc | Method and system for output of dual video stream via a single parallel digital video interface |
US10750182B2 (en) * | 2018-11-20 | 2020-08-18 | Sony Corporation | Embedded codec circuitry for visual quality based allocation of refinement bits |
US10939107B2 (en) * | 2019-03-01 | 2021-03-02 | Sony Corporation | Embedded codec circuitry for sub-block based allocation of refinement bits |
DE102021117397A1 (en) * | 2020-07-16 | 2022-01-20 | Samsung Electronics Co., Ltd. | IMAGE SENSOR MODULE, IMAGE PROCESSING SYSTEM AND IMAGE COMPRESSION METHOD |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100270798B1 (en) * | 1994-07-29 | 2000-11-01 | Dennis Fischel | Video decompression |
WO2002037859A2 (en) | 2000-11-03 | 2002-05-10 | Compression Science | Video data compression system |
US20080279299A1 (en) * | 2007-05-10 | 2008-11-13 | Comsys Communication & Signal Processing Ltd. | Multiple-input multiple-output (mimo) detector incorporating efficient signal point search and soft information refinement |
US7881384B2 (en) * | 2005-08-05 | 2011-02-01 | Lsi Corporation | Method and apparatus for H.264 to MPEG-2 video transcoding |
US20110026582A1 (en) * | 2009-07-29 | 2011-02-03 | Judit Martinez Bauza | System and method of compressing video content |
US20140355675A1 (en) * | 2013-05-29 | 2014-12-04 | Research In Motion Limited | Lossy data compression with conditional reconstruction refinement |
EP2816805A1 (en) | 2013-05-29 | 2014-12-24 | BlackBerry Limited | Lossy data compression with conditional reconstruction refinement |
KR101566557B1 (en) * | 2006-10-18 | 2015-11-05 | 톰슨 라이센싱 | Method and apparatus for video coding using prediction data refinement |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100358364C (en) * | 2005-05-27 | 2007-12-26 | 上海大学 | Code rate control method for subtle granule telescopic code based on H.264 |
JP5039142B2 (en) * | 2006-10-25 | 2012-10-03 | フラウンホーファー−ゲゼルシャフト・ツール・フェルデルング・デル・アンゲヴァンテン・フォルシュング・アインゲトラーゲネル・フェライン | Quality scalable coding method |
DK2123052T3 (en) * | 2007-01-18 | 2011-02-28 | Fraunhofer Ges Forschung | Quality scalable video data stream |
US8498982B1 (en) * | 2010-07-07 | 2013-07-30 | Openlogic, Inc. | Noise reduction for content matching analysis results for protectable content |
- 2016-11-15: US application US 15/351,558, granted as US10791340B2 (active)
- 2017-11-03: CN application CN 201711067505.XA, granted as CN108076343B (active)
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100270798B1 (en) * | 1994-07-29 | 2000-11-01 | Dennis Fischel | Video decompression |
WO2002037859A2 (en) | 2000-11-03 | 2002-05-10 | Compression Science | Video data compression system |
US7881384B2 (en) * | 2005-08-05 | 2011-02-01 | Lsi Corporation | Method and apparatus for H.264 to MPEG-2 video transcoding |
KR101566557B1 (en) * | 2006-10-18 | 2015-11-05 | 톰슨 라이센싱 | Method and apparatus for video coding using prediction data refinement |
US20080279299A1 (en) * | 2007-05-10 | 2008-11-13 | Comsys Communication & Signal Processing Ltd. | Multiple-input multiple-output (mimo) detector incorporating efficient signal point search and soft information refinement |
US20110026582A1 (en) * | 2009-07-29 | 2011-02-03 | Judit Martinez Bauza | System and method of compressing video content |
US9129409B2 (en) | 2009-07-29 | 2015-09-08 | Qualcomm Incorporated | System and method of compressing video content |
US20140355675A1 (en) * | 2013-05-29 | 2014-12-04 | Research In Motion Limited | Lossy data compression with conditional reconstruction refinement |
EP2816805A1 (en) | 2013-05-29 | 2014-12-24 | BlackBerry Limited | Lossy data compression with conditional reconstruction refinement |
US9143797B2 (en) | 2013-05-29 | 2015-09-22 | Blackberry Limited | Lossy data compression with conditional reconstruction refinement |
Also Published As
Publication number | Publication date |
---|---|
CN108076343A (en) | 2018-05-25 |
US20180139471A1 (en) | 2018-05-17 |
CN108076343B (en) | 2020-05-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7536097B2 (en) | Autofocusing apparatus of camera and autofocusing method thereof | |
US10791340B2 (en) | Method and system to refine coding of P-phase data | |
US9516221B2 (en) | Apparatus and method for processing image in camera device and portable terminal using first and second photographing information | |
US9906732B2 (en) | Image processing device, image capture device, image processing method, and program | |
TW201105118A (en) | Imaging device, imaging method and computer-readable recording medium | |
WO2018003124A1 (en) | Imaging device, imaging method, and imaging program | |
JP2015149691A (en) | Image correction device, image correction method, and imaging apparatus | |
US10055852B2 (en) | Image processing system and method for detection of objects in motion | |
US11032483B2 (en) | Imaging apparatus, imaging method, and program | |
US8654204B2 (en) | Digtal photographing apparatus and method of controlling the same | |
US10778903B2 (en) | Imaging apparatus, imaging method, and program | |
US10397462B2 (en) | Imaging control apparatus and imaging apparatus for synchronous shooting | |
US20200106821A1 (en) | Video processing apparatus, video conference system, and video processing method | |
US10249027B2 (en) | Device and method for P-phase data compression | |
JP7174123B2 (en) | Image processing device, photographing device, image processing method and image processing program | |
US10944899B2 (en) | Image processing device and image processing method | |
US10491840B2 (en) | Image pickup apparatus, signal processing method, and signal processing program | |
JP7110408B2 (en) | Image processing device, imaging device, image processing method and image processing program | |
US10212357B2 (en) | Imaging apparatus and control method to realize accurate exposure | |
JP5182395B2 (en) | Imaging apparatus, imaging method, and imaging program | |
JP2015109502A (en) | Image sensor and operation method of image sensor, imaging apparatus, electronic apparatus and program | |
JP2023065474A (en) | Imaging apparatus | |
KR20240001131A (en) | Image alignment for computational photography | |
KR20050050741A (en) | Apparatus for removing noise of camera picture of handset |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment | Owner name: SONY CORPORATION, JAPAN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:IKEDA, MASARU;TABATABAI, ALI;SIGNING DATES FROM 20161118 TO 20161206;REEL/FRAME:041420/0143
STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER
STPP | Information on status: patent application and granting procedure in general | Free format text: ADVISORY ACTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STCF | Information on status: patent grant | Free format text: PATENTED CASE
MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 4