WO2021243709A1 - Method of generating target image data, electrical device and non-transitory computer readable medium
- Publication number: WO2021243709A1
- Application number: PCT/CN2020/094714
- Authority: WIPO (PCT)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4015—Image demosaicing, e.g. colour filter arrays [CFA] or Bayer patterns
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/251—Fusion techniques of input or preprocessed data
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/147—Details of sensors, e.g. sensor lenses
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/513—Sparse representations
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
Description
- the present disclosure relates to a method of generating a target image data, an electrical device implementing such method and a non-transitory computer readable medium including program instructions stored thereon for performing such method.
- Electrical devices such as smartphones and tablet terminals are widely used in our daily life. Nowadays, many of the electrical devices are equipped with a camera assembly to capture an image. Some of the electrical devices are portable and are thus easy to carry. Therefore, a user of the electrical device can easily take a picture of an object by using the camera assembly of the electrical device anytime, anywhere.
- There are many formats to capture the image of the object and generate the target image data thereof. One of the widely known formats is a Bayer format which includes a sparse image data.
- In addition to the sparse image data, in order to improve a quality of the image of the object based on the target image data, a dense image data is also generated when the camera assembly captures the object. In this case, the sparse image data and the dense image data are used to generate the target image data to be displayed on a display or to be stored in a memory of the electrical device. However, a common image signal processor cannot handle such two types of image data.
- the present disclosure aims to solve at least one of the technical problems mentioned above. Accordingly, the present disclosure needs to provide a method of generating a target image data and an electrical device implementing such method.
- In accordance with the present disclosure, a method of generating a target image data may include: obtaining a sparse image data and a dense image data, wherein the sparse image data includes a plurality of pixels of at least first color pixels, second color pixels and third color pixels, and the dense image data includes a plurality of pixels of the first color pixels; generating a residual data based on the dense image data by calculating differentials between each two adjacent pixels in the dense image data; generating a compressed data by compressing the residual data to reduce its data amount; generating a split data by splitting each of the pixels of the compressed data into a first data part and a second data part; and generating an embedded sparse image data by embedding the split data into the sparse image data.
- the method may further include inputting the embedded sparse image data to an image signal processor which processes the sparse image data in the embedded sparse image data to generate the target image data.
- the each two adjacent pixels in the dense image data may constitute a first pair, and the first pair may include a first value of the first color pixel and a second value of the first color pixel.
- the each two adjacent pixels in the sparse image data may constitute a second pair, and the second pair may include a third value of the first color pixel and a fourth value of the second color pixel or may include the third value of the first color pixel and the fourth value of the third color pixel.
- the first value of the first color pixel in the first pair of the dense image data may correspond to the third value of the first color pixel in the second pair which is located at a position corresponding to the first pair of the dense image data.
- the generating the residual data may include subtracting the second value of the first color pixel from the first value of the first color pixel in the first pair.
- the generating the compressed data may include reducing a number of bits of the residual data.
- the reducing the number of bits of the residual data may include converting the residual data to the compressed data based on a compression curve which defines a relationship between a value of the pixel of the residual data and a value of the pixel of the compressed data.
- each of the first color pixels, each of the second color pixels and each of the third color pixels in the sparse image data may have a spare space in which the sparse image data is not stored.
- in the generating the split data, sizes of the first data part and the second data part of the split data may be matched with sizes of the spare space in each of the first color pixels, each of the second color pixels and each of the third color pixels in the sparse image data.
- the generating the embedded sparse image data may include embedding the first data part and the second data part of the first pair into the spare space of the second pair of the sparse image data.
- the method may further include: obtaining the embedded sparse image data from the image signal processor after the embedded sparse image data has been input to the image signal processor; extracting the split data from the embedded sparse image data obtained from the image signal processor to generate the compressed data; expanding the compressed data generated from the split data to reconstruct the residual data; and reconstructing the dense image data based on the residual data reconstructed from the compressed data.
- the method may further include: obtaining a generated image data during processing to generate the target image data based on the sparse image data from the image signal processor; and combining the generated image data and the dense image data reconstructed from the residual data to generate a combined image data.
- the method may further include inputting the combined image data to the image signal processor.
- the first color may be green, the second color may be red and the third color may be blue.
- the sparse image data may be in conformity to a Bayer format.
- In accordance with the present disclosure, an electrical device may include:
- a camera assembly configured to capture an image of an object and to generate a sparse image data and a dense image data; and
- a main processor configured to: obtain the sparse image data and the dense image data, wherein the sparse image data includes a plurality of pixels of at least first color pixels, second color pixels and third color pixels, and the dense image data includes a plurality of pixels of the first color pixels; generate a residual data based on the dense image data by calculating differentials between each two adjacent pixels in the dense image data; generate a compressed data by compressing the residual data to reduce its data amount; generate a split data by splitting each of the pixels of the compressed data into a first data part and a second data part; and generate an embedded sparse image data by embedding the split data into the sparse image data.
- the main processor may be further configured to input the embedded sparse image data to an image signal processor which processes the sparse image data in the embedded sparse image data to generate the target image data.
- the each two adjacent pixels in the dense image data may constitute a first pair, and the first pair includes a first value of the first color pixel and a second value of the first color pixel.
- the each two adjacent pixels in the sparse image data may constitute a second pair, and the second pair may include a third value of the first color pixel and a fourth value of the second color pixel or may include the third value of the first color pixel and the fourth value of the third color pixel.
- the first value of the first color pixel in the first pair of the dense image data may correspond to the third value of the first color pixel in the second pair which is located at a position corresponding to the first pair of the dense image data.
- the residual data may be generated by subtracting the second value of the first color pixel from the first value of the first color pixel in the first pair.
- the compressed data may be generated by reducing a number of bits of the residual data.
- the number of bits of the residual data may be reduced by converting the residual data to the compressed data based on a compression curve which defines a relationship between a value of the pixel of the residual data and a value of the pixel of the compressed data.
- each of the first color pixels, each of the second color pixels and each of the third color pixels in the sparse image data may have a spare space in which the sparse image data is not stored.
- when the split data is generated, sizes of the first data part and the second data part of the split data may be matched with sizes of the spare space in each of the first color pixels, each of the second color pixels and each of the third color pixels in the sparse image data.
- the embedded sparse image data may be generated by embedding the first data part and the second data part of the first pair into the spare space of the second pair of the sparse image data.
- the main processor may be further configured to: obtain the embedded sparse image data from the image signal processor after the embedded sparse image data has been input to the image signal processor; extract the split data from the embedded sparse image data obtained from the image signal processor to generate the compressed data; expand the compressed data generated from the split data to reconstruct the residual data; and reconstruct the dense image data based on the residual data reconstructed from the compressed data.
- the main processor may be further configured to: obtain a generated image data during processing to generate the target image data based on the sparse image data from the image signal processor; and combine the generated image data and the dense image data reconstructed from the residual data to generate a combined image data.
- the main processor may be further configured to input the combined image data to the image signal processor.
- the first color may be green, the second color may be red and the third color may be blue.
- the sparse image data may be in conformity to a Bayer format.
- In accordance with the present disclosure, a non-transitory computer readable medium may include program instructions stored thereon for performing at least the following: obtaining a sparse image data and a dense image data, wherein the sparse image data includes a plurality of pixels of at least first color pixels, second color pixels and third color pixels, and the dense image data includes a plurality of pixels of the first color pixels; generating a residual data based on the dense image data by calculating differentials between each two adjacent pixels in the dense image data; generating a compressed data by compressing the residual data to reduce its data amount; generating a split data by splitting each of the pixels of the compressed data into a first data part and a second data part; and generating an embedded sparse image data by embedding the split data into the sparse image data to generate a target image data.
- FIG. 1 illustrates a plan view of a first side of an electrical device according to an embodiment of the present disclosure
- FIG. 2 illustrates a plan view of a second side of the electrical device according to the embodiment of the present disclosure
- FIG. 3 illustrates a block diagram of the electrical device according to the embodiment of the present disclosure
- FIG. 4 illustrates a flowchart of a target image generation process performed by the electrical device according to the embodiment of the present disclosure (part 1);
- FIG. 5 illustrates a flowchart of the target image generation process performed by the electrical device according to the embodiment of the present disclosure (part 2);
- FIG. 6 illustrates a schematic drawing to explain a mechanism to generate an embedded sparse image data to be input to an image signal processor in the embodiment of the present disclosure
- FIG. 7 illustrates a schematic drawing to explain how to generate a residual data and a compressed data in the embodiment of the present disclosure
- FIG. 8 illustrates one of the examples of a compression curve to compress the residual data to generate the compressed data in the embodiment of the present disclosure
- FIG. 9 illustrates a schematic drawing to explain how to generate a split data from the compressed data in the embodiment of the present disclosure
- FIG. 10 illustrates a schematic drawing to explain a mechanism to generate a target image data in the embodiment of the present disclosure
- FIG. 11 illustrates a schematic drawing to explain how to reconstruct the residual data and values of pixels of the dense image data in the embodiment of the present disclosure.
- FIG. 12 illustrates one example of a generated image data based on a sparse image data, the dense image data reconstructed based on the embedded sparse image data obtained from the image signal processor, and a combined image data which is generated by combining the generated image data and the dense image data in the embodiment of the present disclosure.
- Embodiments of the present disclosure will be described in detail and examples of the embodiments will be illustrated in the accompanying drawings. The same or similar elements and the elements having same or similar functions are denoted by like reference numerals throughout the descriptions. The embodiments described herein with reference to the drawings are explanatory, which aim to illustrate the present disclosure, but shall not be construed to limit the present disclosure.
- FIG. 1 illustrates a plan view of a first side of an electrical device 10 according to an embodiment of the present disclosure and FIG. 2 illustrates a plan view of a second side of the electrical device 10 according to the embodiment of the present disclosure. The first side may be referred to as a back side of the electrical device 10 whereas the second side may be referred to as a front side of the electrical device 10.
- As shown in FIG. 1 and FIG. 2, the electrical device 10 may include a display 20 and a camera assembly 30. In the present embodiment, the camera assembly 30 includes a first main camera 32, a second main camera 34 and a sub camera 36. The first main camera 32 and the second main camera 34 can capture an image on the first side of the electrical device 10 and the sub camera 36 can capture an image on the second side of the electrical device 10. Therefore, the first main camera 32 and the second main camera 34 are so-called out-cameras whereas the sub camera 36 is a so-called in-camera. As an example, the electrical device 10 can be a mobile phone, a tablet computer, a personal digital assistant, and so on.
- Although the electrical device 10 according to the present embodiment has three cameras, the electrical device 10 may have less than three cameras or more than three cameras. For example, the electrical device 10 may have two, four, five, and so on, cameras.
- FIG. 3 illustrates a block diagram of the electrical device 10 according to the present embodiment.
- As shown in FIG. 3, in addition to the display 20 and the camera assembly 30, the electrical device 10 may include a main processor 40, an image signal processor 42, a memory 44, a power supply circuit 46 and a communication circuit 48.
- The display 20, the camera assembly 30, the main processor 40, the image signal processor 42, the memory 44, the power supply circuit 46 and the communication circuit 48 are connected to each other via a bus 50.
- the main processor 40 executes one or more programs stored in the memory 44.
- the main processor 40 implements various applications and data processing of the electrical device 10 by executing the programs.
- the main processor 40 may be one or more computer processors.
- the main processor 40 is not limited to one CPU core, but it may have a plurality of CPU cores.
- the main processor 40 may be a main CPU of the electrical device 10, an image processing unit (IPU) or a DSP provided with the camera assembly 30.
- the image signal processor 42 controls the camera assembly 30 and processes various kinds of image data captured by the camera assembly 30 to generate a target image data.
- the image signal processor 42 can apply a de-mosaic process, a noise reduction process, an auto exposure process, an auto focus process, an auto white balance process, a high dynamic range process and so on to the image data captured by the camera assembly 30.
- the main processor 40 and the image signal processor 42 collaborate with each other to generate a target image data of the object captured by the camera assembly 30. That is, the main processor 40 and the image signal processor 42 are configured to capture the image of the object by the camera assembly 30 and execute various kinds of image processes to the captured image data.
- the memory 44 stores a program to be executed by the main processor 40 and various kinds of data. For example, data of the captured image are stored in the memory 44.
- the memory 44 may include a high-speed RAM memory, and/or a non-volatile memory such as a flash memory and a magnetic disk memory. That is, the memory 44 may include a non-transitory computer readable medium, in which the program is stored.
- the power supply circuit 46 may have a battery such as a lithium-ion rechargeable battery and a battery management unit (BMU) for managing the battery.
- the communication circuit 48 is configured to receive and transmit data to communicate with base stations of the telecommunication network system, the Internet or other devices via wireless communication.
- the wireless communication may adopt any communication standard or protocol, including but not limited to GSM (Global System for Mobile communication), CDMA (Code Division Multiple Access), LTE (Long Term Evolution), LTE-Advanced and 5th generation (5G).
- the communication circuit 48 may include an antenna and an RF (radio frequency) circuit.
- FIG. 4 and FIG. 5 illustrate a flowchart of a target image generation process performed by the electrical device 10 according to the present embodiment.
- the target image generation process is executed by, for example, the main processor 40 in order to generate the target image data.
- the main processor 40 collaborates with the image signal processor 42 to generate the target image data. Therefore, the main processor 40 and the image signal processor 42 constitute an image processor in the present embodiment.
- program instructions of the target image generation process are stored in the non-transitory computer readable medium of the memory 44.
- the main processor 40 implements the target image generation process illustrated in FIG. 4 and FIG. 5.
- the main processor 40 obtains a sparse image data and a dense image data (Step S10).
- the main processor 40 obtains the sparse image data and the dense image data from the camera assembly 30. That is, the camera assembly captures an image of an object and generates both the sparse image data and the dense image data.
- the sparse image data includes a plurality of pixels which are composed of green pixels, red pixels and blue pixels.
- the dense image data includes a plurality of pixels of green pixels.
- the camera assembly 30 may have a specialized image sensor to capture the image of the object and generate the sparse image data and the dense image data with a single camera by executing a single imaging operation.
- the first main camera 32 may capture the image of the object and generate both the sparse image data and the dense image data by executing the single imaging operation.
- the camera assembly 30 may use two cameras to capture the image of the object and generate the sparse image data and the dense image data by executing a single imaging operation.
- the first main camera 32 captures the image of the object and generates the sparse image data
- the second main camera 34 captures the image of the object and generates the dense image data.
- the camera assembly 30 may capture the image of the object and generate the sparse image data and the dense image data with a single camera by executing two imaging operations.
- the sub camera 36 captures the image of the object by executing a first imaging operation to generate the sparse image data and then the sub camera 36 captures the image of the object by executing a second imaging operation immediately after the first imaging operation, to generate the dense image data.
- FIG. 6 illustrates a schematic drawing to explain a mechanism to generate an embedded sparse image data to be input to the image signal processor 42.
- the sparse image data is in conformity to a Bayer format. Therefore, an arrangement of green, red and blue of a color filter of an image sensor of the camera assembly 30 to capture the image of the object is in conformity to a Bayer arrangement.
- the number of green pixels is twice as many as the number of red pixels or the number of blue pixels in the sparse image data.
- the sparse image data may also be referred to as RAW data from the camera assembly 30.
- the dense image data is composed of the green pixels. This is because the human eye is more sensitive to the brightness of green than to the brightness of red or blue. In the present embodiment, the dense image data is captured to adjust the brightness of the target image data.
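For illustration only, the following sketch shows what the two inputs of Step S10 could look like. The RGGB Bayer phase, the 10-bit sample depth and the array shapes are assumptions; the disclosure only requires a Bayer arrangement in which green pixels occur twice as often as red or blue pixels, plus an all-green dense plane.

```python
import numpy as np

# A minimal sketch of the two inputs (assumed RGGB phase, 10-bit samples).
H, W = 4, 4
rng = np.random.default_rng(0)
scene = rng.integers(0, 1024, size=(H, W, 3), dtype=np.int32)  # hypothetical 10-bit RGB scene

# Sparse image data: one color sample per position, Bayer-mosaiced.
sparse = np.zeros((H, W), dtype=np.int32)
sparse[0::2, 0::2] = scene[0::2, 0::2, 0]  # R
sparse[0::2, 1::2] = scene[0::2, 1::2, 1]  # G
sparse[1::2, 0::2] = scene[1::2, 0::2, 1]  # G
sparse[1::2, 1::2] = scene[1::2, 1::2, 2]  # B

# Dense image data: the green (brightness-sensitive) channel at every position.
dense = scene[:, :, 1]
```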
- the main processor 40 generates a residual data based on the dense image data (Step S12). That is, in the present embodiment, in order to reduce a data amount, the residual data is generated by calculating differences between each two adjacent pixels in the dense image data.
- FIG. 7 illustrates a schematic drawing to explain how to generate the residual data and a compressed data.
- a plurality of green pixels P1 are included in both the sparse image data and the dense image data.
- a plurality of green pixels P2 are included in the dense image data but they are not included in the sparse image data.
- one of the green pixels P1 and one of the green pixels P2 are depicted as an example.
- In general, the brightnesses of each two adjacent pixels are approximately or exactly the same. That is, the difference between a value of the green pixel P1 and a value of the green pixel P2 adjacent to the green pixel P1 is generally small. Therefore, in the present embodiment, in order to reduce the data amount, the difference between the value of the green pixel P1 and the value of the green pixel P2 adjacent to the green pixel P1 is obtained by subtracting the value of the green pixel P2 from the value of the green pixel P1.
- the each two adjacent pixels in the dense image data constitute a first pair to generate the residual data.
- the number of the pixels of the residual data is half of the number of the pixels of the dense image data.
- one pixel of the dense image data is composed of 10 bits. That is, a value of the one pixel of the dense image data is between 0 and 1023.
- one pixel of the residual data is composed of 11 bits, because a value of the one pixel of the residual data is between -1024 and +1023.
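As an illustration of Step S12, a minimal sketch follows. Pairing horizontally adjacent green pixels is an assumption; the disclosure only requires that each two adjacent pixels of the dense image data constitute a first pair.

```python
import numpy as np

def make_residual(dense: np.ndarray) -> np.ndarray:
    """Step S12 sketch: one signed residual per first pair of adjacent pixels.

    dense: 10-bit green plane (values 0..1023) with an even number of columns.
    """
    p1 = dense[:, 0::2].astype(np.int32)  # first value of each first pair
    p2 = dense[:, 1::2].astype(np.int32)  # second value of each first pair
    # P1 - P2 spans -1023..+1023, which fits the 11 signed bits
    # (-1024..+1023) allocated by the description.
    return p1 - p2

dense = np.array([[100, 98, 512, 520]], dtype=np.int32)
print(make_residual(dense))  # [[ 2 -8]]
```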
- the main processor 40 generates a compressed data based on the residual data (Step S14).
- There are various ways to compress the residual data to reduce the number of bits of the residual data. One example of these ways will be explained herein.
- FIG. 8 shows one of the examples of a compression curve to compress the residual data to generate the compressed data. That is, the residual data is converted to the compressed data based on the compression curve.
- the compression curve is also referred to as a tone curve to compress various data and defines a relationship between a value of the pixel of the residual data and a value of the pixel of the compressed data.
- the number of bits for the value of the pixel of the compressed data is smaller than the number of bits for the value of the pixel of the residual data.
- one pixel of 11 bits of the residual data is compressed to one pixel of 8 bits of the compressed data. That is, the value of one pixel of the residual data is between -1024 and +1023 whereas the value of one pixel of the compressed data is between -128 and +127.
- the compression curve is substantially linear in a range in which an absolute value of the pixel of the residual data is small.
- the compression curve is substantially flat or constant in a range in which the absolute value of the pixel of the residual data is large.
- the compression curve is S-shaped.
- the pixel of 11 bits of the residual data can be compressed to the pixel of 8 bits of the compressed data.
- For example, when the value of the pixel of the residual data is 10, the value of the pixel of the compressed data is also 10. Therefore, if the value of the pixel of the compressed data is expanded, the value of the pixel of the residual data can be returned to 10. That is, in the range in which the absolute value of the pixel of the residual data is small, the compressed data can be returned to substantially the same residual data as the original one.
- On the other hand, when the value of the pixel of the residual data is 1023, the value of the pixel of the compressed data is 127, and when the value of the pixel of the residual data is 850, the value of the pixel of the compressed data is 126. That is, in the range in which the absolute value of the pixel of the residual data is large, the compressed data cannot be returned to the same residual data as the original one. In other words, when the original absolute value of the pixel of the residual data is large, the expanded value of the pixel of the residual data based on the compressed data is rough.
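The actual curve is defined only graphically in FIG. 8, so the sketch below substitutes an assumed piecewise-linear stand-in: identity in the near-linear region and coarse quantization in the flat region. It reproduces the behavior described above (10 maps to 10 losslessly, 1023 maps to 127 lossily), but the breakpoints are illustrative, not the patented curve.

```python
def compress_residual(r: int) -> int:
    """Step S14 sketch: map an 11-bit signed residual (-1024..1023) to an
    8-bit signed code (-128..127) with an assumed stand-in for FIG. 8."""
    sign = -1 if r < 0 else 1
    a = abs(r)
    if a < 64:
        code = a                                 # near-linear region: lossless
    else:
        code = 64 + round((a - 64) * 63 / 960)   # flat region: coarsely quantized
    return sign * min(code, 127)

print(compress_residual(10))    # 10  (small residuals survive exactly)
print(compress_residual(1023))  # 127 (large residuals are quantized)
```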
- the main processor 40 generates a split data based on the compressed data (Step S16). That is, since a size of the pixel of the compressed data is too large to embed it into the sparse image data, in the present embodiment, each of the pixels of the compressed data is split into two pieces of data, i.e., a first data part and a second data part.
- FIG. 9 illustrates a schematic drawing to explain how to generate the split data from the compressed data.
- An upper part of FIG. 9 shows a comparative example of the related technology and a lower part of FIG. 9 shows an explanation of the present embodiment.
- the value of the pixel of the compressed data is expressed by 8 bits, and it is split into the first data part of 4 bits and the second data part of 4 bits.
- an available space in the image signal processor 42 is composed of 14 bits for each of the pixels of the sparse image data, but each of the pixels of the sparse image data needs 10 bits. Therefore, 4 bits of the 14 bits are reserved bits and not used in the image signal processor 42. That is, a space of 4 bits of the 14 bits is a spare space in which the sparse image data is not stored.
- In the present embodiment, the value of the pixel of 8 bits of the compressed data is divided into two 4-bit parts as the split data.
- a size of the first data part and a size of the second data part are matched with a size of the spare space of the sparse image data.
- each pixel of 8 bits is divided into the first data part of 4 bits and the second data part of 4 bits.
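A minimal sketch of the nibble split of Step S16 follows. Reinterpreting the signed 8-bit code as an unsigned byte (two's complement) before splitting is an assumption; the disclosure does not specify how the sign is carried.

```python
def split_code(code: int) -> tuple[int, int]:
    """Step S16 sketch: split one 8-bit compressed value into two 4-bit parts."""
    byte = code & 0xFF               # reinterpret signed code as unsigned 0..255
    first_part = (byte >> 4) & 0xF   # upper 4 bits
    second_part = byte & 0xF         # lower 4 bits
    return first_part, second_part

print(split_code(-9))  # (15, 7): 0xF7 split into 0xF and 0x7
```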
- the main processor 40 embeds the split data into the sparse image data to generate the embedded sparse image data (Step S18).
- the split data of 4 bits is embedded into the 4 reserved bits of the sparse image data. More specifically, each of the red pixels, each of the blue pixels and each of the green pixels of the sparse image data has 4 reserved bits which constitute the spare space for the image signal processor 42.
- the sparse image data into which the split data has been embedded is also referred to as the embedded sparse image data.
- In the sparse image data, the red pixel R1 and the green pixel P1 can constitute a second pair, and the blue pixel B1 and the green pixel P1 can also constitute a second pair. That is, each two adjacent pixels include the green pixel P1 as well as the red pixel R1 or the blue pixel B1.
- the green pixel P1 in the second pair corresponds to the green pixel P1 in the first pair which is located at the corresponding position in the dense image data. That is, when the position of the second pair of the sparse image data is identical to the position of the first pair of the dense image data, a value of the green pixel P1 in the second pair of the sparse image data is substantially the same as a value of the green pixel P1 in the first pair of the dense image data.
- the first data part of the split data is embedded into the spare space of 4 bits of the green pixel P1 of the second pair, and the second data part of the split data is embedded into the spare space of 4 bits of the red pixel R1 or the blue pixel B1.
- the first data part and the second data part of the split data are embedded into the each two adjacent red and green pixels R1 and P1 of the second pair or the each two adjacent blue and green pixels B1 and P1 of the second pair.
- all of the first data parts and the second data parts of the split data are embedded into the spare spaces of the sparse image data.
- the first pairs of the dense image data and the second pairs of the sparse image data have a one-to-one correspondence. Therefore, the first data part and the second data part are embedded into the two adjacent pixels of the second pair which corresponds to the first pair from which the first data part and the second data part were originally calculated. That is, the first data part and the second data part are inserted into the second pair corresponding to the position of the original first pair.
- However, the split data may be embedded into the spare space of the sparse image data in any manner, as long as it can be specified where the first data parts and the second data parts of the split data are embedded in the sparse image data.
- In the comparative example shown in the upper part of FIG. 9, the information of the green pixels P2 is discarded when the sparse image data is input to the image signal processor 42. In the present embodiment, by contrast, the data of the green pixels P2 can also be embedded into the sparse image data, and thus the information of the green pixels P2 is not discarded.
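A sketch of the embedding of Step S18 follows. That the spare space occupies the top 4 bits (bits 10 to 13) of a 14-bit pixel container is an assumption; the disclosure only states that 4 of the 14 available bits are reserved and unused by the image signal processor 42.

```python
def embed_pair(green_p1: int, red_or_blue: int,
               first_part: int, second_part: int) -> tuple[int, int]:
    """Step S18 sketch: place the two 4-bit parts into the spare space of the
    second pair (green pixel P1, and red pixel R1 or blue pixel B1).

    Assumed layout: bits 0..9 hold the 10-bit sample, bits 10..13 are spare.
    """
    embedded_green = (green_p1 & 0x3FF) | ((first_part & 0xF) << 10)
    embedded_other = (red_or_blue & 0x3FF) | ((second_part & 0xF) << 10)
    return embedded_green, embedded_other

g, r = embed_pair(513, 700, 0xF, 0x7)
print(g & 0x3FF, (g >> 10) & 0xF)  # 513 15: sample and spare bits coexist
```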
- the main processor 40 inputs the embedded sparse image data to the image signal processor 42 (Step S20). That is, the embedded sparse image data including the sparse image data and the split data is input to the image signal processor 42 to generate a target image data. Thereafter, the image signal processor 42 initiates processing the sparse image data in the embedded sparse image data to obtain the target image data.
- the main processor 40 obtains the embedded sparse image data from the image signal processor 42 (Step S30). That is, the image signal processor 42 has one or more data output ports to output various kinds of data during processing and one or more data input ports to input various kinds of data to the image signal processor 42. Therefore, the main processor 40 obtains the embedded sparse image data via one of the data output ports of the image signal processor 42.
- FIG. 10 illustrates a schematic drawing to explain a mechanism to generate a target image data in the present embodiment.
- the embedded sparse image data can be obtained from the image signal processor 42 and the embedded sparse image data includes the sparse image data and the split data.
- the embedded sparse image data obtained from the image signal processor 42 may not be exactly the same as the embedded sparse image data input to the image signal processor 42. However, as long as the split data is stored in the spare space of the sparse image data, it is acceptable for the target image generation process disclosed herein.
- the main processor 40 extracts the split data from the embedded sparse image data (Step S32).
- As mentioned above, each of the pixels of the sparse image data includes the split data of 4 bits. Therefore, the split data of 4 bits in each of the pixels shown in FIG. 9 is extracted from the embedded sparse image data.
- the main processor 40 joins the first data part and the second data part of the split data together to obtain the compressed data (Step S34).
- As mentioned above, the value of the pixel of 8 bits of the compressed data has been split into the first data part of 4 bits and the second data part of 4 bits. Therefore, the value of the pixel of 8 bits of the compressed data can be reconstructed by joining the first data part and the second data part which have been extracted from the same second pair.
- the compressed data shown in FIG. 9 can be obtained again.
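A sketch of Steps S32 and S34 follows, inverting the splitting and embedding sketches above under the same assumed bit layout (spare space in bits 10 to 13 of each 14-bit pixel).

```python
def extract_code(embedded_green: int, embedded_other: int) -> int:
    """Steps S32/S34 sketch: pull the two 4-bit parts back out of the spare
    bits of a second pair and rejoin them into one signed 8-bit value."""
    first_part = (embedded_green >> 10) & 0xF
    second_part = (embedded_other >> 10) & 0xF
    byte = (first_part << 4) | second_part
    return byte - 256 if byte >= 128 else byte  # undo two's complement

# Round trip with the splitting and embedding sketches above:
print(extract_code(513 | (0xF << 10), 700 | (0x7 << 10)))  # -9
```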
- the main processor 40 expands the compressed data to reconstruct the residual data (Step S36).
- As mentioned above, each pixel of the residual data of 11 bits has been compressed to a pixel of 8 bits of the compressed data. Therefore, the pixel of 11 bits of the residual data can be reconstructed by expanding the compressed data.
- FIG. 11 illustrates a schematic drawing to explain how to reconstruct the residual data and the value of the pixels of the dense image data.
- FIG. 11 shows an opposite procedure to generate the residual data and the compressed data explained with reference to FIG. 7.
- the compressed data can be expanded by using the compression curve shown in FIG. 8 to obtain the residual data again. That is, when generating the compressed data, the compressed data has been converted from the residual data by using the compression curve shown in FIG. 8. Therefore, the residual data can be obtained again by inversely converting the compressed data by using the compression curve shown in FIG. 8.
- For example, when the value of the pixel of the compressed data is 10, the value of the pixel of the reconstructed residual data is 10. When the value of the pixel of the compressed data is 127, the value of the pixel of the residual data is 1023, and when the value of the pixel of the compressed data is 126, the value of the pixel of the residual data is 850.
- In other words, the reproducibility of the value is not so high if the absolute value of the pixel of the residual data is large, whereas the reproducibility is high if the absolute value of the pixel of the residual data is small.
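A sketch of the expansion of Step S36 follows, inverting the stand-in compression curve assumed in the earlier sketch. As there, the real curve of FIG. 8 is given only graphically, so the values outside the linear region are illustrative; large residuals come back "rough" exactly as described above.

```python
def expand_code(code: int) -> int:
    """Step S36 sketch: invert the assumed stand-in compression curve.

    Exact for |code| < 64 (the near-linear region of the S-curve);
    a coarse approximate inverse elsewhere.
    """
    sign = -1 if code < 0 else 1
    c = abs(code)
    if c < 64:
        value = c                                 # lossless region
    else:
        value = 64 + round((c - 64) * 960 / 63)   # approximate inverse
    return sign * min(value, 1023)

print(expand_code(10))   # 10   (small values reproduce exactly)
print(expand_code(127))  # 1023 (clamped top of the 11-bit range)
```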
- the main processor 40 reconstructs the dense image data based on the residual data (Step S38). That is, as shown in FIG. 11, in order to calculate the value of the green pixel P2, the value of the green pixel P1 of the sparse image data is added to the value for the green pixel P2 of the residual data.
- the dense image data includes the plurality of the first pairs, each of which includes the green pixel P1 and the green pixel P2.
- the value of each pixel of the residual data indicates the difference between the value of the green pixel P1 and the value of the green pixel P2 in each first pair.
- the value of the green pixel P1 can be obtained from the sparse image data output from the image signal processor 42, and thus the value of the green pixel P2 can be calculated by adding the value of the residual data for the green pixel P2 to the value of the green pixel P1.
- the first pair of the dense image data can be obtained by merging the value of the green pixel P1 of the sparse image data and the value of the green pixel P2 calculated by adding the residual data to the value of the green pixel P1.
- the dense image data can be regenerated.
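A sketch of the reconstruction and merge of Step S38 follows, reusing the pairing assumed in the residual sketch above. Note the sign convention: with residuals computed as P1 - P2, recovering P2 is a subtraction; the description phrases the same operation as adding the residual to P1, which corresponds to the opposite convention. Only consistency between the two ends matters.

```python
import numpy as np

def reconstruct_dense(sparse_green_p1: np.ndarray,
                      residual: np.ndarray) -> np.ndarray:
    """Step S38 sketch: rebuild the dense green plane from the P1 values in
    the sparse image data plus the reconstructed residuals."""
    p1 = sparse_green_p1.astype(np.int32)
    p2 = np.clip(p1 - residual, 0, 1023)    # second pixel of each first pair
    dense = np.empty((p1.shape[0], p1.shape[1] * 2), dtype=np.int32)
    dense[:, 0::2] = p1                     # merge P1 and P2 back into pairs
    dense[:, 1::2] = p2
    return dense

p1 = np.array([[100, 512]], dtype=np.int32)
r = np.array([[2, -8]], dtype=np.int32)
print(reconstruct_dense(p1, r))  # [[100  98 512 520]]
```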
- the main processor 40 obtains a generated image data based on the sparse image data from one of the data output ports of the image signal processor 42 (Step S40).
- the generated image data during processing based on the sparse image data can be obtained from the image signal processor 42.
- the main processor 40 combines the dense image data reconstructed in Step S38 and the generated image data obtained in Step S40 to generate a combined image data (Step S42).
- FIG. 12 illustrates one example of the generated image data based on the sparse image data and the dense image data reconstructed in Step S38.
- the generated image data has been generated on the basis of the sparse image data in the image signal processor 42.
- In the generated image data, the brightness of the image might be slightly rough, but it is a full colored image.
- In the reconstructed dense image data, on the other hand, the brightness of the image can be fine enough because the color of the dense image data is green, which is a light-sensitive color for the human eye. Therefore, in the present embodiment, the dense image data is combined with the generated image data based on the sparse image data to generate the combined image data.
- the main processor 40 inputs the combined image data to one of the data input ports of the image signal processor 42 (Step S44). Thereafter, the image signal processor 42 continues processing the combined image data, and the target image data is eventually output from the image signal processor 42.
- an image to be displayed on the display 20 may be generated based on the target image data.
- the target image data may be stored in the memory 44.
- Examples of the format of the target image data include JPEG, TIFF, GIF and the like.
- the dense image data can be embedded as the split data into the sparse image data which is input to the image signal processor 42, and then the dense image can be reconstructed based on the split data embedded in the sparse image data.
- the image based on the dense image data can be regenerated and the quality of the target image data can be improved by combining the generated image data based on the sparse image data and the dense image data reconstructed from the split data in the embedded sparse image data.
- Since the format of the embedded sparse image data is the same as the format of the sparse image data, a common image signal processor for the sparse image data can still be used as the image signal processor 42 for the embedded sparse image data. Therefore, it is not necessary to newly develop the image signal processor 42 to process the embedded sparse image data of the present embodiment to generate the target image data.
- In the embodiment described above, the dense image data is generated in green. However, another color may be used to generate the dense image data. For example, yellow may be used to generate the dense image data. In this case, the color filter of the image sensor of the camera assembly 30 is composed of red, yellow and blue (RYB), the sparse image data is composed of red, yellow and blue, and the dense image data is composed of yellow.
- Moreover, the sparse image data may include more than three colors. For example, the sparse image data may include green pixels, red pixels, blue pixels and yellow pixels. That is, the sparse image data may include a plurality of pixels of at least three colors.
- The terms “first” and “second” are used herein for purposes of description and are not intended to indicate or imply relative importance or significance or to imply the number of indicated technical features. Thus, a feature defined with “first” or “second” may comprise one or more of this feature. In the description of the present disclosure, “a plurality of” means two or more than two, unless specified otherwise.
- the terms “mounted”, “connected”, “coupled” and the like are used broadly, and may be, for example, fixed connections, detachable connections, or integral connections; may also be mechanical or electrical connections; may also be direct connections or indirect connections via intervening structures; may also be inner communications of two elements, which can be understood by those skilled in the art according to specific situations.
- a structure in which a first feature is “on” or “below” a second feature may include an embodiment in which the first feature is in direct contact with the second feature, and may also include an embodiment in which the first feature and the second feature are not in direct contact with each other, but are contacted via an additional feature formed therebetween.
- a first feature “on”, “above” or “on top of” a second feature may include an embodiment in which the first feature is right or obliquely “on”, “above” or “on top of” the second feature, or just means that the first feature is at a height higher than that of the second feature; while a first feature “below”, “under” or “on bottom of” a second feature may include an embodiment in which the first feature is right or obliquely “below”, “under” or “on bottom of” the second feature, or just means that the first feature is at a height lower than that of the second feature.
- Any process or method described in a flow chart or described herein in other ways may be understood to include one or more modules, segments or portions of codes of executable instructions for achieving specific logical functions or steps in the process, and the scope of a preferred embodiment of the present disclosure includes other implementations, in which it should be understood by those skilled in the art that functions may be implemented in a sequence other than the sequences shown or discussed, including in a substantially identical sequence or in an opposite sequence.
- The logic and/or steps described in other manners herein or shown in the flow chart, for example, a particular sequence table of executable instructions for realizing the logical function, may be specifically achieved in any computer readable medium to be used by the instruction execution system, device or equipment (such as the system based on computers, the system comprising processors or other systems capable of obtaining the instruction from the instruction execution system, device and equipment and executing the instruction), or to be used in combination with the instruction execution system, device and equipment.
- The computer readable medium may be any device adaptive for including, storing, communicating, propagating or transferring programs to be used by or in combination with the instruction execution system, device or equipment.
- More specific examples of the computer readable medium include, but are not limited to: an electronic connection (an electronic device) with one or more wires, a portable computer enclosure (a magnetic device), a random access memory (RAM), a read only memory (ROM), an erasable programmable read-only memory (EPROM or a flash memory), an optical fiber device and a portable compact disk read-only memory (CDROM).
- The computer readable medium may even be a paper or other appropriate medium capable of printing programs thereon. This is because, for example, the paper or other appropriate medium may be optically scanned and then edited, decrypted or processed with other appropriate methods when necessary to obtain the programs in an electric manner, and then the programs may be stored in the computer memories.
- Each part of the present disclosure may be realized by hardware, software, firmware or their combination.
- A plurality of steps or methods may be realized by the software or firmware stored in the memory and executed by the appropriate instruction execution system.
- The steps or methods may be realized by one or a combination of the following techniques known in the art: a discrete logic circuit having a logic gate circuit for realizing a logic function of a data signal, an application-specific integrated circuit having an appropriate combination logic gate circuit, a programmable gate array (PGA), a field programmable gate array (FPGA), etc.
- Each function cell of the embodiments of the present disclosure may be integrated in a processing module, or these cells may be separate physical existence, or two or more cells are integrated in a processing module.
- The integrated module may be realized in a form of hardware or in a form of software function modules. When the integrated module is realized in a form of software function module and is sold or used as a standalone product, the integrated module may be stored in a computer readable storage medium.
- The storage medium mentioned above may be read-only memories, magnetic disks, CD, etc.
Abstract
A method of generating a target image data includes obtaining a sparse image data and a dense image data, wherein the sparse image data includes a plurality of pixels of at least first color pixels, second color pixels and third color pixels, and the dense image data includes a plurality of pixels of the first color pixels; generating a residual data based on the dense image data by calculating differentials between each two adjacent pixels in the dense image data; generating a compressed data by compressing the residual data to reduce its data amount; generating a split data by splitting each of the pixels of the compressed data into a first data part and a second data part; and generating an embedded sparse image data by embedding the split data into the sparse image data.
Description
The present disclosure relates to a method of generating a target image data, an electrical device implementing such method and a non-transitory computer readable medium including program instructions stored thereon for performing such method.
Electrical devices such as smartphones and tablet terminals are widely used in our daily life. Nowadays, many of the electrical devices are equipped with a camera assembly to capture an image. Some of the electrical devices are portable and are thus easy to carry. Therefore, a user of the electrical device can easily take a picture of an object by using the camera assembly of the electrical device anytime, anywhere.
There are many formats to capture the image of the object and generate the target image data thereof. One of the widely known formats is a Bayer format which includes a sparse image data.
In addition to the sparse image data, in order to improve a quality of the image of the object based on the target image data, a dense image data is also generated when the camera assembly captures the object. In this case, the sparse image data and the dense image data are used to generate the target image data to be displayed on a display or to be stored in a memory of the electrical device. However, a common image signal processor cannot handle such two types of image data.
SUMMARY
The present disclosure aims to solve at least one of the technical problems mentioned above. Accordingly, the present disclosure needs to provide a method of generating a target image data and an electrical device implementing such method.
In accordance with the present disclosure, a method of generating a target image data may include:
obtaining a sparse image data and a dense image data, wherein the sparse image data includes a plurality of pixels of at least first color pixels, second color pixels and third color pixels, and the dense image data includes a plurality of pixels of the first color pixels;
generating a residual data based on the dense image data by calculating differentials between each two adjacent pixels in the dense image data;
generating a compressed data by compressing the residual data to reduce its data amount;
generating a split data by splitting each of the pixels of the compressed data into a first data part and a second data part; and
generating an embedded sparse image data by embedding the split data into the sparse image data.
In some embodiments, the method may further include inputting the embedded sparse image data to an image signal processor which processes the sparse image data in the embedded sparse image data to generate the target image data.
In some embodiments, the each two adjacent pixels in the dense image data may constitute a first pair, and the first pair may include a first value of the first color pixel and a second value of the first pixel.
In some embodiments, the each two adjacent pixels in the sparse image data may constitutes a second pair, and the second pair may include a third value of the first color pixel and a fourth value of the second color pixel or may include the third value of the first color pixel and the fourth value of the third color pixel.
In some embodiments, the first value of the first color pixel in the first pair of the dense image data may correspond to the third value of the first color pixel in the second pair which is located at a position corresponding to the first pair of the dense image data.
In some embodiments, the generating the residual data may include subtracting the second value of the first color pixel from the first value of the first color pixel in the first pair.
In some embodiments, wherein generating compressed data may include reducing a number of bits of the residual data.
In some embodiments, the reducing the number of bits of the residual data may include converting the residual data to the compressed data based on a compression curve which defines a relationship between a value of the pixel of the residual data and a value of the pixel of the compressed data.
In some embodiments, each of the first color pixels, each of the second color pixels and each of the third color pixels in the sparse image data may have a spare space in which the sparse image data is not stored.
In some embodiments, in the generating the split data, sizes of the first data part and the second data part of the split data may be matched with sizes of the spare space in each of the first color pixels, each of the second color pixels and each of the third color pixels in the sparse image data.
In some embodiments, the generating the embedded sparse image data may include embedding the first data part and the second data part of the first pair into the spare space of the second pair of the sparse image data.
In some embodiments, the method may further include:
obtaining the embedded sparse image data from the image signal processor after the embedded sparse image data has been input to the image signal processor;
extracting the split data from the embedded sparse image data obtained from the image signal processor to generate the compressed data;
expanding the compressed data generated from the split data to reconstruct the residual data; and
reconstructing the dense image data based on the residual data reconstructed from the compressed data.
In some embodiments, the method may further include:
obtaining a generated image data during processing to generate the target image data based on the sparse image data from the image signal processor; and
combining the generated image data and the dense image data reconstructed from the residual data to generate a combined image data.
In some embodiments, the method may further include inputting the combined image data to the image signal processor.
In some embodiments, the first color may be green, the second color may be red and the third color may be blue.
In some embodiments, the sparse image data may be in conformity to a Bayer format.
In accordance with the present disclosure, an electrical device may include:
a camera assembly configured to capture an image of an object and to generate a sparse image data and a dense image data; and
a main processor configured to:
obtain the sparse image data and the dense image data, wherein the sparse image data includes a plurality of pixels of at least first color pixels, second color pixels and third color pixels, and the dense image data includes a plurality of pixels of the first color pixels;
generate a residual data based on the dense image data by calculating differentials between each two adjacent pixels in the dense image data;
generate a compressed data by compressing the residual data to reduce its data amount;
generate a split data by splitting each of the pixels of the compressed data into a first data part and a second data part; and
generate an embedded sparse image data by embedding the split data into the sparse image data.
In some embodiments, the main processor may be further configured to input the embedded sparse image data to an image signal processor which processes the sparse image data in the embedded sparse image data to generate the target image data.
In some embodiments, the each two adjacent pixels in the dense image data may constitute a first pair, and the first pair includes a first value of the first color pixel and a second value of the first pixel.
In some embodiments, the each two adjacent pixels in the sparse image data may constitute a second pair, and the second pair may include a third value of the first color pixel and a fourth value of the second color pixel or may include the third value of the first color pixel and the fourth value of the third color pixel.
In some embodiments, the first value of the first color pixel in the first pair of the dense image data may correspond to the third value of the first color pixel in the second pair which is located at a position corresponding to the first pair of the dense image data.
In some embodiments, the residual data may be generated by subtracting the second value of the first color pixel from the first value of the first color pixel in the first pair.
In some embodiments, the compressed data may be generated by reducing a number of bits of the residual data.
In some embodiment, the number of bits of the residual data may be reduced by converting the residual data to the compressed data based on a compression curve which defines a relationship between a value of the pixel of the residual data and a value of the pixel of the compressed data.
In some embodiments, each of the first color pixels, each of the second color pixels and each of the third color pixels in the sparse image data may have a spare space in which the sparse image data is not stored.
In some embodiments, when the split data is generated, sizes of the first data part and the second data part of the split data may be matched with sizes of the spare space in each of the first color pixels, each of the second color pixels and each of the third color pixels in the sparse image data.
In some embodiments, the embedded sparse image data may be generated by embedding the first data part and the second data part of the first pair into the spare space of the second pair of the sparse image data.
In some embodiment, the main processor may be further configured to:
obtain the embedded sparse image data from the image signal processor after the embedded sparse image data has been input to the image signal processor;
extract the split data from the embedded sparse image data obtained from the image signal processor to generate the compressed data;
expand the compressed data generated from the split data to reconstruct the residual data; and
reconstruct the dense image data based on the residual data reconstructed from the compressed data.
In some embodiments, the main processor may be further configured to:
obtain a generated image data during processing to generate the target image data based on the sparse image data from the image signal processor; and
combine the generated image data and the dense image data reconstructed from the residual data to generate a combined image data.
In some embodiments, the main processor may be further configured to input the combined image data to the image signal processor.
In some embodiments, the first color may be green, the second color may be red and the third color may be blue.
In some embodiments, the sparse image data may be in conformity to a Bayer format.
In accordance with the present disclosure, a non-transitory computer readable medium may include program instructions stored thereon for performing at least the following:
obtaining a sparse image data and a dense image data, wherein the sparse image data includes a plurality of pixels of at least first color pixels, second color pixels and third color pixels, and the dense image data includes a plurality of pixels of the first color pixels;
generating a residual data based on the dense image data by calculating differentials between each two adjacent pixels in the dense image data;
generating a compressed data by compressing the residual data to reduce its data amount;
generating a split data by splitting each of the pixels of the compressed data into a first data part and a second data part; and
generating an embedded sparse image data by embedding the split data into the sparse image data to generate a target image data.
These and/or other aspects and advantages of embodiments of the present disclosure will become apparent and more readily appreciated from the following descriptions made with reference to the drawings, in which:
FIG. 1 illustrates a plan view of a first side of an electrical device according to an embodiment of the present disclosure;
FIG. 2 illustrates a plan view of a second side of the electrical device according to the embodiment of the present disclosure;
FIG. 3 illustrates a block diagram of the electrical device according to the embodiment of the present disclosure;
FIG. 4 illustrates a flowchart of a target image generation process performed by the electrical device according to the embodiment of the present disclosure (part 1) ;
FIG. 5 illustrates a flowchart of the target image generation process performed by the electrical device according to the embodiment of the present disclosure (part 2) ;
FIG. 6 illustrates a schematic drawing to explain a mechanism to generate an embedded sparse image data to be input to an image signal processor in the embodiment of the present disclosure;
FIG. 7 illustrates a schematic drawing to explain how to generate a residual data and a compressed data in the embodiment of the present disclosure;
FIG. 8 illustrates one of the examples of a compression curve to compress the residual data to generate the compressed data in the embodiment of the present disclosure;
FIG. 9 illustrates a schematic drawing to explain how to generate a split data from the compressed data in the embodiment of the present disclosure;
FIG. 10 illustrates a schematic drawing to explain a mechanism to generate a target image data in the embodiment of the present disclosure;
FIG. 11 illustrates a schematic drawing to explain how to reconstruct the residual data and values of pixels of the dense image data in the embodiment of the present disclosure; and
FIG. 12 illustrates one example of a generated image data based on a sparse image data, the dense image data reconstructed based on the embedded sparse image data obtained from the image signal processor, and a combined image data which is generated by combining the generated image data and the dense image data in the embodiment of the present disclosure.
Embodiments of the present disclosure will be described in detail and examples of the embodiments will be illustrated in the accompanying drawings. The same or similar elements and the elements having same or similar functions are denoted by like reference numerals throughout the descriptions. The embodiments described herein with reference to the drawings are explanatory, which aim to illustrate the present disclosure, but shall not be construed to limit the present disclosure.
FIG. 1 illustrates a plan view of a first side of an electrical device 10 according to an embodiment of the present disclosure and FIG. 2 illustrates a plan view of a second side of the electrical device 10 according to the embodiment of the present disclosure. The first side may be referred to as a back side of the electrical device 10 whereas the second side may be referred to as a front side of the electrical device 10.
As shown in FIG. 1 and FIG. 2, the electrical device 10 may include a display 20 and a camera assembly 30. In the present embodiment, the camera assembly 30 includes a first main camera 32, a second main camera 34 and a sub camera 36. The first main camera 32 and the second main camera 34 can capture an image on the first side of the electrical device 10, and the sub camera 36 can capture an image on the second side of the electrical device 10. Therefore, the first main camera 32 and the second main camera 34 are so-called out-cameras whereas the sub camera 36 is a so-called in-camera. As an example, the electrical device 10 can be a mobile phone, a tablet computer, a personal digital assistant, and so on.
Although the electrical device 10 according to the present embodiment has three cameras, the electrical device 10 may have less than three cameras or more than three cameras. For example, the electrical device 10 may have two, four, five, and so on, cameras.
FIG. 3 illustrates a block diagram of the electrical device 10 according to the present embodiment. As shown in FIG. 3, in addition to the display 20 and the camera assembly 30, the electrical device 10 may include a main processor 40, an image signal processor 42, a memory 44, a power supply circuit 46 and a communication circuit 48. The display 20, the camera assembly 30, the main processor 40, the image signal processor 42, the memory 44, the power supply circuit 46 and the communication circuit 48 are connected to each other via a bus 50.
The main processor 40 executes one or more programs stored in the memory 44. The main processor 40 implements various applications and data processing of the electrical device 10 by executing the programs. The main processor 40 may be one or more computer processors. The main processor 40 is not limited to one CPU core, but it may have a plurality of CPU cores. The main processor 40 may be a main CPU of the electrical device 10, an image processing unit (IPU) or a DSP provided with the camera assembly 30.
The image signal processor 42 controls the camera assembly 30 and processes various kinds of image data captured by the camera assembly 30 to generate a target image data. For example, the image signal processor 42 can apply a de-mosaic process, a noise reduction process, an auto exposure process, an auto focus process, an auto white balance process, a high dynamic range process and so on to the image data captured by the camera assembly 30.
In the present embodiment, the main processor 40 and the image signal processor 42 collaborate with each other to generate a target image data of the object captured by the camera assembly 30. That is, the main processor 40 and the image signal processor 42 are configured to capture the image of the object by the camera assembly 30 and execute various kinds of image processes on the captured image data.
The memory 44 stores a program to be executed by the main processor 40 and various kinds of data. For example, data of the captured image are stored in the memory 44.
The memory 44 may include a high-speed RAM memory, and/or a non-volatile memory such as a flash memory and a magnetic disk memory. That is, the memory 44 may include a non-transitory computer readable medium, in which the program is stored.
The power supply circuit 46 may have a battery such as a lithium-ion rechargeable battery and a battery management unit (BMU) for managing the battery.
The communication circuit 48 is configured to receive and transmit data to communicate with base stations of the telecommunication network system, the Internet or other devices via wireless communication. The wireless communication may adopt any communication standard or protocol, including but not limited to GSM (Global System for Mobile communication), CDMA (Code Division Multiple Access), LTE (Long Term Evolution), LTE-Advanced and 5th generation (5G). The communication circuit 48 may include an antenna and an RF (radio frequency) circuit.
FIG. 4 and FIG. 5 illustrate a flowchart of a target image generation process performed by the electrical device 10 according to the present embodiment. In the present embodiment, the target image generation process is executed by, for example, the main processor 40 in order to generate the target image data. However, the main processor 40 collaborates with the image signal processor 42 to generate the target image data. Therefore, the main processor 40 and the image signal processor 42 constitute an image processor in the present embodiment.
In addition, in the present embodiment, program instructions of the target image generation process are stored in the non-transitory computer readable medium of the memory 44. When the program instructions are read out from the memory 44 and executed in the main processor 40, the main processor 40 implements the target image generation process illustrated in FIG. 4 and FIG. 5.
As shown in FIG. 4, for example, the main processor 40 obtains a sparse image data and a dense image data (Step S10). In the present embodiment, the main processor 40 obtains the sparse image data and the dense image data from the camera assembly 30. That is, the camera assembly 30 captures an image of an object and generates both the sparse image data and the dense image data. In the present embodiment, the sparse image data includes a plurality of pixels which are composed of green pixels, red pixels and blue pixels, whereas the dense image data includes a plurality of green pixels.
In order to generate the sparse image data and the dense image data with the camera assembly 30, the camera assembly 30 may have a specialized image sensor to capture the image of the object and generate the sparse image data and the dense image data with a single camera by executing a single imaging operation. In this case, for example, the first main camera 32 may capture the image of the object and generate both the sparse image data and the dense image data by executing the single imaging operation.
Alternatively, the camera assembly 30 may use two cameras to capture the image of the object and generate the sparse image data and the dense image data by executing a single imaging operation. In this case, for example, the first main camera 32 captures the image of the object and generates the sparse image data whereas the second main camera 34 captures the image of the object and generates the dense image data.
As yet another alternative, the camera assembly 30 may capture the image of the object and generate the sparse image data and the dense image data with a single camera by executing two imaging operations. For example, the sub camera 36 captures the image of the object by executing a first imaging operation to generate the sparse image data, and then the sub camera 36 captures the image of the object by executing a second imaging operation immediately after the first imaging operation to generate the dense image data.
FIG. 6 illustrates a schematic drawing to explain a mechanism to generate an embedded sparse image data to be input to the image signal processor 42.
As shown in FIG. 6, the sparse image data is in conformity to a Bayer format. Therefore, the arrangement of green, red and blue in the color filter of the image sensor of the camera assembly 30 which captures the image of the object is in conformity to a Bayer arrangement. In the Bayer format, the number of green pixels in the sparse image data is twice the number of red pixels or blue pixels. The sparse image data may also be referred to as RAW data from the camera assembly 30.
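To make the pixel arrangement concrete, the following minimal sketch shows one 2×2 Bayer tile. The phase (which corner holds red) is an assumption; the disclosure only requires the Bayer arrangement with twice as many green pixels as red or blue pixels.

```python
# One 2x2 Bayer tile, repeated across the whole sensor. Each tile holds
# two green samples and one red and one blue sample, so green pixels are
# twice as numerous as red or blue pixels (the ratio stated above).
BAYER_TILE = [["G", "R"],
              ["B", "G"]]
```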
The dense image data is composed of green pixels. This is because the human eye is more sensitive to the brightness of green than to the brightness of red or blue. In the present embodiment, the dense image data is captured to adjust the brightness of the target image data.
Next, as shown in FIG. 4, for example, the main processor 40 generates a residual data based on the dense image data (Step S12) . That is, in the present embodiment, in order to reduce a data amount, the residual data is generated by calculating differences between each two adjacent pixels in the dense image data.
FIG. 7 illustrates a schematic drawing to explain how to generate the residual data and a compressed data. As shown in FIG. 6 and FIG. 7, a plurality of green pixels P1 are included in both the sparse image data and the dense image data. On the other hand, a plurality of green pixels P2 are included in the dense image data but they are not included in the sparse image data. In FIG. 7, one of the green pixels P1 and one of the green pixels P2 are depicted as an example.
In general, the brightness values of each two adjacent pixels are approximately or exactly equal. That is, the difference between a value of the green pixel P1 and a value of the green pixel P2 adjacent to the green pixel P1 is generally small. Therefore, in the present embodiment, in order to reduce the data amount, the residual is obtained by subtracting the value of the green pixel P1 from the value of the adjacent green pixel P2, so that the value of the green pixel P2 can later be reconstructed by simply adding the residual to the value of the green pixel P1.
In other words, each two adjacent pixels in the dense image data constitute a first pair used to generate the residual data. In the example of the sparse image data in FIG. 6, there are 12 (3×4) first pairs for which the residual data is calculated. After the residual data is generated, the number of pixels of the residual data is half of the number of pixels of the dense image data.
For example, in the present embodiment, one pixel of the dense image data is composed of 10 bits. That is, a value of one pixel of the dense image data is between 0 and 1023. On the other hand, one pixel of the residual data is composed of 11 bits, because a value of one pixel of the residual data is between -1024 and +1023.
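As a concrete illustration of Step S12, a minimal sketch of the residual generation is given below. It assumes the dense image data is a 2-D array of 10-bit green samples in which P1 and P2 pixels alternate along each row (the disclosure fixes the pairing only through FIG. 6 and FIG. 7), and it uses the sign convention residual = P2 - P1 so that reconstruction is the simple addition described later.

```python
import numpy as np

def make_residual(dense_green: np.ndarray) -> np.ndarray:
    """Compute one residual per first pair of adjacent green pixels.

    dense_green: H x W array of 10-bit samples (values 0..1023) where
    columns alternate P1, P2, P1, P2, ... (assumed layout).
    Returns an H x (W // 2) array of 11-bit signed residuals.
    """
    p1 = dense_green[:, 0::2].astype(np.int16)  # pixels shared with the sparse data
    p2 = dense_green[:, 1::2].astype(np.int16)  # pixels present only in the dense data
    return p2 - p1                              # values in -1023..+1023, fit in 11 bits
```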
Next, as shown in FIG. 4, for example, the main processor 40 generates a compressed data based on the residual data (Step S14). There are various ways to compress the residual data to reduce its number of bits; one example is explained herein.
FIG. 8 shows one example of a compression curve used to compress the residual data to generate the compressed data. That is, the residual data is converted to the compressed data based on the compression curve. The compression curve, also referred to as a tone curve, defines a relationship between a value of the pixel of the residual data and a value of the pixel of the compressed data. The number of bits for the value of the pixel of the compressed data is smaller than the number of bits for the value of the pixel of the residual data.
As shown in FIG. 7 and FIG. 8, in the present embodiment, one pixel of 11 bits of the residual data is compressed to one pixel of 8 bits of the compressed data. That is, the value of one pixel of the residual data is between -1024 and +1023 whereas the value of one pixel of the compressed data is between -128 and +127.
As mentioned above, since the difference between the values of the each two adjacent pixels of the dense image data is generally small, the compression curve is substantially linear in a range in which an absolute value of the pixel of the residual data is small. On the other hand, the compression curve is substantially flat or constant in a range in which the absolute value of the pixel of the residual data is large. As a result, the compression curve is S-shaped.
By compressing the residual data based on the compression curve shown in FIG. 8, the pixel of 11 bits of the residual data can be compressed to the pixel of 8 bits of the compressed data. For example, if the value of the pixel of the residual data is 10, the value of the pixel of the compressed data is also 10. Therefore, when the value of the pixel of the compressed data is expanded, the value of the pixel of the residual data can be returned to 10. That is, in the range in which the absolute value of the pixel of the residual data is small, the compressed data can be returned to substantially the same residual data as the original one.
On the other hand, if the value of the pixel of the residual data is 1023, the value of the pixel of the compressed data is 127, and if the value of the pixel of the residual data is 850, the value of the pixel of the compressed data is 126. That is, in the range in which the absolute value of the pixel of the residual data is large, the compressed data cannot be returned to exactly the same residual data as the original one. In other words, when the original absolute value of the pixel of the residual data is large, the value of the pixel of the residual data expanded from the compressed data is coarse. However, since large absolute values of the residual data are far less likely than small ones, a low reproducibility in this range is acceptable.
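The disclosure specifies the compression curve only graphically (FIG. 8). The following is a minimal sketch of one S-shaped curve with the stated properties: an identity region near zero and a logarithmic knee for large magnitudes, mapping 11-bit signed residuals to 8-bit signed codes. The threshold T = 96 and the logarithmic form are assumptions; this particular curve happens to reproduce the worked values compress(10) = 10, compress(1023) = 127 and compress(850) = 126, although its expanded value for the code 126 is close to, but not exactly, the 850 quoted in the text.

```python
import math

T = 96         # end of the identity (fully reversible) region -- an assumption
R_MAX = 1023   # largest residual magnitude (11-bit signed range)
C_MAX = 127    # largest compressed magnitude (8-bit signed range)

def compress(r: int) -> int:
    """Map an 11-bit residual (-1024..+1023) to an 8-bit code (-127..+127)."""
    s = -1 if r < 0 else 1
    a = min(abs(r), R_MAX)
    if a <= T:
        return s * a  # linear region of the S-curve: lossless
    # logarithmic knee: the remaining 31 codes cover magnitudes T+1..R_MAX
    k = (C_MAX - T) * math.log1p(a - T) / math.log1p(R_MAX - T)
    return s * (T + round(k))

def expand(c: int) -> int:
    """Inverse of compress(); exact below T, approximate above it."""
    s = -1 if c < 0 else 1
    a = abs(c)
    if a <= T:
        return s * a
    k = math.expm1((a - T) * math.log1p(R_MAX - T) / (C_MAX - T))
    return s * (T + round(k))
```

With this sketch, expand(compress(r)) equals r for every absolute value of r up to 96, while larger magnitudes round-trip only approximately, which matches the behavior described above.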
Next, as shown in FIG. 4, for example, the main processor 40 generates a split data based on the compressed data (Step S16). That is, since a pixel of the compressed data is too large to embed into the sparse image data as a whole, in the present embodiment, each of the pixels of the compressed data is split into two pieces of data, i.e., a first data part and a second data part.
FIG. 9 illustrates a schematic drawing to explain how to generate the split data from the compressed data. An upper part of FIG. 9 shows a comparative example of the related technology and a lower part of FIG. 9 shows an explanation of the present embodiment.
As shown in FIG. 9, the value of the pixel of the compressed data is expressed by 8 bits, and it is split into the first data part of 4 bits and the second data part of 4 bits.
In the present embodiment, an available space in the image signal processor 42 is composed of 14 bits for each of the pixels of the sparse image data, but each of the pixels of the sparse image data needs 10 bits. Therefore, 4 bits of the 14 bits are reserved bits and not used in the image signal processor 42. That is, a space of 4 bits of the 14 bits is a spare space in which the sparse image data is not stored.
Therefore, in the present embodiment, in order to insert the 8-bit value of each pixel of the compressed data into the 4 reserved bits of the sparse image data, the 8-bit value is divided into two 4-bit parts as the split data. As a result, the size of the first data part and the size of the second data part are matched with the size of the spare space of the sparse image data. In the present embodiment, each pixel of 8 bits is divided into the first data part of 4 bits and the second data part of 4 bits.
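A minimal sketch of this split, together with the inverse join used later in Step S34, follows; it assumes the signed 8-bit code is carried in two's-complement form, which the disclosure does not state explicitly.

```python
def split_byte(c: int) -> tuple[int, int]:
    """Split a signed 8-bit code into a high and a low 4-bit data part."""
    u = c & 0xFF                       # two's-complement view of the code
    return (u >> 4) & 0xF, u & 0xF     # (first data part, second data part)

def join_nibbles(hi: int, lo: int) -> int:
    """Rejoin the two 4-bit data parts and restore the signed 8-bit code."""
    u = ((hi & 0xF) << 4) | (lo & 0xF)
    return u - 256 if u >= 128 else u  # undo the two's-complement view
```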
Next, as shown in FIG. 4, for example, the main processor 40 embeds the split data into the sparse image data to generate the embedded sparse image data (Step S18). As shown in FIG. 9 and FIG. 6, each 4-bit part of the split data is embedded into the 4 reserved bits of the sparse image data. More specifically, each of the red pixels, each of the blue pixels and each of the green pixels of the sparse image data has 4 reserved bits which constitute the spare space for the image signal processor 42. Hereinafter, the sparse image data into which the split data has been embedded is also referred to as the embedded sparse image data.
In the sparse image data of the Bayer format, the red pixel R1 and the green pixel P1 can constitute a second pair, and the blue pixel B1 and the green pixel P1 can also constitute a second pair. In the Bayer format, each two adjacent pixels include the green pixel P1 together with either the red pixel R1 or the blue pixel B1.
The green pixel P1 in the second pair corresponds to the green pixel P1 in the first pair of the dense image data located at the corresponding position. That is, when the position of the second pair in the sparse image data is identical to the position of the first pair in the dense image data, the value of the green pixel P1 in the second pair of the sparse image data is substantially the same as the value of the green pixel P1 in the first pair of the dense image data.
In the present embodiment, the first data part of the split data is embedded into the spare space of 4 bits of the green pixel P1 of the second pair, and the second data part of the split data is embedded into the spare space of 4 bits of the red pixel R1 or the blue pixel B1.
That is, the first data part and the second data part of the split data are embedded into the two adjacent red and green pixels R1 and P1 of the second pair, or into the two adjacent blue and green pixels B1 and P1 of the second pair. In the present embodiment, all of the first data parts and the second data parts of the split data are embedded into the spare spaces of the sparse image data.
In the present embodiment, the first pairs of the dense image data and the second pairs of the sparse image data have a one-to-one correspondence. Therefore, the first data part and the second data part are embedded into the two adjacent pixels of the second pair which corresponds to the first pair from which they were originally calculated. That is, the first data part and the second data part are inserted into the second pair at the position corresponding to the original first pair.
However, the split data may be embedded into the spare space of the sparse image data in any manner, as long as it can be determined where the first data parts and the second data parts of the split data are embedded in the sparse image data.
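A minimal sketch of the embedding of Step S18 is given below. It assumes each 14-bit pixel word carries its 10-bit sample in the low bits, leaving bits 10-13 as the spare space; the disclosure only states that 4 of the 14 bits are reserved, so this particular bit layout is an assumption.

```python
def embed_pair(p1: int, rb: int, hi: int, lo: int) -> tuple[int, int]:
    """Embed one split 8-bit code into the spare bits of a second pair.

    p1: 14-bit word of the green pixel P1; rb: 14-bit word of the adjacent
    red pixel R1 or blue pixel B1; hi/lo: the two 4-bit data parts.
    """
    p1_out = (p1 & 0x03FF) | (hi << 10)  # first data part -> green pixel P1
    rb_out = (rb & 0x03FF) | (lo << 10)  # second data part -> red or blue pixel
    return p1_out, rb_out
```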
In the comparative example shown in the upper part of FIG. 9, the information of the green pixels P2 is discarded when the sparse image data is input to the image signal processor 42. In the present embodiment, however, the data of the green pixels P2 is embedded into the sparse image data, and thus the information of the green pixels P2 is not discarded.
Next, as shown in FIG. 4, for example, the main processor 40 inputs the embedded sparse image data to the image signal processor 42 (Step S20) . That is, the embedded sparse image data including the sparse image data and the split data is input to the image signal processor 42 to generate a target image data. Thereafter, the image signal processor 42 initiates processing the sparse image data in the embedded sparse image data to obtain the target image data.
Next, as shown in FIG. 5, for example, the main processor 40 obtains the embedded sparse image data from the image signal processor 42 (Step S30) . That is, the image signal processor 42 has one or more data output ports to output various kinds of data during processing and one or more data input ports to input various kinds of data to the image signal processor 42. Therefore, the main processor 40 obtains the embedded sparse image data via one of the data output ports of the image signal processor 42.
FIG. 10 illustrates a schematic drawing to explain a mechanism to generate a target image data in the present embodiment. As shown in FIG. 10, the embedded sparse image data can be obtained from the image signal processor 42 and the embedded sparse image data includes the sparse image data and the split data.
Incidentally, because the embedded sparse image data is obtained while it is being processed, the embedded sparse image data obtained from the image signal processor 42 may not be identical to the embedded sparse image data input to the image signal processor 42. However, since the split data is stored in the spare space of the sparse image data, which is not used by the image signal processor 42, this is acceptable for the target image generation process disclosed herein.
Next, as shown in FIG. 5, for example, the main processor 40 extracts the split data from the embedded sparse image data (Step S32). In the present embodiment, each of the pixels of the sparse image data includes a 4-bit part of the split data. Therefore, the 4-bit parts shown in FIG. 9 are extracted from each of the pixels of the embedded sparse image data.
Next, as shown in FIG. 5, for example, the main processor 40 joins the first data part and the second data part of the split data together to obtain the compressed data (Step S34). As mentioned above, when generating the split data, the 8-bit value of each pixel of the compressed data has been split into the first data part of 4 bits and the second data part of 4 bits. Therefore, the 8-bit value can be reconstructed by joining the first data part and the second data part which have been extracted from the same second pair. By joining the first data part and the second data part for each pixel of the split data, the compressed data shown in FIG. 9 can be obtained again.
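Under the same assumed bit layout as the embedding sketch above, and reusing the join_nibbles helper defined earlier, Steps S32 and S34 reduce to reading back the spare bits and rejoining the two parts:

```python
def extract_pair(p1: int, rb: int) -> int:
    """Recover the signed 8-bit code from the two 14-bit pixel words."""
    hi = (p1 >> 10) & 0xF  # first data part from the green pixel P1
    lo = (rb >> 10) & 0xF  # second data part from the red or blue pixel
    return join_nibbles(hi, lo)
```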
Next, as shown in FIG. 5, for example, the main processor 40 expands the compressed data to reconstruct the residual data (Step S36). As mentioned above, each 11-bit pixel of the residual data has been compressed to an 8-bit pixel of the compressed data. Therefore, the 11-bit pixel of the residual data can be reconstructed by expanding the compressed data.
FIG. 11 illustrates a schematic drawing to explain how to reconstruct the residual data and the value of the pixels of the dense image data. FIG. 11 shows an opposite procedure to generate the residual data and the compressed data explained with reference to FIG. 7.
As shown in FIG. 11, the compressed data can be expanded by using the compression curve shown in FIG. 8 to obtain the residual data again. That is, when generating the compressed data, the compressed data has been converted from the residual data by using the compression curve shown in FIG. 8. Therefore, the residual data can be obtained again by inversely converting the compressed data by using the compression curve shown in FIG. 8.
For example, if the value of the pixel of the compressed data is 10, the value of the pixel of the residual data is 10. Furthermore, if the value of the pixel of the compressed data is 127, the value of the pixel of the residual data is 1023, and if the value of the pixel of the compressed data is 126, the value of the pixel of the residual data is 850.
As mentioned above, since the value of the pixel of the compressed data is expressed by 8 bits whereas the value of the pixel of the residual data is expressed by 11 bits, the reproducibility of the value is not so high when the absolute value of the pixel of the residual data is large. However, the reproducibility of the value is high when the absolute value of the pixel of the residual data is small.
Next, as shown in FIG. 5, for example, the main processor 40 reconstructs the dense image data based on the residual data (Step S38). That is, as shown in FIG. 11, in order to calculate the value of the green pixel P2, the value of the residual data for the green pixel P2 is added to the value of the green pixel P1 of the sparse image data.
As explained above, the dense image data includes the plurality of first pairs, each of which includes the green pixel P1 and the green pixel P2. In addition, the value of each pixel of the residual data indicates the difference between the value of the green pixel P1 and the value of the green pixel P2 in each first pair. Since the value of the green pixel P1 can be obtained from the sparse image data from the image signal processor 42, the value of the green pixel P2 can be calculated by adding the value of the residual data for the green pixel P2.
Thereafter, the first pair of the dense image data can be obtained by merging the value of the green pixel P1 of the sparse image data and the value of the green pixel P2 calculated by adding the residual data to the value of the green pixel P1. By applying this process to each of the first pairs, the dense image data can be regenerated.
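Putting the sketches above together, the whole pipeline from Step S12 to Step S38 can be checked end to end for one pair; all pixel values here are hypothetical.

```python
# End-to-end round trip for one first pair / second pair.
p1_val, p2_val = 600, 612                        # adjacent 10-bit green samples
hi, lo = split_byte(compress(p2_val - p1_val))   # Steps S12-S16
w_p1, w_rb = embed_pair(p1_val, 0x155, hi, lo)   # Step S18 (0x155: dummy R1/B1 sample)
code = extract_pair(w_p1, w_rb)                  # Steps S32-S34
p2_rebuilt = (w_p1 & 0x03FF) + expand(code)      # Steps S36-S38
assert p2_rebuilt == 612                         # exact, since |residual| = 12 <= T
```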
Next, as shown in FIG. 5, for example, the main processor 40 obtains a generated image data based on the sparse image data from one of the data output ports of the image signal processor 42 (Step S40). As shown in FIG. 10, the generated image data, produced during processing based on the sparse image data, can be obtained from the image signal processor 42.
Next, as shown in FIG. 5, for example, the main processor 40 combines the dense image data reconstructed in Step S38 and the generated image data obtained in Step S40 to generate a combined image data (Step S42).
FIG. 12 illustrates one example of the generated image data based on the sparse image data and the dense image data reconstructed in Step S38. As shown in FIG. 12, the generated image data has been generated on the basis of the sparse image data in the image signal processor 42. As a result, when an image of the generated image data is displayed on the display 20, the brightness of the image might be slightly rough, but it is a full-color image. By contrast, when an image of the dense image data is displayed on the display 20, the brightness of the image is fine enough, because the color of the dense image data is green, to which the human eye is most sensitive. Therefore, in the present embodiment, the dense image data is combined with the generated image data based on the sparse image data to generate the combined image data.
Next, as shown in FIG. 5 and FIG. 10, for example, the main processor 40 inputs the combined image data to one of the data input ports of the image signal processor 42 (Step S44). Thereafter, the image signal processor 42 continues processing the combined image data, and the target image data is eventually output from the image signal processor 42.
For example, an image to be displayed on the display 20 may be generated based on the target image data. Alternatively, the target image data may be stored in the memory 44. The target image data may be stored in a variety of formats, for instance JPEG, TIFF, GIF or the like.
As described above, in accordance with the electrical device 10 according to the present embodiment, the dense image data can be embedded as the split data into the sparse image data which is input to the image signal processor 42, and the dense image data can then be reconstructed based on the split data embedded in the sparse image data. As a result, the image based on the dense image data can be regenerated, and the quality of the target image data can be improved by combining the generated image data based on the sparse image data with the dense image data reconstructed from the split data in the embedded sparse image data.
In addition, since the format of the embedded sparse image data is the same as the format of the sparse image data, a common image signal processor for the sparse image data can still be used as the image signal processor 42 for the embedded sparse image data. Therefore, it is not necessary to newly develop the image signal processor 42 to process the embedded sparse image data of the present embodiment to generate the target image data.
Incidentally, although the dense image data is generated in green in the embodiment described above, another color may be used to generate the dense image data. For example, yellow may be used to generate the dense image data. In this case, the color filter of the image sensor of the camera assembly 30 is composed of red, yellow and blue (RYB), and the sparse image data is composed of red, yellow and blue whereas the dense image data is composed of yellow.
Moreover, the sparse image data may include more than three colors. For example, the sparse image data may include green pixels, red pixels, blue pixels and yellow pixels. That is, the sparse image data may include a plurality of pixels of at least three colors.
In the description of embodiments of the present disclosure, it is to be understood that terms such as "central", "longitudinal", "transverse", "length", "width", "thickness", "upper", "lower", "front", "rear", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise" and "counterclockwise" should be construed to refer to the orientation or the position as described or as shown in the drawings under discussion. These relative terms are only used to simplify the description of the present disclosure, and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed or operated in a particular orientation. Thus, these terms cannot be construed to limit the present disclosure.
In addition, terms such as "first" and "second" are used herein for purposes of description and are not intended to indicate or imply relative importance or significance or to imply the number of indicated technical features. Thus, the feature defined with "first" and "second" may comprise one or more of this feature. In the description of the present disclosure, "a plurality of" means two or more than two, unless specified otherwise.
In the description of embodiments of the present disclosure, unless specified or limited otherwise, the terms "mounted" , "connected" , "coupled" and the like are used broadly, and may be, for example, fixed connections, detachable connections, or integral connections; may also be mechanical or electrical connections; may also be direct connections or indirect connections via intervening structures; may also be inner communications of two elements, which can be understood by those skilled in the art according to specific situations.
In the embodiments of the present disclosure, unless specified or limited otherwise, a structure in which a first feature is "on" or "below" a second feature may include an embodiment in which the first feature is in direct contact with the second feature, and may also include an embodiment in which the first feature and the second feature are not in direct contact with each other, but are contacted via an additional feature formed therebetween. Furthermore, a first feature "on" , "above" or "on top of" a second feature may include an embodiment in which the first feature is right or obliquely "on" , "above" or "on top of" the second feature, or just means that the first feature is at a height higher than that of the second feature; while a first feature "below" , "under" or "on bottom of" a second feature may include an embodiment in which the first feature is right or obliquely "below" , "under" or "on bottom of" the second feature, or just means that the first feature is at a height lower than that of the second feature.
Various embodiments and examples are provided in the above description to implement different structures of the present disclosure. In order to simplify the present disclosure, certain elements and settings are described in the above. However, these elements and settings are only by way of example and are not intended to limit the present disclosure. In addition, reference numbers and/or reference letters may be repeated in different examples in the present disclosure. This repetition is for the purpose of simplification and clarity and does not refer to relations between different embodiments and/or settings. Furthermore, examples of different processes and materials are provided in the present disclosure. However, it would be appreciated by those skilled in the art that other processes and/or materials may be also applied.
Reference throughout this specification to "an embodiment" , "some embodiments" , "an exemplary embodiment" , "an example" , "a specific example" or "some examples" means that a particular feature, structure, material, or characteristics described in connection with the embodiment or example is included in at least one embodiment or example of the present disclosure. Thus, the appearances of the above phrases throughout this specification are not necessarily referring to the same embodiment or example of the present disclosure. Furthermore, the particular features, structures, materials, or characteristics may be combined in any suitable manner in one or more embodiments or examples.
Any process or method described in a flow chart or described herein in other ways may be understood to include one or more modules, segments or portions of codes of executable instructions for achieving specific logical functions or steps in the process, and the scope of a preferred embodiment of the present disclosure includes other implementations, in which it should be understood by those skilled in the art that functions may be implemented in a sequence other than the sequences shown or discussed, including in a substantially identical sequence or in an opposite sequence.
The logic and/or step described in other manners herein or shown in the flow chart, for example, a particular sequence table of executable instructions for realizing the logical function, may be specifically achieved in any computer readable medium to be used by the instruction execution system, device or equipment (such as the system based on computers, the system comprising processors or other systems capable of obtaining the instruction from the instruction execution system, device and equipment and executing the instruction) , or to be used in combination with the instruction execution system, device and equipment. As to the specification, "the computer readable medium" may be any device adaptive for including, storing, communicating, propagating or transferring programs to be used by or in combination with the instruction execution system, device or equipment. More specific examples of the computer readable medium comprise but are not limited to: an electronic connection (an electronic device) with one or more wires, a portable computer enclosure (a magnetic device) , a random access memory (RAM) , a read only memory (ROM) , an erasable programmable read-only memory (EPROM or a flash memory) , an optical fiber device and a portable compact disk read-only memory (CDROM) . In addition, the computer readable medium may even be a paper or other appropriate medium capable of printing programs thereon, this is because, for example, the paper or other appropriate medium may be optically scanned and then edited, decrypted or processed with other appropriate methods when necessary to obtain the programs in an electric manner, and then the programs may be stored in the computer memories.
It should be understood that each part of the present disclosure may be realized by the hardware, software, firmware or their combination. In the above embodiments, a plurality of steps or methods may be realized by the software or firmware stored in the memory and executed by the appropriate instruction execution system. For example, if it is realized by the hardware, likewise in another embodiment, the steps or methods may be realized by one or a combination of the following techniques known in the art: a discrete logic circuit having a logic gate circuit for realizing a logic function of a data signal, an application-specific integrated circuit having an appropriate combination logic gate circuit, a programmable gate array (PGA) , a field programmable gate array (FPGA) , etc.
Those skilled in the art shall understand that all or parts of the steps in the above exemplifying method of the present disclosure may be achieved by commanding the related hardware with programs. The programs may be stored in a computer readable storage medium, and the programs comprise one or a combination of the steps in the method embodiments of the present disclosure when run on a computer.
In addition, each function cell of the embodiments of the present disclosure may be integrated in a processing module, or these cells may be separate physical existence, or two or more cells are integrated in a processing module. The integrated module may be realized in a form of hardware or in a form of software function modules. When the integrated module is realized in a form of software function module and is sold or used as a standalone product, the integrated module may be stored in a computer readable storage medium.
The storage medium mentioned above may be read-only memories, magnetic disks, CD, etc.
Although embodiments of the present disclosure have been shown and described, it would be appreciated by those skilled in the art that the embodiments are explanatory and cannot be construed to limit the present disclosure, and changes, modifications, alternatives and variations can be made in the embodiments without departing from the scope of the present disclosure.
Claims (33)
- A method of generating a target image data, comprising:
  obtaining a sparse image data and a dense image data, wherein the sparse image data includes a plurality of pixels of at least first color pixels, second color pixels and third color pixels, and the dense image data includes a plurality of pixels of the first color pixels;
  generating a residual data based on the dense image data by calculating differentials between each two adjacent pixels in the dense image data;
  generating a compressed data by compressing the residual data to reduce its data amount;
  generating a split data by splitting each of the pixels of the compressed data into a first data part and a second data part; and
  generating an embedded sparse image data by embedding the split data into the sparse image data.
- The method according to claim 1, further comprising inputting the embedded sparse image data to an image signal processor which processes the sparse image data in the embedded sparse image data to generate the target image data.
- The method according to claim 1, wherein the each two adjacent pixels in the dense image data constitute a first pair, and the first pair includes a first value of the first color pixel and a second value of the first color pixel.
- The method according to claim 3, wherein the each two adjacent pixels in the sparse image data constitutes a second pair, and the second pair includes a third value of the first color pixel and a fourth value of the second color pixel or includes the third value of the first color pixel and the fourth value of the third color pixel.
- The method according to claim 4, wherein the first value of the first color pixel in the first pair of the dense image data corresponds to the third value of the first color pixel in the second pair which is located at a position corresponding to the first pair of the dense image data.
- The method according to claim 5, wherein the generating the residual data comprises subtracting the second value of the first color pixel from the first value of the first color pixel in the first pair.
- The method according to claim 6, wherein the generating compressed data comprises reducing a number of bits of the residual data.
- The method according to claim 7, wherein the reducing the number of bits of the residual data comprises converting the residual data to the compressed data based on a compression curve which defines a relationship between a value of the pixel of the residual data and a value of the pixel of the compressed data.
- The method according to claim 8, wherein each of the first color pixels, each of the second color pixels and each of the third color pixels in the sparse image data has a spare space in which the sparse image data is not stored.
- The method according to claim 9, wherein, in the generating the split data, sizes of the first data part and the second data part of the split data are matched with sizes of the spare space in each of the first color pixels, each of the second color pixels and each of the third color pixels in the sparse image data.
- The method according to claim 10, wherein the generating the embedded sparse image data comprises embedding the first data part and the second data part of the first pair into the spare space of the second pair of the sparse image data.
- The method according to claim 2, further comprising:
  obtaining the embedded sparse image data from the image signal processor after the embedded sparse image data has been input to the image signal processor;
  extracting the split data from the embedded sparse image data obtained from the image signal processor to generate the compressed data;
  expanding the compressed data generated from the split data to reconstruct the residual data; and
  reconstructing the dense image data based on the residual data reconstructed from the compressed data.
- The method according to claim 12, further comprising:
  obtaining a generated image data during processing to generate the target image data based on the sparse image data from the image signal processor; and
  combining the generated image data and the dense image data reconstructed from the residual data to generate a combined image data.
- The method according to claim 13, further comprising inputting the combined image data to the image signal processor.
- The method according to any one of claims 1-14, wherein the first color is green, the second color is red and the third color is blue.
- The method according to claim 15, wherein the sparse image data is in conformity to a Bayer format.
- An electrical device, comprising:
  a camera assembly configured to capture an image of an object and to generate a sparse image data and a dense image data; and
  a main processor configured to:
  obtain the sparse image data and the dense image data, wherein the sparse image data includes a plurality of pixels of at least first color pixels, second color pixels and third color pixels, and the dense image data includes a plurality of pixels of the first color pixels;
  generate a residual data based on the dense image data by calculating differentials between each two adjacent pixels in the dense image data;
  generate a compressed data by compressing the residual data to reduce its data amount;
  generate a split data by splitting each of the pixels of the compressed data into a first data part and a second data part; and
  generate an embedded sparse image data by embedding the split data into the sparse image data.
- The electrical device according to claim 17, wherein the main processor is further configured to input the embedded sparse image data to an image signal processor which processes the sparse image data in the embedded sparse image data to generate the target image data.
- The electrical device according to claim 17, wherein the each two adjacent pixels in the dense image data constitute a first pair, and the first pair includes a first value of the first color pixel and a second value of the first color pixel.
- The electrical device according to claim 19, wherein the each two adjacent pixels in the sparse image data constitute a second pair, and the second pair includes a third value of the first color pixel and a fourth value of the second color pixel or includes the third value of the first color pixel and the fourth value of the third color pixel.
- The electrical device according to claim 20, wherein the first value of the first color pixel in the first pair of the dense image data corresponds to the third value of the first color pixel in the second pair which is located at a position corresponding to the first pair of the dense image data.
- The electrical device according to claim 21, wherein the residual data is generated by subtracting the second value of the first color pixel from the first value of the first color pixel in the first pair.
- The electrical device according to claim 22, wherein the compressed data is generated by reducing a number of bits of the residual data.
- The electrical device according to claim 23, wherein the number of bits of the residual data is reduced by converting the residual data to the compressed data based on a compression curve which defines a relationship between a value of the pixel of the residual data and a value of the pixel of the compressed data.
- The electrical device according to claim 24, wherein each of the first color pixels, each of the second color pixels and each of the third color pixels in the sparse image data has a spare space in which the sparse image data is not stored.
- The electrical device according to claim 25, wherein, when the split data is generated, sizes of the first data part and the second data part of the split data are matched with sizes of the spare space in each of the first color pixels, each of the second color pixels and each of the third color pixels in the sparse image data.
- The electrical device according to claim 26, wherein the embedded sparse image data is generated by embedding the first data part and the second data part of the first pair into the spare space of the second pair of the sparse image data.
- The electrical device according to claim 18, wherein the main processor is further configured to:
  obtain the embedded sparse image data from the image signal processor after the embedded sparse image data has been input to the image signal processor;
  extract the split data from the embedded sparse image data obtained from the image signal processor to generate the compressed data;
  expand the compressed data generated from the split data to reconstruct the residual data; and
  reconstruct the dense image data based on the residual data reconstructed from the compressed data.
- The electrical device according to claim 28, wherein the main processor is further configured to:
  obtain a generated image data during processing to generate the target image data based on the sparse image data from the image signal processor; and
  combine the generated image data and the dense image data reconstructed from the residual data to generate a combined image data.
- The electrical device according to claim 29, wherein the main processor is further configured to input the combined image data to the image signal processor.
- The electrical device according to any one of claims 17-30, wherein the first color is green, the second color is red and the third color is blue.
- The electrical device according to claim 31, wherein the sparse image data is in conformity to a Bayer format.
- A non-transitory computer readable medium comprising program instructions stored thereon for performing at least the following:
  obtaining a sparse image data and a dense image data, wherein the sparse image data includes a plurality of pixels of at least first color pixels, second color pixels and third color pixels, and the dense image data includes a plurality of pixels of the first color pixels;
  generating a residual data based on the dense image data by calculating differentials between each two adjacent pixels in the dense image data;
  generating a compressed data by compressing the residual data to reduce its data amount;
  generating a split data by splitting each of the pixels of the compressed data into a first data part and a second data part; and
  generating an embedded sparse image data by embedding the split data into the sparse image data to generate a target image data.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2020/094714 WO2021243709A1 (en) | 2020-06-05 | 2020-06-05 | Method of generating target image data, electrical device and non-transitory computer readable medium |
CN202080101774.7A CN115918104A (en) | 2020-06-05 | 2020-06-05 | Target image data generation method, electronic device, and non-transitory computer-readable medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2020/094714 WO2021243709A1 (en) | 2020-06-05 | 2020-06-05 | Method of generating target image data, electrical device and non-transitory computer readable medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021243709A1 true WO2021243709A1 (en) | 2021-12-09 |
Family
ID=78830034
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/094714 WO2021243709A1 (en) | 2020-06-05 | 2020-06-05 | Method of generating target image data, electrical device and non-transitory computer readable medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN115918104A (en) |
WO (1) | WO2021243709A1 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116708511B (en) * | 2023-07-18 | 2024-02-02 | 广东车卫士信息科技有限公司 | Method, equipment and medium based on microcontroller integrated vehicle-machine interconnection technology |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101494788A (en) * | 2009-01-23 | 2009-07-29 | 炬才微电子(深圳)有限公司 | Method and apparatus for compressing and decompressing video image |
US20120134597A1 (en) * | 2010-11-26 | 2012-05-31 | Microsoft Corporation | Reconstruction of sparse data |
CN104766275A (en) * | 2014-01-02 | 2015-07-08 | 株式会社理光 | Method and device for making sparse disparity map dense |
US20190318196A1 (en) * | 2019-06-28 | 2019-10-17 | Intel Corporation | Guided sparse feature matching via coarsely defined dense matches |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102010010736A1 (en) * | 2010-03-09 | 2011-09-15 | Arnold & Richter Cine Technik Gmbh & Co. Betriebs Kg | Method of compressing image data |
- 2020-06-05: CN application CN202080101774.7A (published as CN115918104A), status: active, pending
- 2020-06-05: WO application PCT/CN2020/094714 (published as WO2021243709A1), status: active, application filing
Also Published As
Publication number | Publication date |
---|---|
CN115918104A (en) | 2023-04-04 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20939035; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 20939035; Country of ref document: EP; Kind code of ref document: A1 |