CN110944160A - Image processing method and electronic equipment - Google Patents


Info

Publication number
CN110944160A
Authority
CN
China
Prior art keywords
image
pyramid
target
electronic device
acquiring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911077391.6A
Other languages
Chinese (zh)
Other versions
CN110944160B (en)
Inventor
翁迪望
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201911077391.6A priority Critical patent/CN110944160B/en
Publication of CN110944160A publication Critical patent/CN110944160A/en
Application granted granted Critical
Publication of CN110944160B publication Critical patent/CN110944160B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10: Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths

Abstract

The embodiment of the invention provides an image processing method and an electronic device, applied to the field of communication and used to solve the problem that the image an electronic device obtains through an image sensor is of poor quality. The method comprises the following steps: acquiring first original data through an image sensor in a first working mode to obtain a first image; acquiring second original data through the image sensor in a second working mode to obtain a second image, wherein the first image and the second image have different resolutions; and fusing the first image and the second image to obtain a target image. The method is particularly applicable to processing images based on an image sensor.

Description

Image processing method and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to an image processing method and electronic equipment.
Background
At present, users have increasingly high requirements for image capture on electronic devices such as mobile phones and tablet computers: when shooting images, users demand ever higher definition and ever higher resolution.
The electronic device captures images by acquiring and processing data through an image sensor. However, limited by the performance of current image sensors, the resolution or sharpness of the image the electronic device obtains through the image sensor is low. That is, the quality of the image obtained by the electronic device through the image sensor is poor.
Disclosure of Invention
The embodiment of the invention provides an image processing method and electronic equipment, and aims to solve the problem that the quality of an image obtained by the electronic equipment through an image sensor is poor.
In order to solve the above technical problem, the embodiment of the present invention is implemented as follows:
in a first aspect, an embodiment of the present invention provides an image processing method, where the method includes: acquiring first original data through an image sensor in a first working mode to obtain a first image; acquiring second original data through an image sensor in a second working mode to obtain a second image, wherein the first image and the second image have different resolutions; and fusing the first image and the second image to obtain a target image.
In a second aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes: a first processing module and a second processing module; the first processing module is used for acquiring first original data through the image sensor in a first working mode to obtain a first image; acquiring second original data through an image sensor in a second working mode to obtain a second image, wherein the first image and the second image have different resolutions; and the second processing module is used for fusing the first image and the second image acquired by the first processing module to obtain a target image.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a processor, a memory, and a computer program stored on the memory and executable on the processor, and when executed by the processor, the computer program implements the steps of the image processing method according to the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the image processing method according to the first aspect.
In the embodiment of the invention, in a first working mode, first original data can be acquired through an image sensor to obtain a first image; second original data can be acquired through the image sensor in a second working mode to obtain a second image; the first image and the second image, which have different resolutions, can then be fused to obtain a target image. The fused target image has both the high signal-to-noise ratio of the lower-resolution image and the high definition and strong detail-resolving capability of the higher-resolution image, so that, compared with the first image and the second image, the fused target image has a high signal-to-noise ratio, high definition and strong detail-resolving capability. That is, the quality of the image obtained by the electronic device through the image sensor is improved.
Drawings
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a Quad Bayer color filter array according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a Bayer color filter array according to an embodiment of the invention;
fig. 5 is a schematic diagram illustrating an image format conversion process according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating another image processing method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of another image format conversion process according to an embodiment of the present invention;
FIG. 8 is a flowchart illustrating another image processing method according to an embodiment of the present invention;
FIG. 9 is a flowchart illustrating another image processing method according to an embodiment of the present invention;
FIG. 10 is a flowchart illustrating another image processing method according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of a possible electronic device according to an embodiment of the present invention;
fig. 12 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that "/" herein means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. "Plurality" means two or more.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate an example, illustration or explanation. Any embodiment or design described as "exemplary" or "for example" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of these words is intended to present a concept in a concrete fashion.
The terms "first" and "second," and the like, in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first image and the second image, etc. are for distinguishing different images, rather than for describing a particular order of the images.
Currently, in the process of capturing an image, an electronic device may acquire data (denoted as raw data) through a 48 Mega-Pixel (MP) Quad Bayer image sensor to form an initial image, and process the initial image through a certain image readout mode to obtain a result image. The Quad Bayer image sensor adopts a Quad Bayer color filter array. The initial image may be an image in which adjacent 2 × 2 pixels share the same color and the resolution is 48 MP.
Illustratively, the image readout modes may include the following binning (Binning) mode and remosaic (Remosaic) mode:
binning mode: adjacent four pixels in the initial image are combined into one pixel block to generate a resultant image with a resolution of 12 MP.
Although the result image obtained in Binning mode has a high signal-to-noise ratio and low noise, its definition is insufficient and its detail-resolving capability is poor.
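The binning idea can be sketched as follows (not taken from the patent; the function name and the use of averaging rather than hardware charge summation are assumptions for illustration): each adjacent 2 × 2 same-color block of a Quad Bayer frame is merged into one pixel, quartering the pixel count.

```python
import numpy as np

def bin_quad_bayer(raw):
    """Merge each 2x2 same-color block of a Quad Bayer raw frame into one pixel.

    `raw` is a (H, W) array in Quad Bayer layout with H and W even; each
    non-overlapping 2x2 block shares one color in the Quad Bayer CFA, so
    averaging it yields a single sample of that color, and the output is a
    standard Bayer mosaic at half the resolution in each dimension.
    """
    h, w = raw.shape
    # Group into (H/2, 2, W/2, 2) blocks, then average over the 2x2 axes.
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```

For a 48 MP input this produces the 12 MP result image described above.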
Remosaic mode: the initial image was converted to Bayer format by the Remosaic (Remosaic) algorithm to yield a resultant image with a resolution of 48 MP.
Although the result image obtained in Remosaic mode has high definition and strong detail-resolving capability, its signal-to-noise ratio is low and its noise is high, because a single pixel is small and has weak light-sensing capability.
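The remosaic conversion can be illustrated with a simple nearest-neighbour pixel shuffle (an assumption for illustration; production remosaic algorithms additionally interpolate to correct the shifted sampling positions): within every 4 × 4 tile, swapping columns 1 and 2 and then rows 1 and 2 turns the 2 × 2-grouped Quad Bayer layout into the alternating Bayer layout at full resolution.

```python
import numpy as np

def remosaic_quad_bayer(raw):
    """Shuffle a Quad Bayer frame into Bayer order at full resolution.

    `raw` is a (H, W) array with H and W multiples of 4. In each 4x4 tile
    the Quad Bayer colors are RRGG/RRGG/GGBB/GGBB; swapping columns 1 and 2
    and then rows 1 and 2 yields the Bayer pattern RGRG/GBGB/RGRG/GBGB.
    """
    out = raw.copy()
    # Swap columns 1 and 2 of every 4-column tile.
    out[:, 1::4] = raw[:, 2::4]
    out[:, 2::4] = raw[:, 1::4]
    # Swap rows 1 and 2 of every 4-row tile.
    res = out.copy()
    res[1::4, :] = out[2::4, :]
    res[2::4, :] = out[1::4, :]
    return res
```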
In order to solve the above problem, in the image processing method provided in the embodiment of the present invention, in a first working mode, first raw data may be acquired by an image sensor to obtain a first image; second raw data may be acquired through the image sensor in a second working mode to obtain a second image; the first image and the second image, which have different resolutions, may then be fused to obtain a target image. The fused target image has both the high signal-to-noise ratio of the lower-resolution image and the high definition and strong detail-resolving capability of the higher-resolution image, so that, compared with the first image and the second image, the fused target image has a high signal-to-noise ratio, high definition and strong detail-resolving capability. That is, the quality of the image obtained by the electronic device through the image sensor is improved.
The electronic device in the embodiment of the invention can be a mobile electronic device or a non-mobile electronic device. The mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), etc.; the non-mobile electronic device may be a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present invention are not particularly limited.
It should be noted that the execution subject of the image processing method provided in the embodiment of the present invention may be the electronic device, a Central Processing Unit (CPU) of the electronic device, or a control module in the electronic device for executing the image processing method. The embodiment of the present invention describes the image processing method by taking an electronic device executing the method as an example.
The electronic device in the embodiment of the present invention may be an electronic device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present invention are not particularly limited.
The following describes a software environment to which the image processing method provided by the embodiment of the present invention is applied, by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is the framework of the applications; developers can develop applications based on the application framework layer while complying with its development principles, for example system applications such as a system settings application, a system chat application and a system camera application, as well as third-party applications such as a third-party settings application, a third-party camera application and a third-party chat application.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the image processing method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the image processing method may operate based on the android operating system shown in fig. 1. Namely, the processor or the electronic device can implement the image processing method provided by the embodiment of the invention by running the software program in the android operating system.
The following describes the image processing method provided by the embodiment of the present invention in detail with reference to the flowchart of the image processing method shown in fig. 2. Wherein, although the logical order of the image processing methods provided by embodiments of the present invention is shown in a method flow diagram, in some cases, the steps shown or described may be performed in an order different than here. For example, the image processing method shown in fig. 2 may include steps 201 to 203:
step 201, in a first working mode, the electronic device acquires first original data through an image sensor to obtain a first image.
Step 202, the electronic device acquires second original data through the image sensor in a second working mode to obtain a second image.
Wherein the first image and the second image have different resolutions.
It is understood that the first working mode and the second working mode are both readout modes of the image sensor in the electronic device, and that the first working mode is different from the second working mode.
Optionally, the first operating mode is to merge N × N pixels that are adjacent and have the same color in the image, and the second operating mode is to control the size of a single pixel in the image to be unchanged; or the second working mode is to combine the adjacent N × N pixels with the same color in the image, and the first working mode is to control the size of a single pixel in the image to be unchanged; wherein N is a positive integer greater than or equal to 2. For example, N is a positive integer such as 2, 3, or 4.
It is to be understood that "merging N × N pixels that are adjacent and have the same color in an image" refers to merging pixels in a square matrix of N × N pixels that are adjacent and have the same color in an image into one large pixel.
In the embodiment of the present invention, the order in which the electronic device obtains an image by "merging adjacent N × N same-color pixels in the image" and by "keeping the size of a single pixel in the image unchanged" is not limited, and may be any realizable order.
It can be understood that the image the electronic device obtains by merging adjacent N × N same-color pixels has a low resolution, a high signal-to-noise ratio and low noise, but insufficient definition and poor detail-resolving capability. The image obtained by keeping the size of a single pixel unchanged has a high resolution, high definition and strong detail-resolving capability, but because a single pixel is small and has weak light-sensing capability, its signal-to-noise ratio is low and its noise is high.
It should be noted that, in the related art, an electronic device can acquire data and obtain an image through an image sensor in only one working mode at a time, for example, only by "merging adjacent N × N same-color pixels in the image" or only by "keeping the size of a single pixel in the image unchanged"; that is, images in the two working modes would have to be obtained through two electronic devices. In the embodiment of the invention, one electronic device can acquire data through the image sensor and obtain two images in two different working modes. This improves the convenience of obtaining images from data acquired by the image sensor, for example the convenience of obtaining two images with different resolutions (such as the first image and the second image) through the same image sensor.
In the embodiment of the present invention, the first raw data and the second raw data may be data for the same photographic subject (e.g., a person or a still), that is, the first image and the second image are images for the same photographic subject.
Optionally, when the first raw data is acquired, the exposure parameter of the image sensor is a first exposure parameter; when the second raw data is acquired, the exposure parameter of the image sensor is a second exposure parameter; and the first exposure parameter is the same as the second exposure parameter. When the two exposure parameters are the same, the target image obtained by the subsequent fusion is better.
Optionally, the first raw data and the second raw data may be acquired consecutively by the electronic device with the same exposure parameter, that is, the time interval between acquiring the first raw data and acquiring the second raw data is small, for example less than 0.5 second. In that case, the first raw data and the second raw data are acquired for the same photographic subject in the same shooting scene. With a small interval, the two acquisitions can be ensured to correspond to the same subject, which makes the subsequent image fusion easier.
For example, the exposure parameter may be an Exposure Value (EV); the corresponding exposure is the integral, over the exposure time, of the illuminance of the light received by a surface element.
It is understood that the first exposure parameter is the same as the second exposure parameter, such that the brightness of the image represented by the first raw data is consistent with the brightness of the image represented by the second raw data, and further such that the brightness of the first image is consistent with the brightness of the second image.
And step 203, the electronic equipment fuses the first image and the second image to obtain a target image.
It can be understood that, by fusing the first image and the second image, which have different resolutions, the electronic device obtains a target image that combines the image characteristics of both: the high signal-to-noise ratio of the lower-resolution image, and the high definition and strong detail-resolving capability of the higher-resolution image.
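One simple way to fuse two frames of different resolution is to upsample the low-resolution frame to the high-resolution grid and blend. The sketch below is purely illustrative and is not the patent's claimed method (the patent's keyword list suggests a pyramid-based fusion, which is more elaborate); the function name, pixel-repetition upsampling and the fixed blend weight are all assumptions.

```python
import numpy as np

def fuse_images(low_res, high_res, weight=0.5):
    """Blend a low-resolution, low-noise frame with a high-resolution frame.

    The low-resolution image (e.g. the binned first image) is upsampled by
    pixel repetition to the shape of the high-resolution image (e.g. the
    remosaiced second image), then the two are combined as a weighted sum.
    Assumes high_res dimensions are integer multiples of low_res dimensions.
    """
    fy = high_res.shape[0] // low_res.shape[0]
    fx = high_res.shape[1] // low_res.shape[1]
    # Nearest-neighbour upsampling: repeat each pixel fy x fx times.
    up = np.repeat(np.repeat(low_res, fy, axis=0), fx, axis=1)
    return weight * up + (1 - weight) * high_res
```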
It should be noted that, in the image processing method provided by the embodiment of the present invention, in the first working mode, the first raw data is acquired by the image sensor to obtain the first image; the second raw data is acquired through the image sensor in the second working mode to obtain the second image; the first image and the second image, which have different resolutions, can then be fused to obtain the target image. The fused target image has both the high signal-to-noise ratio of the lower-resolution image and the high definition and strong detail-resolving capability of the higher-resolution image, so that, compared with the first image and the second image, the fused target image has a high signal-to-noise ratio, high definition and strong detail-resolving capability. That is, the quality of the image obtained by the electronic device through the image sensor is improved.
Optionally, in the embodiment of the present invention, the image sensor may be a Quad Bayer image sensor. That is, a Quad Bayer image sensor, which acquires an image using a Quad Bayer color filter array, may be provided in the electronic device. In this case, N is 2.
It is understood that the first raw data and the second raw data are both data in the Quad Bayer format.
Wherein the images in the Quad Bayer format (e.g., the image represented by the first raw data and the image represented by the second raw data) may be data (i.e., images) acquired by an electronic device via a Quad Bayer image sensor employing a Quad Bayer color filter array.
Specifically, an image in Quad Bayer format is a color image, such as an image in RGB format. RGB denotes the Red, Green and Blue color channels; the RGB format is an industry color standard in which a wide range of colors is obtained by varying the three channels Red (R), Green (G) and Blue (B) and superimposing them on one another.
FIG. 3 is a schematic diagram of a Quad Bayer color filter array. In the array shown in fig. 3, adjacent pixels of 2 × 2 size have the same color: the R pixels form a 2 × 2 group, the Gr pixels (G pixels in the same rows as the R pixels) form a 2 × 2 group, the Gb pixels (G pixels in the same rows as the B pixels) form a 2 × 2 group, and the B pixels form a 2 × 2 group, arranged in the manner shown in fig. 3.
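The layout just described can be stated compactly: halving the pixel coordinates reduces Quad Bayer to an ordinary Bayer pattern, since each Bayer site is expanded into a 2 × 2 same-color block. The helper below is illustrative and assumes the R group sits at the top-left, as in fig. 3:

```python
def quad_bayer_color_at(row, col):
    """Return the CFA color ('R', 'G' or 'B') at (row, col) of a Quad Bayer
    mosaic whose top-left 2x2 block is red.

    Integer-dividing the coordinates by 2 maps each 2x2 same-color block to
    one site of a standard RGGB Bayer pattern.
    """
    r, c = row // 2, col // 2
    if r % 2 == 0:
        return "R" if c % 2 == 0 else "G"
    return "G" if c % 2 == 0 else "B"
```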
In addition, the resolution of the image represented by the first original data is the same as the resolution of the image represented by the second original data, for example, the resolution is 48MP or 64MP, which is not limited in the embodiment of the present invention. In the following embodiments, an image processing method provided by an embodiment of the present invention is described by taking an example in which a resolution of an image represented by first raw data and a resolution of an image represented by second raw data are 48 MP.
Optionally, in the following embodiments, the first operating mode is to merge N × N pixels that are adjacent and have the same color in the image, and the second operating mode is to control the size of a single pixel in the image to be unchanged, which is taken as an example to explain that the embodiments of the present invention provide an image processing method.
Illustratively, the first operation mode is a Binning mode, and the second operation mode is a Remosaic mode.
Specifically, in the embodiment of the present invention, the Binning mode is used to merge the adjacent four same-color pixels of size 2 × 2 in the first raw data into one pixel, and to control the first image to be an image in Bayer format. That is, N equals 2.
Specifically, the odd scan lines in the Bayer pattern output alternating adjacent R and G, i.e., RGRG …, and the even scan lines output alternating adjacent G and B, i.e., GBGB …. Fig. 4 is a schematic diagram of a color filter array in Bayer format, which consists of one half G, one quarter R and one quarter B.
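The RGGB arrangement can be expressed as a small helper (illustrative only; rows and columns are counted from zero here, so the document's "odd scan lines" correspond to even-indexed rows):

```python
def bayer_color_at(row, col):
    """Return the CFA color ('R', 'G' or 'B') at (row, col) of a standard
    RGGB Bayer mosaic.

    Even-indexed rows alternate R, G and odd-indexed rows alternate G, B,
    so G covers half the sites and R and B one quarter each.
    """
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"
```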
Binning adds together the charges induced by adjacent pixels and reads them out as one pixel, thereby merging the adjacent four same-color pixels of size 2 × 2 in the first raw data into one pixel.
Fig. 5 is a schematic diagram illustrating an image format conversion process according to an embodiment of the present invention. The electronic device subjects the Quad Bayer format array (e.g., the array of the image represented by the first raw data) shown in fig. 5 (a) to binning, for example merging the adjacent 2 × 2 R pixels in the array shown in fig. 5 (a) into one pixel, to obtain the Bayer format array (e.g., the array of the first image) shown in fig. 5 (b). The area of a single pixel in the array shown in fig. 5 (b) is four times that of a single pixel in the array shown in fig. 5 (a).
Further, optionally, the electronic device according to the embodiment of the present invention may further include an Image Signal Processing (ISP) unit, which is mainly used to process the output signal of a front-end image sensor (e.g., a Quad Bayer image sensor).
In the embodiment of the present invention, in Binning mode, the electronic device may perform binning on the image, and then process the binned image through the ISP to obtain a processed image.
Specifically, in Binning mode, the electronic device may perform binning on the first raw data in Quad Bayer format, and then process the binned data through the ISP to obtain the first image in Bayer format.
Optionally, the step of processing the image (and the data of the image) by the ISP may include a white balance process, a demosaicing process, a Tone mapping (Tone mapping) process, and the like, which is not particularly limited in this embodiment of the present invention.
It should be noted that white balance processing is used to adjust the color temperature of the image to correct color casts. Demosaicing is a digital image processing algorithm that reconstructs a full-color image from the incomplete color samples output by a photosensitive element (such as the Quad Bayer image sensor mentioned above) covered with a Color Filter Array (CFA); it is also known as color filter array interpolation (CFA interpolation) or color reconstruction. Tone mapping applies a large contrast attenuation to an image, transforming the scene brightness into a range the display can show, while preserving information such as image detail and color that is important for representing the original scene.
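Two of these ISP stages can be sketched under simple textbook assumptions (gray-world white balance and the global Reinhard tone-mapping operator); the patent does not specify which algorithms its ISP actually uses, so both choices here are assumptions for illustration:

```python
import numpy as np

def gray_world_white_balance(rgb):
    """Illustrative white balance via the gray-world assumption.

    Scales each channel of an (H, W, 3) float image so that all three
    channel means become equal, countering a global color cast. Real ISP
    white balance estimates the illuminant with sensor-specific tuning.
    """
    means = rgb.reshape(-1, 3).mean(axis=0)  # per-channel means (nonzero assumed)
    gains = means.mean() / means
    return rgb * gains

def reinhard_tone_map(luminance):
    """Global Reinhard operator L / (1 + L): compresses any nonnegative
    luminance into the displayable range [0, 1)."""
    return luminance / (1.0 + luminance)
```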
For example, in Binning mode, the electronic device may process 48 MP of first raw data in Quad Bayer format into a first image in Bayer format with a resolution of 12 MP, i.e., the size of a single pixel in the first image is four times the size of a single pixel in the first raw data. Thus, the first image obtained by the electronic device has a high signal-to-noise ratio.
It should be noted that, in the embodiment of the present invention, a format of the first image obtained by the electronic device through the Binning mode processing may be a YUV format.
The YUV format is a color encoding method. YUV denotes a family of color spaces; terms such as Y'UV, YUV, YCbCr and YPbPr overlap in usage and are often all referred to as YUV. "Y" represents luminance (Luma), i.e., the gray-scale value; "U" and "V" represent chrominance (Chroma) and describe the color and saturation of a pixel.
Optionally, the YUV format mentioned in the embodiment of the present invention may be a YUV420 format, an NV21 format, an NV12 format, or the like, which is not limited to this, and may be determined according to actual requirements.
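For reference, a single RGB sample can be converted to YUV with the BT.601 full-range coefficients (shown purely as an illustration; the patent does not state which YUV conversion matrix is used):

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB sample to YUV using BT.601 full-range coefficients.

    Y carries the luminance (gray level); U and V carry chrominance as
    scaled blue-difference and red-difference components.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v
```

A pure gray input (R = G = B) yields zero chrominance, which matches the description of U and V above.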
Illustratively, as shown in fig. 6, step 201 may be implemented by steps A1 to A4:
step A1, the electronic device performs Binning on the image represented by the first raw data to obtain an image M1.
Step A2, the electronic device performs white balance processing on the image M1 through the ISP to obtain an image M2.
Step A3, the electronic device demosaics the image M2 through the ISP to obtain an image M3.
Step A4, the electronic device performs tone mapping processing on the image M3 through the ISP to obtain a first image.
The description of the steps a1 to a4 may refer to the related description in the above embodiments, and the description of the embodiments of the present invention is not repeated here.
In the embodiment of the present invention, the Remosaic mode is used to control the size of a single pixel in the second image to be the same as the size of a single pixel in the image represented by the second raw data, and control the second image to be an image in a Bayer format.
Fig. 7 is a schematic diagram illustrating an image format conversion process according to an embodiment of the present invention. The electronic device performs Remosaic processing on the Quad Bayer-format array (e.g., the array of images represented by the second raw data) shown in fig. 7 (a), and may obtain a Bayer-format array (e.g., the array of second images) shown in fig. 7 (b). Here, the size of a single pixel in the array shown in fig. 7 (b) is the same as the size of a single pixel in the array shown in fig. 7 (a).
In this embodiment of the present invention, in the Remosaic mode, the electronic device may perform Remosaic processing on the image, and then process the Remosaic image by using the ISP to obtain a processed image.
Specifically, in the Remosaic mode, the electronic device may perform Remosaic processing on a third image in the Quad Bayer format, and then process the Remosaic-ed third image through the ISP, so as to obtain a fourth image in the Bayer format.
For example, in the Remosaic mode, the electronic device may process 48MP second raw data in the Quad Bayer format into a second image with a resolution of 48MP in the Bayer format, i.e., the size of a single pixel in the second image is the same as the size of a single pixel in the image represented by the second raw data. Therefore, the second image obtained by the electronic device has high definition and high detail resolution capability.
It should be noted that, in the embodiment of the present invention, a format of the second image obtained by the electronic device through Remosaic mode processing may be a YUV format.
For example, as shown in fig. 8, the step 202 may be implemented by the step B1 to the step B4:
and step B1, the electronic equipment conducts Remosaic processing on the image represented by the second original data to obtain an image N1.
And B2, the electronic equipment performs white balance processing on the image N1 through the ISP to obtain an image N2.
And B3, the electronic equipment demosaicing the image N2 through the ISP to obtain an image N3.
And B4, the electronic equipment performs tone mapping processing on the image N3 through the ISP to obtain a second image.
The descriptions of the steps B1 through B4 may refer to the descriptions in the foregoing embodiments, and are not repeated herein.
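The Quad Bayer to Bayer rearrangement of fig. 7 can be illustrated as a pure pixel shuffle. This is an idealized sketch (encoding R=0, G=1, B=2): within every 4x4 tile, swapping columns 1 and 2 and then rows 1 and 2 turns the 2x2 same-color blocks into the standard RGGB mosaic. A real Remosaic additionally interpolates to compensate for the shifted sampling positions:

```python
import numpy as np

def remosaic_quad_to_bayer(quad):
    """Rearrange a Quad Bayer array into a standard RGGB Bayer array
    of the same size, keeping single-pixel size unchanged. Idealized
    pixel shuffle only; production Remosaic also interpolates."""
    out = quad.copy()
    # swap columns 1 and 2 of every 4x4 tile
    out[:, 1::4] = quad[:, 2::4]
    out[:, 2::4] = quad[:, 1::4]
    # swap rows 1 and 2 of every 4x4 tile
    tmp = out.copy()
    out[1::4, :] = tmp[2::4, :]
    out[2::4, :] = tmp[1::4, :]
    return out
```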
It should be noted that, with the image processing method provided in the embodiment of the present invention, the electronic device may process, according to the Binning mode, first raw data acquired by a Quad Bayer image sensor into a first image, and process, according to the Remosaic mode, second raw data acquired by the Quad Bayer image sensor into a second image in the Bayer format, so that the first image and the second image may be fused to obtain a target image. Therefore, a target image with high signal-to-noise ratio, high definition and strong detail resolution capability can be obtained based on the Quad Bayer image sensor; that is, the quality of images processed based on the Quad Bayer image sensor is improved.
Optionally, in this embodiment of the present invention, the step 203 may be implemented by the step 203':
and step 203', the electronic equipment fuses the first image and the second image on the brightness channel to obtain a target image.
In general, the higher the sharpness of an image, the more detail information it contains and the stronger its detail resolving power. The detail information of an image refers to gray-scale variations in the image, including isolated points, thin lines, abrupt changes in the picture, and the like.
Specifically, the electronic device may use the luminance image of the first image as a reference image, and fuse detail information in the luminance image of the second image to the reference image to obtain the target image.
It can be understood that, because the detail information of the image is mainly embodied in the luminance channel (i.e., the Y channel of the YUV image), the first image and the second image can be fused in the luminance channel to obtain the target image containing more detail information, i.e., the target image with higher definition and stronger detail resolution.
In a possible implementation manner, as shown in fig. 9, in the image processing method provided by the embodiment of the present invention, the step 203 or the step 203' may be implemented by the following steps 301 to 309:
step 301, the electronic device obtains the luminance component of the first image to obtain a third image, and obtains the luminance component of the second image to obtain a fourth image.
Specifically, the electronic device may obtain a Y-channel component in the first image in the YUV format, to obtain a third image (i.e., a luminance image of the first image); and acquiring a Y-channel component in the YUV-format second image to obtain a fourth image (namely a brightness image of the second image).
And 302, the electronic equipment performs global registration on the third image and the fourth image to obtain a homography matrix.
Optionally, the step 302 may be implemented by the step 302 a:
step 302a, the electronic device performs feature point detection and RANdom SAmple Consensus (RANSAC) screening on the third image and the fourth image respectively to obtain a homography matrix (e.g., homography matrix H).
Feature point detection extracts features from the two images (i.e., the third image and the fourth image), generates feature descriptors, and finally matches the features of the two images according to descriptor similarity. Image features may be broadly classified into points, lines (edges), and regions (surfaces), or alternatively into local features and global features. Extracting region (surface) features is cumbersome and time-consuming, so point features and edge features are mainly used.
RANSAC screening estimates the parameters of a mathematical model iteratively from a set of observed data that contains outliers. For example, for the third image and the fourth image, the electronic device may estimate the proportion of correct matches, randomly select sample data according to the probability of drawing correct data, and, relying on the law of large numbers, repeat the random sampling until a correct result is obtained with high probability, thereby implementing RANSAC screening.
Optionally, in the embodiment of the present invention, in order to improve algorithm performance, the electronic device may downsample the third image and the fourth image, and then perform the subsequent processing, such as feature point detection and RANSAC screening, on the two downsampled images. The downsampling ratio can be chosen as a trade-off between the algorithm performance requirement and the image quality requirement, and the embodiment of the present invention is not specifically limited in this respect.
It should be noted that, in the embodiment of the present invention, the method for the electronic device to perform global registration on the third image and the fourth image is not limited to the above-mentioned exemplary method, and may also be other global registration methods, which is not specifically limited in this embodiment of the present invention. For example, the global registration method may also be an image gray scale and template based method, or an image domain transformation based method, etc.
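The sample-score-refit loop of RANSAC can be sketched as follows. For brevity the model here is a pure 2-D translation rather than the 3x3 homography of step 302a (a full homography needs 4-point samples and a DLT fit), but the iterative structure is identical; the function name and parameters are illustrative:

```python
import numpy as np

def ransac_translation(src, dst, iters=200, thresh=2.0, seed=0):
    """RANSAC over point matches (Nx2 src, Nx2 dst). Simplified model:
    a translation, fitted from a minimal sample of one match."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))        # minimal sample: 1 match
        t = dst[i] - src[i]               # hypothesis
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on all inliers of the best hypothesis
    t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return t, best_inliers
```

Outlying matches (mismatched feature points) receive large residuals and are excluded from the final fit.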
And step 303, the electronic device acquires a target pixel value of the fourth image according to the homography matrix by taking the third image as a reference.
The target pixel value of the fourth image is the pixel value, found by the electronic device through the homography matrix, that corresponds to the third image.
And step 304, the electronic equipment performs an interpolation algorithm on the fourth image according to the target pixel value to obtain a fifth image.
Optionally, through the above steps 303 and 304, the electronic device may warp the content of the fourth image into the coordinate frame of the third image, that is, perform a warp transformation on the content of the fourth image.
Optionally, the interpolation algorithm may be a near-neighborhood interpolation method, a bilinear interpolation method, or the like, which is not limited in the embodiment of the present invention and may be determined according to actual requirements.
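Steps 303 and 304 amount to inverse-mapping each output pixel through the homography and sampling the source image with the chosen interpolation. A minimal numpy sketch (grayscale image, bilinear interpolation, zero padding outside the frame; the function name is illustrative):

```python
import numpy as np

def warp_homography(img, H, out_shape):
    """Inverse-map every output pixel through H^-1 and sample the
    source image with bilinear interpolation."""
    oh, ow = out_shape
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:oh, 0:ow]
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    src = Hinv @ pts
    sx, sy = src[0] / src[2], src[1] / src[2]
    x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
    fx, fy = sx - x0, sy - y0
    out = np.zeros(oh * ow)
    v = (x0 >= 0) & (y0 >= 0) & (x0 < img.shape[1] - 1) & (y0 < img.shape[0] - 1)
    i00 = img[y0[v], x0[v]]
    i01 = img[y0[v], x0[v] + 1]
    i10 = img[y0[v] + 1, x0[v]]
    i11 = img[y0[v] + 1, x0[v] + 1]
    out[v] = (i00 * (1 - fx[v]) * (1 - fy[v]) + i01 * fx[v] * (1 - fy[v])
              + i10 * (1 - fx[v]) * fy[v] + i11 * fx[v] * fy[v])
    return out.reshape(oh, ow)
```

With the identity homography the interior of the image passes through unchanged; a general H resamples the fourth image into the third image's frame.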
And 305, the electronic device performs local alignment on the fifth image and the third image to obtain a sixth image.
The electronic device may use the third image as a reference image, and locally align the fifth image with the third image to obtain a sixth image.
Optionally, the method for locally aligning the image in the embodiment of the present invention may be a block matching method based on a pyramid, or a motion vector detection method based on a dense optical flow, and the like, which is not specifically limited in the embodiment of the present invention.
Specifically, in a scenario where the local alignment method is a pyramid-based block matching method, the step 305 may be implemented by the following steps C1 to C3:
and step C1, the electronic device constructs the fifth image into a first target pyramid and constructs the third image into a second target pyramid.
And step C2, the electronic device acquires the motion vector of the first target pyramid and acquires the motion vector of the second target pyramid.
And step C3, the electronic device performs local alignment on the first target pyramid according to the motion vector of the first target pyramid and the motion vector of the second target pyramid by taking the second target pyramid as a reference, so as to obtain a sixth image.
It should be noted that a pyramid of an image is a set of images arranged in a pyramid shape with gradually decreasing resolution. The bottom of the pyramid is a high-resolution representation of the image to be processed, while the top is a low-resolution approximation. Moving toward the upper layers of the pyramid, both size and resolution decrease. The base level J (i.e., the bottom image of the pyramid) has size 2^J × 2^J, or N × N with J = log2 N, and an intermediate level j has size 2^j × 2^j, where 0 ≤ j ≤ J. A complete pyramid consists of J + 1 resolution levels, from 2^J × 2^J down to 2^0 × 2^0, but most pyramids use only P + 1 levels, j = J − P, …, J − 2, J − 1, J, with 1 ≤ P ≤ J. That is, they are generally limited to P levels of approximation to reduce the size of the original image approximation.
Specifically, in the pyramid-based block matching method, the electronic device may perform coarse motion vector search on a low-resolution image in the pyramid for the first target pyramid and the second target pyramid, then transmit the detected candidate motion vectors to a next layer of the pyramid, and perform fine motion vector search on a high-resolution image to obtain a final motion vector of each image block. And then, locally aligning the images according to the motion vector, and obtaining an aligned sixth image.
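The coarse-to-fine search just described can be sketched for a single block in numpy (a simplified two-level sketch: sum of absolute differences as the matching cost, averaging as the downsampler; a real implementation runs per block over many pyramid levels, and all function names are illustrative):

```python
import numpy as np

def downsample2(img):
    """Half-resolution image by averaging 2x2 blocks."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def block_sad(ref, tgt, y, x, dy, dx, b):
    """SAD between a bxb block of ref at (y, x) and tgt shifted by
    (dy, dx); infinite cost when the shift leaves the frame."""
    yy, xx = y + dy, x + dx
    if yy < 0 or xx < 0 or yy + b > tgt.shape[0] or xx + b > tgt.shape[1]:
        return np.inf
    return np.abs(ref[y:y+b, x:x+b] - tgt[yy:yy+b, xx:xx+b]).sum()

def pyramid_block_match(ref, tgt, y, x, b=8, radius=2):
    """Coarse search on the half-resolution level, then refine the
    doubled candidate vector at full resolution."""
    ref2, tgt2 = downsample2(ref), downsample2(tgt)
    best, cdy, cdx = np.inf, 0, 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            s = block_sad(ref2, tgt2, y // 2, x // 2, dy, dx, b // 2)
            if s < best:
                best, cdy, cdx = s, dy, dx
    best, fdy, fdx = np.inf, 2 * cdy, 2 * cdx
    for dy in range(2 * cdy - 1, 2 * cdy + 2):
        for dx in range(2 * cdx - 1, 2 * cdx + 2):
            s = block_sad(ref, tgt, y, x, dy, dx, b)
            if s < best:
                best, fdy, fdx = s, dy, dx
    return fdy, fdx
```

Searching a ±2 radius at half resolution covers a ±4 displacement at full resolution for a fraction of the cost, which is the point of the pyramid.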
Among dense optical flow-based motion vector detection methods, LK (Lucas-Kanade) optical flow, Farneback optical flow, Horn-Schunck optical flow, SimpleFlow optical flow, and the like are currently the more common optical flow methods. Dense optical flow must interpolate between relatively easy-to-track pixels to resolve pixels whose motion is ambiguous, so it is computationally expensive. Sparse optical flow, by contrast, specifies a set of points to track beforehand (easily trackable points, such as corner points) and then tracks their motion using, for example, a pyramidal LK optical flow algorithm.
Taking LK optical flow as an example:
it is based on the following assumptions:
(1) brightness constancy, namely the brightness of the same point does not change over time. This is the basic assumption of optical flow (which all optical flow variants must satisfy), used to obtain the basic optical flow equation;
(2) small motion, that is, changes over time do not cause drastic changes in position, so that partial derivatives of the gray scale with respect to position can be taken; this is also an indispensable assumption of optical flow methods;
(3) spatial consistency: neighboring points in a scene project to neighboring points in the image, and those neighboring points have the same velocity. This is a specific assumption of the Lucas-Kanade method, because the basic optical flow equation provides only one constraint, while the velocities in the x and y directions are two unknowns. Assuming that the neighborhood of a feature point undergoes similar motion, n equations can be stacked (n being the number of pixels in the feature point's neighborhood, including the feature point itself) to solve for the velocity in the x and y directions. The equations are then solved jointly.
Optionally, in a scenario in which the local alignment method is a dense optical flow-based motion vector detection method, the electronic device may stack multiple equations for feature points in the fifth image and the third image, solve them to obtain the motion vectors determined in the fifth image, and then locally align the fifth image with the third image to obtain a sixth image.
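Assumption (3) turned into code: stacking the constraint Ix·vx + Iy·vy = −It over a window and solving the least-squares system gives a single-level, non-iterative LK estimate at one point (a simplified sketch of the method named above; the function name is illustrative):

```python
import numpy as np

def lk_flow_at(img1, img2, y, x, win=7):
    """Estimate optical flow (vx, vy) at one point by stacking the
    gradient constraint Ix*vx + Iy*vy = -It over a win x win
    neighborhood and solving the least-squares system."""
    r = win // 2
    ys, xs = slice(y - r, y + r + 1), slice(x - r, x + r + 1)
    # central-difference spatial gradients, forward temporal difference
    Ix = (np.roll(img1, -1, axis=1) - np.roll(img1, 1, axis=1))[ys, xs] / 2
    Iy = (np.roll(img1, -1, axis=0) - np.roll(img1, 1, axis=0))[ys, xs] / 2
    It = (img2 - img1)[ys, xs]
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)  # n x 2 system
    b = -It.ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v  # (vx, vy)
```

The window must contain varying gradient directions for the 2x2 system to be well conditioned (the aperture problem), which is why corner points are preferred for sparse tracking.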
It can be understood that locally aligning the fifth image with the third image can, to some extent, prevent content differences between the fifth image and the third image from introducing additional distortion or noise into the third image.
Step 306, the electronic device performs ghost detection on the sixth image to determine a ghost area of the sixth image.
And 307, the electronic device fills the ghost area with the third image to obtain a seventh image.
Note that a ghost area in an image is an area where ghosting appears during shooting. Specifically, when the electronic device initially models the background of an image (e.g., the image represented by the first raw data or the image represented by the second raw data), moving objects may be part of that background; after they move, they leave ghosts, so a ghost area may exist in the sixth image. Likewise, when a moving object in the shooting scene stops and then starts moving again, a ghost is generated, so a ghost area exists in the image (such as the sixth image). Other ghost-producing situations include objects left behind in the background, or moving objects that stop moving.
Specifically, the electronic device may perform ghost detection on the sixth image through the following steps: the electronic device judges foreground motion attributes obtained by the background difference in the sixth image, so as to distinguish the moving target from the ghost, namely, determine a ghost area of the sixth image. The electronic device may then update the pixels of the ghosted region to the background using adaptive background maintenance and updating.
The electronic device may obtain foreground blocks in the sixth image by the background-difference method, and then classify each foreground block as a moving object, a ghost, or a shadow. Specifically, the electronic device may approximate the optical flow of foreground pixels using a spatio-temporal difference equation to determine the average optical flow of each foreground block, so as to distinguish moving objects from ghosts: a moving-object block exhibits non-negligible motion, while a ghost block is stationary, with an average optical flow of almost zero.
In particular, filling the ghost area with the third image means that the ghost area in the sixth image may be filled with the corresponding background content of the third image.
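The classification and the fill of steps 306-307 can be sketched in a few lines (illustrative function names; the flow-magnitude threshold is an assumed parameter, and a real implementation would compute the per-block flow as described above):

```python
import numpy as np

def classify_block(flow_mag_block, thresh=0.5):
    """A foreground block whose mean optical-flow magnitude is near
    zero is labeled a ghost; otherwise it is a moving object."""
    return "ghost" if flow_mag_block.mean() < thresh else "moving"

def fill_ghost(aligned, reference, ghost_mask):
    """Step 307: replace detected ghost pixels in the aligned (sixth)
    image with the co-located pixels of the reference (third) image.
    ghost_mask is a boolean HxW array, True where ghosting was found."""
    out = aligned.copy()
    out[ghost_mask] = reference[ghost_mask]
    return out
```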
And 308, the electronic equipment performs multi-resolution fusion on the seventh image and the third image to obtain an eighth image.
Optionally, the multi-resolution fusion method may adopt a method based on a laplacian pyramid, a method based on wavelet decomposition, or the like, which is not specifically limited in this embodiment of the present invention and may be determined according to actual requirements.
The multi-resolution fusion method decomposes multiple images to different scales and resolutions, and fuses the low-frequency energy information and the high-frequency detail information separately.
For example, in a scenario where the multi-resolution fusion method is a laplacian pyramid-based method, the step 308 can be implemented by steps D1 to D7 shown in fig. 10:
and D1, the electronic device constructs a Laplacian image pyramid of the seventh image to obtain a first pyramid, and constructs a Laplacian image pyramid of the third image to obtain a second pyramid.
The laplacian image pyramid is used to reconstruct the upper, un-downsampled image from a lower-layer image of the pyramid; in digital image processing it allows the image to be restored to the maximum extent, i.e., it stores the prediction residual.
Specifically, the pyramid is constructed by convolving layer i (denoted G_i) with a Gaussian kernel and deleting all even rows and even columns to obtain G_i+1; that is, the (i+1)-th layer is generated from the i-th layer of the pyramid. The newly obtained image thus has one quarter the area of its source. The entire laplacian image pyramid can be constructed by repeating this operation on the input image (i.e., G_0, the seventh image or the third image) and recording, at each level, the residual between G_i and the upsampled G_i+1.
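The construct-and-reconstruct round trip can be sketched in numpy. This is a simplified sketch: nearest-neighbor up/down-sampling and a separable 1-2-1 smoothing kernel stand in for the Gaussian REDUCE/EXPAND operators the patent alludes to, and the function names are illustrative. Because each residual is computed against the same upsampling used for reconstruction, the round trip is exact:

```python
import numpy as np

def blur(img):
    """Separable 1-2-1 smoothing with edge replication (a stand-in
    for the Gaussian kernel)."""
    p = np.pad(img, 1, mode="edge")
    img = (p[:-2, 1:-1] + 2 * p[1:-1, 1:-1] + p[2:, 1:-1]) / 4
    p = np.pad(img, ((0, 0), (1, 1)), mode="edge")
    return (p[:, :-2] + 2 * p[:, 1:-1] + p[:, 2:]) / 4

def build_laplacian_pyramid(img, levels):
    """G_{i+1} = subsample(blur(G_i)); L_i = G_i - upsample(G_{i+1}).
    The top level stores the residual low-pass image."""
    pyr = []
    for _ in range(levels - 1):
        down = blur(img)[::2, ::2]  # delete odd rows and columns
        up = np.repeat(np.repeat(down, 2, 0), 2, 1)[:img.shape[0], :img.shape[1]]
        pyr.append(img - up)        # prediction residual
        img = down
    pyr.append(img)
    return pyr

def reconstruct(pyr):
    """Invert the decomposition: upsample the top, add each residual."""
    img = pyr[-1]
    for lap in reversed(pyr[:-1]):
        img = np.repeat(np.repeat(img, 2, 0), 2, 1)[:lap.shape[0], :lap.shape[1]] + lap
    return img
```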
And D2, the electronic device performs texture intensity detection on the seventh image to obtain a ninth image, and performs texture intensity detection on the third image to obtain a tenth image.
Wherein the image texture describes the spatial color distribution and the light intensity distribution of the image or of a small region therein.
It is understood that the electronic device may perform texture intensity detection on a source image (e.g., the seventh image) to obtain a result image (e.g., the ninth image) that reflects the characteristics of the edge and other portions of the source image.
Specifically, the value of the pixel in the ninth image may reflect the value of the texture intensity of the pixel, and the value of the pixel in the tenth image may reflect the value of the texture intensity of the pixel.
And D3, converting the texture intensity of the ninth image into a weight value by the electronic equipment to obtain an eleventh image, and converting the texture intensity of the tenth image into a weight value by the electronic equipment to obtain a twelfth image.
Specifically, the value of a pixel in the eleventh image may reflect the weight value of that pixel, and the value of a pixel in the twelfth image may reflect the weight value of that pixel.
It should be noted that the weight value of each region (e.g., each pixel) in the eleventh image and the weight value of each region (e.g., each pixel) in the twelfth image are normalized, that is, for each region in the eleventh image and the corresponding region in the twelfth image, the two weight values sum to 1.
And D4, the electronic equipment constructs the Gaussian pyramid of the eleventh image to obtain a third pyramid, and constructs the Gaussian pyramid of the twelfth image to obtain a fourth pyramid.
The step D1 and the steps D2 to D4 may be executed in parallel.
The Gaussian pyramid obtains a series of downsampled images through Gaussian smoothing and subsampling: the (K+1)-th level of the Gaussian pyramid is obtained by smoothing and subsampling the K-th level. The Gaussian pyramid thus corresponds to a series of low-pass filters whose cutoff frequency increases by a factor of 2 from the upper layer to the lower layer, so the Gaussian pyramid can span a large frequency range.
Similarly, for the process of constructing the gaussian pyramid, reference may be made to the process of constructing the laplacian pyramid in the above embodiment, which is not described again in the embodiments of the present invention.
And D5, the electronic equipment constructs a Laplacian image pyramid according to the first pyramid and the third pyramid to obtain a fifth pyramid, and constructs the Laplacian image pyramid according to the second pyramid and the fourth pyramid to obtain a sixth pyramid.
Wherein the electronic device may multiply corresponding layers of the first pyramid and the third pyramid to construct the fifth pyramid, and multiply corresponding layers of the second pyramid and the fourth pyramid to construct the sixth pyramid.
And D6, the electronic equipment constructs a Laplacian image pyramid according to the fifth pyramid and the sixth pyramid to obtain a seventh pyramid.
Wherein the electronic device may superimpose corresponding layers of the fifth pyramid and the sixth pyramid to obtain the seventh pyramid.
And D7, carrying out Laplacian image pyramid reconstruction on the seventh pyramid by the electronic equipment to obtain an eighth image.
And the electronic equipment reconstructs the seventh pyramid into an image to obtain an eighth image.
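Steps D1 to D7 can be condensed into a small numpy sketch. This is illustrative only: plain subsampling replaces Gaussian REDUCE, local gradient magnitude stands in for the patent's texture-intensity detector, and all function names are hypothetical. It builds the two Laplacian pyramids (D1), the normalized weight maps (D2-D3) and their Gaussian pyramids (D4), multiplies and sums per level (D5-D6), and reconstructs (D7):

```python
import numpy as np

def down(img):
    return img[::2, ::2]

def up(img, shape):
    return np.repeat(np.repeat(img, 2, 0), 2, 1)[:shape[0], :shape[1]]

def texture_weights(a, b, eps=1e-6):
    """Texture intensity via local gradient magnitude (steps D2-D3),
    normalized so the two weight maps sum to 1 at every pixel."""
    def grad_mag(img):
        gy, gx = np.gradient(img)
        return np.hypot(gx, gy)
    wa, wb = grad_mag(a) + eps, grad_mag(b) + eps
    s = wa + wb
    return wa / s, wb / s

def fuse_multires(a, b, levels=3):
    """Laplacian pyramids of the two images, Gaussian pyramids of
    their weights, per-level weighted sum, then reconstruction."""
    wa, wb = texture_weights(a, b)
    lap_a, lap_b, gw_a, gw_b = [], [], [], []
    for _ in range(levels - 1):
        da, db = down(a), down(b)
        lap_a.append(a - up(da, a.shape))
        lap_b.append(b - up(db, b.shape))
        gw_a.append(wa)
        gw_b.append(wb)
        a, b, wa, wb = da, db, down(wa), down(wb)
    lap_a.append(a); lap_b.append(b); gw_a.append(wa); gw_b.append(wb)
    fused = [la * w1 + lb * w2
             for la, lb, w1, w2 in zip(lap_a, lap_b, gw_a, gw_b)]
    out = fused[-1]
    for lev in reversed(fused[:-1]):
        out = up(out, lev.shape) + lev
    return out
```

Because the weights are normalized per pixel, fusing an image with itself returns it unchanged, and strongly textured regions of either input dominate the fused detail.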
In addition, in the scenario of the multi-resolution fusion method based on wavelet decomposition, first, the electronic device may perform wavelet decomposition on the registered source images (i.e., the third image and the seventh image), which is equivalent to filtering with a set of high-low pass filters, and separate out high-frequency information and low-frequency information of the source images. Secondly, the electronic equipment can adopt different fusion strategies to extract the characteristic information in respective transform domains according to the obtained information characteristics of the high-frequency information and the low-frequency information obtained by decomposing each layer, and respectively perform fusion. And finally, performing inverse transformation on the processed wavelet coefficient to reconstruct an image, thereby obtaining a fused image (namely an eighth image).
Step 309, the electronic device obtains a target image according to the color images of the eighth image and the second image.
Specifically, the electronic device may superimpose the color image of the eighth image and the color image of the second image to obtain the target image.
Wherein the electronic device takes the eighth image as the luminance component (i.e., Y component) of the target image and the color image of the second image as the color component (i.e., UV component) of the target image.
It should be noted that, in the embodiment of the present invention, the electronic device may fuse the luminance image of the first image (the third image) and the luminance image of the second image (the fourth image) through multiple image processing steps, such as global registration, local alignment, and multi-resolution fusion, to obtain a luminance image with higher definition, and then superimpose this luminance image with the color image of the second image to obtain a target image with higher definition.
Fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. The electronic device shown in fig. 11 includes: a first processing module 11a and a second processing module 11 b; the first processing module 11a is configured to, in a first working mode, acquire first raw data through an image sensor to obtain a first image; acquiring second original data through an image sensor in a second working mode to obtain a second image, wherein the first image and the second image have different resolutions; and the second processing module 11b is configured to fuse the first image and the second image obtained by the first processing module 11a to obtain a target image.
Optionally, the second processing module 11b is specifically configured to fuse the first image and the second image on the luminance channel to obtain the target image.
Optionally, the first operating mode is to merge N × N pixels that are adjacent and have the same color in the image, and the second operating mode is to control the size of a single pixel in the image to be unchanged; or the second working mode is to combine the adjacent N × N pixels with the same color in the image, and the first working mode is to control the size of a single pixel in the image to be unchanged; wherein N is a positive integer greater than or equal to 2.
Optionally, in the case of acquiring the first raw data, the exposure parameter of the image sensor is a first exposure parameter; under the condition of collecting second original data, the exposure parameter of the image sensor is a second exposure parameter; wherein the first exposure parameter is the same as the second exposure parameter.
Optionally, the first operating mode is to merge N × N pixels that are adjacent and have the same color in the image, and the second operating mode is to control the size of a single pixel in the image to be unchanged; the second processing module 11b is specifically configured to obtain a luminance component of the first image to obtain a third image, and obtain a luminance component of the second image to obtain a fourth image; carrying out global registration on the third image and the fourth image to obtain a homography matrix; taking the third image as a reference, and acquiring a target pixel value of the fourth image according to the homography matrix; performing an interpolation algorithm on the fourth image according to the target pixel value to obtain a fifth image; locally aligning the fifth image and the third image to obtain a sixth image; executing ghost detection on the sixth image, and determining a ghost area of the sixth image; filling a ghost area by using the third image to obtain a seventh image; performing multi-resolution fusion on the seventh image and the third image to obtain an eighth image; and obtaining a target image according to the color images of the eighth image and the second image.
Optionally, the second processing module 11b is specifically configured to construct a laplacian image pyramid of the seventh image to obtain a first pyramid, and construct a laplacian image pyramid of the third image to obtain a second pyramid; perform texture intensity detection on the seventh image to obtain a ninth image, and perform texture intensity detection on the third image to obtain a tenth image; convert the texture intensity of the ninth image into a weight value to obtain an eleventh image, and convert the texture intensity of the tenth image into a weight value to obtain a twelfth image; construct the Gaussian pyramid of the eleventh image to obtain a third pyramid, and construct the Gaussian pyramid of the twelfth image to obtain a fourth pyramid; construct a laplacian image pyramid according to the first pyramid and the third pyramid to obtain a fifth pyramid, and construct a laplacian image pyramid according to the second pyramid and the fourth pyramid to obtain a sixth pyramid; construct a laplacian image pyramid according to the fifth pyramid and the sixth pyramid to obtain a seventh pyramid; and perform laplacian image pyramid reconstruction on the seventh pyramid to obtain an eighth image.
Optionally, the second processing module 11b is specifically configured to perform feature point detection and random sampling consensus RANSAC screening on the third image and the fourth image, respectively, to obtain a homography matrix.
Optionally, the second processing module 11b is specifically configured to construct the fifth image into a first target pyramid, and construct the third image into a second target pyramid; acquire a motion vector of the first target pyramid, and acquire a motion vector of the second target pyramid; and, taking the second target pyramid as a reference, locally align the first target pyramid according to the motion vector of the first target pyramid and the motion vector of the second target pyramid to obtain a sixth image.
The electronic device 11 provided in the embodiment of the present invention can implement each process implemented by the electronic device in the above method embodiments, and is not described here again to avoid repetition.
It should be noted that, in the electronic device provided in the embodiment of the present invention, in the first working mode, the first raw data may be acquired by the image sensor to obtain the first image; in the second working mode, the second raw data may be acquired by the image sensor to obtain the second image; furthermore, the first image and the second image, which have different resolutions, can be fused to obtain the target image. The fused target image not only has the high signal-to-noise ratio of the lower-resolution image, but also has the high definition and strong detail resolution capability of the higher-resolution image; therefore, compared with the first image and the second image, the fused target image has high signal-to-noise ratio, high definition and strong detail resolution capability. That is, the quality of the image obtained by the electronic device through the image sensor is improved.
Fig. 12 is a schematic diagram of a hardware structure of an electronic device 100 according to an embodiment of the present invention, where the electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 12 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a wearable device, a pedometer, and the like.
The processor 110 is configured to, in a first working mode, acquire first raw data through an image sensor to obtain a first image; acquiring second original data through an image sensor in a second working mode to obtain a second image, wherein the first image and the second image have different resolutions; and fusing the first image and the second image to obtain a target image.
It should be noted that, in the electronic device provided in the embodiment of the present invention, in the first working mode, the first raw data may be acquired by the image sensor to obtain the first image; in the second working mode, the second raw data may be acquired by the image sensor to obtain the second image; furthermore, the first image and the second image, which have different resolutions, can be fused to obtain the target image. The fused target image not only has the high signal-to-noise ratio of the lower-resolution image, but also has the high definition and strong detail resolution capability of the higher-resolution image; therefore, compared with the first image and the second image, the fused target image has high signal-to-noise ratio, high definition and strong detail resolution capability. That is, the quality of the image obtained by the electronic device through the image sensor is improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 102, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the electronic apparatus 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include a graphics processing unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 can receive sound and process it into audio data. In a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 and output.
The electronic device 100 also includes at least one sensor 105, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the electronic device 100 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. Touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 1071 (e.g., operations by a user on or near touch panel 1071 using a finger, stylus, or any suitable object or attachment). The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 12, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the electronic device, and is not limited herein.
The interface unit 108 is an interface for connecting an external device to the electronic apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 100 or may be used to transmit data between the electronic apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 110 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the electronic device. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The electronic device 100 may further include a power source 111 (such as a battery) for supplying power to each component, and preferably, the power source 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the electronic device 100 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, including a processor 110, a memory 109, and a computer program stored in the memory 109 and executable on the processor 110. When executed by the processor 110, the computer program implements each process of the foregoing method embodiment and can achieve the same technical effect; to avoid repetition, details are not repeated here.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the processes of the method embodiments, and can achieve the same technical effects, and in order to avoid repetition, the details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (18)

1. An image processing method applied to an electronic device, the method comprising:
acquiring first original data through an image sensor in a first working mode to obtain a first image;
acquiring second original data through the image sensor in a second working mode to obtain a second image, wherein the first image and the second image have different resolutions;
and fusing the first image and the second image to obtain a target image.
2. The method of claim 1, wherein said fusing the first image and the second image to obtain a target image comprises:
and fusing the first image and the second image on a brightness channel to obtain the target image.
3. The method of claim 1, wherein the first working mode is to merge adjacent N × N pixels of the same color in the image, and the second working mode is to keep the size of a single pixel in the image unchanged;
or the second working mode is to merge adjacent N × N pixels of the same color in the image, and the first working mode is to keep the size of a single pixel in the image unchanged;
wherein N is a positive integer greater than or equal to 2.
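The "merge adjacent N × N pixels of the same color" working mode of claim 3 amounts to block-averaging within a color plane. A minimal NumPy sketch, operating on a single extracted color plane (a simplification of true in-mosaic Bayer binning):

```python
import numpy as np

def bin_pixels(plane, n=2):
    # Average each adjacent n x n block of one colour plane, emulating the
    # "merge adjacent same-colour N x N pixels" working mode.  Real sensors
    # bin inside the Bayer mosaic; a separated colour plane is a simplification.
    h, w = plane.shape
    return plane.reshape(h // n, n, w // n, n).mean(axis=(1, 3))

plane = np.arange(16, dtype=float).reshape(4, 4)
binned = bin_pixels(plane)
print(binned)  # 2 x 2 result; each value averages one 2 x 2 block
```

Binning trades resolution for signal-to-noise ratio: averaging n² pixels reduces read-out noise per output pixel, which is why the binned capture serves as the high-SNR input to the later fusion.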
4. The method of claim 1,
under the condition of acquiring the first original data, the exposure parameter of the image sensor is a first exposure parameter;
under the condition of acquiring the second original data, the exposure parameter of the image sensor is a second exposure parameter;
wherein the first exposure parameter is the same as the second exposure parameter.
5. The method of claim 3, wherein the first working mode is to merge adjacent N × N pixels of the same color in the image, and the second working mode is to keep the size of a single pixel in the image unchanged;
the fusing the first image and the second image to obtain the target image includes:
acquiring the brightness component of the first image to obtain a third image, and acquiring the brightness component of the second image to obtain a fourth image;
carrying out global registration on the third image and the fourth image to obtain a homography matrix;
taking the third image as a reference, and acquiring a target pixel value of the fourth image according to the homography matrix;
performing an interpolation algorithm on the fourth image according to the target pixel value to obtain a fifth image;
locally aligning the fifth image and the third image to obtain a sixth image;
performing ghost detection on the sixth image, and determining a ghost area of the sixth image;
filling the ghost area with the third image to obtain a seventh image;
performing multi-resolution fusion on the seventh image and the third image to obtain an eighth image;
and obtaining the target image according to the color images of the eighth image and the second image.
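The ghost-detection and ghost-filling steps of claim 5 can be sketched as a per-pixel difference threshold against the reference image; the threshold value and function names below are assumptions for illustration only:

```python
import numpy as np

def fill_ghosts(aligned, reference, threshold=25.0):
    # Pixels whose values differ strongly between the locally aligned image
    # and the reference are flagged as ghosts and replaced from the
    # reference, matching the fill step of claim 5.  The threshold is an
    # assumed value for illustration.
    ghost_mask = np.abs(aligned - reference) > threshold
    out = aligned.copy()
    out[ghost_mask] = reference[ghost_mask]
    return out, ghost_mask

ref = np.zeros((3, 3))
ali = ref.copy()
ali[1, 1] = 80.0                # a moving object leaves one ghost pixel
filled, mask = fill_ghosts(ali, ref)
print(int(mask.sum()))  # 1
```

Filling ghost regions from the reference image prevents moving objects from appearing twice in the fused output.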
6. The method of claim 5, wherein the performing multi-resolution fusion on the seventh image and the third image to obtain an eighth image comprises:
constructing a Laplacian image pyramid of the seventh image to obtain a first pyramid, and constructing a Laplacian image pyramid of the third image to obtain a second pyramid;
performing texture intensity detection on the seventh image to obtain a ninth image, and performing texture intensity detection on the fifth image to obtain a tenth image;
converting the texture intensity of the ninth image into a weight value to obtain an eleventh image, and converting the texture intensity of the tenth image into a weight value to obtain a twelfth image;
constructing the Gaussian pyramid of the eleventh image to obtain a third pyramid, and constructing the Gaussian pyramid of the twelfth image to obtain a fourth pyramid;
constructing a Laplacian image pyramid according to the first pyramid and the third pyramid to obtain a fifth pyramid, and constructing a Laplacian image pyramid according to the second pyramid and the fourth pyramid to obtain a sixth pyramid;
constructing a Laplacian image pyramid according to the fifth pyramid and the sixth pyramid to obtain a seventh pyramid;
and performing Laplacian image pyramid reconstruction on the seventh pyramid to obtain the eighth image.
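Claim 6's multi-resolution fusion — Laplacian pyramids of the two inputs weighted level-by-level by Gaussian pyramids of the texture-derived weight maps, then collapsed — can be sketched as follows. The 2×2 mean downsample and nearest-neighbour upsample stand in for the usual Gaussian filtering, and inputs are assumed to have power-of-two dimensions:

```python
import numpy as np

def down(img):
    # 2x2 mean downsample (stands in for Gaussian blur + decimation).
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(img):
    # Nearest-neighbour upsample (stands in for Gaussian expansion).
    return np.kron(img, np.ones((2, 2)))

def laplacian_pyramid(img, levels):
    pyr, cur = [], img
    for _ in range(levels - 1):
        nxt = down(cur)
        pyr.append(cur - up(nxt))   # band-pass detail at this level
        cur = nxt
    pyr.append(cur)                 # low-pass residual
    return pyr

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels - 1):
        pyr.append(down(pyr[-1]))
    return pyr

def fuse(img_a, img_b, weight_a, levels=3):
    # Weight each Laplacian level of the two inputs by the matching
    # Gaussian level of the weight map, sum, then collapse the pyramid.
    la = laplacian_pyramid(img_a, levels)
    lb = laplacian_pyramid(img_b, levels)
    ga = gaussian_pyramid(weight_a, levels)
    fused = [w * a + (1.0 - w) * b for a, b, w in zip(la, lb, ga)]
    out = fused[-1]
    for level in reversed(fused[:-1]):
        out = up(out) + level
    return out

flat = np.full((8, 8), 5.0)
tex = np.full((8, 8), 9.0)
print(fuse(flat, tex, np.ones((8, 8)))[0, 0])   # weight 1 keeps flat
```

Blending in the pyramid domain rather than per pixel avoids visible seams: low frequencies are mixed over wide areas while fine detail switches sharply where the texture weights dictate.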
7. The method of claim 5, wherein the global registration of the third image and the fourth image to obtain a homography matrix comprises:
and performing feature point detection and random sample consensus (RANSAC) screening on the third image and the fourth image, respectively, to obtain the homography matrix.
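Claim 7 estimates the homography from feature matches screened by RANSAC. The direct-linear-transform core of that estimate, applied to already-clean correspondences, might look like this (the feature detection and RANSAC outlier-rejection loop are omitted for brevity):

```python
import numpy as np

def estimate_homography(src, dst):
    # Direct linear transform: each correspondence (x, y) -> (u, v)
    # contributes two rows of the constraint matrix; the homography is the
    # null-space vector, recovered from the SVD.  A full pipeline would run
    # this inside a RANSAC loop over detected feature matches.
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
dst = src + 2.0                 # the second view is translated by (2, 2)
H = estimate_homography(src, dst)
print(np.round(H, 3))
```

For this pure-translation example the recovered matrix is the identity with a (2, 2) translation column, which is the global registration the later warping step consumes.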
8. The method of claim 5, wherein the locally aligning the fifth image with the third image to obtain a sixth image comprises:
constructing the fifth image into a first target pyramid, and constructing the third image into a second target pyramid;
acquiring a motion vector of the first target pyramid, and acquiring a motion vector of the second target pyramid;
and taking the second target pyramid as a reference, locally aligning the first target pyramid according to the motion vector of the first target pyramid and the motion vector of the second target pyramid to obtain the sixth image.
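The per-level motion vectors of claim 8 are typically obtained by block matching at each pyramid level. A translation-only exhaustive search for one level might look like the following sketch (the function name and the sum-of-absolute-differences cost are illustrative assumptions):

```python
import numpy as np

def best_shift(ref, moving, max_shift=2):
    # Exhaustive integer-translation search minimising the sum of absolute
    # differences (SAD); a stand-in for one pyramid level's motion-vector
    # estimate in the local-alignment step.
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(moving, (dy, dx), axis=(0, 1))
            err = np.abs(shifted - ref).sum()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

ref = np.zeros((8, 8))
ref[3, 3] = 1.0
mov = np.roll(ref, (1, 2), axis=(0, 1))   # same scene, shifted by (1, 2)
print(best_shift(ref, mov))  # (-1, -2): the shift that undoes the motion
```

Running the search coarse-to-fine down the pyramid keeps the per-level search window small while still handling large displacements.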
9. An electronic device, characterized in that the electronic device comprises: a first processing module and a second processing module;
the first processing module is used for acquiring first original data through the image sensor in a first working mode to obtain a first image; acquiring second original data through the image sensor in a second working mode to obtain a second image, wherein the first image and the second image have different resolutions;
the second processing module is configured to fuse the first image and the second image obtained by the first processing module to obtain a target image.
10. The electronic device according to claim 9, wherein the second processing module is configured to fuse the first image and the second image on a luminance channel to obtain the target image.
11. The electronic device of claim 9, wherein the first working mode is to merge adjacent N × N pixels of the same color in the image, and the second working mode is to keep the size of a single pixel in the image unchanged;
or the second working mode is to combine adjacent N × N pixels with the same color in the image, and the first working mode is to control the size of a single pixel in the image to be unchanged;
wherein N is a positive integer greater than or equal to 2.
12. The electronic device of claim 9,
under the condition of acquiring the first original data, the exposure parameter of the image sensor is a first exposure parameter;
under the condition of acquiring the second original data, the exposure parameter of the image sensor is a second exposure parameter;
wherein the first exposure parameter is the same as the second exposure parameter.
13. The electronic device of claim 11, wherein the first working mode is to merge adjacent N × N pixels of the same color in the image, and the second working mode is to keep the size of a single pixel in the image unchanged;
the second processing module is specifically configured to obtain a luminance component of the first image to obtain a third image, and obtain a luminance component of the second image to obtain a fourth image;
carrying out global registration on the third image and the fourth image to obtain a homography matrix;
taking the third image as a reference, and acquiring a target pixel value of the fourth image according to the homography matrix;
performing an interpolation algorithm on the fourth image according to the target pixel value to obtain a fifth image;
locally aligning the fifth image and the third image to obtain a sixth image;
performing ghost detection on the sixth image, and determining a ghost area of the sixth image;
filling the ghost area with the third image to obtain a seventh image;
performing multi-resolution fusion on the seventh image and the third image to obtain an eighth image;
and obtaining the target image according to the color images of the eighth image and the second image.
14. The electronic device according to claim 13, wherein the second processing module is specifically configured to construct a laplacian image pyramid of the seventh image to obtain a first pyramid, and construct a laplacian image pyramid of the third image to obtain a second pyramid;
performing texture intensity detection on the seventh image to obtain a ninth image, and performing texture intensity detection on the fifth image to obtain a tenth image;
converting the texture intensity of the ninth image into a weight value to obtain an eleventh image, and converting the texture intensity of the tenth image into a weight value to obtain a twelfth image;
constructing the Gaussian pyramid of the eleventh image to obtain a third pyramid, and constructing the Gaussian pyramid of the twelfth image to obtain a fourth pyramid;
constructing a Laplacian image pyramid according to the first pyramid and the third pyramid to obtain a fifth pyramid, and constructing a Laplacian image pyramid according to the second pyramid and the fourth pyramid to obtain a sixth pyramid;
constructing a Laplacian image pyramid according to the fifth pyramid and the sixth pyramid to obtain a seventh pyramid;
and performing Laplacian image pyramid reconstruction on the seventh pyramid to obtain the eighth image.
15. The electronic device of claim 13, wherein the second processing module is specifically configured to perform feature point detection and random sample consensus (RANSAC) screening on the third image and the fourth image, respectively, to obtain the homography matrix.
16. The electronic device according to claim 13, wherein the second processing module is specifically configured to construct the fifth image as a first target pyramid, and construct the third image as a second target pyramid;
acquiring a motion vector of the first target pyramid, and acquiring a motion vector of the second target pyramid;
and taking the second target pyramid as a reference, locally aligning the first target pyramid according to the motion vector of the first target pyramid and the motion vector of the second target pyramid to obtain the sixth image.
17. An electronic device, comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the image processing method according to any one of claims 1 to 8.
18. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 8.
CN201911077391.6A 2019-11-06 2019-11-06 Image processing method and electronic equipment Active CN110944160B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911077391.6A CN110944160B (en) 2019-11-06 2019-11-06 Image processing method and electronic equipment

Publications (2)

Publication Number Publication Date
CN110944160A true CN110944160A (en) 2020-03-31
CN110944160B CN110944160B (en) 2022-11-04

Family

ID=69907375

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911077391.6A Active CN110944160B (en) 2019-11-06 2019-11-06 Image processing method and electronic equipment

Country Status (1)

Country Link
CN (1) CN110944160B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102090068A (en) * 2008-08-01 2011-06-08 伊斯曼柯达公司 Improved image formation using different resolution images
US20120257079A1 (en) * 2011-04-06 2012-10-11 Dolby Laboratories Licensing Corporation Multi-Field CCD Capture for HDR Imaging
WO2018137267A1 (en) * 2017-01-25 2018-08-02 华为技术有限公司 Image processing method and terminal apparatus
CN109754377A (en) * 2018-12-29 2019-05-14 重庆邮电大学 A kind of more exposure image fusion methods

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113572980A (en) * 2020-04-28 2021-10-29 华为技术有限公司 Photographing method and device, terminal equipment and storage medium
WO2021218551A1 (en) * 2020-04-28 2021-11-04 华为技术有限公司 Photographing method and apparatus, terminal device, and storage medium
CN113572980B (en) * 2020-04-28 2022-10-11 华为技术有限公司 Photographing method and device, terminal equipment and storage medium
EP4131928A4 (en) * 2020-04-28 2023-10-04 Huawei Technologies Co., Ltd. Photographing method and apparatus, terminal device, and storage medium
US11758288B2 (en) 2020-10-21 2023-09-12 Samsung Electronics Co., Ltd. Device for improving image resolution in camera system having lens that permits distortion and operation method thereof
CN113096010A (en) * 2021-03-18 2021-07-09 Oppo广东移动通信有限公司 Image reconstruction method and apparatus, and storage medium
CN115225832A (en) * 2021-04-21 2022-10-21 海信集团控股股份有限公司 Image acquisition equipment, image encryption processing method, equipment and medium
WO2022262291A1 (en) * 2021-06-15 2022-12-22 荣耀终端有限公司 Image data calling method and system for application, and electronic device and storage medium
CN115550541A (en) * 2022-04-22 2022-12-30 荣耀终端有限公司 Camera parameter configuration method and electronic equipment
CN115550541B (en) * 2022-04-22 2024-04-09 荣耀终端有限公司 Camera parameter configuration method and electronic equipment
CN114693580A (en) * 2022-05-31 2022-07-01 荣耀终端有限公司 Image processing method and related device

Also Published As

Publication number Publication date
CN110944160B (en) 2022-11-04

Similar Documents

Publication Publication Date Title
CN110944160B (en) Image processing method and electronic equipment
US20220207680A1 (en) Image Processing Method and Apparatus
US10497097B2 (en) Image processing method and device, computer readable storage medium and electronic device
CN110136183B (en) Image processing method and device and camera device
CN108605099B (en) Terminal and method for terminal photographing
WO2020192483A1 (en) Image display method and device
KR102474715B1 (en) Parallax Mask Fusion of Color and Mono Images for Macrophotography
CN107851307B (en) Method and system for demosaicing bayer-type image data for image processing
CN112449120B (en) High dynamic range video generation method and device
CN108391060B (en) Image processing method, image processing device and terminal
CN107566749B (en) Shooting method and mobile terminal
CN111145192B (en) Image processing method and electronic equipment
US20140320602A1 (en) Method, Apparatus and Computer Program Product for Capturing Images
CN106664351A (en) Method and system of lens shading color correction using block matching
KR20190082080A (en) Multi-camera processor with feature matching
CN109104578B (en) Image processing method and mobile terminal
CN109005314B (en) Image processing method and terminal
CN110766610B (en) Reconstruction method of super-resolution image and electronic equipment
CN110944163A (en) Image processing method and electronic equipment
CN108320265B (en) Image processing method, terminal and computer readable storage medium
CN113284063A (en) Image processing method, image processing apparatus, electronic device, and readable storage medium
WO2022115996A1 (en) Image processing method and device
CN112150357B (en) Image processing method and mobile terminal
CN109729264B (en) Image acquisition method and mobile terminal
WO2017094504A1 (en) Image processing device, image processing method, image capture device, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant