CN112308771A - Image processing method and device and electronic equipment


Info

Publication number
CN112308771A
Authority
CN
China
Prior art keywords
image
processed
images
pixel
format
Prior art date
Legal status
Granted
Application number
CN201910703737.2A
Other languages
Chinese (zh)
Other versions
CN112308771B (en)
Inventor
秦帅
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201910703737.2A
Publication of CN112308771A
Application granted
Publication of CN112308771B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4015 Image demosaicing, e.g. colour filter arrays [CFA] or Bayer patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an image processing method and device and electronic equipment, and belongs to the technical field of electronic equipment. The electronic device acquires at least two frames of images to be processed in a four-pixel integration format and takes one of these frames as a reference image. It then performs information compensation on the reference image based on the images to be processed other than the reference image to obtain a first image, and finally performs mosaic rearrangement processing on the first image to obtain a target image in a Bayer format. Because each acquired image to be processed carries different information, the resulting target image contains more image information, which improves the richness of the image.

Description

Image processing method and device and electronic equipment
Technical Field
The invention belongs to the technical field of electronic equipment, and particularly relates to an image processing method and device and electronic equipment.
Background
With the continuous development of electronic device technology, electronic devices are used more and more widely, and users often use them to capture images. When capturing an image, an electronic device usually acquires a single frame as the final output image.
However, due to the influence of external factors, the captured image may be missing information, which results in a poor-looking image.
Disclosure of Invention
The invention provides an image processing method, an image processing device and electronic equipment, and aims to solve the problem that captured images suffer from information loss and a poor visual effect.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an image processing method, which is applied to an electronic device, and the method may include:
acquiring at least two frames of images to be processed in a four-pixel integration format;
taking one frame of image to be processed in the at least two frames of image to be processed in the four-pixel integration format as a reference image;
performing information compensation on the reference image based on the images to be processed except the reference image in the images to be processed in the at least two frames of four-pixel-in-one format to obtain a first image in the four-pixel-in-one format;
and performing mosaic rearrangement processing on the first image to obtain a target image in a Bayer format.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, which may include:
the acquisition module is used for acquiring at least two frames of images to be processed in a four-pixel integration format;
the selection module is used for taking one frame of image to be processed in the at least two frames of images to be processed in the four-pixel integration format as a reference image;
the first compensation module is used for performing information compensation on the reference image based on the images to be processed except the reference image in the images to be processed in the at least two frames of four-pixel-in-one format to obtain a first image in the four-pixel-in-one format;
and the processing module is used for performing mosaic rearrangement processing on the first image to obtain a target image in a Bayer format.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a processor, a memory, and a computer program stored on the memory and executable on the processor, and when executed by the processor, the computer program implements the steps of the image processing method according to the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the image processing method according to the first aspect.
In the embodiment of the present invention, the electronic device may acquire at least two frames of images to be processed in a four-pixel integration format, take one of these frames as a reference image, perform information compensation on the reference image based on the images to be processed other than the reference image to obtain a first image, and finally perform mosaic rearrangement processing on the first image to obtain a target image in a Bayer format. Because each acquired image to be processed carries different information, the finally obtained target image contains more image information, which improves the richness of the image.
Drawings
FIG. 1 is a flowchart illustrating steps of an image processing method according to an embodiment of the present invention;
FIG. 2-1 is a flow chart illustrating steps of another image processing method according to an embodiment of the present invention;
FIG. 2-2 is a schematic diagram of a first image provided by an embodiment of the invention;
FIG. 2-3 is a schematic diagram of a remosaiced target image according to an embodiment of the present invention;
FIG. 3-1 is a flowchart illustrating steps of another image processing method according to an embodiment of the present invention;
FIG. 3-2 is a schematic diagram of a convolution process according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating steps of another image processing method according to an embodiment of the present invention;
fig. 5 is a block diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 6 is a schematic diagram of a hardware structure of an electronic device implementing various embodiments of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of steps of an image processing method provided by an embodiment of the present invention, where the method may be applied to an electronic device, and as shown in fig. 1, the method may include:
step 101, acquiring at least two frames of images to be processed in a four-pixel integration format.
In the embodiment of the present invention, the images to be processed may be acquired when a shooting operation is received. The shooting operation may be a user trigger of the shooting function of the electronic device, for example a tap on a shutter key. When the shooting operation is received, it can be assumed that the user wants to capture an image of the scene currently detected by the camera, so at least two frames of images to be processed in the four-pixel unification format may be acquired. These frames may record the same picture content while carrying different amounts of image information. Specifically, the frames may be captured in a continuous-shooting manner to ensure that they record the same picture content. During shooting, sampling can be performed with the four-pixel-in-one (4-Cell) technique: for each pixel, the value of one type of color channel is acquired, and the pixels corresponding to the same type of color channel are arranged together, yielding an image to be processed in the four-pixel-in-one format. Further, under the influence of external factors, for example changes in the brightness of the external environment, the amount of image information contained in each acquired frame may differ, so each frame of the image to be processed carries a different amount of image information. Of course, the images to be processed may also be input to the electronic device, which is not limited in the embodiment of the present invention.
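As an illustration only (not taken from the patent text), the following Python sketch prints one common 4-Cell color-filter layout in which each 2x2 block of pixels shares the same color channel; the RGGB ordering of the blocks is an assumption.

```python
import numpy as np

# Illustrative 4-Cell (quad Bayer) color-filter layout: each 2x2 block of
# pixels shares one color channel, and the blocks themselves follow an
# assumed RGGB order.
def four_cell_pattern(height, width):
    """Return an array of channel labels ('R', 'G', 'B') for a 4-Cell sensor."""
    block_colors = np.array([['R', 'G'],
                             ['G', 'B']])          # assumed block-level Bayer order
    pattern = np.empty((height, width), dtype='<U1')
    for y in range(height):
        for x in range(width):
            # Each color block spans 2x2 pixels: divide by 2 to find the block,
            # then wrap around the 2x2 grid of block colors.
            pattern[y, x] = block_colors[(y // 2) % 2, (x // 2) % 2]
    return pattern

print(four_cell_pattern(4, 4))
# [['R' 'R' 'G' 'G']
#  ['R' 'R' 'G' 'G']
#  ['G' 'G' 'B' 'B']
#  ['G' 'G' 'B' 'B']]
```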
Step 102, taking one frame of image to be processed in the at least two frames of images to be processed in the four-pixel integration format as a reference image.
In the embodiment of the present invention, one frame of the images to be processed may be chosen arbitrarily as the reference image, or one frame may be selected as the reference image according to image quality. Selecting one frame from the multiple frames of images to be processed as the reference image makes it convenient for the subsequent steps to take it as the operation object, enrich it with the image information of the other images to be processed, and so obtain the target image.
Step 103, performing information compensation on the reference image based on the images to be processed except the reference image in the images to be processed in the at least two frames of four-pixel-in-one format to obtain a first image in the four-pixel-in-one format.
In the embodiment of the invention, compensation information corresponding to the remaining images to be processed can be obtained and used to compensate the reference image. Because the remaining images to be processed may contain image information that the reference image lacks, performing information compensation on the reference image based on them can offset, to a certain extent, the information loss caused by external factors, so that the finally obtained target image contains richer image information and the image effect is improved.
Step 104, performing mosaic rearrangement on the first image to obtain a target image in a Bayer format.
Specifically, when the first image undergoes mosaic rearrangement (remosaic) processing, the arrangement of the pixels in the first image may be adjusted through pixel position translation and a pixel interpolation algorithm according to the arrangement defined by the Bayer format, so that the pixel arrangement conforms to the Bayer format and the target image is obtained.
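The following sketch (hypothetical, not the patent's algorithm) shows the pixel-position-translation part of remosaic for the 4-Cell layout assumed above, permuting pixels within each 4x4 tile into an RGGB Bayer order; the pixel interpolation the text also mentions is omitted.

```python
import numpy as np

# Hypothetical mapping from target Bayer (RGGB) positions to source 4-Cell
# positions inside each 4x4 tile; every mapped pair has the same color channel.
_TILE_MAP = {
    # R pixels
    (0, 0): (0, 0), (0, 2): (0, 1), (2, 0): (1, 0), (2, 2): (1, 1),
    # G pixels
    (0, 1): (0, 2), (0, 3): (0, 3), (1, 0): (2, 0), (1, 2): (1, 2),
    (2, 1): (2, 1), (2, 3): (1, 3), (3, 0): (3, 0), (3, 2): (3, 1),
    # B pixels
    (1, 1): (2, 2), (1, 3): (2, 3), (3, 1): (3, 2), (3, 3): (3, 3),
}

def remosaic(raw_4cell: np.ndarray) -> np.ndarray:
    """Rearrange a 4-Cell raw frame into Bayer (RGGB) order by position swaps only."""
    h, w = raw_4cell.shape
    assert h % 4 == 0 and w % 4 == 0, "expects dimensions divisible by 4"
    bayer = np.empty_like(raw_4cell)
    for (ty, tx), (sy, sx) in _TILE_MAP.items():
        # Apply the same in-tile permutation to every 4x4 tile of the image.
        bayer[ty::4, tx::4] = raw_4cell[sy::4, sx::4]
    return bayer
```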
In summary, the image processing method according to this embodiment of the present invention acquires at least two frames of images to be processed in a four-pixel integration format, takes one of these frames as a reference image, performs information compensation on the reference image based on the images to be processed other than the reference image to obtain a first image, and finally performs mosaic rearrangement processing on the first image to obtain a target image in a Bayer format. Because each acquired image to be processed carries different information, the finally obtained target image contains more image information, which improves the richness of the image.
Fig. 2-1 is a flowchart of steps of another image processing method provided by an embodiment of the present invention, which may be applied to an electronic device, as shown in fig. 2-1, and the method may include:
step 201, acquiring at least two frames of images to be processed in a four-pixel integration format.
Specifically, the implementation manner of this step may refer to step 101 described above, and details of the embodiment of the present invention are not described herein.
Step 202, inputting each frame of the image to be processed into a pre-trained image selection model.
In this step, the image selection model may be obtained by training with multiple frames of first sample images and the first sample image of highest quality among them. Specifically, the multiple frames of first sample images may be input into an initial image selection model, which extracts image features, predicts the first sample image of highest quality based on those features, and outputs it. A loss value of the initial image selection model is then calculated from the output first sample image and the true highest-quality first sample image, and the parameters of the initial image selection model are adjusted based on the loss value to generate the image selection model. Further, in this step, each frame of the image to be processed is input into the pre-trained image selection model, which can then directly output the image to be processed of highest quality.
Further, when training the image selection model, image numbers may also be used. Specifically, training may be performed with multiple frames of first sample images, the image number of each frame, and the image number of the highest-quality first sample image, so that the trained model outputs the number of the highest-quality image given several input images and their numbers. The multiple frames of first sample images and their image numbers are input into the initial image selection model, which extracts image features, predicts the highest-quality image based on those features, and outputs its image number. A loss value of the initial image selection model is then calculated from the output image number and the image number of the truly highest-quality first sample image. If the loss value is within a preset range, the initial image selection model is considered able to correctly determine the highest-quality image and is taken as the image selection model; if the loss value is not within the preset range, the model cannot yet correctly determine the highest-quality image, so its parameters are adjusted and training continues with the adjusted model until the loss value falls within the preset range. Training with both the images and their numbers means that, at prediction time, the model only needs to output the corresponding image number, and the loss value can be computed from the output number and the number of the truly highest-quality first sample image, which is simpler than computing the loss from whole output images. Furthermore, in the embodiment of the invention, the image to be processed with the best image quality can be determined simply by inputting the numbered images to be processed into the image selection model, without extra calculation, which improves the efficiency of the determination and reduces its cost.
Correspondingly, when the reference image is selected, an image number can be set for each frame of image to be processed, and then, each frame of image to be processed and the image number of the image to be processed are input into the pre-trained image selection model to obtain the number of the image to be processed with the highest image quality.
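A rough PyTorch sketch of the number-based selection idea described above; the network architecture, the way frames and their numbers are fed in (stacked along channels, with the number given by the channel index), and the loss function are all assumptions for illustration.

```python
import torch
import torch.nn as nn

# Sketch of an image selection model trained to output the *number* of the
# highest-quality frame; sizes and shapes are assumptions.
class FrameSelector(nn.Module):
    def __init__(self, num_frames: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(num_frames, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, num_frames)   # one score per image number

    def forward(self, frames):                  # frames: (batch, num_frames, H, W)
        return self.head(self.features(frames))

def train_step(model, optimizer, frames, best_number):
    """frames: stacked first sample images; best_number: number of the true best frame."""
    logits = model(frames)
    loss = nn.functional.cross_entropy(logits, best_number)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()   # training stops once the loss falls inside the preset range

# At inference, the numbered images to be processed are stacked and the model
# returns the number of the highest-quality frame:
#   best_number = model(frames).argmax(dim=1)
```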
Step 203, taking the image to be processed output by the image selection model as the reference image.
In this step, the image to be processed output by the image selection model is taken as the image to be processed of highest image quality, which ensures as far as possible that the reference image has high quality. Of course, in another optional embodiment of the present invention, the highest-quality image to be processed may be obtained in other ways. For example, an image quality index may be calculated for each image to be processed and the highest-quality image determined from it. The index may be the Peak Signal to Noise Ratio (PSNR): the PSNR of each image to be processed is calculated, and the image with the largest PSNR is taken as the highest-quality image. The index may also be the Mean Squared Error (MSE): the MSE of each image to be processed is calculated, and the image with the smallest MSE is taken as the highest-quality image. Further, one frame may be selected as the reference image from the images to be processed whose image quality satisfies a preset quality condition, which is not limited in the embodiment of the present invention.
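A small NumPy sketch of the PSNR/MSE criteria mentioned above. The text does not say what each frame's PSNR or MSE is measured against, so comparing each frame with the per-pixel mean of all frames is an assumption here, as is the 10-bit peak value.

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a, b, peak=1023.0):
    """Peak signal-to-noise ratio; peak=1023 assumes 10-bit raw data."""
    err = mse(a, b)
    return float('inf') if err == 0 else 10.0 * np.log10(peak ** 2 / err)

def pick_reference(frames):
    # Score each frame against the per-pixel mean of all frames (an assumed
    # comparison target) and keep the frame with the highest PSNR.
    mean_frame = np.mean(np.stack(frames), axis=0)
    scores = [psnr(f, mean_frame) for f in frames]
    return int(np.argmax(scores))    # index of the highest-quality frame
```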
Furthermore, any frame may simply be chosen as the reference image, so that the reference image is determined by a selection operation alone, without calculation or other operations; the whole process is then simple and convenient and its cost is low. Specifically, a frame of the image to be processed may be selected at random as the reference image using a preset random selection algorithm.
Step 204, performing information compensation on the reference image based on the images to be processed except the reference image in the images to be processed in the at least two frames of four-pixel-in-one format to obtain a first image in the four-pixel-in-one format.
Specifically, the implementation manner of this step may refer to the foregoing steps, and details are not described herein in this embodiment of the present invention.
Step 205, performing mosaic rearrangement processing on the first image to obtain a target image in a Bayer format.
Fig. 2-2 is a schematic diagram of a first image provided by an embodiment of the present invention; it can be seen that pixels corresponding to the same type of color channel are arranged together. Fig. 2-3 shows the target image after remosaic processing according to an embodiment of the present invention; compared with fig. 2-2, the arrangement of the pixels in fig. 2-3 has changed.
In summary, the image processing method provided in this embodiment of the present invention acquires at least two frames of images to be processed in a four-pixel integration format, inputs each frame into the pre-trained image selection model, and takes the image output by the model as the reference image, so that the highest-quality reference image is obtained through an input operation alone, which improves processing efficiency to a certain extent. Because each acquired image to be processed carries different information, the target image obtained after compensation contains more image information, which improves the richness of the image.
Fig. 3-1 is a flowchart of steps of another image processing method provided by an embodiment of the present invention, which may be applied to an electronic device, as shown in fig. 3-1, and the method may include:
step 301, acquiring at least two frames of images to be processed in a four-pixel integration format.
Specifically, the implementation manner of this step may refer to descriptions in other embodiments, and details of the embodiments of the present invention are not described herein.
Step 302, taking one frame of image to be processed in the at least two frames of image to be processed with the four-pixel integration format as a reference image.
Specifically, the implementation manner of this step may refer to descriptions in other embodiments, and details of the embodiments of the present invention are not described herein.
Step 303, taking the images to be processed except the reference image in the at least two frames of images to be processed in the four-pixel-in-one format as residual images to be processed, inputting each frame of residual images to be processed into a pre-trained convolution kernel generation model, and generating a target convolution kernel corresponding to each frame of residual images to be processed through the convolution kernel generation model.
In this step, the convolution kernel generation model may be trained in advance based on a convolutional neural network. Given the image information contained in an image to be processed, the model can generate a target convolution kernel capable of extracting compensation information from that image. Further, when training the convolution kernel generation model, its parameters may be set so that the dimension of the target convolution kernel it generates for an image to be processed is larger than 1. This ensures that, when the pixel matrix corresponding to an image to be processed is convolved with its target convolution kernel, each pixel and its adjacent pixels are taken into account, so the compensation extracted in the subsequent steps is based on both the pixel and its neighbors. The resulting pixel compensation matrix can therefore express both local and global characteristics of the image to be processed, which improves the compensation effect.
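A minimal PyTorch sketch of what such a convolution kernel generation model could look like, predicting for each input frame one k x k target convolution kernel with k > 1 as required above; the architecture and sizes are assumptions.

```python
import torch
import torch.nn as nn

class KernelGenerator(nn.Module):
    """Sketch of a convolution kernel generation model (architecture assumed).

    Given one residual image to be processed, it predicts a single k x k
    target convolution kernel with k > 1.
    """
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        assert kernel_size > 1, "the target kernel dimension must be larger than 1"
        self.kernel_size = kernel_size
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_kernel = nn.Linear(32, kernel_size * kernel_size)

    def forward(self, frame):                 # frame: (batch, 1, H, W) raw image
        weights = self.to_kernel(self.backbone(frame))
        return weights.view(-1, 1, self.kernel_size, self.kernel_size)
```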
Further, optionally, the convolution kernel generation model may be obtained through the following steps A to D:
step A, inputting a sample image group into an initial convolution kernel generation model, and generating a convolution kernel corresponding to each frame of second sample image in the sample image group through the initial convolution kernel generation model.
In this step, the sample image group may include multiple frames of second sample images. The second sample images in the group may be images acquired in the same scene that have the same content but differ in the amount of image information they contain. The initial convolution kernel generation model may be built based on a neural network. Specifically, the second sample images in the sample image group are input into the initial convolution kernel generation model, which generates the convolution kernel corresponding to each frame of second sample image based on the image features of the second sample images. The number of convolution kernels per frame of second sample image and their dimensions may be preset; the number of convolution kernels corresponding to a second sample image may be 1 or more, which is not limited in the embodiment of the present invention.
Step B, calculating a loss value of the initial convolution kernel generation model based on the convolution kernel corresponding to each frame of second sample image, each frame of second sample image, and a real image corresponding to the sample image group; the real image contains more image information than any second sample image in the sample image group.
In this step, when calculating the loss value, a convolution operation is first performed, for each frame of second sample image, between that image and its corresponding convolution kernel, yielding an offset information matrix for each frame; the offset information matrix represents the information offset that may exist in that second sample image. One frame of second sample image is then selected from the sample image group, and for each offset information matrix, the value of each element is added to the color channel value of the pixel at the corresponding position in the selected second sample image to obtain a predicted image. Finally, the loss value of the initial convolution kernel generation model is calculated based on the predicted image and the real image. Specifically, the degree of deviation between the predicted image and the real image reflects the prediction capability of the initial convolution kernel generation model, so the loss value can be calculated from them; during the calculation, the value of each pixel in the predicted image and the value of each pixel in the real image are substituted into a preset loss function to obtain the loss value of the initial convolution kernel generation model.
Step D, if the loss value is within a preset range, taking the initial convolution kernel generation model as the convolution kernel generation model; or, if the loss value is not within the preset range, adjusting the parameters of the initial convolution kernel generation model and continuing training based on the adjusted model until the loss value falls within the preset range.
If the loss value is within the preset range, the prediction capability of the initial convolution kernel generation model can be considered strong enough, that is, the information extracted with the convolution kernels it generates is sufficiently accurate; training can then stop, and the initial convolution kernel generation model is taken as the convolution kernel generation model. If the loss value is not within the preset range, the information extracted with the generated convolution kernels is not yet accurate enough; the parameters of the initial convolution kernel generation model are adjusted, and training continues with the adjusted model until the loss value falls within the preset range. In the embodiment of the invention, the convolution kernel generation model is trained in advance and then used to generate a convolution kernel tailored to each image to be processed, so that the pixel compensation matrix extracted from the image to be processed with that kernel in the subsequent steps is more accurate.
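The following sketch strings steps A, B and D together for one sample image group, reusing the KernelGenerator sketch above. Choosing frame 0 as the selected second sample image, summing the offset matrices, using MSE as the preset loss function, and the bounds of the preset range are all assumptions; the text leaves these choices open.

```python
import torch
import torch.nn.functional as F

def training_pass(model, optimizer, sample_group, real_image, lo=0.0, hi=1e-3):
    """One pass over steps A, B and D for a single sample image group (a sketch).

    sample_group: (num_frames, 1, H, W) second sample images of one scene.
    real_image:   (1, 1, H, W) image containing more information than any sample.
    lo, hi:       assumed bounds of the 'preset range' for the loss value.
    """
    # Step A: generate one convolution kernel per second sample image.
    kernels = model(sample_group)                        # (num_frames, 1, k, k)

    # Step B: convolve each frame with its own kernel to get its offset matrix,
    # add the offsets to the selected frame (frame 0 here) to form the predicted
    # image, and compare the predicted image with the real image.
    pad = kernels.shape[-1] // 2
    offsets = torch.stack([
        F.conv2d(sample_group[i:i + 1], kernels[i:i + 1], padding=pad)
        for i in range(sample_group.shape[0])
    ]).sum(dim=0)                                        # (1, 1, H, W)
    predicted = sample_group[0:1] + offsets
    loss = F.mse_loss(predicted, real_image)

    # Step D: adjust the parameters and keep training until the loss value
    # falls within the preset range.
    if not (lo <= loss.item() <= hi):
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return loss.item()
```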
Step 304, performing convolution processing on the corresponding residual images to be processed by utilizing each target convolution kernel to obtain a pixel compensation matrix of each frame of residual images to be processed.
In this step, the convolution processing of any residual image to be processed may proceed as follows: a convolution operation is performed between the target convolution kernel corresponding to that residual image to be processed and the pixel matrix corresponding to it. The pixel matrix is the matrix formed by the pixels of the residual image to be processed; each element corresponds to one pixel, and the value of each element is the color channel value of the corresponding pixel. Specifically, the convolution operation traverses the entire pixel matrix with the target convolution kernel. For example, fig. 3-2 is a schematic diagram of convolution processing provided by an embodiment of the present invention; as shown in fig. 3-2, it includes the pixel matrix corresponding to a residual image to be processed, the target convolution kernel, and the resulting pixel compensation matrix.
Further, the convolution operation can be performed according to the following formula:
destination(i, j) = Σ_(k, l) source(i + k, j + l) × kernel(k, l)
where destination(i, j) is the value of the element in the pixel compensation matrix corresponding to the element in the i-th row and j-th column of the pixel matrix, kernel(k, l) is the value of the element at coordinates (k, l) in the target convolution kernel, and source(i + k, j + l) is the value of the element in the pixel matrix with which the kernel element at (k, l) is aligned; the sum runs over all coordinates (k, l) of the target convolution kernel.
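A direct NumPy implementation of this formula; zero padding at the image border is an assumption, since the text does not specify how edge positions are handled.

```python
import numpy as np

def convolve(source: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Compute destination(i, j) = sum over (k, l) of source(i+k, j+l) * kernel(k, l).

    Positions that would read past the edge of the pixel matrix are treated
    as zero (an assumed boundary choice).
    """
    kh, kw = kernel.shape
    h, w = source.shape
    padded = np.pad(source.astype(np.float64), ((0, kh - 1), (0, kw - 1)))
    destination = np.zeros((h, w), dtype=np.float64)
    for k in range(kh):
        for l in range(kw):
            # padded[i + k, j + l] equals source(i + k, j + l) inside the image.
            destination += kernel[k, l] * padded[k:k + h, l:l + w]
    return destination
```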
Step 305, adding each pixel compensation matrix to the reference image to obtain a first image in a four-pixel integration format.
Specifically, the value of each element in a pixel compensation matrix may be added to the color channel value of the pixel at the corresponding position in the reference image, yielding the first image in the four-pixel integration format. Adding the values of the corresponding elements of the pixel compensation matrices to the color channel values of the pixels in the reference image increases the amount of image information contained in the reference image and compensates, to a certain extent, for its missing information.
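A short sketch of this addition step; the clipping to a 10-bit value range is an added safeguard rather than something the text specifies.

```python
import numpy as np

def add_compensation(reference: np.ndarray, compensation_matrices, peak: float = 1023.0) -> np.ndarray:
    """Add each pixel compensation matrix to the reference image's channel values."""
    first_image = reference.astype(np.float64)
    for matrix in compensation_matrices:
        first_image = first_image + matrix          # element-wise, same positions
    # Keep values in the valid raw range (peak = 1023 assumes 10-bit data).
    return np.clip(first_image, 0.0, peak)
```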
Step 306, performing mosaic rearrangement on the first image to obtain a target image in a Bayer format.
Specifically, the implementation manner of this step may refer to descriptions in other embodiments, and details of the embodiments of the present invention are not described herein.
In summary, the image processing method provided in this embodiment of the present invention acquires at least two frames of images to be processed in a four-pixel integration format; takes one of these frames as a reference image; takes the images to be processed other than the reference image as residual images to be processed; inputs each frame of residual images to be processed into the pre-trained convolution kernel generation model to generate a corresponding target convolution kernel; performs convolution processing on each residual image to be processed with its target convolution kernel to obtain its pixel compensation matrix; adds each pixel compensation matrix to the reference image to obtain a first image in the four-pixel integration format; and performs mosaic rearrangement processing on the first image to obtain a target image in a Bayer format. Because each acquired image to be processed carries different information, adding the pixel compensation matrices extracted from the other images to be processed to the reference image allows the finally obtained target image to contain more image information, which improves the richness of the image.
Fig. 4 is a flowchart of steps of still another image processing method provided by an embodiment of the present invention, which may be applied to an electronic device, as shown in fig. 4, and the method may include:
step 401, acquiring at least two frames of images to be processed in a four-pixel integration format.
Specifically, the implementation manner of this step may refer to descriptions in other embodiments, and details of the embodiments of the present invention are not described herein.
Step 402, taking one frame of image to be processed in the at least two frames of image to be processed with the four-pixel integration format as a reference image.
Specifically, the implementation manner of this step may refer to descriptions in other embodiments, and details of the embodiments of the present invention are not described herein.
Step 403, performing information compensation on the reference image based on the to-be-processed image except the reference image in the to-be-processed images in the at least two frames of four-pixel-in-one format to obtain a first image in the four-pixel-in-one format.
Specifically, the implementation manner of this step may refer to descriptions in other embodiments, and details of the embodiments of the present invention are not described herein.
Step 404, performing mosaic rearrangement processing on the first image to obtain a target image in a Bayer format.
Specifically, the implementation manner of this step may refer to descriptions in other embodiments, and details of the embodiments of the present invention are not described herein.
Step 405, performing information compensation on the target image based on the reference image.
In a real scene, some regions of the reference image may contain image information that is missing from other regions, so in the embodiment of the present invention information compensation may also be performed on the target image based on the reference image, which further increases the amount of image information contained in the target image and improves its effect. Specifically, a pixel compensation matrix corresponding to the reference image may be extracted based on the color channel value of each pixel in the reference image and the color channel values of its adjacent pixels, and the value of each element of that matrix may then be added to the color channel value of the pixel at the corresponding position in the target image. To extract the pixel compensation matrix corresponding to the reference image, the reference image may be input into the pre-trained convolution kernel generation model to generate a target convolution kernel for the reference image whose dimension is larger than 1; the pixel matrix corresponding to the reference image, in which the value of each element is the color channel value of the corresponding pixel, is then convolved with this target convolution kernel to obtain the pixel compensation matrix corresponding to the reference image.
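A sketch of step 405 that reuses the hypothetical convolve and add_compensation helpers from the earlier sketches; kernel_for stands in for a wrapper around the convolution kernel generation model and is assumed to return a k x k NumPy kernel with k > 1 for the reference image.

```python
import numpy as np

def compensate_target(target: np.ndarray, reference: np.ndarray, kernel_for) -> np.ndarray:
    """Sketch of step 405: compensate the target image using the reference image."""
    kernel = kernel_for(reference)                  # target convolution kernel, size > 1
    compensation = convolve(reference, kernel)      # pixel compensation matrix of the reference
    return add_compensation(target, [compensation]) # add it to the remosaiced target image
```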
In other possible embodiments of the present invention, after the target image is generated, it may optionally be input to an Image Signal Processor (ISP) to obtain the target image to be displayed. The ISP performs subsequent processing on the target image, for example format conversion, so that the target image can be handled by other devices. In this case the method may or may not include step 405, which is not specifically limited in the embodiment of the present invention.
In the embodiment of the present invention, an image in another format, for example a Bayer-format image, may also be used as the image to be processed, which is not limited in the embodiment of the present invention.
In summary, the image processing method provided in this embodiment of the present invention acquires at least two frames of images to be processed in a four-pixel integration format, takes one of these frames as a reference image, performs information compensation on the reference image based on the images to be processed other than the reference image to obtain a first image, performs mosaic rearrangement processing on the first image to obtain a target image in a Bayer format, and finally compensates the target image further with the reference image, which further increases the amount of image information contained in the target image and improves its effect.
Fig. 5 is a block diagram of an image processing apparatus according to an embodiment of the present invention, and as shown in fig. 5, the apparatus 50 may include:
the acquiring module 501 is configured to acquire at least two frames of images to be processed in a four-pixel integration format.
The selecting module 502 is configured to use one frame of to-be-processed image in the at least two frames of to-be-processed images in the four-pixel integration format as a reference image.
The first compensation module 503 is configured to perform information compensation on the reference image based on the to-be-processed image except the reference image in the to-be-processed images in the at least two frames of four-pixel unification format, so as to obtain a first image in the four-pixel unification format.
The processing module 504 is configured to perform mosaic rearrangement processing on the first image to obtain a target image in a bayer pattern.
In summary, the apparatus provided in this embodiment of the present invention can acquire at least two frames of images to be processed in a four-pixel integration format, take one of these frames as a reference image, perform information compensation on the reference image based on the images to be processed other than the reference image to obtain a first image, and perform mosaic rearrangement processing on the first image to obtain a target image in a Bayer format. Because each acquired image to be processed carries different information, the finally obtained target image contains more image information, which improves the richness of the image.
Optionally, the selecting module 502 is specifically configured to:
inputting each frame of image to be processed into a pre-trained image selection model; the image selection model is obtained by training a plurality of frames of first sample images and the first sample image with the highest quality.
And taking the image to be processed output by the image selection model as the reference image.
Optionally, the first compensation module 503 is specifically configured to:
and taking the images to be processed except the reference image in the images to be processed in the at least two frames of four-pixel integrated format as residual images to be processed, inputting each frame of residual images to be processed into a pre-trained convolution kernel generation model, and generating a target convolution kernel corresponding to each frame of residual images to be processed through the convolution kernel generation model.
And performing convolution processing on the corresponding residual images to be processed by utilizing each target convolution kernel to obtain a pixel compensation matrix of each frame of residual images to be processed.
And adding each pixel compensation matrix and the reference image to obtain a first image in a four-pixel integration format.
Optionally, the apparatus 50 further includes:
and the input module is used for inputting the target image into an image signal processor to obtain a target image to be displayed.
Optionally, the apparatus 50 further includes:
and the second compensation module is used for performing information compensation on the target image based on the reference image.
In summary, the apparatus provided in this embodiment of the present invention can acquire at least two frames of images to be processed in a four-pixel integration format, take one of these frames as a reference image, perform information compensation on the reference image based on the images to be processed other than the reference image to obtain a first image, and perform mosaic rearrangement processing on the first image to obtain a target image in a Bayer format. Because each acquired image to be processed carries different information, the finally obtained target image contains more image information, which improves the richness of the image.
Fig. 6 is a schematic diagram of a hardware structure of an electronic device for implementing various embodiments of the present invention, and as shown in fig. 6, the electronic device 600 includes, but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, and a power supply 611. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 6 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a wearable device, a pedometer, and the like.
The processor 610 is configured to obtain at least two frames of images to be processed in a four-pixel integration format.
The processor 610 is configured to use one frame of to-be-processed image in the at least two frames of to-be-processed images in the four-pixel integration format as a reference image.
The processor 610 is configured to perform information compensation on the reference image based on an image to be processed, other than the reference image, in the image to be processed in the at least two frames of four-pixel-in-one format, so as to obtain a first image in the four-pixel-in-one format.
And the processor 610 is configured to perform mosaic rearrangement processing on the first image to obtain a target image in a bayer pattern.
In the embodiment of the present invention, the electronic device may acquire at least two frames of images to be processed in a four-pixel integration format, take one of these frames as a reference image, perform information compensation on the reference image based on the images to be processed other than the reference image to obtain a first image, and finally perform mosaic rearrangement processing on the first image to obtain a target image in a Bayer format. Because each acquired image to be processed carries different information, the finally obtained target image contains more image information, which improves the richness of the image.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 601 may be used for receiving and sending signals during a message transceiving process or a call process; specifically, it receives downlink data from a base station and forwards it to the processor 610 for processing, and it also transmits uplink data to the base station. In general, the radio frequency unit 601 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. Further, the radio frequency unit 601 may also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 602, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 603 may convert audio data received by the radio frequency unit 601 or the network module 602 or stored in the memory 609 into an audio signal and output as sound. Also, the audio output unit 603 may also provide audio output related to a specific function performed by the electronic apparatus 600 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 603 includes a speaker, a buzzer, a receiver, and the like.
The input unit 604 is used to receive audio or video signals. The input unit 604 may include a Graphics Processing Unit (GPU) 6041 and a microphone 6042; the graphics processor 6041 processes image data of still pictures or video obtained by an image capturing apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 606. The image frames processed by the graphics processor 6041 may be stored in the memory 609 (or another storage medium) or transmitted via the radio frequency unit 601 or the network module 602. The microphone 6042 can receive sound and process it into audio data. In the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 601 and output.
The electronic device 600 also includes at least one sensor 605, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 6061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 6061 and/or the backlight when the electronic apparatus 600 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 605 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 606 is used to display information input by the user or information provided to the user. The display unit 606 may include a display panel 6061, and the display panel 6061 may be configured as a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
The user input unit 607 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 607 includes a touch panel 6071 and other input devices 6072. Touch panel 6071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near touch panel 6071 using a finger, stylus, or any suitable object or accessory). The touch panel 6071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 610, receives a command from the processor 610, and executes the command. In addition, the touch panel 6071 can be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The user input unit 607 may include other input devices 6072 in addition to the touch panel 6071. Specifically, the other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a track ball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 6071 can be overlaid on the display panel 6061, and when the touch panel 6071 detects a touch operation on or near the touch panel 6071, the touch operation is transmitted to the processor 610 to determine the type of the touch event, and then the processor 610 provides a corresponding visual output on the display panel 6061 according to the type of the touch event. Although the touch panel 6071 and the display panel 6061 are shown in fig. 6 as two separate components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 6071 and the display panel 6061 may be integrated to implement the input and output functions of the electronic device, and this is not limited here.
The interface unit 608 is an interface for connecting an external device to the electronic apparatus 600. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 608 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the electronic device 600 or may be used to transmit data between the electronic device 600 and external devices.
The memory 609 may be used to store software programs as well as various data. The memory 609 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 609 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 610 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, performs various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory 609, and calling data stored in the memory 609, thereby performing overall monitoring of the electronic device. Processor 610 may include one or more processing units; preferably, the processor 610 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The electronic device 600 may further include a power supply 611 (e.g., a battery) for supplying power to the various components, and preferably, the power supply 611 may be logically connected to the processor 610 via a power management system, such that the power management system may be used to manage charging, discharging, and power consumption.
In addition, the electronic device 600 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, which includes a processor 610, a memory 609, and a computer program stored in the memory 609 and capable of running on the processor 610, where the computer program, when executed by the processor 610, implements each process of the above-mentioned image processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. The term "comprising" is used to specify the presence of stated features, integers, steps, operations, elements, and components.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. An image processing method applied to an electronic device, the method comprising:
acquiring at least two frames of images to be processed in a four-pixel integration format;
taking one frame of image to be processed in the at least two frames of images to be processed in the four-pixel integration format as a reference image;
performing information compensation on the reference image based on the images to be processed other than the reference image among the at least two frames of images to be processed in the four-pixel integration format, to obtain a first image in the four-pixel integration format;
and performing mosaic rearrangement processing on the first image to obtain a target image in a Bayer format.
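For orientation only, the sketch below illustrates the final step of claim 1: converting a four-pixel-integration (quad-Bayer) raw frame, in which each 2x2 block shares one colour filter, into a standard RGGB Bayer layout by permuting pixels inside every 4x4 tile. It is a minimal illustration under our own assumptions (single-channel array, RGGB tile orientation), not the patented mosaic rearrangement; the function and constant names are ours.

```python
import numpy as np

# Per-4x4-tile pixel permutation that rearranges a quad-Bayer tile
#   R R G G            R G R G
#   R R G G    into    G B G B   (standard RGGB Bayer)
#   G G B B            R G R G
#   G G B B            G B G B
# Keys are source positions, values are destination positions inside a tile.
_REMOSAIC_MAP = {
    (0, 0): (0, 0), (0, 1): (0, 2), (1, 0): (2, 0), (1, 1): (2, 2),  # red pixels
    (2, 2): (1, 1), (2, 3): (1, 3), (3, 2): (3, 1), (3, 3): (3, 3),  # blue pixels
    (0, 2): (0, 1), (0, 3): (0, 3), (1, 2): (1, 0), (1, 3): (1, 2),  # green pixels
    (2, 0): (2, 1), (2, 1): (2, 3), (3, 0): (3, 0), (3, 1): (3, 2),  # green pixels
}

def remosaic_quad_to_bayer(quad: np.ndarray) -> np.ndarray:
    """Rearrange a single-channel quad-Bayer raw frame into an RGGB Bayer frame."""
    h, w = quad.shape
    if h % 4 or w % 4:
        raise ValueError("height and width must be multiples of 4")
    bayer = np.empty_like(quad)
    for (sr, sc), (dr, dc) in _REMOSAIC_MAP.items():
        bayer[dr::4, dc::4] = quad[sr::4, sc::4]
    return bayer
```

A production remosaic would typically also interpolate to compensate for the small spatial shifts this pure permutation introduces; the permutation is only meant to show the format conversion.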
2. The method according to claim 1, wherein the taking one frame of image to be processed in the at least two frames of images to be processed in the four-pixel integration format as a reference image specifically comprises:
inputting each frame of image to be processed into a pre-trained image selection model, wherein the image selection model is a model obtained by training with a plurality of frames of first sample images and the first sample image with the highest quality among the plurality of frames of first sample images;
and taking the image to be processed output by the image selection model as the reference image.
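The image selection model of claim 2 is a learned component that is not specified in detail here. As a stand-in, the sketch below ranks the burst frames by a hand-crafted sharpness score (variance of a Laplacian response) and returns the best one; the function name and the scoring rule are our assumptions, not the patent's model.

```python
import numpy as np

def select_reference(frames: list[np.ndarray]) -> np.ndarray:
    """Pick one frame of the burst as the reference image.

    Stand-in for the pre-trained image selection model: "highest quality" is
    approximated here by a sharpness score rather than a learned ranking.
    """
    def sharpness(img: np.ndarray) -> float:
        x = img.astype(np.float64)
        # 4-neighbour Laplacian via array shifts (edges wrap, acceptable for a score)
        lap = (np.roll(x, 1, axis=0) + np.roll(x, -1, axis=0)
               + np.roll(x, 1, axis=1) + np.roll(x, -1, axis=1) - 4.0 * x)
        return float(lap.var())

    return max(frames, key=sharpness)
```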
3. The method according to claim 1, wherein the performing information compensation on the reference image based on the images to be processed other than the reference image among the at least two frames of images to be processed in the four-pixel integration format to obtain the first image in the four-pixel integration format specifically comprises:
taking the images to be processed other than the reference image among the at least two frames of images to be processed in the four-pixel integration format as remaining images to be processed, inputting each frame of remaining image to be processed into a pre-trained convolution kernel generation model, and generating, through the convolution kernel generation model, a target convolution kernel corresponding to each frame of remaining image to be processed;
performing convolution processing on the corresponding remaining image to be processed by using each target convolution kernel, to obtain a pixel compensation matrix of each frame of remaining image to be processed;
and adding each pixel compensation matrix and the reference image to obtain a first image in a four-pixel integration format.
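Claim 3 forms one pixel compensation matrix per remaining frame by convolving that frame with a target convolution kernel produced by a pre-trained model, and then adds all the matrices to the reference image. The sketch below assumes the kernels have already been generated and are passed in as plain arrays, and uses a direct, unoptimised sliding-window filter; the names, signatures, and odd-kernel assumption are ours, not the patent's.

```python
import numpy as np

def compensate_reference(reference: np.ndarray,
                         remaining: list[np.ndarray],
                         kernels: list[np.ndarray]) -> np.ndarray:
    """Add one pixel compensation matrix per remaining frame onto the reference.

    `kernels[i]` stands in for the target convolution kernel that the
    pre-trained model would generate for `remaining[i]` (odd-sized kernels assumed).
    """
    def filter2d_same(img: np.ndarray, k: np.ndarray) -> np.ndarray:
        # Sliding-window filtering (CNN-style "convolution", i.e. cross-correlation)
        kh, kw = k.shape
        ph, pw = kh // 2, kw // 2
        padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
        out = np.zeros(img.shape, dtype=np.float64)
        for i in range(kh):
            for j in range(kw):
                out += k[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
        return out

    first_image = reference.astype(np.float64)
    for frame, kernel in zip(remaining, kernels):
        # Pixel compensation matrix for this frame, accumulated onto the reference
        first_image += filter2d_same(frame.astype(np.float64), kernel)
    return first_image
```

The result stays in the four-pixel integration format, which the mosaic rearrangement step of claim 1 then converts to the Bayer format.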
4. The method according to claim 1, wherein after the performing mosaic rearrangement processing on the first image to obtain the target image in the Bayer format, the method further comprises:
and inputting the target image into an image signal processor to obtain a target image to be displayed.
5. The method of any of claims 1 to 4, wherein after obtaining the target image, the method further comprises:
and performing information compensation on the target image based on the reference image.
6. An image processing apparatus, characterized in that the apparatus comprises:
the acquisition module is used for acquiring at least two frames of images to be processed in a four-pixel integration format;
the selection module is used for taking one frame of image to be processed in the at least two frames of images to be processed in the four-pixel integration format as a reference image;
the first compensation module is used for performing information compensation on the reference image based on the images to be processed other than the reference image among the at least two frames of images to be processed in the four-pixel integration format, to obtain a first image in the four-pixel integration format;
and the processing module is used for performing mosaic rearrangement processing on the first image to obtain a target image in a Bayer format.
7. The apparatus according to claim 6, wherein the selection module is specifically configured to:
inputting each frame of image to be processed into a pre-trained image selection model, wherein the image selection model is a model obtained by training with a plurality of frames of first sample images and the first sample image with the highest quality among the plurality of frames of first sample images;
and taking the image to be processed output by the image selection model as the reference image.
8. The apparatus of claim 6, wherein the first compensation module is specifically configured to:
taking the images to be processed other than the reference image among the at least two frames of images to be processed in the four-pixel integration format as remaining images to be processed, inputting each frame of remaining image to be processed into a pre-trained convolution kernel generation model, and generating, through the convolution kernel generation model, a target convolution kernel corresponding to each frame of remaining image to be processed;
performing convolution processing on the corresponding remaining image to be processed by using each target convolution kernel, to obtain a pixel compensation matrix of each frame of remaining image to be processed;
and adding each pixel compensation matrix and the reference image to obtain a first image in a four-pixel integration format.
9. The apparatus of claim 6, further comprising:
and the input module is used for inputting the target image into an image signal processor to obtain a target image to be displayed.
10. The apparatus of any of claims 6 to 9, further comprising:
and the second compensation module is used for performing information compensation on the target image based on the reference image.
CN201910703737.2A 2019-07-31 2019-07-31 Image processing method and device and electronic equipment Active CN112308771B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910703737.2A CN112308771B (en) 2019-07-31 2019-07-31 Image processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910703737.2A CN112308771B (en) 2019-07-31 2019-07-31 Image processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112308771A true CN112308771A (en) 2021-02-02
CN112308771B CN112308771B (en) 2024-08-16

Family

ID=74486325

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910703737.2A Active CN112308771B (en) 2019-07-31 2019-07-31 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112308771B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117132629A (en) * 2023-02-17 2023-11-28 荣耀终端有限公司 Image processing method and electronic device

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000244851A (en) * 1999-02-18 2000-09-08 Canon Inc Picture processor and method and computer readable storage medium
JP2004272751A (en) * 2003-03-11 2004-09-30 Seiko Epson Corp Creation of still image from plurality of frame images
US20120105681A1 (en) * 2010-11-03 2012-05-03 Morales Efrain O Method for producing high dynamic range images
WO2014022965A1 (en) * 2012-08-07 2014-02-13 展讯通信(上海)有限公司 Image processing method and device based on bayer format
CN103765876A (en) * 2011-08-31 2014-04-30 索尼公司 Image processing device, image processing method, and program
JP2014103626A (en) * 2012-11-22 2014-06-05 Olympus Corp Image processor, image processing method and program
US20170374299A1 (en) * 2016-06-28 2017-12-28 Intel Corporation Color correction of rgbir sensor stream based on resolution recovery of rgb and ir channels
JP2018112936A (en) * 2017-01-12 2018-07-19 ピナクル イメージング コーポレーション HDR image processing apparatus and method
WO2018137267A1 (en) * 2017-01-25 2018-08-02 华为技术有限公司 Image processing method and terminal apparatus
CN109035306A (en) * 2018-09-12 2018-12-18 首都师范大学 Moving-target automatic testing method and device
CN109076144A (en) * 2016-05-10 2018-12-21 奥林巴斯株式会社 Image processing apparatus, image processing method and image processing program
US20190156516A1 (en) * 2018-12-28 2019-05-23 Intel Corporation Method and system of generating multi-exposure camera statistics for image processing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
董鹏宇 (DONG Pengyu): "RGBX格式图像传感器的去马赛克算法" [Demosaicing algorithm for RGBX-format image sensors], 集成电路应用 [Application of IC], no. 05, 3 May 2018 (2018-05-03) *

Also Published As

Publication number Publication date
CN112308771B (en) 2024-08-16

Similar Documents

Publication Publication Date Title
CN110740259B (en) Video processing method and electronic equipment
CN108989672B (en) Shooting method and mobile terminal
CN107977652B (en) Method for extracting screen display content and mobile terminal
CN107846583B (en) Image shadow compensation method and mobile terminal
CN111405199B (en) Image shooting method and electronic equipment
CN110213485B (en) Image processing method and terminal
CN109922294B (en) Video processing method and mobile terminal
CN107730460B (en) Image processing method and mobile terminal
CN111554321A (en) Noise reduction model training method and device, electronic equipment and storage medium
CN107749046B (en) Image processing method and mobile terminal
CN109819166B (en) Image processing method and electronic equipment
CN109618218B (en) Video processing method and mobile terminal
CN111401463B (en) Method for outputting detection result, electronic equipment and medium
CN111145151B (en) Motion area determining method and electronic equipment
CN110636225B (en) Photographing method and electronic equipment
CN109639981B (en) Image shooting method and mobile terminal
CN108932505B (en) Image processing method and electronic equipment
CN111008929A (en) Image correction method and electronic equipment
CN108366171B (en) Temperature rise control method and mobile terminal
CN107798662B (en) Image processing method and mobile terminal
CN107734269B (en) Image processing method and mobile terminal
CN108259808B (en) Video frame compression method and mobile terminal
CN108063894B (en) Video processing method and mobile terminal
CN111010514B (en) Image processing method and electronic equipment
CN107566738A (en) A kind of panorama shooting method, mobile terminal and computer-readable recording medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant