CN111371987B - Image processing method and device, electronic equipment and computer readable storage medium - Google Patents

Image processing method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN111371987B
Authority
CN
China
Prior art keywords: image frame, jitter, original image, grid, information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010114660.8A
Other languages
Chinese (zh)
Other versions
CN111371987A (en)
Inventor
贾玉虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202010114660.8A priority Critical patent/CN111371987B/en
Publication of CN111371987A publication Critical patent/CN111371987A/en
Application granted granted Critical
Publication of CN111371987B publication Critical patent/CN111371987B/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681Motion detection
    • H04N23/6811Motion detection based on the image signal

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application relates to an image processing method and apparatus, an electronic device and a computer readable storage medium. The method comprises: acquiring jitter information of the electronic device when it captures an original image frame, and performing grid division on the original image frame according to the jitter information to obtain the grid-divided original image frame; and performing image processing on the grid-divided original image frame according to the jitter information to obtain an image-processed image frame. When the original image frame is grid-divided, the division follows the jitter information of that frame rather than a fixed, uniform division method. Because the degree of blurring differs from frame to frame when the electronic device shoots, a grid size matched to the degree of blurring can be adopted for the division. In this way, the photographing quality of the electronic device is improved while the image processing speed is increased and resources are saved.

Description

Image processing method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the continuous development of imaging technology, users place ever higher demands on the cameras of electronic devices. Electronic devices have evolved from a single camera to dual cameras, with a marked improvement in photographing quality. Nevertheless, expectations keep rising, and how to further improve the photographing quality of electronic devices and increase the image processing speed, so as to meet users' higher photographing requirements, is a problem that urgently needs to be solved.
Disclosure of Invention
The embodiment of the application provides an image processing method and device, electronic equipment and a computer readable storage medium, which can improve the photographing quality of the electronic equipment, improve the image processing speed and save resources.
An image processing method applied to an electronic device includes:
acquiring jitter information of an electronic device when the electronic device shoots an original image frame;
performing mesh division on the original image frame according to the jitter information of the original image frame to obtain the original image frame after the mesh division;
and carrying out image processing on the original image frame after the grid division according to the jitter information to obtain an image frame after the image processing.
An image processing apparatus comprising:
the shaking information acquisition module is used for acquiring shaking information when the electronic equipment shoots an original image frame;
the mesh division module is used for carrying out mesh division on the original image frame according to the jitter information of the original image frame to obtain the original image frame after the mesh division;
and the image processing module is used for carrying out image processing on the original image frame after the grid division according to the jitter information to obtain the image frame after the image processing.
An electronic device comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to carry out the steps of the above method.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method as above.
According to the image processing method and apparatus, the electronic device and the computer readable storage medium, jitter information of the electronic device when it captures an original image frame is acquired, and the original image frame is grid-divided according to that jitter information to obtain the grid-divided original image frame. Image processing is then performed on the grid-divided original image frame according to the jitter information to obtain an image-processed image frame. When the original image frame is grid-divided, the division follows the jitter information of that frame rather than a fixed, uniform division method. Because the degree of blurring differs from frame to frame when the electronic device shoots, a grid size matched to the degree of blurring can be adopted for the division. In this way, the photographing quality of the electronic device is improved while the image processing speed is increased and resources are saved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a diagram of an application environment of an image processing method in one embodiment;
FIG. 2 is a flow diagram of a method of image processing in one embodiment;
FIG. 3 is a flowchart of the method for meshing the original image frame according to the jitter information of the original image frame to obtain a meshed original image frame in FIG. 2;
FIG. 4 is a flowchart of the method in FIG. 3 for classifying the jitter amplitude of the jitter information according to a preset rule to obtain a jitter amplitude level corresponding to the jitter information;
FIG. 5 is a diagram of dividing jitter amplitude levels on a normal distribution graph;
FIG. 6 is a schematic diagram of the method in FIG. 2 for performing image processing on the grid-divided original image frame according to the jitter information to obtain an image-processed image frame;
FIG. 7 is a diagram of an original grid on an image frame in one embodiment;
FIG. 8 is a schematic diagram of a grid on an image frame divided by the method of the present application in one embodiment;
FIG. 9A is a block diagram showing a configuration of an image processing apparatus according to an embodiment;
FIG. 9B is a diagram showing an internal configuration of an electronic apparatus according to an embodiment;
FIG. 10 is a schematic diagram of an image processing circuit in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first camera may be referred to as a second camera, and similarly, a second camera may be referred to as a first camera, without departing from the scope of the present application. The first camera and the second camera are both cameras, but they are not the same camera.
Fig. 1 is a schematic diagram of an application environment of an image processing method in an embodiment. As shown in fig. 1, the application environment includes an electronic device 100. The electronic device 100 includes at least two camera modules, a camera module 110 and a camera module 120. The electronic device 100 may acquire shake information when capturing an original image frame; perform grid division on the original image frame according to the shake information of the original image frame to obtain the grid-divided original image frame; and perform image processing on the grid-divided original image frame according to the shake information to obtain an image-processed image frame. It is understood that the electronic device 100 may be, but is not limited to, a mobile phone, a camera, a computer, a portable device, and the like.
FIG. 2 is a flow diagram of a method of image processing in one embodiment. As shown in fig. 2, the image processing method includes steps 220 to 260.
Step 220, acquiring the shaking information of the electronic device when the electronic device shoots the original image frame.
A captured video or image often shakes because the electronic device shakes while held in the hands, because the vehicle shakes while shooting during driving, because the shot object itself moves, and so on; in severe cases the video may even become blurred and unclear. The shake information may be attitude data obtained by an attitude sensor when the electronic device captures the original image frame. The attitude data may be information such as the angular velocity or the three-axis rotation angle of a lens in any camera module of the electronic device, collected by a gyroscope.
Step 240, performing mesh division on the original image frame according to the jitter information of the original image frame to obtain the original image frame after the mesh division.
In conventional image processing, a uniform and regular original grid is overlaid on the original image frame for projecting pixel points, for example an M × N grid, where M means the original image frame is divided horizontally into M cells and N means it is divided vertically into N cells. Each subsequent frame is then image-processed using this same grid.

When the jitter amplitude corresponding to the original image frame is large, the sharpness of the original image frame is low. If the frame is still divided with the uniform, regular original grid size, the sharpness of the image generated by subsequent image processing based on the grid-divided frame cannot reach the preset standard and cannot meet users' requirements.

When the jitter amplitude corresponding to the original image frame is small, the sharpness of the original image frame is already high. Dividing it with the same uniform, regular original grid size and then performing subsequent image processing on the grid-divided frame obviously wastes computation: an original image frame with a small jitter amplitude does not need to be divided into such a dense grid at all.

Therefore, the original image frame can be dynamically grid-divided according to its jitter information to obtain the grid-divided original image frame; that is, the original image frame is divided with a grid size appropriate to its jitter information. When the jitter amplitude corresponding to the original image frame is larger, the frame can be divided into a denser grid; when the jitter amplitude is smaller, it can be divided into a sparser grid.
Step 260, performing image processing on the original image frame after the grid division according to the shaking information to obtain an image frame after the image processing.
After the original image frame is subjected to dynamic grid division according to the jitter information of the original image frame to obtain the original image frame after grid division, the original image frame after grid division is subjected to image processing to obtain the image frame after image processing. The image processing comprises the steps of carrying out anti-shake compensation and interpolation processing on an original image frame to obtain an image frame after the image processing. Of course, the image processing herein also includes other processing performed on the image, and the present application is not limited thereto.
In the embodiment of the application, when the original image frame is subjected to meshing, a fixed and uniform dividing method is not adopted, but dynamic meshing is carried out on the original image frame according to the jitter information of the original image frame. Because the blurring degrees of the original image frames are different when the original image frames are shot by the electronic device, the grid size corresponding to the blurring degrees can be adopted when the grid division is performed. Therefore, the image processing speed is increased and resources are saved while the photographing quality of the electronic equipment is improved.
In one embodiment, as shown in fig. 3, step 240, performing mesh division on the original image frame according to the shaking information of the original image frame to obtain the original image frame after mesh division, includes:
and 242, performing jitter amplitude grading on the jitter information according to a preset rule to obtain a jitter amplitude grade corresponding to the jitter information.
The preset rule includes the Gaussian distribution (also called the normal distribution), and may also include other statistical rules, which is not limited in the present application. That is, the currently captured original image frame and the historical image frames are statistically analyzed according to the preset rule to obtain an analysis result, and the jitter information is then graded by jitter amplitude based on that analysis result to obtain the jitter amplitude level corresponding to the jitter information.
Step 244, inputting the jitter amplitude level corresponding to the jitter information into a preset function, and calculating the grid size of the original image frame.
After the jitter amplitude level corresponding to the jitter information is obtained, the grid size of the original image frame is calculated according to that level. The grid size may be calculated with a preset function; alternatively, it may be obtained from a correspondence between jitter amplitude levels and grid sizes established through multiple tests, which is not limited in this application. With such a calculation, a grid size adapted to the jitter amplitude level is obtained, so that after the original image frame is grid-divided at this size, the quality of subsequent image processing is improved, the processing efficiency is increased and resources are saved.
Step 246, the original image frame is subjected to mesh division according to the size of the mesh to obtain the original image frame after the mesh division.
After the grid size adapted to the jitter amplitude level has been calculated, the original image frame can be grid-divided at that size to obtain the grid-divided original image frame. For example, the original image frame is divided into an I × J grid, where I means the horizontal direction of the frame is divided into I cells and J means the vertical direction is divided into J cells. I may be equal to or different from M in the conventional fixed grid division; similarly, J may be equal to or different from N.
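As an illustration of step 246, the following Python sketch divides a frame into such a grid, assuming the grid size from step 244 is expressed as a cell spacing in pixels; the helper name divide_into_grid and the ceiling rule are illustrative assumptions, not details fixed by this description.

```python
import numpy as np

def divide_into_grid(height, width, cell_size):
    """Divide a frame of the given size into an I x J grid of cells.

    As described above, I is the number of cells across the width and J the
    number down the height; ceil keeps the grid covering the whole frame.
    Returns the vertex coordinate arrays grid_x, grid_y.
    """
    i_cells = int(np.ceil(width / cell_size))    # horizontal cell count I
    j_cells = int(np.ceil(height / cell_size))   # vertical cell count J
    xs = np.linspace(0.0, width - 1, i_cells + 1)
    ys = np.linspace(0.0, height - 1, j_cells + 1)
    grid_x, grid_y = np.meshgrid(xs, ys)         # vertex coordinate mesh
    return grid_x, grid_y

# Example: a 1080 x 1920 frame with a 40-pixel cell size -> a 48 x 27 grid.
gx, gy = divide_into_grid(1080, 1920, 40)
print(gx.shape)  # (28, 49) grid vertices
```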
In the embodiment of the application, the jitter information is graded by jitter amplitude to obtain the jitter amplitude level corresponding to the jitter information. The jitter amplitude level is input into a preset function, and the grid size of the original image frame is calculated. Finally, the original image frame is grid-divided according to that grid size to obtain the grid-divided original image frame. Grading the jitter amplitude quantifies the degree of shake carried by the jitter information; inputting the resulting level into the preset function then allows the grid size corresponding to that jitter amplitude to be calculated accurately, and the original image frame is grid-divided accordingly. In this way, the quality of subsequent image processing on the original image frame based on the grid is improved, the processing efficiency is increased and resources are saved.
In an embodiment, as shown in fig. 4, step 242, performing jitter amplitude classification on the jitter information according to a preset rule to obtain a jitter amplitude level corresponding to the jitter information, includes:
Step 242a, acquiring jitter information of a preset number of image frames adjacent to the original image frame.
When the adopted preset rule is the Gaussian distribution (or normal distribution) and the user is shooting a video in real time with the electronic device, the jitter information of the preset number of image frames adjacent to the original image frame is acquired. Specifically, the shake information of the preset number of adjacent image frames captured before the original image frame is obtained, for example the shake information of the 4 adjacent image frames captured before the original image frame; the specific number of frames is not limited in the present application. The shake information here may be the angular velocity information of the gyroscope at the time each image frame was captured.

In the post-processing of an already captured video, the jitter information of the preset number of image frames adjacent to the original image frame is likewise acquired. Specifically, the shake information of a preset number of adjacent image frames captured before or after the original image frame is obtained, for example the shake information of the 2 adjacent image frames on each side of the original image frame, that is, 2 frames forward and 2 frames backward from the original image frame. The specific number of frames is not limited in the present application.
Step 242b, calculating a mean value and a standard deviation of the jitter information according to the jitter information of the original image frame and the jitter information of the preset number of image frames.
After acquiring the jitter information of the original image frame captured by the electronic device and the jitter information of the preset number of adjacent image frames, the mean μ and the standard deviation σ of the jitter information under the normal distribution are calculated from the jitter information of the original image frame and of the preset number of image frames. The normal distribution curve reflects the distribution law of the random variable X; the theoretical normal curve is a bell-shaped curve that is high in the middle, completely symmetrical, and falls off gradually on both sides. In the embodiment of the present application, the random variable X is the jitter information of the original image frame.
The area of a certain interval on the horizontal axis under the normal curve reflects the percentage of the number of instances of the interval to the total number of instances, or the probability (probability distribution) that the variable value falls within the interval. The area under the normal curve in different ranges can be calculated by a formula.
Under the normal curve, the area over the horizontal-axis interval (μ - σ, μ + σ) is 68.268949%:
P{|X - μ| < σ} = 2Φ(1) - 1 = 0.6826
The area over the horizontal-axis interval (μ - 1.96σ, μ + 1.96σ) is 95.449974%:
P{|X - μ| < 2σ} = 2Φ(2) - 1 = 0.9544
The area over the horizontal-axis interval (μ - 2.58σ, μ + 2.58σ) is 99.730020%:
P{|X - μ| < 3σ} = 2Φ(3) - 1 = 0.9974
Step 242c, classifying the jitter amplitude of the jitter information of the original image frame based on the mean value and the standard deviation of the jitter information to obtain the jitter amplitude level corresponding to the jitter information.
Specifically, fig. 5 shows the division of jitter amplitude levels on a normal distribution graph.
In the figure, the region of the horizontal-axis interval 1.96σ ≤ |X - μ| < 2.58σ is set as jitter amplitude level 3, the region of the interval σ ≤ |X - μ| < 1.96σ is set as level 2, and the region of the interval 0 ≤ |X - μ| < σ is set as level 1. The jitter amplitude corresponding to level 3 is greater than that corresponding to level 2, and the jitter amplitude corresponding to level 2 is greater than that corresponding to level 1. Of course, the above is only one standard for grading the jitter amplitude according to the normal distribution; other grading standards can be formulated from the normal distribution according to the actual situation.
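A minimal Python sketch of steps 242a to 242c under the level boundaries just described (σ, 1.96σ, 2.58σ); treating the jitter information as a single scalar per frame (for example, the magnitude of the gyroscope angular velocity) is an assumption made here for illustration.

```python
import numpy as np

def classify_jitter_level(current_jitter, neighbor_jitter):
    """Grade the jitter amplitude of the current frame as level 1, 2 or 3.

    current_jitter  : scalar jitter value of the original image frame
    neighbor_jitter : jitter values of the preset number of adjacent frames
    """
    samples = np.array(list(neighbor_jitter) + [current_jitter], dtype=float)
    mu = samples.mean()        # mean of the jitter information
    sigma = samples.std()      # standard deviation of the jitter information

    deviation = abs(current_jitter - mu)
    if deviation < sigma:            # 0 <= |X - mu| < sigma
        return 1                     # smallest jitter amplitude
    if deviation < 1.96 * sigma:     # sigma <= |X - mu| < 1.96 * sigma
        return 2
    return 3                         # largest jitter amplitude

# Example: the current frame shakes noticeably more than its four neighbours.
print(classify_jitter_level(0.2, [0.10, 0.12, 0.08, 0.11]))  # -> 2
```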
In the embodiment of the application, the jitter information of a preset number of image frames adjacent to an original image frame is acquired. And calculating the mean value and the standard deviation of the jitter information according to the jitter information of the original image frames and the jitter information of the preset number of image frames to obtain a normal distribution graph corresponding to the acquired jitter information. And based on the normal distribution diagram, carrying out jitter amplitude grading on the jitter information of the original image frame to obtain the jitter amplitude grade corresponding to the jitter information. The normal distribution diagram can accurately reflect the distribution rule of the random variable, so that the jitter amplitude of the original image frame is classified based on the normal distribution diagram, and the jitter amplitude level corresponding to the jitter information can be accurately obtained. And then, accurately calculating the size of the grid which is adaptive to the jitter amplitude level, and carrying out grid division on the original image frame according to the size of the grid to obtain the original image frame after grid division. In this way, it is achieved that the grid size is accurately calculated based on the jitter amplitude level obtained by the normal distribution. Thus, while ensuring improved image sharpness, the amount of computation is reduced.
In one embodiment, the preset function is an inverse proportional function;
step 244, inputting the jitter amplitude level corresponding to the jitter information into a preset function, and calculating the grid size of the original image frame, including:
inputting the jitter amplitude level corresponding to the jitter information into the inverse proportion function, and calculating the grid size of the original image frame.
Specifically, the larger the jitter amplitude, the lower the sharpness of the original image frame, so the frame needs to be divided into a denser grid, i.e. with a smaller grid size. The smaller the jitter amplitude, the higher the sharpness of the original image frame, so a sparser grid, i.e. a larger grid size, suffices. The jitter amplitude and the grid size are therefore inversely related, which yields an inverse proportional function f(n), where the jitter amplitude level in the above embodiment is the input n and f(n) is the grid size output for that input.
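As one possible form of the preset inverse proportional function, the following Python sketch maps the jitter amplitude level n to a grid cell size; the base cell size of 40 pixels and the exact form f(n) = base / n are illustrative assumptions rather than values fixed by this description. A discrete lookup variant is also shown, anticipating the discrete/continuous remark below.

```python
# Discrete variant of f(n): an integer lookup keeps every cell size a whole
# number of pixels so the frame divides into complete grid cells.
DISCRETE_CELL_SIZE = {1: 40, 2: 20, 3: 10}

def grid_size_from_level(level, base_cell_size=40.0):
    """Continuous inverse proportional mapping f(n) = base_cell_size / n.

    A higher jitter amplitude level yields a smaller cell size, i.e. a denser
    grid with more grid points, as explained above.
    """
    return base_cell_size / level

# Level 1 -> 40.0 px, level 2 -> 20.0 px, level 3 -> ~13.3 px.
print([grid_size_from_level(n) for n in (1, 2, 3)])
print([DISCRETE_CELL_SIZE[n] for n in (1, 2, 3)])  # 40, 20, 10
```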
For an original image frame with a larger jitter amplitude, the jitter amplitude level corresponding to its jitter information is input into the inverse proportional function, and a smaller grid size, i.e. a larger number of cells, is calculated. A larger number of cells means more grid points, and each grid point corresponds to one pixel value, so a denser grid corresponds to more pixel data of the original image frame. Performing anti-shake compensation and interpolation on the original image frame based on this larger amount of pixel data noticeably improves the sharpness of the processed image frame and, to a certain extent, alleviates the mosaic effect that would otherwise appear on an original image frame with a larger jitter amplitude.

For an original image frame with a smaller jitter amplitude, the jitter amplitude level corresponding to its jitter information is input into the inverse proportional function, and a larger grid size, i.e. a smaller number of cells, is calculated. A smaller number of cells means fewer grid points, and each grid point corresponds to one pixel value, so a sparser grid corresponds to less pixel data of the original image frame. Since an original image frame with a smaller jitter amplitude is already sharp, performing anti-shake compensation and interpolation on this smaller amount of pixel data is sufficient to satisfy the sharpness requirement. This avoids the extra computation, reduced image processing speed and wasted resources that would result from still using the uniform, fixed grid division.
In the embodiment of the application, the jitter amplitude level corresponding to the jitter information is input into the inverse proportional function, and the grid size of the original image frame is calculated. Therefore, for the original image frame with larger jitter amplitude, the jitter amplitude level corresponding to the jitter information is input into the inverse proportion function, and the grid of the original image frame is calculated to be smaller, namely the grid number is larger. And inputting the jitter amplitude level corresponding to the jitter information into the inverse proportion function for the original image frame with smaller jitter amplitude, and calculating that the grid of the original image frame is larger, namely the grid number is smaller. Dynamic meshing according to the jitter amplitude is achieved. Therefore, the image processing speed is increased and resources are saved while the photographing quality of the electronic equipment is improved.
On the basis of the above embodiment, the inverse proportional function is a discrete function or a continuous function.
Specifically, the jitter amplitude level in the above embodiment is the input n, and f(n) is the grid size output for that input. The inverse proportional function f(n) in the previous embodiment may be a discrete function or a continuous function. The continuous function is continuous over its domain of definition, i.e. has no discontinuity points, whereas a discrete function has discontinuity points in its domain of definition. The discrete function can ensure that the original image frame is divided into whole cells according to the calculated grid size, so as to avoid incomplete cells.
In the embodiment of the present application, the inverse proportional function may be a discrete function or a continuous function. Thus, when the grid size is calculated according to the inverse proportion function, more diversified calculation modes can be provided, and diversified grid division is realized.
In one embodiment, the grid includes grid points; as shown in fig. 6, step 260, performing image processing on the original image frame after the grid division according to the shaking information to obtain an image-processed image frame, includes:
and 262, performing jitter compensation on the original image frame after the grid division according to the jitter information to obtain a jitter-compensated image frame.
After the grid division is completed, the grid is overlaid on the original image frame. Jitter compensation is therefore applied to the grid vertices to obtain the compensated vertex coordinates, and the pixels located at the grid vertices of the original image frame can then be mapped to those compensated vertex coordinates. Each pixel located at a grid vertex of the original image frame is mapped to the compensated vertex coordinates, and the jitter-compensated image frame is formed from the mapped pixels. In this way, the compensation for the shaking of the original image frame is completed.
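The description does not fix a particular compensation model, so the following Python sketch assumes, purely for illustration, that the shake of a frame can be summarized as a small in-plane rotation plus a translation estimated from the shake information; it maps the grid vertices (and thus the pixels located at them) to their compensated coordinates.

```python
import numpy as np

def compensate_grid(grid_x, grid_y, roll_rad, shift_xy):
    """Map grid vertex coordinates to their shake-compensated positions.

    grid_x, grid_y : vertex coordinate arrays from the grid division step
    roll_rad       : in-plane rotation (radians) estimated from the gyroscope
    shift_xy       : (dx, dy) translation estimated from the shake information

    A simplified rigid model chosen for illustration; a real implementation
    may instead apply a full per-row or per-vertex homography.
    """
    cx, cy = grid_x.mean(), grid_y.mean()        # rotate about the frame centre
    cos_a, sin_a = np.cos(roll_rad), np.sin(roll_rad)
    x, y = grid_x - cx, grid_y - cy
    comp_x = cos_a * x - sin_a * y + cx + shift_xy[0]
    comp_y = sin_a * x + cos_a * y + cy + shift_xy[1]
    return comp_x, comp_y

# Example: compensate a coarse 3 x 3 vertex grid for a 0.5 degree roll.
gx, gy = np.meshgrid(np.linspace(0, 1920, 3), np.linspace(0, 1080, 3))
cgx, cgy = compensate_grid(gx, gy, np.deg2rad(0.5), (2.0, -1.5))
```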
Step 264, obtaining the pixel values of the grid points on the image frame after the jitter compensation, and performing interpolation calculation on the pixel values of the grid points on the image frame after the jitter compensation to obtain the pixel values in the grid on the image frame after the jitter compensation.
After the pixel values of the grid points on the image frame after the jitter compensation are obtained, the pixel values in the grid points also need to be calculated. Specifically, interpolation calculation is performed on pixel values of grid points on the image frame after the jitter compensation, and pixel values in grids on the image frame after the jitter compensation are obtained.
Step 266, obtaining the image frame after image processing according to the pixel values of the grid points on the image frame after the shake compensation and the pixel values in the grid on the image frame after the shake compensation.
In the embodiment of the application, the original image frame after grid division is subjected to jitter compensation firstly according to the jitter information to obtain the image frame after the jitter compensation. And then, carrying out interpolation calculation on the pixel values of the grid points on the image frame after the jitter compensation to obtain the pixel values in the grid on the image frame after the jitter compensation. Finally, an image frame after image processing is obtained. The image quality after processing is improved from two aspects of jitter compensation and interpolation processing.
In one embodiment, the step 264 of performing interpolation calculation on the pixel values of the grid points on the image frame after the shake compensation to obtain the pixel values in the grid on the image frame after the shake compensation includes:
and performing interpolation calculation on the pixel values of the grid points on the image frame after the jitter compensation by adopting any one of a nearest interpolation algorithm, a bilinear interpolation algorithm and a bicubic interpolation algorithm to obtain the pixel values in the grid on the image frame after the jitter compensation.
Specifically, when an image is captured it is exposed row by row, with the same exposure time for each row; meanwhile the gyroscope collects attitude data of the electronic device at a preset rate and records a timestamp for each collected sample. Because the attitude data cannot correspond exactly to every pixel, it needs to be supplemented by forward interpolation or backward interpolation. Backward interpolation divides a grid on the output image, traces each grid cell back to its corresponding position on the original image, obtains the attitude data at that position, and uses it as the attitude data of the cell. Forward interpolation divides a grid on the input image, finds the attitude data corresponding to each cell of the divided grid (the available attitude data being distributed unevenly, sparser in some places and denser in others), and then interpolates to obtain the attitude data within each cell. The interpolation mode in the present application may be either forward interpolation or backward interpolation.
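As a small illustration of matching the gyroscope's attitude samples to the row-by-row exposure just described, the sketch below linearly interpolates the recorded samples onto the exposure timestamp of each image row; the constant per-row interval and the function name pose_per_row are assumptions made for this example.

```python
import numpy as np

def pose_per_row(gyro_t, gyro_angle, frame_start_t, row_time, num_rows):
    """Interpolate gyroscope angles onto the exposure time of each image row.

    gyro_t, gyro_angle : timestamps and angles recorded by the gyroscope
    frame_start_t      : exposure start time of the first row
    row_time           : assumed constant line-by-line exposure interval
    num_rows           : number of rows in the frame
    """
    row_times = frame_start_t + row_time * np.arange(num_rows)
    # np.interp performs the piecewise-linear interpolation between samples.
    return np.interp(row_times, gyro_t, gyro_angle)

# Example: 200 Hz gyroscope samples interpolated onto a 1080-row frame.
t = np.arange(0.0, 0.04, 0.005)              # gyroscope timestamps (s)
angle = np.cumsum(np.full(t.shape, 1e-3))    # toy integrated angle per sample
print(pose_per_row(t, angle, 0.001, 2.5e-5, 1080)[:3])
```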
In addition, the nearest-neighbor interpolation algorithm is the simplest interpolation method and requires no calculation: among the four neighboring pixels of the pixel to be solved, the gray value of the pixel nearest to it is assigned to the pixel to be solved. Let the coordinate of the pixel to be solved be (x + u, y + v), where x and y are integers and u and v are decimals greater than zero and smaller than 1; the pixel point nearest to (x + u, y + v) is selected, and its gray value is taken as the value f(x + u, y + v) of the interpolated pixel.
Bilinear interpolation performs linear interpolation on the pixel matrix in the x direction and then in the y direction.

The bicubic interpolation algorithm is the most commonly used interpolation method in two-dimensional space. The value of the function f at the point (x, y) is obtained as a weighted average of the nearest sixteen sampling points in a rectangular grid, which requires two cubic polynomial interpolation functions, one for each direction.
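The choice among the three algorithms is left open above; as a concrete sketch, the code below resamples a frame through a coordinate mapping with each of the three interpolation modes using OpenCV's remap. The use of OpenCV is an assumption made for illustration; the description does not name any particular library.

```python
import cv2
import numpy as np

def resample(frame, map_x, map_y, method="bilinear"):
    """Resample `frame` at the source coordinates (map_x, map_y).

    map_x, map_y : float32 per-output-pixel source coordinates, e.g. densified
                   from the jitter-compensated grid vertices.
    method       : 'nearest', 'bilinear' or 'bicubic'.
    """
    flags = {
        "nearest": cv2.INTER_NEAREST,
        "bilinear": cv2.INTER_LINEAR,
        "bicubic": cv2.INTER_CUBIC,
    }[method]
    return cv2.remap(frame, map_x, map_y, interpolation=flags)

# Example: shift a toy frame by half a pixel with each interpolation method.
frame = (np.random.rand(64, 64) * 255).astype(np.uint8)
ys, xs = np.mgrid[0:64, 0:64].astype(np.float32)
for m in ("nearest", "bilinear", "bicubic"):
    out = resample(frame, (xs + 0.5).astype(np.float32), ys, method=m)
```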
In the embodiment of the present application, the interpolation algorithms differ, so different interpolation methods can be adopted according to different image processing requirements. An appropriate algorithm is therefore selected from these interpolation methods according to the requirements of the image processing, and interpolation calculation is performed on the pixel values of the grid points on the jitter-compensated image frame to obtain the pixel values within the grid on the jitter-compensated image frame.
In one embodiment, step 220, acquiring shaking information of the electronic device when capturing the original image frame includes:
and acquiring attitude data of the electronic equipment when the electronic equipment shoots the original image frame through an attitude sensor.
In the embodiment of the application, acquiring the shake information when the electronic device captures the original image frame may be acquiring, through an attitude sensor, the attitude data of the electronic device at the time the original image frame is captured. The attitude sensor may be a gyroscope, and the attitude data may be information such as the angular velocity or the three-axis rotation angle of a lens in any camera module of the electronic device, collected by the gyroscope. From the angular velocity, the angles (i.e. the three-axis rotation angles) can be calculated. The three-axis rotation angles are the Euler angles, a set of three independent angle parameters consisting of a nutation angle θ, a precession angle ψ and a spin angle φ, which mainly describe rotations about three axes: pitch about the x-axis, yaw about the y-axis and roll about the z-axis. Pitch rotates the object around the X axis (localRotationX), yaw (heading) rotates the object around the Y axis (localRotationY), and roll rotates the object around the Z axis (localRotationZ). In this way, the attitude data of the electronic device when it captures the original image frame can be accurately acquired through the gyroscope, laying the foundation for the subsequent grid division, jitter compensation and interpolation calculation based on the attitude data.
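As a simple illustration of turning the gyroscope's angular-velocity samples into the three-axis rotation angles mentioned above, the sketch below integrates each axis over the sample timestamps; treating the three axes independently is a small-angle simplification assumed here, and a full implementation would compose rotations (for example with quaternions).

```python
import numpy as np

def angles_from_angular_velocity(timestamps, omega):
    """Integrate angular velocity into pitch / yaw / roll angles.

    timestamps : (N,) sample times in seconds
    omega      : (N, 3) angular velocity about the x, y and z axes (rad/s)

    Returns the cumulative rotation angle per axis at each sample time.
    """
    dt = np.diff(timestamps, prepend=timestamps[0])   # per-sample intervals
    return np.cumsum(omega * dt[:, None], axis=0)     # pitch(x), yaw(y), roll(z)

# Example: 200 Hz samples of a slow rotation about the x axis only.
t = np.arange(0.0, 0.1, 0.005)
w = np.zeros((t.size, 3))
w[:, 0] = 0.2                                         # 0.2 rad/s pitch rate
print(angles_from_angular_velocity(t, w)[-1])         # ~[0.019, 0, 0]
```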
In a specific embodiment, FIG. 7 shows a diagram of the original grid on an image frame.
Specifically, the left diagram ABCD in fig. 7 represents the original grid, e.g. a 40 × 40 grid, before the jitter compensation (i.e. rotation) of the original image frame, and the right diagram A'B'C'D' in fig. 7 represents the original grid after rotation. When the shake amplitude of the original image frame is large, the rotated grid A'B'C'D' produced by the shake correction also changes greatly, which shows up as a large change in angle and a large change in resolution. When the pixel values inside this strongly changed grid A'B'C'D' are interpolated from the pixel values at its four grid points A', B', C', D', the distance between adjacent grid points has become larger (compared with the ABCD grid), so the interpolated image frame exhibits varying degrees of mosaic effect and jagged-edge effect, which degrades the image quality.
Therefore, with the image processing method provided by the application, the gyroscope first acquires the angular velocity information of the electronic device when the original image frame is captured, the jitter information of the 4 adjacent image frames captured before the original image frame is then acquired, and a normal distribution map is calculated from the jitter information of these 4 image frames and the angular velocity information of the currently captured original image frame. Secondly, the jitter information of the original image frame is graded by jitter amplitude based on the normal distribution map to obtain the jitter amplitude level corresponding to the jitter information. Thirdly, the jitter amplitude level is input into an inverse proportional function, the grid size of the original image frame is calculated, and the original image frame is grid-divided at that size. Finally, jitter compensation and interpolation are performed on the grid-divided original image frame to obtain the image-processed image frame.
When the jitter amplitude is large, the original grid of the left diagram ABCD in fig. 7 is re-divided by the method in the present application into 20 × 20 grid cells. Fig. 8 is a schematic diagram of the grid divided on an image frame by the method of the present application: the left diagram is the 20 × 20 grid with the four added grid points E, F, G, H, and the right diagram is the rotated grid with the four added grid points E', F', G', H'. In this way, the distance between adjacent grid points in the rotated grid is correspondingly reduced, the mosaic effect and the jagged-edge effect on the interpolated image frame are adaptively weakened, and the sharpness of the image is further improved.
In the embodiment of the application, the jitter information of a preset number of image frames adjacent to an original image frame is acquired. And calculating the mean value and the standard deviation of the jitter information according to the jitter information of the original image frames and the jitter information of the preset number of image frames to obtain a normal distribution graph corresponding to the acquired jitter information. The jitter amplitude of the original image frame is classified based on the normal distribution diagram, so that the jitter amplitude level corresponding to the jitter information can be accurately obtained. And then, accurately calculating the size of the grid which is adaptive to the jitter amplitude level, and carrying out grid division on the original image frame according to the size of the grid to obtain the original image frame after grid division. In this way, the original image frame after the grid division is subjected to jitter compensation and then interpolation calculation according to the jitter information, and the image frame after the image processing is obtained. Thus, while ensuring improved image sharpness, the amount of computation is reduced.
It should be understood that, although the steps in the above-described flowcharts are shown in order as indicated by the arrows, the steps are not necessarily performed in order as indicated by the arrows. The steps are not performed in the exact order shown and described, and may be performed in other orders, unless explicitly stated otherwise. Moreover, at least a portion of the steps in the above-described flowcharts may include multiple sub-steps or multiple stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of performing the sub-steps or the stages is not necessarily sequential, but may be performed alternately or alternatingly with other steps or at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 9A, there is provided an image processing apparatus 900 including: a jitter information obtaining module 920, a mesh dividing module 940 and an image processing module 960, wherein:
a shaking information obtaining module 920, configured to obtain shaking information of an original image frame taken by an electronic device;
a mesh division module 940, configured to perform mesh division on the original image frame according to the jitter information of the original image frame, to obtain an original image frame after the mesh division;
the image processing module 960 is configured to perform image processing on the original image frame after the grid division according to the jitter information, so as to obtain an image frame after the image processing.
In one embodiment, the meshing module 940 includes:
the jitter amplitude level obtaining unit is used for carrying out jitter amplitude grading on the jitter information according to a preset rule to obtain a jitter amplitude level corresponding to the jitter information;
the grid size calculation unit is used for inputting the jitter amplitude level corresponding to the jitter information into a preset function and calculating the grid size of the original image frame;
and the grid division unit is used for carrying out grid division on the original image frame according to the size of the grid to obtain the original image frame after the grid division.
In one embodiment, the jitter amplitude level obtaining unit is configured to acquire jitter information of a preset number of image frames adjacent to the original image frame; calculate the mean value and the standard deviation of the jitter information according to the jitter information of the original image frame and the jitter information of the preset number of image frames; and perform jitter amplitude grading on the jitter information of the original image frame based on the mean value and the standard deviation of the jitter information to obtain the jitter amplitude level corresponding to the jitter information.
In one embodiment, the preset function is an inverse proportional function; and the grid size calculating unit is used for inputting the jitter amplitude level corresponding to the jitter information into the inverse proportion function and calculating the grid size of the original image frame.
In one embodiment, the inverse proportional function is a discrete function or a continuous function.
In one embodiment, the grid includes grid points; an image processing module 960, comprising:
the jitter compensation unit is used for carrying out jitter compensation on the original image frame after the grid division according to the jitter information to obtain an image frame after the jitter compensation;
the interpolation unit is used for acquiring the pixel values of the grid points on the image frame after the jitter compensation and carrying out interpolation calculation on the pixel values of the grid points on the image frame after the jitter compensation to obtain the pixel values in the grids on the image frame after the jitter compensation;
and the image frame generating unit is used for obtaining the image frame after image processing according to the pixel values of the grid points on the image frame after the jitter compensation and the pixel values in the grids on the image frame after the jitter compensation.
In an embodiment, the interpolation unit is further configured to perform interpolation calculation on the pixel values of the grid points on the image frame after the jitter compensation by using any one of a nearest neighbor interpolation algorithm, a bilinear interpolation algorithm, and a bicubic interpolation algorithm, so as to obtain the pixel values in the grid on the image frame after the jitter compensation.
In one embodiment, the shake information obtaining module 920 is further configured to obtain, through the attitude sensor, attitude data of the electronic device when capturing the raw image frame.
The division of the modules in the image processing apparatus is only for illustration, and in other embodiments, the image processing apparatus may be divided into different modules as needed to complete all or part of the functions of the image processing apparatus.
Fig. 9B is a schematic diagram of the internal structure of the electronic device in one embodiment. As shown in fig. 9B, the electronic device includes a processor and a memory connected by a system bus. The processor provides computing and control capability and supports the operation of the whole electronic device. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the image processing method provided in the following embodiments. The internal memory provides a cached execution environment for the operating system and the computer program in the non-volatile storage medium. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
The implementation of each module in the image processing apparatus provided in the embodiment of the present application may be in the form of a computer program. The computer program may be run on a terminal or a server. The program modules constituted by the computer program may be stored on the memory of the terminal or the server. Which when executed by a processor, performs the steps of the method described in the embodiments of the present application.
The embodiment of the application also provides the electronic equipment. The electronic device includes therein an Image Processing circuit, which may be implemented using hardware and/or software components, and may include various Processing units defining an ISP (Image Signal Processing) pipeline. FIG. 10 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 10, for convenience of explanation, only aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in fig. 10, the image processing circuit includes a first ISP processor 1030, a second ISP processor 1040, and a control logic 1050. The first camera 1010 includes one or more first lenses 1012 and a first image sensor 1014. First image sensor 1014 may include a color filter array (e.g., a Bayer filter), and first image sensor 1014 may acquire light intensity and wavelength information captured with each imaging pixel of first image sensor 1014 and provide a set of image data that may be processed by first ISP processor 1030. The second camera 1020 includes one or more second lenses 1022 and a second image sensor 1024. The second image sensor 1024 may include a color filter array (e.g., a Bayer filter), and the second image sensor 1024 may acquire light intensity and wavelength information captured with each imaging pixel of the second image sensor 1024 and provide a set of image data that may be processed by the second ISP processor 1040.
The first image acquired by the first camera 1010 is transmitted to the first ISP processor 1030 to be processed, after the first ISP processor 1030 processes the first image, the statistical data (such as the brightness of the image, the contrast value of the image, the color of the image, and the like) of the first image can be sent to the control logic 1050, and the control logic 1050 can determine the control parameter of the first camera 1010 according to the statistical data, so that the first camera 1010 can perform operations such as automatic focusing and automatic exposure according to the control parameter. The first image may be stored in the image memory 1060 after being processed by the first ISP processor 1030, and the first ISP processor 1030 may also read the image stored in the image memory 1060 for processing. In addition, the first image may be directly transmitted to the display 1070 to be displayed after being processed by the ISP processor 1030, and the display 1070 may also read and display the image in the image memory 1060.
Wherein the first ISP processor 1030 processes the image data pixel by pixel in a plurality of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the first ISP processor 1030 may perform one or more image processing operations on the image data, collecting statistics about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
The image Memory 1060 may be a portion of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving image data from the interface of the first image sensor 1014, the first ISP processor 1030 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 1060 for additional processing before being displayed. The first ISP processor 1030 receives the processed data from the image memory 1060 and performs image data processing on it in the RGB and YCbCr color spaces. The image data processed by the first ISP processor 1030 may be output to the display 1070 for viewing by a user and/or further processed by a Graphics Processing Unit (GPU). Further, the output of the first ISP processor 1030 may also be sent to the image memory 1060, and the display 1070 may read image data from the image memory 1060. In one embodiment, the image memory 1060 may be configured to implement one or more frame buffers.
The statistics determined by the first ISP processor 1030 may be sent to the control logic 1050. For example, the statistical data may include first image sensor 1014 statistics such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, first lens 1012 shading correction, and the like. Control logic 1050 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters for first camera 1010 and control parameters for first ISP processor 1030 based on the received statistical data. For example, the control parameters of the first camera 1010 may include gain, integration time of exposure control, anti-shake parameters, flash control parameters, first lens 1012 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters, and the like. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as first lens 1012 shading correction parameters.
Similarly, the second image captured by the second camera 1020 is transmitted to the second ISP processor 1040 for processing. After the second ISP processor 1040 processes the second image, the statistical data of the second image (such as the brightness of the image, the contrast value of the image, the color of the image, etc.) may be sent to the control logic 1050, and the control logic 1050 may determine the control parameters of the second camera 1020 according to the statistical data, so that the second camera 1020 may perform operations such as auto-focus and auto-exposure according to the control parameters. The second image may be stored in the image memory 1060 after being processed by the second ISP processor 1040, and the second ISP processor 1040 may also read the image stored in the image memory 1060 for processing. In addition, the second image may be directly transmitted to the display 1070 for display after being processed by the second ISP processor 1040, or the display 1070 may read and display the image in the image memory 1060. The second camera 1020 and the second ISP processor 1040 may also implement the processes described for the first camera 1010 and the first ISP processor 1030.
The image processing circuit provided by the embodiment of the application can realize the image processing method. The electronic equipment can be provided with a plurality of cameras, each camera comprises a lens and an image sensor arranged corresponding to the lens, and the image sensors in the cameras are arranged in a rectangular diagonal mode. The process of the electronic device implementing the image processing method is as described in the above embodiments, and is not described herein again.
The embodiments of the present application also provide a computer-readable storage medium, namely one or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the image processing method.
The embodiments of the present application also provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform the image processing method.
Any reference to memory, storage, a database, or other media used by the embodiments of the present application may include non-volatile and/or volatile memory. Suitable non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The above examples express only several embodiments of the present application, and their description, while specific and detailed, should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. An image processing method applied to an electronic device, comprising:
acquiring jitter information of the electronic device when the electronic device shoots an original image frame;
carrying out jitter amplitude grading on the jitter information according to a preset rule to obtain a jitter amplitude level corresponding to the jitter information;
inputting the jitter amplitude level corresponding to the jitter information into an inverse proportional function, and calculating a grid size of the original image frame;
performing grid division on the original image frame according to the grid size to obtain the original image frame after the grid division;
and carrying out image processing on the original image frame after the grid division according to the jitter information to obtain an image frame after the image processing.
2. The method according to claim 1, wherein the carrying out jitter amplitude grading on the jitter information according to the preset rule to obtain the jitter amplitude level corresponding to the jitter information comprises:
acquiring jitter information of a preset number of image frames adjacent to the original image frame;
calculating the mean value and the standard deviation of the jitter information according to the jitter information of the original image frame and the jitter information of the preset number of image frames;
and carrying out jitter amplitude grading on the jitter information of the original image frame based on the mean value and the standard deviation of the jitter information to obtain the jitter amplitude level corresponding to the jitter information.
3. The method of claim 1, wherein the inverse proportional function is a discrete function or a continuous function.
4. The method of claim 1, wherein the grid comprises grid points, and the carrying out image processing on the original image frame after the grid division according to the jitter information to obtain the image frame after the image processing comprises:
performing jitter compensation on the original image frame after the grid division according to the jitter information to obtain a jitter-compensated image frame;
acquiring pixel values of grid points on the image frame after the jitter compensation, and performing interpolation calculation on the pixel values of the grid points on the image frame after the jitter compensation to obtain the pixel values in grids on the image frame after the jitter compensation;
and obtaining the image frame after image processing according to the pixel value of the grid point on the image frame after the jitter compensation and the pixel value in the grid on the image frame after the jitter compensation.
5. The method according to claim 4, wherein the interpolating the pixel values of the grid points on the image frame after the jitter compensation to obtain the pixel values in the grid on the image frame after the jitter compensation comprises:
and performing interpolation calculation on the pixel values of the grid points on the image frame after the jitter compensation by adopting any one of a nearest-neighbor interpolation algorithm, a bilinear interpolation algorithm, and a bicubic interpolation algorithm, to obtain the pixel values in the grid on the image frame after the jitter compensation.
6. The method of claim 1, wherein the acquiring jitter information of the electronic device when the electronic device shoots the original image frame comprises:
acquiring, through an attitude sensor, attitude data of the electronic device when the electronic device shoots the original image frame.
7. An image processing apparatus characterized by comprising:
a jitter information acquisition module, used for acquiring jitter information of an electronic device when the electronic device shoots an original image frame;
a grid division module, comprising:
a jitter amplitude level obtaining unit, used for carrying out jitter amplitude grading on the jitter information according to a preset rule to obtain a jitter amplitude level corresponding to the jitter information;
a grid size calculation unit, used for inputting the jitter amplitude level corresponding to the jitter information into a preset function and calculating a grid size of the original image frame;
a grid division unit, used for carrying out grid division on the original image frame according to the grid size to obtain the original image frame after the grid division; and
an image processing module, used for carrying out image processing on the original image frame after the grid division according to the jitter information to obtain an image frame after the image processing.
8. The apparatus according to claim 7, wherein the jitter amplitude level obtaining unit is configured to: acquire jitter information of a preset number of image frames adjacent to the original image frame; calculate the mean value and the standard deviation of the jitter information according to the jitter information of the original image frame and the jitter information of the preset number of image frames; and carry out jitter amplitude grading on the jitter information of the original image frame based on the mean value and the standard deviation of the jitter information to obtain the jitter amplitude level corresponding to the jitter information.
9. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, wherein the computer program, when executed by the processor, causes the processor to perform the steps of the image processing method according to any of claims 1 to 6.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
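For illustration only, the grading and interpolation steps recited in claims 2 to 5 can be sketched as follows in Python; the thresholds, the z-score based grading rule, and the function names are assumptions and not part of the claims. The second function fills one grid cell from its four corner grid points by bilinear interpolation, one of the algorithms named in claim 5.

import numpy as np

def grade_by_statistics(jitter: float, neighbors: list) -> int:
    # Grade the current frame's jitter against the mean and standard deviation
    # of the jitter information of the adjacent frames (cf. claim 2).
    samples = list(neighbors) + [jitter]
    mu = float(np.mean(samples))
    sigma = float(np.std(samples)) or 1.0
    z = (jitter - mu) / sigma
    return 1 if z < 0.0 else (2 if z < 1.0 else 3)

def fill_cell_bilinear(p00, p10, p01, p11, cell: int) -> np.ndarray:
    # Interpolate a cell x cell patch from its four corner grid-point values:
    # p00/p10 are the top corners, p01/p11 the bottom corners (cf. claims 4-5).
    u = np.linspace(0.0, 1.0, cell)            # horizontal weights
    v = np.linspace(0.0, 1.0, cell)[:, None]   # vertical weights
    top = (1 - u) * p00 + u * p10
    bottom = (1 - u) * p01 + u * p11
    return (1 - v) * top + v * bottom

print(grade_by_statistics(1.8, [0.4, 0.5, 0.6]))   # jitter well above the mean
print(np.round(fill_cell_bilinear(0.0, 30.0, 60.0, 90.0, cell=4), 1))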
CN202010114660.8A 2020-02-25 2020-02-25 Image processing method and device, electronic equipment and computer readable storage medium Active CN111371987B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010114660.8A CN111371987B (en) 2020-02-25 2020-02-25 Image processing method and device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010114660.8A CN111371987B (en) 2020-02-25 2020-02-25 Image processing method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111371987A CN111371987A (en) 2020-07-03
CN111371987B true CN111371987B (en) 2021-06-25

Family

ID=71210117

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010114660.8A Active CN111371987B (en) 2020-02-25 2020-02-25 Image processing method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111371987B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114339101B (en) * 2020-09-29 2023-06-20 华为技术有限公司 Video recording method and equipment
CN113256484B (en) * 2021-05-17 2023-12-05 百果园技术(新加坡)有限公司 Method and device for performing stylization processing on image

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102318334A (en) * 2009-12-22 2012-01-11 松下电器产业株式会社 Image processing device, imaging device, and image processing method
CN105163046A (en) * 2015-08-17 2015-12-16 成都鹰眼视觉科技有限公司 Video stabilization method based on grid point non-parametric motion model
CN105407271A (en) * 2014-09-09 2016-03-16 佳能株式会社 Image Processing Apparatus, Image Capturing Apparatus, Image Generation Apparatus, And Image Processing Method
CN107864374A (en) * 2017-11-17 2018-03-30 电子科技大学 A kind of binocular video digital image stabilization method for maintaining parallax
WO2018154130A1 (en) * 2017-02-27 2018-08-30 Koninklijke Kpn N.V. Processing spherical video data
CN110473159A (en) * 2019-08-20 2019-11-19 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment, computer readable storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005224983A (en) * 2004-02-10 2005-08-25 Seiko Epson Corp Image output system for outputting image according to information on number of dots formed in prescribed area
US20190035049A1 (en) * 2017-07-31 2019-01-31 Qualcomm Incorporated Dithered variable rate shading

Also Published As

Publication number Publication date
CN111371987A (en) 2020-07-03

Similar Documents

Publication Publication Date Title
CN110473159B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN108898567B (en) Image noise reduction method, device and system
CN111246089B (en) Jitter compensation method and apparatus, electronic device, computer-readable storage medium
CN110166695B (en) Camera anti-shake method and device, electronic equipment and computer readable storage medium
CN110610465B (en) Image correction method and device, electronic equipment and computer readable storage medium
CN110475067B (en) Image processing method and device, electronic equipment and computer readable storage medium
WO2017016050A1 (en) Image preview method, apparatus and terminal
JP6308748B2 (en) Image processing apparatus, imaging apparatus, and image processing method
CN110660090B (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN110866486B (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
US11770613B2 (en) Anti-shake image processing method, apparatus, electronic device and storage medium
CN107864335B (en) Image preview method and device, computer readable storage medium and electronic equipment
CN108769523B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN109286758B (en) High dynamic range image generation method, mobile terminal and storage medium
CN108989699B (en) Image synthesis method, image synthesis device, imaging apparatus, electronic apparatus, and computer-readable storage medium
CN111432118B (en) Image anti-shake processing method and device, electronic equipment and storage medium
CN111371987B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN110035206B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN113875219B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN113313661A (en) Image fusion method and device, electronic equipment and computer readable storage medium
CN109559352B (en) Camera calibration method, device, electronic equipment and computer-readable storage medium
CN112087571A (en) Image acquisition method and device, electronic equipment and computer readable storage medium
CN110796041A (en) Subject recognition method and device, electronic equipment and computer-readable storage medium
CN111372000B (en) Video anti-shake method and apparatus, electronic device, and computer-readable storage medium
CN109584311B (en) Camera calibration method, device, electronic equipment and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant