CN107749046B - Image processing method and mobile terminal - Google Patents


Publication number
CN107749046B
Authority
CN
China
Prior art keywords
background
area
target
image
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711030164.9A
Other languages
Chinese (zh)
Other versions
CN107749046A (en)
Inventor
徐日磊
卢异龄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201711030164.9A
Publication of CN107749046A
Application granted
Publication of CN107749046B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 — Geometric image transformations in the plane of the image
    • G06T3/04 — Context-preserving transformations, e.g. by using an importance map

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image processing method and a mobile terminal. The method comprises: determining an object area of a target object in an image to be processed; determining a target background area to be processed; dividing the target background area into N background sub-areas; and performing blurring processing of different degrees on the N background sub-areas according to their arrangement order, so as to obtain an image whose background has a blurring effect. The blurring degree value is a quantized value representing the degree of blurring of a background sub-area, and N is an integer greater than 1. The method thus enables the mobile terminal to apply background blurring of different degrees to the background area of an image during image capture, so that the captured image is more natural and highlights the focal subject. In addition, a complex manual operation process is avoided, saving the user's time.

Description

Image processing method and mobile terminal
Technical Field
The present invention relates to the field of communications technologies, and in particular, to an image processing method and a mobile terminal.
Background
With the widespread popularization of mobile terminals, people's expectations of them keep rising. To meet the demand for photographing and commemorating pleasant moments, more and more mobile terminals are equipped with cameras to provide a photographing function. To make an image captured by a mobile terminal look more natural and better highlight its focal subject (such as a person), the background of the captured image needs to have a blurring effect.
At present, a user generally takes an image first, opens it in image processing software, and manually frames the background area with a tool provided by that software. The software then applies Gaussian blur of a uniform or gradually varying scale to the user-framed background area and outputs an image with a background blurring effect.
However, in the prior art, to obtain an image with a background blurring effect, the user must shoot the image with a mobile terminal, open it in image processing software, and apply Gaussian blur to a manually framed background area. The operation process is therefore complex and wastes a great deal of the user's time.
Disclosure of Invention
The embodiment of the invention provides an image processing method and a mobile terminal, and aims to solve the problems that in the prior art, the operation process for obtaining an image with a background having a blurring effect is complex and time-consuming.
In order to solve the technical problem, the invention is realized as follows: a method of image processing, the method comprising:
determining an object area of a target object in an image to be processed;
determining a target background area to be processed;
dividing the target background area into N background sub-areas;
performing blurring processing of different degrees on the N background subregions according to the arrangement sequence of the N background subregions;
wherein, the blurring degree value is a quantization value representing the blurring degree of the background subregion; n is an integer greater than 1.
In a second aspect, an embodiment of the present invention further provides a mobile terminal, including:
the first determination module is used for determining an object area of a target object in an image to be processed;
the second determination module is used for determining a target background area to be processed;
the dividing module is used for dividing the target background area into N background sub-areas;
the blurring processing module is used for blurring the N background sub-regions to different degrees according to the arrangement sequence of the N background sub-regions;
wherein, the blurring degree value is a quantization value representing the blurring degree of the background subregion; n is an integer greater than 1.
In a third aspect, an embodiment of the present invention further provides a mobile terminal, including: a memory, a processor and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the image processing method described above.
In a fourth aspect, the embodiment of the present invention further provides a readable storage medium, where a computer program is stored on the readable storage medium, and when the computer program is executed by a processor, the computer program implements the steps of the image processing method described above.
In the embodiment of the invention, the object area of the target object in the image to be processed is determined. Determining a target background area to be processed, and dividing the target background area into N background sub-areas. And performing blurring processing of different degrees on the N background sub-regions according to the arrangement sequence of the N background sub-regions to obtain an image with a blurring effect on the background. Therefore, the image processing method provided by the embodiment of the invention enables the mobile terminal to perform background blurring processing to different degrees on the background area of the image to be acquired in the image acquisition process, so that the acquired image is more natural and can highlight the focus. Meanwhile, a complex operation process is avoided, and the time of a user is saved.
Drawings
FIG. 1 is a flowchart of an image processing method according to an embodiment of the present invention;
fig. 2 is a flowchart of an image processing method provided in an embodiment of the present invention in an actual application scenario;
fig. 3 is one of schematic diagrams of an image processing method provided in an embodiment of the present invention in an actual application scenario;
fig. 4 is a second schematic diagram of an image processing method in an actual application scenario according to an embodiment of the present invention;
fig. 5 is a third schematic diagram of an image processing method in an actual application scenario according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention;
fig. 7 is a second schematic structural diagram of a mobile terminal according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention. The technical solutions provided by the embodiments of the present invention are described in detail below with reference to the accompanying drawings.
In order to solve the problems of complex operation process and waste of a large amount of time for a user in obtaining an image with a blurring effect in the background in the prior art, the present invention provides an image processing method, and an execution subject of the method may be, but is not limited to, a mobile terminal (e.g., a mobile phone, a tablet computer, etc.) or an apparatus capable of being configured to execute the method provided by the embodiment of the present invention.
For convenience of description, the following description will be made on embodiments of the method, taking as an example that the execution subject of the method is a mobile terminal capable of executing the method. It is understood that the mobile terminal is used as the main body of the method and is only an exemplary illustration, and should not be construed as a limitation of the method.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention, where the method in fig. 1 may be executed by a mobile terminal, and as shown in fig. 1, the method may include:
step 101, determining an object area of a target object in the image to be processed.
The mobile terminal is provided with a camera, and images are collected through the camera.
The image to be processed can comprise a pre-stored image and a preview image acquired by a camera in real time in the shooting process. That is, the image to be processed may be a picture that has already been generated; or, the image to be processed may be a preview image acquired by the camera in real time.
The target object may refer to an object to be imaged, such as a person or a scene.
The object region of the target object may be the region enclosed by the outline of the target object, or the minimum rectangular region containing the target object. In other words, the object region is the region in the image where the target object is imaged.
Step 102, determining a target background area to be processed.
In this step, all image regions except the object region in the image to be processed may be determined as a target background region to be processed.
Therefore, whether the image to be processed is a pre-stored image or a preview image captured by the camera in real time during shooting, the region other than the object region where the target object is imaged is determined as the target background area.
And 103, dividing the target background area into N background sub-areas.
In this step, dividing the target background area into N background sub-areas can be implemented in various ways. For example, the division may be performed according to the vertical-axis coordinate of the target background area, or according to the distance of the target background area from the object area.
It should be understood that, in the embodiments of the present application, the vertical-axis coordinate refers to the vertical coordinate of the image to be processed in a Cartesian coordinate system, with upward as the positive direction; the position of the origin is not limited.
And 104, performing blurring processing on the N background sub-regions to different degrees according to the arrangement sequence of the N background sub-regions.
Blurring processing of different degrees means blurring with different blurring degree values. The blurring degree value can take various forms; for example, it may be a quantized value representing the degree of blurring of a background sub-area. In practical applications, the blurring degree value m ∈ (0, 1].
The arrangement sequence of the plurality of background sub-regions may refer to sequential arrangement in the direction of the longitudinal axis, or may refer to sequential arrangement in the direction away from the target object with the contour of the target object in the image acquisition interface as the center.
In the embodiment of the invention, the object area of the target object in the image to be processed is determined. Determining a target background area to be processed, and dividing the target background area into N background sub-areas. And performing blurring processing of different degrees on the N background sub-regions according to the arrangement sequence of the N background sub-regions to obtain an image with a blurring effect on the background. Therefore, the method of the embodiment of the invention can enable the mobile terminal to perform background blurring processing of different degrees on the background area of the image to be acquired in the image acquisition process, so that the acquired image is more natural and can highlight the focus.
In addition, the method of the embodiment of the invention can simplify the background blurring process, avoid the complex operation process and improve the image generation efficiency.
Optionally, as an embodiment, the step 101 may be specifically implemented as:
determining a minimum rectangular area containing the target object as an object area of the target object;
or, recognizing the contour of the target object, and determining an image area surrounded by the contour as an object area of the target object.
Of course, other implementation manners may also exist in step 101, and the embodiment of the present invention is not particularly limited.
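As an illustration of the two options above, here is a minimal Python/NumPy sketch of our own (not from the patent), assuming the target object is available as a boolean segmentation mask; the contour option is represented simply by the mask itself, while the rectangle option computes the minimal bounding box:

```python
import numpy as np

def min_bounding_rect(mask):
    """Smallest rectangle (top, bottom, left, right), inclusive, containing
    every True pixel of the object mask."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    top, bottom = np.where(rows)[0][[0, -1]]
    left, right = np.where(cols)[0][[0, -1]]
    return int(top), int(bottom), int(left), int(right)

# A 6x6 mask whose object occupies rows 2-4 and columns 1-3.
mask = np.zeros((6, 6), dtype=bool)
mask[2:5, 1:4] = True
print(min_bounding_rect(mask))  # (2, 4, 1, 3)
```

In practice a segmentation or saliency model would supply the mask; the bounding-box variant trades precision for simplicity.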
Optionally, as an embodiment, the step 103 may be specifically implemented as:
and longitudinally dividing the target background area into N background sub-areas according to the longitudinal axis coordinate of the target background area.
It should be understood that the vertical axis coordinate of the target background area may refer to the distance from the lowest end of the target background area to the bottom end of the image.
The heights of the N background sub-areas may be equal; alternatively, the height difference between each pair of adjacent background sub-areas may be a distinct preset value, which can be set according to actual requirements and is not specifically limited in the embodiments of the present invention.
In this embodiment of the invention, longitudinally dividing the target background area into N background sub-areas by its vertical-axis coordinate cuts the whole target background area into N strip-shaped sub-areas. After the subsequent blurring of these sub-areas to different degrees, the blurring degree of the image increases from weak at the bottom of the target background area to strong at its top, so the final image looks more natural.
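The strip division just described can be sketched as follows (an illustrative helper of our own; equal-height strips, with any remainder rows absorbed by rounding):

```python
def split_into_strips(height, n):
    """Split image rows [0, height) into n contiguous horizontal strips of
    (near-)equal height, returned top to bottom as (y_start, y_end) pairs."""
    bounds = [round(i * height / n) for i in range(n + 1)]
    return [(bounds[i], bounds[i + 1]) for i in range(n)]

strips = split_into_strips(480, 5)
print(strips)  # [(0, 96), (96, 192), (192, 288), (288, 384), (384, 480)]
```

Each returned pair is a row slice of the image, ready to be blurred with its own degree value.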
Optionally, as another embodiment, step 103 may be specifically implemented as:
dividing a first area in the target background area into M background sub-areas according to the coordinates of a longitudinal axis;
determining a second region in the target background regions as an M +1 th background subregion;
determining a third region in the target background regions as an M +2 th background subregion;
wherein, M is a positive integer larger than 1, and N is M + 2.
Further, in the present embodiment,
the first area is a first rectangular area which comprises the target object in the image to be processed, the longitudinal axis coordinate range of the first rectangular area is equal to the longitudinal axis coordinate range of the target object, and the width of the first rectangular area is equal to the width of the image to be processed;
the second region is a second rectangular region on the lower side of the first rectangular region in the image to be processed;
the third area is a third rectangular area on the upper side of the first rectangular area in the image to be processed; wherein the image to be processed is composed of the first rectangular area, the second rectangular area and the third rectangular area.
It should be understood that, in this embodiment, the range of the ordinate of the longitudinal axis of the first rectangular area is equal to the range of the ordinate of the longitudinal axis of the target object, that is, the ordinate of the longitudinal axis of the first rectangular area is between the maximum ordinate of the longitudinal axis and the minimum ordinate of the longitudinal axis of the target object.
It should be understood that, in this embodiment, the second region is a second rectangular region on the lower side of the first rectangular region in the image to be processed; that is, the vertical-axis coordinate of the second rectangular region is smaller than the minimum vertical-axis coordinate of the target object.
It should be understood that, in this embodiment, the third region is a third rectangular region on the upper side of the first rectangular region in the image to be processed, that is, the ordinate of the longitudinal axis of the third rectangular region is greater than the maximum ordinate of the longitudinal axis of the target object.
It should be understood that the heights of the M background sub-areas within the first area 1 may be equal; alternatively, the height difference between each pair of adjacent sub-areas may be a distinct preset value, which can be set according to actual requirements and is not specifically limited in the embodiments of the present invention.
According to this embodiment of the invention, the first area containing the target object is finely divided, and the resulting background sub-areas are then blurred to different degrees, so that the blurring degree of the image increases from weak at the bottom of the first rectangular area to strong at its top. The target background area containing the target object thus looks more natural, and the target object in the object area is highlighted more prominently.
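A minimal sketch of this M + 2 division (hypothetical helper of our own; image row 0 is taken as the top row, whereas the patent's vertical axis points upward):

```python
def divide_m_plus_2(img_height, obj_top, obj_bottom, m):
    """Return m strips spanning the object's vertical extent
    [obj_top, obj_bottom), plus the region below the object (the patent's
    second region) and the region above it (the third region); N = m + 2."""
    bounds = [obj_top + round(i * (obj_bottom - obj_top) / m) for i in range(m + 1)]
    strips = [(bounds[i], bounds[i + 1]) for i in range(m)]
    below = (obj_bottom, img_height)   # second rectangular region
    above = (0, obj_top)               # third rectangular region
    return strips, below, above

strips, below, above = divide_m_plus_2(100, 20, 80, 3)
print(strips, below, above)  # [(20, 40), (40, 60), (60, 80)] (80, 100) (0, 20)
```

The object's vertical extent would come from the object-area step (e.g. a bounding box); the blurring degree of the two outer regions is left to the caller, as in the patent.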
Optionally, as still another embodiment, step 103 may be specifically implemented as:
dividing a first area in the target background area into N background sub-areas according to the coordinates of a longitudinal axis;
merging a second region in the target background region and a 1 st background subregion adjacent to the second region into one background subregion;
and combining a third area in the target background area and an Nth background subarea adjacent to the third area into one background subarea.
Further, in the present embodiment,
the first area is a first rectangular area which comprises the target object in the image to be processed, the longitudinal axis coordinate range of the first rectangular area is equal to the longitudinal axis coordinate range of the target object, and the width of the first rectangular area is equal to the width of the image to be processed;
the second region is a second rectangular region on the lower side of the first rectangular region in the image to be processed;
the third area is a third rectangular area on the upper side of the first rectangular area in the image to be processed;
wherein the image to be processed is composed of the first rectangular area, the second rectangular area and the third rectangular area.
It should be understood that, in this embodiment, the range of the ordinate of the longitudinal axis of the first rectangular area is equal to the range of the ordinate of the longitudinal axis of the target object, that is, the ordinate of the longitudinal axis of the first rectangular area is between the maximum ordinate of the longitudinal axis and the minimum ordinate of the longitudinal axis of the target object.
It should be understood that, in this embodiment, the second region is a second rectangular region on the lower side of the first rectangular region in the image to be processed; that is, the vertical-axis coordinate of the second rectangular region is smaller than the minimum vertical-axis coordinate of the target object.
It should be understood that, in this embodiment, the third region is a third rectangular region on the upper side of the first rectangular region in the image to be processed, that is, the ordinate of the longitudinal axis of the third rectangular region is greater than the maximum ordinate of the longitudinal axis of the target object.
It should be understood that the heights of the N background sub-areas within the first area 1 may be equal; alternatively, the height difference between each pair of adjacent sub-areas may be a distinct preset value, which can be set according to actual requirements and is not specifically limited in the embodiments of the present invention.
In this embodiment of the invention, the first area containing the target object is divided, and the sub-areas at its two edges are merged with the adjacent second and third areas respectively. When the resulting background sub-areas are subsequently blurred to different degrees, the blurring degree of the image increases from weak at the bottom of the first area to strong at its top, the second area shares the blurring effect of its adjacent sub-area in the first area, and the third area shares the blurring effect of its adjacent sub-area in the first area. The formed image is therefore more natural and highlights the target object in the object area more prominently.
Optionally, as still another embodiment, step 103 may be specifically implemented as:
and dividing the target background area into N background sub-areas according to the distance from the object area.
Specifically, the target background region is divided into N background sub-regions according to the distance from the object region, which may be implemented as:
the method comprises the steps of obtaining the outline of an object area of a target object, and dividing a target background area into N background sub-areas according to the distance from the outline of the object area to the outline of the object area by taking the outline of the object area of the target object as the center.
In this embodiment, when the target background area is divided into N background sub-areas by distance from the object area, the difference between the maximum and minimum distances from each sub-area to the object area may be the same for all sub-areas. Alternatively, this difference may be a distinct preset value for each pair of adjacent sub-areas, set according to actual requirements; the embodiments of the present invention are not specifically limited in this respect.
In this embodiment of the invention, dividing the target background area by its distance from the object area cuts the whole target background area into N annular background sub-areas. After these sub-areas are blurred to different degrees, the blurring degree of the image increases gradually from positions near the target object to positions far from it, so the final image looks more natural and highlights the target object in its object area.
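A small self-contained sketch of the ring division (our own brute-force distance computation, adequate only for tiny images; the helper name is hypothetical):

```python
import numpy as np

def ring_labels(obj_mask, n):
    """Label every background pixel with a ring index 0..n-1 according to its
    Euclidean distance to the nearest object pixel; object pixels get -1."""
    h, w = obj_mask.shape
    ys, xs = np.nonzero(obj_mask)
    obj_pts = np.stack([ys, xs], axis=1).astype(float)
    yy, xx = np.mgrid[0:h, 0:w]
    pix = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float)
    # Distance from every pixel to the closest object pixel (brute force).
    d = np.sqrt(((pix[:, None, :] - obj_pts[None, :, :]) ** 2).sum(-1)).min(1)
    d = d.reshape(h, w)
    labels = np.full((h, w), -1)
    bg = ~obj_mask
    labels[bg] = np.minimum((d[bg] / d[bg].max() * n).astype(int), n - 1)
    return labels

mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True                      # single object pixel at the center
labels = ring_labels(mask, 2)
print(labels[2, 3], labels[0, 0])      # 0 1  (near ring, far ring)
```

For realistic image sizes, a proper distance transform (e.g. `scipy.ndimage.distance_transform_edt`, if SciPy is available) computes the same distances far more efficiently.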
Optionally, as an embodiment of the present invention, step 104 may be specifically implemented as: performing blurring processing of different degrees on the N background sub-areas, in the order of their arrangement, in a first blurring mode in which the blurring degree value gradually increases; or performing blurring processing of different degrees on the N background sub-areas, in the order of their arrangement, in a second blurring mode in which the blurring degree value gradually decreases.
In the first blurring mode, the difference between the blurring degree values of two adjacent background sub-areas may be a constant preset value; alternatively, the blurring degree value of each background sub-area may be a distinct preset value. The same holds for the second blurring mode.
In this embodiment of the present invention, each background sub-area formed above can be blurred to a different degree, chosen in combination with parameters such as the sub-area's size, position, and shape, so that the formed image is more natural and the target object in the object area stands out more prominently.
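The graduated blurring itself can be sketched as below (an illustrative box blur of our own devising, standing in for whatever blur kernel the terminal actually uses; the top strip is blurred most, matching the weak-at-bottom, strong-at-top gradient described above):

```python
import numpy as np

def box_blur(img, k):
    """Naive (2k+1) x (2k+1) box blur with edge padding; k = 0 is a no-op."""
    if k == 0:
        return img.astype(float)
    h, w = img.shape
    p = np.pad(img.astype(float), k, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += p[k + dy : k + dy + h, k + dx : k + dx + w]
    return out / (2 * k + 1) ** 2

def blur_strips(img, n, k_max):
    """Split the rows into n horizontal strips and blur strip i with a kernel
    radius that shrinks toward the bottom of the image."""
    h = img.shape[0]
    bounds = [round(i * h / n) for i in range(n + 1)]
    out = img.astype(float).copy()
    for i in range(n):
        k = round(k_max * (n - i) / n)   # strip 0 (top) gets the largest radius
        out[bounds[i]:bounds[i + 1]] = box_blur(img, k)[bounds[i]:bounds[i + 1]]
    return out

flat = np.full((8, 8), 5.0)
blurred = blur_strips(flat, 4, 2)      # a constant image is unchanged by blurring
```

A production implementation would replace `box_blur` with a Gaussian blur whose scale is derived from each strip's blurring degree value.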
The method of the embodiments of the present invention will be further described with reference to specific embodiments.
FIG. 2 is a flowchart illustrating an image processing method provided by an embodiment of the present invention in an actual application scenario;
specifically, as shown in fig. 2:
at 210, determining whether an object area of a target object exists in the image to be processed;
if the object region of the target object exists, go to step 220; if it does not exist, no operation is performed.
At 220, a target background area to be processed is determined.
The method specifically comprises the following steps: and determining all image areas except the object area in the image to be processed as a target background area to be processed.
At 230, the target background region is divided into N background sub-regions.
The method specifically comprises the following steps:
step 241, longitudinally dividing the target background area into N background sub-areas according to its vertical-axis coordinate, as shown in fig. 3. Division by vertical-axis coordinate also includes the two other manners shown in fig. 4 and fig. 5; refer to the related content in the above embodiments, which is not repeated here.
Alternatively, in step 242, the target background region is divided into N background sub-regions according to the distance from the object region.
At 250, blurring processing is performed on the N background sub-regions to different degrees according to the arrangement sequence of the N background sub-regions.
In specific implementation, assuming that the blurring degree value may be m, the blurring processing performed on the N background sub-regions formed in the above embodiment at different degrees is specifically as follows:
for example, as shown in fig. 3, the target background area is divided into N background sub-areas, segment 1 and segment 2 … N, in the vertical direction according to the vertical axis coordinate of the target background area. And calculating the corresponding virtualization degree value of each section, wherein the virtualization degree value of the first section is m/N, the virtualization degree value of the second section is 2m/N, the virtualization degree value of the N-1 section is m (N-1)/N, the virtualization degree value of the N section is m, and m can be any value of (0, 1).
For example, as shown in fig. 4, the first region 1 in the target background area is divided into M background sub-areas by vertical-axis coordinate; the second region 2 is taken as the (M+1)th background sub-area and the third region 3 as the (M+2)th, where M is a positive integer greater than 1 and N = M + 2. The M sub-areas of the first region are segment 1, segment 2, …, segment M. The blurring degree value of each segment is then computed: the first segment's value is m/M, the second segment's is 2m/M, …, and the Mth segment's is m, where m can be any value in (0, 1). The blurring degree values of the (M+1)th sub-area below the first segment and of the (M+2)th sub-area above the Mth segment are not specifically limited.
For example, as shown in fig. 5, the first region 1 in the target background area is divided into N background sub-areas by vertical-axis coordinate; the second region 2 is merged with the adjacent 1st background sub-area, and the third region 3 is merged with the adjacent Nth background sub-area. The height H of the target object (a human body), i.e. its maximum vertical-axis coordinate when the position of the feet is the origin, is identified and divided into N equal parts: part 1, part 2, …, part N. The blurring degree value of each segment is then computed: the first segment's value is m/N, and the second region below it also takes m/N; the second segment's value is 2m/N; the Nth segment's value is m, and the third region above it also takes m; m can be any value in (0, 1).
In the embodiment of the invention, the object area of the target object in the image to be processed is determined. Determining a target background area to be processed, and dividing the target background area into N background sub-areas. And performing blurring processing of different degrees on the N background sub-regions according to the arrangement sequence of the N background sub-regions to obtain an image with a blurring effect on the background. Therefore, the image processing method provided by the embodiment of the invention enables the mobile terminal to perform background blurring processing to different degrees on the background area of the image to be acquired in the image acquisition process, so that the acquired image is more natural and can highlight the focus. Meanwhile, a complex operation process is avoided, and the time of a user is saved.
The image processing method according to the embodiment of the present invention is described in detail with reference to fig. 1 to 5, and the mobile terminal according to the embodiment of the present invention is described in detail with reference to fig. 6.
Fig. 6 is a schematic structural diagram of a mobile terminal according to an embodiment of the present invention, and as shown in fig. 6, the mobile terminal may include, based on an image processing method according to an embodiment of the present invention:
a first determining module 601, configured to determine an object region of a target object in an image to be processed;
a second determining module 602, configured to determine a target background area to be processed;
a dividing module 603, configured to divide the target background area into N background sub-areas;
a blurring processing module 604, configured to perform blurring processing on the N background sub-regions to different degrees according to the arrangement order of the N background sub-regions;
wherein, the blurring degree value is a quantization value representing the blurring degree of the background subregion; n is an integer greater than 1.
In one embodiment, the first determining module 601 includes:
a first determination unit, configured to determine a minimum rectangular region including the target object as the object region of the target object; or,
and the second determining unit is used for identifying the outline of the target object and determining an image area surrounded by the outline as an object area of the target object.
In one embodiment, the second determining module 602 includes:
and the third determining unit is used for determining all image areas except the object area in the image to be processed as target background areas to be processed.
In one embodiment, the partitioning module 603 comprises:
and the first dividing unit is used for longitudinally dividing the target background area into N background sub-areas according to the longitudinal axis coordinate of the target background area.
In one embodiment, the partitioning module 603 comprises:
the second dividing unit is used for dividing the first area in the target background area into M background sub-areas according to the longitudinal axis coordinate;
a fourth determining unit, configured to determine a second region in the target background region as the (M+1)th background sub-region;
a fifth determining unit, configured to determine a third region in the target background region as the (M+2)th background sub-region;
wherein M is a positive integer greater than 1, and N = M+2.
In one embodiment, the partitioning module 603 comprises:
the third dividing unit is used for dividing the first area in the target background area into N background sub-areas according to the vertical axis coordinate;
a sixth determining unit, configured to combine a second region in the target background region and the 1st background sub-region adjacent to the second region into one background sub-region;
a seventh determining unit, configured to combine a third region in the target background region and the Nth background sub-region adjacent to the third region into one background sub-region.
In one embodiment, the first region is a first rectangular region including the target object in the image to be processed, a longitudinal axis coordinate range of the first rectangular region is equal to a longitudinal axis coordinate range of the target object, and a width of the first rectangular region is equal to a width of the image to be processed;
the second region is a second rectangular region on the lower side of the first rectangular region in the image to be processed;
the third area is a third rectangular area on the upper side of the first rectangular area in the image to be processed;
wherein the image to be processed is composed of the first rectangular area, the second rectangular area and the third rectangular area.
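The three-rectangle decomposition described above can be sketched as follows (a hypothetical helper using top-left image coordinates; the names are mine, not from the patent):

```python
def split_into_rectangles(img_w, img_h, obj_top, obj_bottom):
    """Split the image into the three rectangles described above.

    Rows are indexed from 0 at the top of the image; obj_top and
    obj_bottom bound the target object's vertical extent. Each
    rectangle is returned as (x, y, width, height).
    """
    third = (0, 0, img_w, obj_top)                       # above the object
    first = (0, obj_top, img_w, obj_bottom - obj_top)    # spans the object, full width
    second = (0, obj_bottom, img_w, img_h - obj_bottom)  # below the object
    return first, second, third

# The three heights always sum to the image height, so the image is
# exactly covered by the first, second and third rectangular areas.
```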
In one embodiment, the partitioning module 603 comprises:
a fourth dividing unit, configured to divide the target background area into N background sub-areas according to the distance from the target background area to the object region.
In one embodiment, the difference between the maximum distance and the minimum distance of each background sub-region from the object region is equal.
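The equal-width distance bands can be illustrated with a small helper (an assumption on my part; the patent does not specify how the per-pixel distance is computed):

```python
def band_index(distance, max_distance, n_bands):
    """Map a pixel's distance from the object region to a band index.

    The range [0, max_distance] is cut into n_bands slices of equal
    width, so the difference between the maximum and minimum distance
    of each band is the same. Returns an index in [0, n_bands - 1].
    """
    if max_distance <= 0 or n_bands < 2:
        raise ValueError("need max_distance > 0 and n_bands >= 2")
    width = max_distance / n_bands
    # pixels exactly at max_distance fall into the last band
    return min(int(distance // width), n_bands - 1)
```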
In one embodiment, the blurring processing module 604 includes:
the first blurring processing unit is used for performing blurring processing of different degrees on the N background sub-regions according to the arrangement sequence of the N background sub-regions and a first blurring mode in which the blurring degree value gradually increases; or,
and the second blurring processing unit is used for performing blurring processing of different degrees on the N background sub-regions according to the arrangement sequence of the N background sub-regions and a second blurring mode in which the blurring degree value gradually decreases.
In one embodiment, the difference between the blurring degree values of two adjacent background sub-regions is a preset value with the same value; or the blurring degree value of each background sub-region is a preset value with different values.
In one embodiment, the image to be processed includes a pre-stored image and a preview image acquired by a camera in real time during shooting.
In one embodiment, the N background sub-regions are equal in region height.
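Splitting a vertical span into N bands of equal height can be sketched as follows (the function name and the divisibility restriction are illustrative simplifications of mine):

```python
def equal_height_bands(top, bottom, n):
    """Divide the vertical span [top, bottom) into n bands of equal
    height, returned as (band_top, band_bottom) row ranges.

    Assumes the span height is divisible by n, which keeps this
    sketch simple; a real implementation would distribute the
    remainder rows among the bands.
    """
    height = bottom - top
    if height % n != 0:
        raise ValueError("span height must be divisible by n in this sketch")
    step = height // n
    return [(top + i * step, top + (i + 1) * step) for i in range(n)]
```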
In the embodiment of the invention, the object area of the target object in the image to be processed is determined. Determining a target background area to be processed, and dividing the target background area into N background sub-areas. And performing blurring processing of different degrees on the N background sub-regions according to the arrangement sequence of the N background sub-regions to obtain an image with a blurring effect on the background. Therefore, the mobile terminal provided by the embodiment of the invention enables the mobile terminal to perform background blurring processing of different degrees on the background area of the image to be acquired in the image acquisition process, so that the acquired image is more natural and can highlight the focus. Meanwhile, a complex operation process is avoided, and the time of a user is saved.
The mobile terminal may also execute the method in fig. 1, and implement the functions of the mobile terminal in the embodiments shown in fig. 1 and fig. 2, which are not described again.
Fig. 7 is a schematic diagram of the hardware structure of a mobile terminal implementing various embodiments of the present invention.
the mobile terminal 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, a processor 710, a power supply 711, and the like. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 7 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the mobile terminal includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 710 is configured to determine an object region of a target object in an image to be processed;
determining a target background area to be processed;
dividing the target background area into N background sub-areas;
performing blurring processing of different degrees on the N background subregions according to the arrangement sequence of the N background subregions;
wherein, the blurring degree value is a quantization value representing the blurring degree of the background subregion; n is an integer greater than 1.
In the embodiment of the invention, the object area of the target object in the image to be processed is determined. Determining a target background area to be processed, and dividing the target background area into N background sub-areas. And performing blurring processing of different degrees on the N background sub-regions according to the arrangement sequence of the N background sub-regions to obtain an image with a blurring effect on the background. Therefore, the image processing method provided by the embodiment of the invention enables the mobile terminal to perform background blurring processing to different degrees on the background area of the image to be acquired in the image acquisition process, so that the acquired image is more natural and can highlight the focus. In addition, a complex operation process is avoided, and the time of a user is saved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 701 may be used for receiving and sending signals during a message transmission and reception process or a call process. Specifically, the radio frequency unit 701 receives downlink data from a base station and forwards it to the processor 710 for processing, and sends uplink data to the base station. In general, the radio frequency unit 701 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 701 may also communicate with a network and other devices through a wireless communication system.
The mobile terminal provides the user with wireless broadband internet access via the network module 702, such as helping the user send and receive e-mails, browse web pages, and access streaming media.
The audio output unit 703 may convert audio data received by the radio frequency unit 701 or the network module 702 or stored in the memory 709 into an audio signal and output as sound. Also, the audio output unit 703 may also provide audio output related to a specific function performed by the mobile terminal 700 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 703 includes a speaker, a buzzer, a receiver, and the like.
The input unit 704 is used to receive audio or video signals. The input unit 704 may include a Graphics Processing Unit (GPU) 7041 and a microphone 7042. The graphics processor 7041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 706. The image frames processed by the graphics processor 7041 may be stored in the memory 709 (or other storage medium) or transmitted via the radio frequency unit 701 or the network module 702. The microphone 7042 may receive sounds and process them into audio data. In a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station and output via the radio frequency unit 701.
The mobile terminal 700 also includes at least one sensor 705, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 7061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 7061 and/or a backlight when the mobile terminal 700 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of the mobile terminal (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 705 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 706 is used to display information input by the user or information provided to the user. The Display unit 706 may include a Display panel 7061, and the Display panel 7061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 707 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 7071 (e.g., operations by a user on or near the touch panel 7071 using a finger, a stylus, or any other suitable object or attachment). The touch panel 7071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of a user, detects a signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 710, and receives and executes commands from the processor 710. In addition, the touch panel 7071 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. The user input unit 707 may include other input devices 7072 in addition to the touch panel 7071. In particular, the other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 7071 may be overlaid on the display panel 7061, and when the touch panel 7071 detects a touch operation on or near the touch panel 7071, the touch operation is transmitted to the processor 710 to determine the type of the touch event, and then the processor 710 provides a corresponding visual output on the display panel 7061 according to the type of the touch event. Although the touch panel 7071 and the display panel 7061 are shown in fig. 7 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 7071 and the display panel 7061 may be integrated to implement the input and output functions of the mobile terminal, which is not limited herein.
The interface unit 708 is an interface through which an external device is connected to the mobile terminal 700. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 708 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 700 or may be used to transmit data between the mobile terminal 700 and external devices.
The memory 709 may be used to store software programs as well as various data. The memory 709 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the mobile phone, and the like. Further, the memory 709 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
The processor 710 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 709 and calling data stored in the memory 709, thereby integrally monitoring the mobile terminal. Processor 710 may include one or more processing units; preferably, the processor 710 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 710.
The mobile terminal 700 may also include a power supply 711 (e.g., a battery) for supplying power to the various components. Preferably, the power supply 711 may be logically connected to the processor 710 through a power management system, so that functions such as managing charging, discharging, and power consumption are implemented through the power management system.
In addition, the mobile terminal 700 includes some functional modules that are not shown, and thus will not be described in detail herein.
Preferably, an embodiment of the present invention further provides a mobile terminal, including a processor 710, a memory 709, and a computer program stored in the memory 709 and capable of running on the processor 710, where the computer program is executed by the processor 710 to implement each process of the above-mentioned image processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (28)

1. An image processing method, comprising:
determining an object area of a target object in an image to be processed;
determining a target background area to be processed;
dividing the target background area into N background sub-areas;
performing blurring processing of different degrees on the N background subregions according to the arrangement sequence of the N background subregions;
wherein the blurring processing of the N background sub-regions to different degrees includes: performing blurring processing on the N background sub-regions by respectively adopting different blurring degree values, wherein the blurring degree value is a quantitative value representing the blurring degree of the background sub-regions; n is an integer greater than 1;
the dividing the target background area into N background sub-areas includes:
dividing the target background area into N background sub-areas according to the longitudinal axis coordinate of the target background area or the distance from the target background area to the object region;
wherein the arrangement sequence refers to sequential arrangement along the longitudinal axis direction, or sequential arrangement in the direction away from the target object with the contour of the target object in the image acquisition interface as the center.
2. The method of claim 1, wherein determining the object region of the target object in the image to be processed comprises:
determining a minimum rectangular area containing the target object as an object area of the target object;
or, recognizing the contour of the target object, and determining an image area surrounded by the contour as an object area of the target object.
3. The method of claim 1, wherein the determining the target background area to be processed comprises:
and determining all image areas except the object area in the image to be processed as a target background area to be processed.
4. The method of claim 1, wherein the dividing the target background area into N background sub-areas comprises:
and longitudinally dividing the target background area into N background sub-areas according to the longitudinal axis coordinate of the target background area.
5. The method of claim 1, wherein the dividing the target background area into N background sub-areas comprises:
dividing a first area in the target background area into M background sub-areas according to the coordinates of a longitudinal axis;
determining a second region in the target background region as the (M+1)th background sub-region;
determining a third region in the target background region as the (M+2)th background sub-region;
wherein M is a positive integer greater than 1, and N = M+2.
6. The method of claim 1, wherein the dividing the target background area into N background sub-areas comprises:
dividing a first area in the target background area into N background sub-areas according to the coordinates of a longitudinal axis;
merging a second region in the target background region and the 1st background sub-region adjacent to the second region into one background sub-region;
and merging a third region in the target background region and the Nth background sub-region adjacent to the third region into one background sub-region.
7. The method according to claim 5 or 6,
the first area is a first rectangular area which comprises the target object in the image to be processed, the longitudinal axis coordinate range of the first rectangular area is equal to the longitudinal axis coordinate range of the target object, and the width of the first rectangular area is equal to the width of the image to be processed;
the second region is a second rectangular region on the lower side of the first rectangular region in the image to be processed;
the third area is a third rectangular area on the upper side of the first rectangular area in the image to be processed;
wherein the image to be processed is composed of the first rectangular area, the second rectangular area and the third rectangular area.
8. The method of claim 1, wherein the dividing the target background area into N background sub-areas comprises:
dividing the target background area into N background sub-areas according to the distance from the target background area to the object region.
9. The method of claim 8, wherein the difference between the maximum distance and the minimum distance of each background sub-region from the object region is equal.
10. The method according to claim 1, wherein the blurring the N background sub-regions to different degrees according to the arrangement order of the N background sub-regions comprises:
performing blurring processing of different degrees on the N background sub-regions according to the arrangement sequence of the N background sub-regions and a first blurring mode in which the blurring degree value gradually increases;
or, performing blurring processing of different degrees on the N background sub-regions according to the arrangement sequence of the N background sub-regions and a second blurring mode in which the blurring degree value gradually decreases.
11. The method according to claim 1, wherein the difference between the blurring degree values of two adjacent background sub-regions is a preset value with the same value; or the blurring degree values of the N background sub-regions are preset values with different values.
12. The method according to claim 1, wherein the image to be processed comprises a pre-stored image and a preview image acquired by a camera in real time during shooting.
13. The method of claim 1, wherein the N background sub-regions are equal in area height.
14. A mobile terminal, comprising:
the first determination module is used for determining an object area of a target object in an image to be processed;
the second determination module is used for determining a target background area to be processed;
the dividing module is used for dividing the target background area into N background sub-areas;
the blurring processing module is used for blurring the N background sub-regions to different degrees according to the arrangement sequence of the N background sub-regions;
the blurring processing module is specifically configured to perform blurring processing on the N background sub-regions by using different blurring degree values respectively, where the blurring degree value is a quantization value representing a blurring degree of the background sub-region; n is an integer greater than 1;
the dividing module is specifically configured to divide the target background area into N background sub-areas according to the longitudinal axis coordinate of the target background area, or according to the distance from the target background area to the object region;
the arrangement sequence refers to sequential arrangement along the longitudinal axis direction, or sequential arrangement in the direction away from the target object with the contour of the target object in the image acquisition interface as the center.
15. The mobile terminal of claim 14, wherein the first determining module comprises:
a first determination unit, configured to determine a minimum rectangular region including the target object as the object region of the target object; or,
and the second determining unit is used for identifying the outline of the target object and determining an image area surrounded by the outline as an object area of the target object.
16. The mobile terminal of claim 14, wherein the second determining module comprises:
and the third determining unit is used for determining all image areas except the object area in the image to be processed as target background areas to be processed.
17. The mobile terminal of claim 14, wherein the partitioning module comprises:
and the first dividing unit is used for longitudinally dividing the target background area into N background sub-areas according to the longitudinal axis coordinate of the target background area.
18. The mobile terminal of claim 14, wherein the partitioning module comprises:
the second dividing unit is used for dividing the first area in the target background area into M background sub-areas according to the longitudinal axis coordinate;
a fourth determining unit, configured to determine a second region in the target background region as the (M+1)th background sub-region;
a fifth determining unit, configured to determine a third region in the target background region as the (M+2)th background sub-region;
wherein M is a positive integer greater than 1, and N = M+2.
19. The mobile terminal of claim 14, wherein the partitioning module comprises:
the third dividing unit is used for dividing the first area in the target background area into N background sub-areas according to the vertical axis coordinate;
a sixth determining unit, configured to combine a second region in the target background region and the 1st background sub-region adjacent to the second region into one background sub-region;
a seventh determining unit, configured to combine a third region in the target background region and the Nth background sub-region adjacent to the third region into one background sub-region.
20. The mobile terminal according to claim 18 or 19,
the first area is a first rectangular area which comprises the target object in the image to be processed, the longitudinal axis coordinate range of the first rectangular area is equal to the longitudinal axis coordinate range of the target object, and the width of the first rectangular area is equal to the width of the image to be processed;
the second region is a second rectangular region on the lower side of the first rectangular region in the image to be processed;
the third area is a third rectangular area on the upper side of the first rectangular area in the image to be processed;
wherein the image to be processed is composed of the first rectangular area, the second rectangular area and the third rectangular area.
21. The mobile terminal of claim 14, wherein the partitioning module comprises:
a fourth dividing unit, configured to divide the target background area into N background sub-areas according to the distance from the target background area to the object region.
22. The mobile terminal of claim 21, wherein the difference between the maximum distance and the minimum distance of each background sub-region from the object region is equal.
23. The mobile terminal of claim 14, wherein the blurring processing module comprises:
the first blurring processing unit is used for performing blurring processing of different degrees on the N background sub-regions according to the arrangement sequence of the N background sub-regions and a first blurring mode in which the blurring degree value gradually increases; or,
and the second blurring processing unit is used for performing blurring processing of different degrees on the N background sub-regions according to the arrangement sequence of the N background sub-regions and a second blurring mode in which the blurring degree value gradually decreases.
24. The mobile terminal of claim 14, wherein the difference between the blurring degree values of two adjacent background sub-regions is a preset value with the same value; or the blurring degree values of the N background sub-regions are preset values with different values.
25. The mobile terminal according to claim 14, wherein the image to be processed comprises a pre-stored image and a preview image acquired by a camera in real time during shooting.
26. The mobile terminal of claim 14, wherein the N background sub-regions have equal region heights.
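Claims 21–26 above describe a gradient blur: the background is cut into N equal-height sub-regions ordered by distance from the object region, and each sub-region gets a blurring degree that grows (or shrinks) by a fixed step between neighbors. A minimal sketch of that schedule, with all names and parameters illustrative rather than taken from the patent:

```python
def blur_schedule(n, base_degree, step, increasing=True):
    """Blurring-degree values for N background sub-regions, ordered from the
    sub-region nearest the object region to the farthest one. Adjacent
    sub-regions differ by the same preset value `step` (claim 24's first
    alternative); with increasing=True the degree grows with distance
    (claim 23's first blurring mode), otherwise it decreases (second mode)."""
    degrees = [base_degree + i * step for i in range(n)]
    return degrees if increasing else list(reversed(degrees))

def split_equal_bands(region_top, region_bottom, n):
    """Divide a background region's row range into N sub-regions of (near)
    equal height, per claim 26; returns half-open (top, bottom) row ranges."""
    total = region_bottom - region_top
    bounds = [region_top + round(i * total / n) for i in range(n + 1)]
    return list(zip(bounds[:-1], bounds[1:]))
```

In a real pipeline each band would then be blurred with its scheduled degree (for example as a Gaussian-blur radius), producing the depth-of-field-like falloff the claims describe.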
27. A mobile terminal, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 13.
28. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the image processing method according to any one of claims 1 to 13.
CN201711030164.9A 2017-10-27 2017-10-27 Image processing method and mobile terminal Active CN107749046B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711030164.9A CN107749046B (en) 2017-10-27 2017-10-27 Image processing method and mobile terminal

Publications (2)

Publication Number Publication Date
CN107749046A (en) 2018-03-02
CN107749046B (en) 2020-02-07

Family

ID=61253355

Country Status (1)

Country Link
CN (1) CN107749046B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11893668B2 (en) 2021-03-31 2024-02-06 Leica Camera Ag Imaging system and method for generating a final digital image via applying a profile to image information

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
CN108495030A (en) * 2018-03-16 2018-09-04 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN108665475A (en) * 2018-05-15 2018-10-16 北京市商汤科技开发有限公司 Neural metwork training, image processing method, device, storage medium and electronic equipment
CN108989678B (en) * 2018-07-27 2021-03-23 维沃移动通信有限公司 Image processing method and mobile terminal
CN110223301B (en) * 2019-03-01 2021-08-03 华为技术有限公司 Image clipping method and electronic equipment
WO2021102704A1 (en) * 2019-11-26 2021-06-03 深圳市大疆创新科技有限公司 Image processing method and apparatus

Citations (2)

Publication number Priority date Publication date Assignee Title
CN102932541A (en) * 2012-10-25 2013-02-13 广东欧珀移动通信有限公司 Mobile phone photographing method and system
CN106993112A (en) * 2017-03-09 2017-07-28 广东欧珀移动通信有限公司 Background-blurring method and device and electronic installation based on the depth of field

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
KR101184460B1 (en) * 2010-02-05 2012-09-19 연세대학교 산학협력단 Device and method for controlling a mouse pointer
KR101189633B1 (en) * 2011-08-22 2012-10-10 성균관대학교산학협력단 A method for recognizing ponter control commands based on finger motions on the mobile device and a mobile device which controls ponter based on finger motions
KR20140122054A (en) * 2013-04-09 2014-10-17 삼성전자주식회사 converting device for converting 2-dimensional image to 3-dimensional image and method for controlling thereof

Similar Documents

Publication Publication Date Title
CN108513070B (en) Image processing method, mobile terminal and computer readable storage medium
CN107749046B (en) Image processing method and mobile terminal
CN109151180B (en) Object identification method and mobile terminal
CN108459797B (en) Control method of folding screen and mobile terminal
CN108495029B (en) Photographing method and mobile terminal
CN110557575B (en) Method for eliminating glare and electronic equipment
CN109688322B (en) Method and device for generating high dynamic range image and mobile terminal
CN107977652B (en) Method for extracting screen display content and mobile terminal
CN107730460B (en) Image processing method and mobile terminal
CN108038825B (en) Image processing method and mobile terminal
CN108924414B (en) Shooting method and terminal equipment
CN110602389B (en) Display method and electronic equipment
CN111401463B (en) Method for outputting detection result, electronic equipment and medium
CN109523253B (en) Payment method and device
CN111031234B (en) Image processing method and electronic equipment
CN111145087B (en) Image processing method and electronic equipment
CN109727212B (en) Image processing method and mobile terminal
CN108174110B (en) Photographing method and flexible screen terminal
CN110908517B (en) Image editing method, image editing device, electronic equipment and medium
CN110929540A (en) Scanning code identification method and device
CN108243489B (en) Photographing control method and mobile terminal
CN108156386B (en) Panoramic photographing method and mobile terminal
CN107798662B (en) Image processing method and mobile terminal
CN109819331B (en) Video call method, device and mobile terminal
CN108965701B (en) Jitter correction method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant