CN108776800B - Image processing method, mobile terminal and computer readable storage medium - Google Patents

Image processing method, mobile terminal and computer readable storage medium

Info

Publication number
CN108776800B
CN108776800B
Authority
CN
China
Prior art keywords
target
foreground
processing
processing target
blurring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810579524.9A
Other languages
Chinese (zh)
Other versions
CN108776800A (en)
Inventor
Huang Haidong (黄海东)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810579524.9A priority Critical patent/CN108776800B/en
Publication of CN108776800A publication Critical patent/CN108776800A/en
Application granted granted Critical
Publication of CN108776800B publication Critical patent/CN108776800B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07: Target detection

Abstract

The application is applicable to the technical field of image processing, and provides an image processing method, a mobile terminal and a computer-readable storage medium. The image processing method includes: acquiring a preview picture collected by a camera in the mobile terminal and recognizing the preview picture to obtain a recognition result, where the recognition result indicates whether a foreground target exists in the preview picture and, when a foreground target exists, the foreground label of the foreground target; if at least two foreground targets exist in the preview picture and a preset label combination exists among the foreground labels of the foreground targets, marking the foreground targets corresponding to the preset label combination as processing targets; acquiring a background image corresponding to each processing target in the preview picture; and blurring the background image corresponding to each processing target. Diversified images can be obtained through the method.

Description

Image processing method, mobile terminal and computer readable storage medium
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to an image processing method, a mobile terminal, and a computer-readable storage medium.
Background
With the development of intelligent mobile terminals, people take pictures with mobile phones and other mobile terminals more and more frequently. Most existing photographing functions of mobile terminals support image processing, such as filter, skin-smoothing and whitening functions for human faces.
However, current image processing methods offer only a single kind of effect; for example, the processing applied to human faces is limited to beautification. As a result, the effects of the captured photos are uniform, and the user experience is poor.
Disclosure of Invention
In view of this, embodiments of the present application provide an image processing method, a mobile terminal, and a computer-readable storage medium, so as to solve the problems of single effect and poor user experience of a currently-captured photo.
A first aspect of an embodiment of the present application provides an image processing method, including:
acquiring a preview picture acquired by a camera in the mobile terminal, and identifying the preview picture to obtain an identification result, wherein the identification result is used for indicating whether a foreground target exists in the preview picture and indicating a foreground label of the foreground target when the foreground target exists;
if at least two foreground targets exist in the preview picture, judging whether a preset label combination exists in the foreground label combinations of the foreground targets;
if a preset label combination exists in the foreground label combinations of the foreground targets, marking the foreground targets corresponding to the preset label combination as processing targets, acquiring a background image corresponding to each processing target in the preview picture, and blurring the background image corresponding to each processing target.
A second aspect of an embodiment of the present application provides a mobile terminal, including:
the identification result acquisition module is used for acquiring a preview picture acquired by a camera in the mobile terminal and identifying the preview picture to obtain an identification result, wherein the identification result is used for indicating whether a foreground target exists in the preview picture and indicating a foreground label of the foreground target when the foreground target exists;
the judging module is used for judging whether a preset label combination exists in the combinations of the foreground labels of the foreground targets if at least two foreground targets exist in the preview picture;
and the blurring processing module is used for recording the foreground target corresponding to the preset label combination as a processing target, acquiring a background image corresponding to each processing target in the preview picture and blurring the background image corresponding to each processing target if the preset label combination exists in the foreground label combinations of the foreground targets.
A third aspect of an embodiment of the present application provides a mobile terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method provided in the first aspect of the embodiment of the present application when executing the computer program.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by one or more processors, performs the steps of the method provided by the first aspect of embodiments of the present application.
A fifth aspect of embodiments of the present application provides a computer program product comprising a computer program that, when executed by one or more processors, performs the steps of the method provided by the first aspect of embodiments of the present application.
In the embodiments of the application, a preview picture collected by a camera in the mobile terminal is acquired and recognized to obtain a recognition result, where the recognition result indicates whether a foreground target exists in the preview picture and, when a foreground target exists, the foreground label of the foreground target. If at least two foreground targets exist in the preview picture, it is judged whether a preset label combination exists among the foreground labels of the foreground targets. If a preset label combination exists, the foreground targets corresponding to the preset label combination are marked as processing targets, a background image corresponding to each processing target in the preview picture is acquired, and the background image corresponding to each processing target is blurred. In this way, when at least two foreground targets exist in the preview picture collected by the camera and a preset label combination exists among their foreground labels, the background image of each foreground target corresponding to the preset label combination is blurred, so that the captured photos can have diversified effects.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic flow chart of an implementation of an image processing method provided in an embodiment of the present application;
Fig. 2 is a schematic flow chart illustrating another implementation of an image processing method provided in an embodiment of the present application;
Fig. 3 is a schematic flow chart illustrating another implementation of an image processing method provided in an embodiment of the present application;
fig. 4 is a schematic block diagram of a mobile terminal according to an embodiment of the present application;
fig. 5 is a schematic block diagram of another mobile terminal provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Fig. 1 is a schematic flow chart of an implementation of an image processing method provided in an embodiment of the present application, and as shown in the figure, the method may include the following steps:
step S101, a preview picture collected by a camera in a mobile terminal is obtained, the preview picture is identified to obtain an identification result, and the identification result is used for indicating whether a foreground target exists in the preview picture and indicating a foreground label of the foreground target when the foreground target exists.
In the embodiment of the application, the preview picture is a picture collected in real time by a camera on the mobile terminal and displayed on the display screen. When the preview picture is recognized, whether a foreground target exists in the preview picture can be identified through a convolutional neural network model. If the recognition result shows that a foreground target exists, a recognition image with a detection frame can be output, where the detection frame encloses the identified foreground target, and the foreground label of the foreground target is output at the same time. If no foreground target is identified in the preview picture, the original image can be output; that is, if no foreground target exists, no foreground label exists.
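For illustration only, a minimal Python sketch of this recognition step follows. The detector object and its detect method, the label names and the confidence threshold are assumptions made for the example, not details specified by this application:

    from dataclasses import dataclass

    @dataclass
    class ForegroundTarget:
        label: str    # foreground label, e.g. "face", "animal", "plant"
        box: tuple    # detection frame (x, y, w, h) in pixels
        score: float  # detection confidence

    def recognize_preview(frame, detector, score_threshold=0.5):
        """Recognize a preview frame; return a (possibly empty) list of foreground targets."""
        targets = []
        for label, box, score in detector.detect(frame):  # assumed detector interface
            if score >= score_threshold:
                targets.append(ForegroundTarget(label, box, score))
        return targets  # an empty list means no foreground target and no foreground label

An empty result corresponds to the case where the original image is output unchanged.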
Step S102, if at least two foreground objects exist in the preview picture, judging whether a preset label combination exists in the foreground label combinations of the foreground objects.
In the embodiment of the application, when a user takes a picture, a plurality of foreground targets can usually be recognized through the target recognition model; for example, three face images may be recognized, or one face image and one animal image, or one face image and one flower image.
According to the embodiment of the application, in order to highlight the foreground targets that the user intends to emphasize, the background images around those foreground targets are blurred. When a user shoots two face images, the background image around each face image needs to be blurred; when the user shoots a group photo of a person and a dog, the background image around the face and the background image around the dog need to be blurred at the same time in order to highlight both the person and the dog. Therefore, some label combinations can be preset, such as face label + face label, face label + animal label, and the like.
For ease of understanding, an example is given. Suppose four foreground targets are identified: two faces, a dog and a flower, whose foreground labels are, in order, face label, face label, animal label and plant label, and suppose the preset label combination is face label + animal label. Then a preset label combination exists among the foreground labels of the foreground targets, so the background image around each foreground target corresponding to the face label in the preset label combination can be blurred (the background images corresponding to both face images are blurred), and at the same time the background image around the foreground target corresponding to the animal label can be blurred (the background image around the dog is blurred).
Note that the four foreground labels (denoted by A, B, C, D respectively) correspond to the following combinations: AB, AC, AD, BC, BD, CD, ABC, ABD, ACD, BCD, ABCD.
Certainly, in practical application, the preset combination may also be a face label, an animal label and any other label; that is, as long as a face label and an animal label exist simultaneously among the foreground labels of the identified foreground targets, the combination of foreground labels is considered a preset label combination, and the background images corresponding to all the foreground targets may be blurred. The preset label combination may likewise be a face label plus another label. These are only examples and are not limiting.
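To make this judgment concrete, the following sketch (an illustration, not the application's prescribed implementation) checks the foreground labels against preset label combinations. Multisets (collections.Counter) are used so that a combination such as face label + face label is handled correctly; the listed combinations are assumptions for the example:

    from collections import Counter

    # Illustrative preset combinations: two faces, or a face plus an animal.
    PRESET_COMBINATIONS = [Counter({"face": 2}), Counter({"face": 1, "animal": 1})]

    def matching_combination(foreground_labels):
        """Return the first preset combination contained in the foreground labels, else None."""
        counts = Counter(foreground_labels)
        for combo in PRESET_COMBINATIONS:
            if all(counts[label] >= n for label, n in combo.items()):
                return combo
        return None

    # The example above: two faces, a dog and a flower.
    print(matching_combination(["face", "face", "animal", "plant"]))  # Counter({'face': 2})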
Step S103, if a preset tag combination exists in the foreground tag combinations of the foreground targets, marking the foreground targets corresponding to the preset tag combination as processing targets, acquiring a background image corresponding to each processing target in the preview picture, and blurring the background image corresponding to each processing target.
In this embodiment of the present application, the background image corresponding to each processing target in the preview screen may be an image after the processing target is removed from the image in the detection frame corresponding to each processing target in the preview screen. Of course, in practical applications, images of other areas may also be set as the background image of the processing target, and specifically, refer to the description in the embodiment shown in fig. 3.
As another embodiment of the present application, if there is a preset tag combination in the foreground tag combinations of the foreground objects, before marking the foreground object corresponding to the preset tag combination as the processing object, the method further includes:
acquiring the proportion of each foreground target corresponding to the preset label combination in the preview picture;
and if the foreground targets with the proportion larger than the preset value in the preview picture exist in the foreground targets corresponding to the preset label combination, marking the foreground targets with the proportion larger than the preset value in the preview picture in the foreground targets corresponding to the preset label combination as processing targets.
In the embodiment of the application, the target that the user wants to highlight usually occupies a relatively large proportion of the preview picture. Therefore, besides judging through the preset label combination which foreground targets the user intends to emphasize, the proportion of each foreground target in the preview picture can also be used. This avoids, for example, the following situation: when a user takes a selfie in a crowded place, several face images may be recognized at the same time, while in fact the target the user emphasizes is the user himself; the faces of passers-by occupy a relatively small proportion of the preview picture, so they are not treated as emphasized targets and the background images corresponding to them need not be blurred. For convenience of distinction, among the foreground targets corresponding to the preset label combination, those whose proportion in the preview picture is greater than the preset value are marked as processing targets and taken as the targets the user emphasizes; a code sketch of this proportion filter follows.
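A minimal sketch of this proportion filter, reusing the ForegroundTarget structure from the earlier sketch; the 5% threshold is an illustrative assumption, as the application leaves the preset value open:

    def filter_processing_targets(targets, frame_shape, min_ratio=0.05):
        """Keep only targets whose detection frame covers more than min_ratio of the preview."""
        frame_area = frame_shape[0] * frame_shape[1]  # preview height * width
        kept = []
        for t in targets:
            x, y, w, h = t.box
            if (w * h) / frame_area > min_ratio:
                kept.append(t)  # marked as a processing target
        return kept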
According to the method and the device, when at least two foreground targets exist in the preview picture collected by the camera, it is judged whether a preset label combination exists among the foreground labels of the foreground targets; if so, the background image of each foreground target corresponding to the preset label combination can be blurred, so that the captured photos can have diversified effects.
Fig. 2 is a schematic flowchart of another image processing method provided in an embodiment of the present application, and as shown in the figure, the method describes how to perform blurring processing on a background image corresponding to each processing target on the basis of the embodiment shown in fig. 1, and may include the following steps:
step S201, acquiring a quasi-focus target corresponding to the preview image and depth-of-field information of the processing target.
In the embodiment of the application, when the camera images a scene, the plane passing through the focal point and perpendicular to the main axis is called the focal plane. For ease of understanding, a brief explanation follows: objects at the same distance from the camera lens are imaged in one plane, and the objects imaged on the focal plane are called in-focus targets; all in-focus targets are at the same distance from the lens. Objects imaged in front of or behind the focal plane are located, in the actual scene, in front of or behind the in-focus targets. Points imaged on the focal plane (i.e., the in-focus targets) form relatively sharp images, while points not imaged on the focal plane form relatively blurred images.
The acquiring of the in-focus target corresponding to the preview picture includes:
determining a focal plane corresponding to the preview picture based on the depth-of-field information distribution corresponding to the current preview picture captured by the dual cameras of the mobile terminal, and determining the in-focus target based on the focal plane;
or,
acquiring a foreground target selected by a user through a touch screen displaying the preview picture, and taking the foreground target selected by the user through the touch screen as the in-focus target;
or,
determining the in-focus target from the foreground targets based on the position information of each foreground target in the preview picture.
In the embodiment of the application, the dual cameras arranged on the mobile terminal can capture the depth-of-field information of each object in the scene corresponding to the current preview picture, forming a depth-of-field information distribution from which the focal plane corresponding to the preview picture can be obtained; the in-focus target is then determined based on the focal plane. Alternatively, a foreground target selected by the user through the touch screen displaying the preview picture is taken as the in-focus target. Alternatively, the in-focus target can be determined from the foreground targets based on the position information of each foreground target in the preview picture; for example, the foreground target located in the center area of the preview picture is taken as the in-focus target. A sketch of this third alternative is given below.
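One plausible rule for the third alternative, assumed here for illustration and again reusing the ForegroundTarget structure, is to take the foreground target whose detection frame center is closest to the center of the preview picture:

    def pick_in_focus_target(targets, frame_shape):
        """Pick the foreground target closest to the preview center as the in-focus target."""
        cy, cx = frame_shape[0] / 2, frame_shape[1] / 2

        def center_distance(t):
            x, y, w, h = t.box
            return ((x + w / 2) - cx) ** 2 + ((y + h / 2) - cy) ** 2

        return min(targets, key=center_distance) if targets else None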
The depth-of-field information of the processing target can be acquired through the dual cameras on the mobile terminal, and after the in-focus target is determined, the dual cameras can likewise acquire the depth-of-field information of the in-focus target.
After the in-focus target and the depth of field information of each processing target are determined, the background image corresponding to each processing target may be blurred based on the in-focus target and the depth of field information of each processing target.
Step S202, determining the distance between the processing target and the in-focus target in the direction of the optical axis of the camera according to the depth-of-field information of the in-focus target and the depth-of-field information of the processing target.
In the embodiment of the application, the depth-of-field information of the in-focus target may be understood as the distance between the in-focus target and the camera of the mobile terminal, and the depth-of-field information of the processing target may likewise be understood as the distance between the processing target and the camera. The distance between the processing target and the in-focus target along the optical axis of the camera can therefore be determined from the difference between the two pieces of depth-of-field information. Throughout this description, the distance between the processing target and the in-focus target refers to this distance in the optical axis direction of the camera.
Step S203, determining the blurring level of the processing target according to the distance between the processing target and the in-focus target in the optical axis direction of the camera.
Step S204, performing blurring processing related to the blurring level on the background image corresponding to the processing target based on the blurring level of the processing target.
In the embodiment of the present application, the in-focus target generally refers to the focused target and is a relatively sharp area in the preview picture, so a processing target far from the in-focus target in the optical axis direction may have low sharpness. In order to highlight such lower-sharpness processing targets, the degree of blurring of the background image around them may be increased. Therefore, the closer a processing target is to the in-focus target, the sharper the processing target is, the lower its blurring level and the less obvious the blurring; the farther a processing target is from the in-focus target, the blurrier the processing target is, the higher its blurring level and the more obvious the blurring, so that even a blurry processing target can be highlighted.
Of course, in practical applications, other blurring modes may be set in order to obtain diversified image effects, for example: the closer a processing target is to the in-focus target, the sharper the processing target is and the higher its blurring level, making the blurring more obvious; the farther a processing target is from the in-focus target, the blurrier the processing target is and the lower its blurring level, making the blurring less obvious, so that the in-focus target is highlighted. Different blurring modes can therefore be set according to the actual application.
The in-focus target may not be a processing target, and may be any target object on the preview screen.
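Steps S202 to S204 can be sketched as follows, using the first mode described above (the farther from the in-focus target, the stronger the blurring). OpenCV is assumed to be available, and the level boundaries, kernel sizes and choice of a Gaussian blur are illustrative assumptions rather than details fixed by this application:

    import cv2  # OpenCV

    def blurring_level(target_depth, in_focus_depth, step=0.5):
        """Map the optical-axis distance between target and in-focus target to a level 0..4."""
        distance = abs(target_depth - in_focus_depth)
        return min(int(distance / step), 4)

    def blur_background(frame, background_mask, level):
        """Apply a Gaussian blur whose strength matches the level to the masked background only."""
        if level == 0:
            return frame
        k = 8 * level + 1  # odd kernel sizes: 9, 17, 25, 33
        blurred = cv2.GaussianBlur(frame, (k, k), 0)
        out = frame.copy()
        out[background_mask > 0] = blurred[background_mask > 0]
        return out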
Fig. 3 is a schematic flowchart of another image processing method provided in an embodiment of the present application, and as shown in the flowchart, the method describes how to acquire a background image corresponding to each processing target in the preview screen based on the embodiment shown in fig. 1, and may include the following steps:
step S301, the processing target is divided in the preview screen, and a contour line of the processing target is obtained.
In this embodiment of the present application, the background image around the processing target may be an image other than the processing target in the detection frame corresponding to the processing target. Since the detection frame is usually a rectangular detection frame, the shape formed by the blurring region and the processing target is a rectangular region. In order to obtain diversified blurring effects, the processing object may be divided from the preview screen, and an outline of the processing object may be obtained. And after the contour line of the processing target is obtained, acquiring a blurring region corresponding to the processing target based on the contour line of the processing target.
Step S302, based on the contour line of the processing target, determining the minimum circumscribed circle of the processing target.
Step S303, taking, within the circular region whose central point is the center of the minimum circumscribed circle and whose radius is a preset distance, the region remaining after the processing target is removed as the blurring region, where the preset distance is greater than or equal to the radius of the minimum circumscribed circle.
Step S304, using the image corresponding to the blurring region as a background image corresponding to the processing target.
In the embodiment of the present application, after the contour line of the processing target is determined, the minimum circumscribed circle of the processing target may be determined based on the contour line, which means the processing target lies inside the minimum circumscribed circle. A circle concentric with the minimum circumscribed circle can then be constructed, with a radius greater than or equal to that of the minimum circumscribed circle. The region inside this concentric circle, excluding the processing target, is the blurring region, and the blurring region together with the processing target forms the circular region. A sketch of this construction with OpenCV follows.
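With OpenCV (version 4 assumed), steps S301 to S304 can be sketched as follows; target_mask is assumed to be a binary uint8 mask of the segmented processing target, and the margin factor applied to the radius stands in for the preset distance and is an illustrative assumption:

    import cv2
    import numpy as np

    def circular_blur_mask(target_mask, margin=1.2):
        """Build the circular blurring region: enclosing circle minus the target itself."""
        contours, _ = cv2.findContours(target_mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        contour = max(contours, key=cv2.contourArea)   # contour line of the target
        (cx, cy), r = cv2.minEnclosingCircle(contour)  # minimum circumscribed circle
        mask = np.zeros_like(target_mask)
        cv2.circle(mask, (int(cx), int(cy)), int(r * margin), 255, -1)
        mask[target_mask > 0] = 0                      # remove the processing target
        return mask  # the image under this mask is the background image to blur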
As another embodiment of the present application, the obtaining a blurring region corresponding to the processing target based on the contour line of the processing target includes:
determining the gravity center of the processing target based on the contour line of the processing target;
and enlarging the contour line of the processing target by a preset multiple with the gravity center of the processing target as a reference point to obtain an enlarged target contour line, where the region represented by the enlarged target contour line, with the processing target removed, is the blurring region.
In the embodiment of the application, the blurring region can also be set as a region similar in shape to the foreground target. The gravity center of the processing target can first be determined based on its contour line; with the gravity center as the reference point, the contour line of the processing target is enlarged by a preset multiple to obtain an enlarged target contour line, and the region represented by the enlarged contour line, with the processing target removed, is the blurring region. In this way, the region formed by the blurring region and the processing target follows the shape of the processing target. A sketch of this alternative follows.
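A sketch of this contour-scaling alternative, under the same assumptions as the previous sketch; the 1.5x preset multiple is an illustrative assumption:

    import cv2
    import numpy as np

    def scaled_contour_blur_mask(target_mask, scale=1.5):
        """Enlarge the target contour about its gravity center, then remove the target."""
        contours, _ = cv2.findContours(target_mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        contour = max(contours, key=cv2.contourArea)
        m = cv2.moments(contour)  # assumes a non-degenerate contour (m00 != 0)
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # gravity center
        enlarged = ((contour - (cx, cy)) * scale + (cx, cy)).astype(np.int32)
        mask = np.zeros_like(target_mask)
        cv2.fillPoly(mask, [enlarged], 255)
        mask[target_mask > 0] = 0  # the remaining region is the blurring region
        return mask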
The image corresponding to the blurring region is the background image corresponding to the processing target. In practical applications, there may be other methods for acquiring the blurring region, which are not enumerated here.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 4 is a schematic block diagram of a mobile terminal according to an embodiment of the present application, and only a portion related to the embodiment of the present application is shown for convenience of description.
The mobile terminal 4 may be a software unit, a hardware unit or a combined software and hardware unit built into a mobile terminal such as a mobile phone, a tablet computer or a notebook computer, or may be integrated into such a mobile terminal as an independent component.
The mobile terminal 4 includes:
the recognition result obtaining module 41 is configured to obtain a preview image acquired by a camera in the mobile terminal, and recognize the preview image to obtain a recognition result, where the recognition result is used to indicate whether a foreground target exists in the preview image and a foreground tag of the foreground target when the foreground target exists;
a determining module 42, configured to determine whether a preset tag combination exists in the combinations of foreground tags of the foreground objects if at least two foreground objects exist in the preview picture;
a blurring processing module 43, configured to mark, if a preset tag combination exists in the combinations of the foreground tags of the foreground targets, the foreground target corresponding to the preset tag combination as a processing target, obtain a background image corresponding to each processing target in the preview picture, and perform blurring processing on the background image corresponding to each processing target.
Optionally, the blurring processing module 43 includes:
an information acquisition unit 431, configured to acquire the in-focus target corresponding to the preview picture and the depth-of-field information of the processing target;
a blurring processing unit 432, configured to blur the background image corresponding to each processing target based on the in-focus target and the depth-of-field information of each processing target.
Optionally, the information obtaining unit 431 is further configured to:
determining a focal plane corresponding to the preview picture based on the depth-of-field information distribution corresponding to the current preview picture captured by the dual cameras of the mobile terminal, and determining the in-focus target based on the focal plane;
or,
acquiring a foreground target selected by a user through a touch screen displaying the preview picture, and taking the foreground target selected by the user through the touch screen as the in-focus target;
or,
determining the in-focus target from the foreground targets based on the position information of each foreground target in the preview picture.
Optionally, the blurring processing unit 432 includes:
a distance determining subunit 4321, configured to determine, according to the depth-of-field information of the in-focus target and the depth-of-field information of the processing target, a distance between the processing target and the in-focus target in the optical axis direction of the camera;
a blurring level determining subunit 4322, configured to determine a blurring level of the processing target according to a distance between the processing target and the in-focus target in the optical axis direction of the camera;
a blurring processing subunit 4323, configured to perform blurring processing related to the blurring level on the background image corresponding to the processing target based on the blurring level of the processing target.
Optionally, the blurring processing module 43 further includes:
a contour line obtaining unit 433 configured to divide the processing target in the preview screen to obtain a contour line of the processing target;
a background image obtaining unit 434, configured to obtain a blurring region corresponding to the processing target based on the contour line of the processing target, and use an image corresponding to the blurring region as a background image corresponding to the processing target.
Optionally, the background image obtaining unit 434 includes:
a minimum circumscribed circle determining subunit 4341, configured to determine a minimum circumscribed circle of the processing target based on the contour line of the processing target;
a first blurring region obtaining subunit 4342, configured to take, within the circular region whose central point is the center of the minimum circumscribed circle and whose radius is a preset distance, the region remaining after the processing target is removed as the blurring region, where the preset distance is greater than or equal to the radius of the minimum circumscribed circle.
Optionally, the background image obtaining unit 434 includes:
a center-of-gravity determining subunit 4343 configured to determine a center of gravity of the processing target based on the contour line of the processing target;
a second blurring region determining subunit 4344, configured to enlarge the contour line of the processing target by a preset multiple with the center of gravity of the processing target as the reference point to obtain an enlarged target contour line, where the region represented by the enlarged target contour line, with the processing target removed, is the blurring region.
It will be apparent to those skilled in the art that, for convenience and simplicity of description, the foregoing functional units and modules are merely illustrated in terms of division, and in practical applications, the foregoing functional allocation may be performed by different functional units and modules as needed, that is, the internal structure of the mobile terminal is divided into different functional units or modules to perform all or part of the above described functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the above-mentioned apparatus may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Fig. 5 is a schematic block diagram of a mobile terminal according to another embodiment of the present application. As shown in fig. 5, the mobile terminal 5 of this embodiment includes: one or more processors 50, a memory 51 and a computer program 52 stored in said memory 51 and executable on said processors 50. The processor 50, when executing the computer program 52, implements the steps in the various image processing method embodiments described above, such as steps S101 to S103 shown in fig. 1. Alternatively, the processor 50, when executing the computer program 52, implements the functions of the modules/units in the above-described mobile terminal embodiments, such as the functions of the modules 41 to 43 shown in fig. 4.
Illustratively, the computer program 52 may be partitioned into one or more modules/units, which are stored in the memory 51 and executed by the processor 50 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 52 in the mobile terminal 5. For example, the computer program 52 may be divided into a recognition result acquisition module, a judgment module, and a blurring processing module.
The identification result acquisition module is used for acquiring a preview picture acquired by a camera in the mobile terminal and identifying the preview picture to acquire an identification result, wherein the identification result is used for indicating whether a foreground target exists in the preview picture and indicating a foreground label of the foreground target when the foreground target exists;
the judging module is used for judging whether a preset label combination exists in the foreground label combinations of the foreground targets if at least two foreground targets exist in the preview picture;
and the blurring processing module is configured to mark the foreground object corresponding to the preset label combination as a processing object if the preset label combination exists in the foreground label combinations of the foreground objects, acquire a background image corresponding to each processing object in the preview picture, and perform blurring processing on the background image corresponding to each processing object.
Other modules or units can refer to the description of the embodiment shown in fig. 4, and are not described again here.
The mobile terminal includes, but is not limited to, a processor 50, a memory 51. Those skilled in the art will appreciate that fig. 5 is only one example of a mobile terminal 5 and is not intended to limit the mobile terminal 5 and may include more or fewer components than shown, or some components may be combined, or different components, e.g., the mobile terminal may also include input devices, output devices, network access devices, buses, etc.
The Processor 50 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 51 may be an internal storage unit of the mobile terminal 5, such as a hard disk or a memory of the mobile terminal 5. The memory 51 may also be an external storage device of the mobile terminal 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the mobile terminal 5. Further, the memory 51 may also include both an internal storage unit and an external storage device of the mobile terminal 5. The memory 51 is used for storing the computer program and other programs and data required by the mobile terminal. The memory 51 may also be used to temporarily store data that has been output or is to be output.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed mobile terminal and method may be implemented in other ways. For example, the above-described embodiments of the mobile terminal are merely illustrative, and for example, the division of the modules or units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow in the methods of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, realizes the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be suitably increased or decreased as required by legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, the computer-readable medium does not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (8)

1. An image processing method applied to a mobile terminal, the image processing method comprising:
acquiring a preview picture acquired by a camera in the mobile terminal, and identifying the preview picture to obtain an identification result, wherein the identification result is used for indicating whether a foreground target exists in the preview picture and indicating a foreground label of the foreground target when the foreground target exists; the foreground label comprises a face label, an animal label and a plant label;
if at least two foreground targets exist in the preview picture, judging whether a preset label combination exists in the foreground label combinations of the foreground targets;
if a preset label combination exists in the foreground label combinations of the foreground targets, marking the foreground targets corresponding to the preset label combination as processing targets, acquiring a background image corresponding to each processing target in the preview picture, and blurring the background image corresponding to each processing target;
wherein the blurring the background image corresponding to each processing target includes:
acquiring an in-focus target corresponding to the preview picture and depth-of-field information of the processing target; wherein the in-focus target is an object imaged on a focal plane; and the depth-of-field information of the in-focus target is the distance between the in-focus target and a camera of the mobile terminal;
determining the distance between the processing target and the in-focus target in the direction of the optical axis of the camera according to the depth-of-field information of the in-focus target and the depth-of-field information of the processing target;
determining the blurring level of the processing target according to the distance between the processing target and the in-focus target in the optical axis direction of the camera;
and performing blurring processing related to the blurring level on the background image corresponding to the processing target based on the blurring level of the processing target.
2. The image processing method according to claim 1, wherein the acquiring the in-focus target corresponding to the preview screen comprises:
determining a focal plane corresponding to the preview picture based on the depth-of-field information distribution corresponding to the current preview picture captured by the dual cameras of the mobile terminal, and determining the in-focus target based on the focal plane;
or,
acquiring a foreground target selected by a user through a touch screen displaying the preview picture, and taking the foreground target selected by the user through the touch screen as the in-focus target;
or,
determining the in-focus target from the foreground targets based on the position information of each foreground target in the preview picture.
3. The image processing method according to claim 1 or 2, wherein the acquiring the background image corresponding to each processing target in the preview screen includes:
dividing the processing target in the preview picture to obtain the contour line of the processing target;
and acquiring a blurring region corresponding to the processing target based on the contour line of the processing target, and taking an image corresponding to the blurring region as a background image corresponding to the processing target.
4. The image processing method according to claim 3, wherein the obtaining of the blurring region corresponding to the processing target based on the contour line of the processing target comprises:
determining a minimum circumscribed circle of the processing target based on the contour line of the processing target;
and taking, within the circular area whose central point is the circle center of the minimum circumscribed circle and whose radius is a preset distance, the area remaining after the processing target is removed as the blurring region, wherein the preset distance is greater than or equal to the radius of the minimum circumscribed circle.
5. The image processing method according to claim 3, wherein the obtaining of the blurring region corresponding to the processing target based on the contour line of the processing target comprises:
determining the gravity center of the processing target based on the contour line of the processing target;
and enlarging the contour line of the processing target by a preset multiple with the gravity center of the processing target as a reference point to obtain an enlarged target contour line, wherein the region represented by the enlarged target contour line, with the processing target removed, is the blurring region.
6. A mobile terminal, comprising:
the identification result acquisition module is used for acquiring a preview picture acquired by a camera in the mobile terminal and identifying the preview picture to obtain an identification result, wherein the identification result is used for indicating whether a foreground target exists in the preview picture and indicating a foreground label of the foreground target when the foreground target exists; the foreground label comprises a face label, an animal label and a plant label;
the judging module is used for judging whether a preset label combination exists in the combinations of the foreground labels of the foreground targets if at least two foreground targets exist in the preview picture;
a blurring processing module, configured to mark a foreground object corresponding to a preset tag combination as a processing object if the preset tag combination exists in the foreground tag combinations of the foreground objects, obtain a background image corresponding to each processing object in the preview picture, and perform blurring processing on the background image corresponding to each processing object;
the blurring processing module comprises:
an information acquisition unit, configured to acquire an in-focus target corresponding to the preview picture and depth-of-field information of the processing target; wherein the in-focus target is an object imaged on a focal plane; and the depth-of-field information of the in-focus target is the distance between the in-focus target and a camera of the mobile terminal;
a blurring processing unit, configured to blur the background image corresponding to each processing target based on the in-focus target and the depth-of-field information of each processing target;
wherein the blurring processing unit includes:
a distance determining subunit, configured to determine, according to the depth-of-field information of the in-focus target and the depth-of-field information of the processing target, the distance between the processing target and the in-focus target in the direction of the optical axis of the camera;
a blurring level determining subunit, configured to determine the blurring level of the processing target according to the distance between the processing target and the in-focus target in the optical axis direction of the camera;
and a blurring processing subunit, configured to perform blurring processing related to the blurring level on the background image corresponding to the processing target based on the blurring level of the processing target.
7. A mobile terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 5 when executing the computer program.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by one or more processors, implements the steps of the method according to any one of claims 1 to 5.
CN201810579524.9A 2018-06-05 2018-06-05 Image processing method, mobile terminal and computer readable storage medium Active CN108776800B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810579524.9A CN108776800B (en) 2018-06-05 2018-06-05 Image processing method, mobile terminal and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810579524.9A CN108776800B (en) 2018-06-05 2018-06-05 Image processing method, mobile terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN108776800A CN108776800A (en) 2018-11-09
CN108776800B true CN108776800B (en) 2021-03-12

Family

ID=64024750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810579524.9A Active CN108776800B (en) 2018-06-05 2018-06-05 Image processing method, mobile terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN108776800B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111311482B (en) * 2018-12-12 2023-04-07 Tcl科技集团股份有限公司 Background blurring method and device, terminal equipment and storage medium
CN110363702B (en) * 2019-07-10 2023-10-20 Oppo(重庆)智能科技有限公司 Image processing method and related product
CN110728632B (en) * 2019-09-04 2022-07-12 北京奇艺世纪科技有限公司 Image blurring processing method, image blurring processing device, computer device and storage medium
CN111144216A (en) * 2019-11-27 2020-05-12 北京三快在线科技有限公司 Picture label generation method and device, electronic equipment and readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102120864B1 (en) * 2013-11-06 2020-06-10 삼성전자주식회사 Method and apparatus for processing image

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105933613A (en) * 2016-06-28 2016-09-07 广东欧珀移动通信有限公司 Image processing method and apparatus and mobile terminal
CN107172346A (en) * 2017-04-28 2017-09-15 维沃移动通信有限公司 A kind of weakening method and mobile terminal

Also Published As

Publication number Publication date
CN108776800A (en) 2018-11-09

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant