CN108810407B - Image processing method, mobile terminal and computer readable storage medium


Info

Publication number
CN108810407B
Authority
CN
China
Prior art keywords
foreground
image
foreground target
preview picture
background
Prior art date
Legal status
Active
Application number
CN201810536275.5A
Other languages
Chinese (zh)
Other versions
CN108810407A (en)
Inventor
张弓
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810536275.5A
Publication of CN108810407A
Application granted
Publication of CN108810407B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Abstract

The application belongs to the technical field of image processing and provides an image processing method, a mobile terminal and a computer-readable storage medium. The image processing method comprises the following steps: after a camera of the mobile terminal is started, determining whether the currently started camera is a front-facing camera; if the currently started camera is a front-facing camera, detecting through a first detection model whether a foreground target exists in a preview picture of the camera; if a foreground target is detected in the preview picture through the first detection model, acquiring the foreground type of the foreground target and estimating the background type of the preview picture; and performing image processing on the foreground target according to the foreground type and the estimated background type.

Description

Image processing method, mobile terminal and computer readable storage medium
Technical Field
The present application belongs to the field of image processing technologies, and in particular, to an image processing method, a mobile terminal, and a computer-readable storage medium.
Background
With the development of intelligent mobile terminals, people take photos with mobile terminals such as mobile phones more and more frequently. The photographing functions of most existing mobile terminals support image processing, such as a filter function, a skin-smoothing function and a whitening function for the human face.
However, current image processing is limited in variety; the processing applied to the human face, for example, is confined to beautification. As a result, the effects presented by captured photos are monotonous and the user experience is poor.
Disclosure of Invention
In view of this, embodiments of the present application provide an image processing method, a mobile terminal and a computer-readable storage medium, so as to address the monotonous effects and poor user experience of currently captured photos.
A first aspect of an embodiment of the present application provides an image processing method, which is applied to a mobile terminal, and the method includes:
after a camera of the mobile terminal is started, determining whether the currently started camera is a front camera or not;
if the currently started camera is a front-facing camera, detecting whether a foreground target exists in a preview picture of the camera through a first detection model;
if a foreground target is detected in the preview picture through a first detection model, acquiring a foreground type of the foreground target, and estimating a background type of the preview picture;
and carrying out image processing on the foreground target according to the foreground type of the foreground target and the estimated background type of the preview picture.
A second aspect of an embodiment of the present application provides a mobile terminal, including:
the determining module is used for determining whether the currently started camera is a front camera or not after the camera of the mobile terminal is started;
the first foreground identification module is used for detecting whether a foreground target exists in a preview picture of the camera through a first detection model if the currently started camera is a front-facing camera;
the first background identification module is used for acquiring the foreground type of the foreground target and estimating the background type of the preview picture if the foreground target is detected in the preview picture through the first detection model;
and the first image processing module is used for carrying out image processing on the foreground target according to the foreground type of the foreground target and the estimated background type of the preview picture.
A third aspect of an embodiment of the present application provides a mobile terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method provided in the first aspect of the embodiment of the present application when executing the computer program.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium storing a computer program which, when executed by one or more processors, performs the steps of the method provided by the first aspect of embodiments of the present application.
A fifth aspect of embodiments of the present application provides a computer program product comprising a computer program that, when executed by one or more processors, performs the steps of the method provided by the first aspect of embodiments of the present application.
According to the embodiments of the application, when shooting with the front-facing camera, the foreground target in the preview picture is detected through the first detection model to obtain its foreground type, the background type of the preview picture is estimated, and the foreground target is image-processed according to the foreground type and the estimated background type. Since the front-facing camera is usually used for self-portraits, the foreground target is very likely to be a human face, and the objects of image processing are concentrated on the foreground target; the background type can therefore be estimated from the background image rather than detected, which reduces memory occupancy. Processing the foreground target according to the foreground type and the background type thus saves memory while diversifying the effects presented by captured photos.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of an implementation of an image processing method provided in an embodiment of the present application;
Fig. 2 is a schematic flow chart of another implementation of an image processing method provided in an embodiment of the present application;
Fig. 3 is a schematic flow chart of another implementation of an image processing method provided in an embodiment of the present application;
Fig. 4 is a schematic block diagram of a mobile terminal provided in an embodiment of the present application;
Fig. 5 is a schematic block diagram of another mobile terminal provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Fig. 1 is a schematic implementation flow diagram of an image processing method provided in an embodiment of the present application, and is applied to a mobile terminal, where as shown in the figure, the method may include the following steps:
step S101, after the camera of the mobile terminal is started, determining whether the currently started camera is a front-facing camera.
In the embodiment of the application, mobile terminals such as mobile phones are generally provided with both a front-facing camera and a rear-facing camera. Because of their different positions, the two are used in different photographing scenarios: the front-facing camera is generally used by the photographer for self-portraits, while the rear-facing camera is generally used to photograph other people or objects. For the front-facing camera, the human face needs to be processed to achieve a better effect during self-portrait shooting; for the rear-facing camera, the whole picture needs to be processed to achieve a better effect. Therefore, after the camera of the mobile terminal is started, it is necessary to determine whether the currently started camera is the front-facing camera or the rear-facing camera before performing the related image processing on the preview picture.
And step S102, if the currently started camera is a front-facing camera, detecting whether a foreground target exists in a preview picture of the camera through a first detection model.
In this embodiment of the application, the first detection model is a target detection model, and is generally used for detecting a foreground target in the preview picture.
In practical application, the first detection model may also be a scene detection model used to detect whether a foreground target exists in the preview picture of the camera. A scene detection model generally analyzes the content of a single image and outputs multiple scenes together with the position area of each scene in the image, which makes it suitable for outputting the foreground information of an image.
For ease of distinction, the scene classification model is explained here as well: it typically analyzes the content of a single image and outputs a single scene, which makes it suitable for outputting the background information of a picture. Therefore, a scene detection model is adopted when acquiring the foreground target of the preview picture, and a scene classification model is adopted when acquiring the background type of the preview picture. Of course, in practical application, other models may also be used to obtain the foreground target, the foreground type corresponding to the foreground target, and the background type of the preview picture.
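By way of illustration only, the division of labor between the two models can be sketched as the following Python interfaces. The class and method names (SceneDetectionModel, SceneClassificationModel, detect, classify) are hypothetical stand-ins for the models described above, not identifiers from any real library.

    from typing import List, Tuple

    Box = Tuple[int, int, int, int]  # x, y, width, height in preview coordinates

    class SceneDetectionModel:
        # Hypothetical interface for the first detection model: analyzes a
        # single preview frame and returns every foreground target found,
        # as (foreground type, position area) pairs.
        def detect(self, preview) -> List[Tuple[str, Box]]:
            raise NotImplementedError  # e.g. [("face", (120, 80, 200, 260))]

    class SceneClassificationModel:
        # Hypothetical interface for the scene classification model: analyzes
        # a single image and returns one scene label, suited to describing
        # the background of the picture.
        def classify(self, image) -> str:
            raise NotImplementedError  # e.g. "night scene"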
Step S103, if a foreground object is detected in the preview picture through the first detection model, acquiring a foreground type of the foreground object, and estimating a background type of the preview picture.
In the embodiment of the application, when a foreground target is detected through the first detection model, the model outputs the foreground type of the foreground target and the position information of the foreground target. The area of the preview picture other than the foreground target is then taken as the background image, and the background type of the preview picture is estimated from it. The estimation of the background type may be performed by the method of the embodiment shown in fig. 3, and details are not repeated here.
And step S104, performing image processing on the foreground target according to the foreground type of the foreground target and the estimated background type of the preview picture.
In the embodiment of the present application, since the currently started camera is the front-facing camera, the main subject is the foreground target. When the preview picture is image-processed, the processing may therefore focus on the foreground target, in a manner determined by the foreground type of the foreground target and the estimated background type of the preview picture. It should be noted that image processing may be performed on the foreground target only, or the preview picture may first be processed globally and the foreground target then processed locally.
For example, when the foreground target is a face, the determined foreground type is a face. If the estimated background type is a night scene, the foreground target may first be denoised to obtain a face image with the noise removed, the denoised face image may then be enhanced, and the enhanced face image finally beautified. A clear face image with a beautifying effect is thus obtained.
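A minimal sketch of such a denoise, enhance and beautify sequence is given below, using standard OpenCV operations as stand-ins for the terminal's own processing; the function choices and parameter values are illustrative assumptions, not the implementation claimed by the patent.

    import cv2

    def process_face_in_night_scene(frame_bgr, face_box):
        # face_box: (x, y, w, h) position of the face in the preview frame.
        x, y, w, h = face_box
        face = frame_bgr[y:y + h, x:x + w]

        # 1. Denoise the face region first (night scenes are noisy).
        face = cv2.fastNlMeansDenoisingColored(face, None, 10, 10, 7, 21)

        # 2. Enhance contrast on the luminance channel only.
        lab = cv2.cvtColor(face, cv2.COLOR_BGR2LAB)
        l, a, b = cv2.split(lab)
        l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
        face = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

        # 3. Edge-preserving smoothing as a simple beautification stand-in.
        face = cv2.bilateralFilter(face, 9, 75, 75)

        frame_bgr[y:y + h, x:x + w] = face
        return frame_bgr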
According to the embodiment of the application, when shooting with the front-facing camera, the foreground target of the preview picture is detected through the first detection model to obtain its foreground type, the background type of the preview picture is estimated, and the foreground target is image-processed according to the foreground type and the estimated background type. Since the front-facing camera is usually used for self-portraits, the foreground target is very likely to be a human face and the processed objects are concentrated on the foreground target; the background type can be estimated from the background image to reduce memory occupancy. Processing the foreground target according to the foreground type and the background type thus saves memory while diversifying the effects presented by captured photos.
Fig. 2 is a schematic flowchart of another image processing method provided in an embodiment of the present application, and as shown in the figure, the method may include the following steps:
step S201, after the camera of the mobile terminal is started, determining whether the currently started camera is a front-facing camera.
Step S202, if the currently started camera is a front-facing camera, detecting whether a foreground target exists in a preview picture of the camera through a first detection model.
The contents of step S201 to step S202 can refer to the descriptions of step S101 to step S102, and are not described herein again.
Step S203, if a foreground target is detected in the preview picture through the first detection model, acquiring a foreground type of the foreground target, and judging whether the foreground target meets a preset condition.
In the embodiment of the present application, after the foreground type of the foreground target is obtained, it is further required to determine whether the foreground target meets a preset condition.
The judging whether the foreground target meets the preset condition comprises the following steps:
judging whether the proportion of the foreground target in the preview picture is larger than a preset value or not;
if the proportion of the foreground target in the preview picture is larger than a preset value, determining that the foreground target meets a preset condition;
and if the proportion of the foreground target in the preview picture is less than or equal to the preset value, determining that the foreground target does not meet the preset condition.
In the embodiment of the present application, whether the foreground target meets the preset condition is judged according to the proportion of the foreground target in the preview picture. If the proportion is small, the foreground target does not meet the preset condition, indicating that the photographer's current subject includes not only the foreground target but also the background; if the proportion is large, the foreground target meets the preset condition, indicating that the current subject is mainly the foreground target. Different image processing modes can then be set for different subjects.
The proportion of the foreground target in the preview picture may be calculated either as the proportion occupied by the detection frame containing the foreground target, or as the proportion occupied by the actual area of the foreground target. In the latter case the foreground target must first be segmented from the preview picture; for the segmentation method, refer to the method shown in fig. 3.
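For the simpler detection-frame variant, the proportion test reduces to a ratio of box area to picture area. A sketch follows; the 0.3 threshold is an assumed placeholder for the preset value, which the patent does not specify.

    def meets_preset_condition(box, frame_shape, preset_value=0.3):
        # box: (x, y, w, h) detection frame of the foreground target.
        # frame_shape: (height, width[, channels]) of the preview picture.
        x, y, w, h = box
        frame_h, frame_w = frame_shape[:2]
        ratio = (w * h) / float(frame_w * frame_h)
        return ratio > preset_value  # True: subject is mainly the foreground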
Step S204, if the foreground target meets the preset condition, estimating the background type of the preview picture.
The content of step S204 is consistent with that of step S103, and reference may be made to the description of step S103, which is not repeated herein.
Step S205, performing image processing on the foreground object according to the foreground type of the foreground object and the estimated background type of the preview picture.
The content of step S205 is consistent with that of step S104, and reference may be made to the description of step S104, which is not repeated herein.
Step S206, if the foreground object does not meet the preset condition, detecting the background type of the preview image through a second detection model.
In the embodiment of the present application, if the foreground target does not meet the preset condition, the current subject includes a background in addition to the foreground target, and the background type of the background image needs to be obtained accurately, so that the preview picture can be image-processed according to the background type and the foreground type of the foreground target.
As can be seen from step S204 and the description of the embodiment shown in fig. 1, when the proportion of the foreground target in the preview picture is large, the current subject is the foreground target, the foreground target is also the object of image processing, and the background occupies so small an area that it need not be processed. When the proportion of the foreground target is small, however, the subject includes a background region in addition to the foreground target, so the preview picture needs to be processed globally, optionally followed by local processing of the foreground target. Detecting the background type of the preview picture through the second detection model is such a global analysis: it yields an accurate background type rather than an estimated one. The second detection model may be the scene classification model described in the embodiment shown in fig. 1, or another convolutional neural network model, which is not limited here.
Step S207, performing image processing on the preview image according to the foreground type of the foreground object and the detected background type of the preview image.
In the embodiment of the application, if the foreground type is a human face and the background type is a night scene, the preview picture may be globally denoised to obtain an image with the noise removed, and the denoised image may then be globally enhanced; of course, after the global image processing, beautification may also be applied to the human face.
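A rough sketch of this global-then-local order follows, again using OpenCV operations with assumed parameters rather than the patent's own processing:

    import cv2

    def process_preview_globally(frame_bgr, face_box=None):
        # 1. Global denoising of the whole preview picture.
        out = cv2.fastNlMeansDenoisingColored(frame_bgr, None, 7, 7, 7, 21)

        # 2. Global enhancement on the luminance channel.
        lab = cv2.cvtColor(out, cv2.COLOR_BGR2LAB)
        l, a, b = cv2.split(lab)
        l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
        out = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

        # 3. Optional local beautification of the face afterwards.
        if face_box is not None:
            x, y, w, h = face_box
            out[y:y + h, x:x + w] = cv2.bilateralFilter(
                out[y:y + h, x:x + w], 9, 75, 75)
        return out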
And step S208, if the currently started camera is not the front camera, detecting whether a foreground target exists in the preview picture through a first detection model.
Step S209, if a foreground object is detected in the preview image through the first detection model, acquiring a foreground type of the foreground object, and detecting a background type of the preview image through the second detection model.
Step S210, performing image processing on the preview image according to the foreground type of the foreground target and the detected background type of the preview image.
In this embodiment of the application, if the currently started camera is not the front-facing camera, it is the rear-facing camera, and the current subject may include not only a foreground target but also a background region. The foreground type of the foreground target and the background type of the preview picture therefore need to be obtained accurately, so that the preview picture can be image-processed according to the foreground type of the foreground target and the detected background type of the preview picture.
It should be noted that, if the currently started camera is the rear camera, the image processing method may refer to the image processing method when the started camera is the front camera and the ratio of the foreground object in the preview picture is less than or equal to the preset value, which is not described herein again.
Fig. 3 is a schematic flowchart of another image processing method provided in an embodiment of the present application, and as shown in the figure, the method is based on the embodiment shown in fig. 1 or fig. 2, and describes how to estimate the background type of the preview screen, and specifically may include the following steps:
step S301, according to the position information of the foreground target in the preview picture, removing the foreground target from the preview picture to obtain a background image of the preview picture.
In this embodiment of the application, the position information of the foreground object in the preview screen may be position information of a detection frame corresponding to the foreground object in the preview screen, and the step of removing the foreground object from the preview screen to obtain a background image of the preview screen may be:
acquiring an image in the detection frame from the preview picture based on the position information of the detection frame; and carrying out segmentation processing on the image in the detection frame to obtain a foreground target, and removing the foreground target in the preview picture to obtain a background image.
In the embodiment of the present application, since the detection frame is usually a rectangular window containing a foreground object, an image in the detection frame is not completely a foreground object, and particularly when the shape of the foreground object is irregular, the image in the detection frame may also contain a background image. The image within the detection frame may be obtained from the preview screen based on the position information of the detection frame. And then, carrying out segmentation processing on the image in the detection frame to obtain a foreground target, wherein the image obtained by removing the foreground target in the preview picture is a background image.
As another embodiment of the present application, the segmenting the image in the detection frame to obtain the foreground object includes:
and acquiring a gray threshold sequence, and performing binarization processing on the image in the detection frame through each gray threshold in the gray threshold sequence to obtain a binarization image sequence.
And identifying the boundary of the foreground target in the image in the detection frame based on the gray gradient of the image in the detection frame to obtain a foreground target contour line.
And acquiring a binary image with the highest matching degree with the foreground target contour line from the binary image sequence.
And fusing the binarized image with the highest matching degree with the foreground target contour line to generate a continuous foreground target area, wherein the image within the foreground target area in the preview picture is the foreground target.
In the embodiment of the application, binarization processing can be performed on the image in the detection frame, and if the threshold value is set properly, the foreground target area can be obtained. However, in practical application, it is difficult to accurately select a proper threshold to segment the foreground object from the background, and even if the proper threshold is selected, pixels having the same gray value as that of the pixels in the foreground object inevitably exist in the background. Therefore, the foreground target in the detection frame can be identified by combining a binarization method and a gray gradient method.
Firstly, gray processing is carried out on the image in the detection frame to obtain a grayscale image; a gray threshold sequence is then acquired, and the image in the detection frame is binarized through each gray threshold in the sequence to obtain a binarized image sequence. Within this sequence there exists one binarized image that can roughly represent the foreground target area.
Since the detection frame is the position information of the foreground target obtained through recognition by the first detection model, it can be determined that a foreground target exists in the detection frame and occupies most of its area. In practical application, the boundary of a target is an important basis for distinguishing the target from the background, and the gray value changes rapidly around boundary points, so the boundary of the foreground target can be identified through the gray gradient of the image to obtain the foreground target contour line. The contour line, however, also has problems: a boundary that actually exists in the image may generate no contour line because its gradient change is not obvious; conversely, a contour line may be generated inside the target where there is no boundary, because the gray value inside the foreground target changes significantly.
Analysis thus shows that both the binarization method and the gray-gradient method have certain defects, and neither result is very accurate on its own. To obtain an accurate result, the embodiment of the present application combines the binarization method and the gray-gradient method to obtain the foreground target.
In the binarized image sequence, the region range of the target in one image is closest to the real region range of the foreground target. To find the image closest to the real region range, the binarized image with the highest matching degree with the foreground target contour line is acquired from the sequence. The matching degree may be the degree of coincidence between the target regions obtained respectively by the binarization method and the gray-gradient method: the target area in each binarized image is provisionally regarded as the foreground target area, the area within the contour line obtained by the gray-gradient method is likewise provisionally regarded as the foreground target area, and the binarized image whose target area coincides most with the gradient-derived area is selected. The target area in this binarized image best represents the real foreground target area.
In fact, neither the target region in the best-matching binarized image nor the target region represented by the gradient-derived contour line can accurately describe the foreground target region on its own. However, the best-matching binarized image can be fused with the foreground target contour line to generate a continuous foreground target area: the binarized image discards the inaccurate parts of the gradient-derived contour line, and the contour line discards the inaccurate parts of the binarized image. Because the result fuses a binarized image with a contour line, the foreground target area is not itself a real foreground target image; rather, it represents the coordinates of the foreground target in the preview picture, and the image within the foreground target area of the preview picture is the foreground target.
The foreground target is thus obtained by combining the binarization method and the gray-gradient method, which segments the foreground target from the preview picture more accurately. After the foreground target is obtained, it can be removed from the preview picture to obtain the background image, and the background type of the background image can then be estimated according to the data characteristic values of the pixel points in the background image.
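The combined procedure can be sketched as follows. cv2.Canny stands in for the gray-gradient boundary detection, an overlap ratio stands in for the "matching degree", and all thresholds and kernel sizes are illustrative assumptions:

    import cv2
    import numpy as np

    def segment_foreground(roi_bgr):
        # roi_bgr: the image inside the detection frame.
        gray = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2GRAY)

        # 1. Binarize with a sequence of gray thresholds.
        masks = [cv2.threshold(gray, t, 255, cv2.THRESH_BINARY)[1]
                 for t in range(40, 220, 20)]

        # 2. Gradient-based boundary of the foreground target
        #    (Canny as a stand-in), closed into a contour region.
        edges = cv2.Canny(gray, 50, 150)
        contour = cv2.morphologyEx(edges, cv2.MORPH_CLOSE,
                                   np.ones((7, 7), np.uint8))

        # 3. "Matching degree": overlap between a binarized mask and
        #    the contour region; keep the best-matching mask.
        def overlap(a, b):
            inter = np.logical_and(a > 0, b > 0).sum()
            union = np.logical_or(a > 0, b > 0).sum()
            return inter / union if union else 0.0
        best = max(masks, key=lambda m: overlap(m, contour))

        # 4. Fuse the two estimates into one continuous region.
        fused = cv2.bitwise_or(best, contour)
        fused = cv2.morphologyEx(fused, cv2.MORPH_CLOSE,
                                 np.ones((9, 9), np.uint8))
        return fused  # non-zero pixels mark the foreground target area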
Step S302, obtaining coordinates of pixel points in the background image in the RGB color gamut space.
Step S303, calculating the coordinate mean value of all pixel points in the RGB color gamut space, and determining the background type of the background image according to the corresponding color of the coordinate mean value in the color gamut space.
In the embodiment of the present application, in order to reduce the memory usage rate, a background type of a background image may be estimated according to the background image, for example, coordinates of pixel points in the background image in an RGB color gamut space are obtained; calculating the coordinate mean value of all pixel points in the RGB color gamut space, and determining the background type of the background image according to the corresponding color of the coordinate mean value in the color gamut space.
In practical applications, the color gamut space may be divided into different regions by color range, with a different background type set for each region. After the coordinate mean value of all pixel points of the background image in the RGB color gamut space is calculated, the background type is determined according to the color range in which the mean value falls. For example, if the color corresponding to the mean value is green, the background type of the background image may be estimated as grassland; if it is black, as a night scene; and if it is blue, as blue sky.
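A minimal sketch of this estimation step follows. The anchor colors are assumed values chosen to match the grassland, night-scene and blue-sky examples above, and nearest-anchor matching stands in for the division of the color gamut space into regions:

    import numpy as np

    # Assumed anchor colors (RGB); the patent only names the three examples.
    ANCHORS = {
        "grassland": (60, 160, 60),
        "night scene": (20, 20, 25),
        "blue sky": (90, 150, 230),
    }

    def estimate_background_type(image_rgb, foreground_mask):
        # foreground_mask: HxW array, non-zero inside the foreground area.
        # Mean RGB coordinate over background pixels only.
        bg = image_rgb[foreground_mask == 0].astype(np.float32)
        mean_rgb = bg.mean(axis=0)
        # Background type = nearest anchor color in the RGB space.
        return min(ANCHORS, key=lambda k: np.linalg.norm(
            mean_rgb - np.asarray(ANCHORS[k], np.float32)))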
It should be noted that, in the embodiment of the present application, the RGB color gamut space is taken as an example, and other color gamut spaces may also be used in practical applications.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 4 is a schematic block diagram of a mobile terminal according to an embodiment of the present application, and only a portion related to the embodiment of the present application is shown for convenience of description.
The mobile terminal 4 may be a software unit, a hardware unit or a unit combining software and hardware built into a mobile terminal such as a mobile phone, tablet computer or notebook computer, or may be integrated into such a mobile terminal as an independent component.
The mobile terminal 4 includes:
a determining module 41, configured to determine whether a currently started camera is a front-facing camera after the camera of the mobile terminal is started;
the first foreground identification module 42 is configured to detect, if the currently started camera is a front-facing camera, whether a foreground target exists in a preview picture of the camera through a first detection model;
a first background identification module 43, configured to, if a foreground object is detected in the preview image through a first detection model, obtain a foreground type of the foreground object, and estimate a background type of the preview image;
and the first image processing module 44 is configured to perform image processing on the foreground object according to the foreground type of the foreground object and the estimated background type of the preview picture.
Optionally, the first background identification module 43 further includes:
a judging unit 431, configured to judge whether the foreground target meets a preset condition before the background type of the preview picture is estimated;
the first background identification unit 432 is further configured to:
and if the foreground target meets a preset condition, estimating the background type of the preview picture.
Optionally, the first background identification module 43 further includes:
a second background identification unit 433, configured to detect the background type of the preview picture through a second detection model if, after the judgment, the foreground target does not meet the preset condition;
the mobile terminal 4 further includes:
and a second image processing module 45, configured to perform image processing on the preview image according to the foreground type of the foreground target and the detected background type of the preview image.
Optionally, the judging unit 431 is further configured to:
judging whether the proportion of the foreground target in the preview picture is larger than a preset value or not;
if the proportion of the foreground target in the preview picture is larger than a preset value, determining that the foreground target meets a preset condition;
and if the proportion of the foreground target in the preview picture is less than or equal to the preset value, determining that the foreground target does not meet the preset condition.
Optionally, the first background identification unit 432 includes:
a background image obtaining subunit, configured to remove the foreground object from the preview image according to position information of the foreground object in the preview image, so as to obtain a background image of the preview image;
and the background type identification subunit is used for estimating the background type of the background image according to the data characteristic value of the pixel point in the background image.
Optionally, the background type identification subunit includes:
the coordinate obtaining subunit is used for obtaining the coordinates of the pixel points in the background image in the RGB color gamut space;
and the background type determination subunit is used for calculating the coordinate mean value of all the pixel points in the RGB color gamut space and determining the background type of the background image according to the corresponding color of the coordinate mean value in the color gamut space.
Optionally, the mobile terminal 4 further includes:
a second foreground identifying module 46, configured to, after determining whether the currently-started camera is a front-facing camera, detect whether a foreground target exists in the preview picture through the first detection model if the currently-started camera is not the front-facing camera;
a second background recognition module 47, configured to, if a foreground object is detected in the preview picture through the first detection model, obtain a foreground type of the foreground object, and detect a background type of the preview picture through the second detection model;
and a third image processing module 48, configured to perform image processing on the preview image according to the foreground type of the foreground object and the detected background type of the preview image.
It will be apparent to those skilled in the art that, for convenience and simplicity of description, the foregoing functional units and modules are merely illustrated in terms of division, and in practical applications, the foregoing functional allocation may be performed by different functional units and modules as needed, that is, the internal structure of the mobile terminal is divided into different functional units or modules to perform all or part of the above described functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the above-mentioned apparatus may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Fig. 5 is a schematic block diagram of a mobile terminal according to another embodiment of the present application. As shown in fig. 5, the mobile terminal 5 of this embodiment includes: one or more processors 50, a memory 51 and a computer program 52 stored in said memory 51 and executable on said processors 50. The processor 50, when executing the computer program 52, implements the steps in the various image processing method embodiments described above, such as steps S101 to S104 shown in fig. 1. Alternatively, the processor 50, when executing the computer program 52, implements the functions of the modules/units in the above-described mobile terminal embodiments, such as the functions of the modules 41 to 44 shown in fig. 4.
Illustratively, the computer program 52 may be partitioned into one or more modules/units, which are stored in the memory 51 and executed by the processor 50 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 52 in the mobile terminal 5. For example, the computer program 52 may be segmented into a determination module, a first foreground identification module, a first background identification module, a first image processing module.
The determining module is used for determining whether the currently started camera is a front camera or not after the camera of the mobile terminal is started;
the first foreground identification module is used for detecting whether a foreground target exists in a preview picture of the camera through a first detection model if the currently started camera is a front-facing camera;
the first background identification module is used for acquiring the foreground type of the foreground target and estimating the background type of the preview picture if the foreground target is detected in the preview picture through the first detection model;
and the first image processing module is used for carrying out image processing on the foreground target according to the foreground type of the foreground target and the estimated background type of the preview picture.
Other modules or units can refer to the description of the embodiment shown in fig. 4, and are not described again here.
The mobile terminal includes, but is not limited to, a processor 50, a memory 51. Those skilled in the art will appreciate that fig. 5 is only one example of a mobile terminal 5 and is not intended to limit the mobile terminal 5 and may include more or fewer components than shown, or some components may be combined, or different components, e.g., the mobile terminal may also include input devices, output devices, network access devices, buses, etc.
The Processor 50 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 51 may be an internal storage unit of the mobile terminal 5, such as a hard disk or a memory of the mobile terminal 5. The memory 51 may also be an external storage device of the mobile terminal 5, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, provided on the mobile terminal 5. Further, the memory 51 may also include both an internal storage unit and an external storage device of the mobile terminal 5. The memory 51 is used for storing the computer program and other programs and data required by the mobile terminal. The memory 51 may also be used to temporarily store data that has been output or is to be output.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed mobile terminal and method may be implemented in other ways. For example, the above-described embodiments of the mobile terminal are merely illustrative, and for example, the division of the modules or units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow in the methods of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, realizes the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be suitably increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (8)

1. An image processing method applied to a mobile terminal, the method comprising:
after a camera of the mobile terminal is started, determining whether the currently started camera is a front camera or not;
if the currently started camera is a front-facing camera, detecting whether a foreground target exists in a preview picture of the camera through a first detection model;
if a foreground target is detected in the preview picture through a first detection model, acquiring a foreground type of the foreground target, taking an area except the foreground target in the preview picture as a background image, and estimating the background type of the preview picture according to the background image;
performing image processing on the foreground target according to the foreground type of the foreground target and the estimated background type of the preview picture;
if the currently started camera is not the front camera, detecting whether a foreground target exists in the preview picture through a first detection model;
if a foreground target is detected in the preview picture through a first detection model, acquiring a foreground type of the foreground target, and detecting a background type of the preview picture through a second detection model;
performing image processing on the preview picture according to the foreground type of the foreground target and the detected background type of the preview picture;
the estimating the background type of the preview picture comprises:
acquiring an image in a detection frame from the preview picture based on position information of the detection frame, according to the position information of the detection frame corresponding to the foreground target in the preview picture; acquiring a gray threshold sequence, and performing binarization processing on the image in the detection frame through each gray threshold in the gray threshold sequence to obtain a binarized image sequence; identifying the boundary of the foreground target in the image in the detection frame based on the gray gradient of the image in the detection frame to obtain a foreground target contour line; acquiring, from the binarized image sequence, the binarized image with the highest matching degree with the foreground target contour line; fusing the binarized image with the highest matching degree with the foreground target contour line to generate a continuous foreground target area, taking the image in the foreground target area in the preview picture as the foreground target, and removing the foreground target from the preview picture to obtain a background image; and estimating the background type of the background image according to the data characteristic value of the pixel points in the background image.
2. The image processing method according to claim 1, further comprising, before estimating the background type of the preview screen:
judging whether the foreground target meets a preset condition or not;
and if the foreground target meets a preset condition, estimating the background type of the preview picture.
3. The image processing method according to claim 2, after determining whether the foreground object meets a preset condition, further comprising:
if the foreground target does not meet the preset condition, detecting the background type of the preview picture through a second detection model;
and processing the preview picture according to the foreground type of the foreground target and the detected background type of the preview picture.
4. The image processing method according to claim 2, wherein the determining whether the foreground object meets a preset condition comprises:
judging whether the proportion of the foreground target in the preview picture is larger than a preset value or not;
if the proportion of the foreground target in the preview picture is larger than a preset value, determining that the foreground target meets a preset condition;
and if the proportion of the foreground target in the preview picture is less than or equal to the preset value, determining that the foreground target does not meet the preset condition.
5. The image processing method of claim 1, wherein the estimating the background type of the background image according to the data feature values of the pixels in the background image comprises:
acquiring coordinates of pixel points in the background image in an RGB color gamut space;
calculating the coordinate mean value of all pixel points in the RGB color gamut space, and determining the background type of the background image according to the corresponding color of the coordinate mean value in the color gamut space.
6. A mobile terminal, comprising:
the determining module is used for determining whether the currently started camera is a front camera or not after the camera of the mobile terminal is started;
the first foreground identification module is used for detecting whether a foreground target exists in a preview picture of the camera through a first detection model if the currently started camera is a front-facing camera;
the first background identification module is used for acquiring the foreground type of a foreground target if the foreground target is detected in the preview picture through a first detection model, taking an area except the foreground target in the preview picture as a background image, and estimating the background type of the preview picture according to the background image;
the first image processing module is used for carrying out image processing on the foreground target according to the foreground type of the foreground target and the estimated background type of the preview picture;
the second foreground identification module is used for detecting whether a foreground target exists in the preview picture through the first detection model if the currently started camera is not the front camera after determining whether the currently started camera is the front camera;
the second background recognition module is used for acquiring the foreground type of the foreground target if the foreground target is detected in the preview picture through the first detection model, and detecting the background type of the preview picture through the second detection model;
the third image processing module is used for carrying out image processing on the preview picture according to the foreground type of the foreground target and the detected background type of the preview picture;
the first background recognition module is further configured to: acquire, according to position information of a detection frame corresponding to the foreground target in the preview picture, an image in the detection frame from the preview picture based on the position information of the detection frame; acquire a gray threshold sequence, and perform binarization processing on the image in the detection frame through each gray threshold in the gray threshold sequence to obtain a binarized image sequence; identify the boundary of the foreground target in the image in the detection frame based on the gray gradient of the image in the detection frame to obtain a foreground target contour line; acquire, from the binarized image sequence, the binarized image with the highest matching degree with the foreground target contour line; fuse the binarized image with the highest matching degree with the foreground target contour line to generate a continuous foreground target area, take the image in the foreground target area in the preview picture as the foreground target, and remove the foreground target from the preview picture to obtain a background image; and estimate the background type of the background image according to the data characteristic value of the pixel points in the background image.
7. A mobile terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 5 when executing the computer program.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by one or more processors, implements the steps of the method according to any one of claims 1 to 5.
CN201810536275.5A 2018-05-30 2018-05-30 Image processing method, mobile terminal and computer readable storage medium Active CN108810407B (en)

Priority Applications (1)

CN201810536275.5A (priority date 2018-05-30, filing date 2018-05-30): Image processing method, mobile terminal and computer readable storage medium; granted as CN108810407B (en)

Applications Claiming Priority (1)

CN201810536275.5A (priority date 2018-05-30, filing date 2018-05-30): Image processing method, mobile terminal and computer readable storage medium; granted as CN108810407B (en)

Publications (2)

CN108810407A (en), published 2018-11-13
CN108810407B (en), granted 2021-03-02

Family

ID=64089258

Family Applications (1)

CN201810536275.5A (priority date 2018-05-30, filing date 2018-05-30): Image processing method, mobile terminal and computer readable storage medium; granted as CN108810407B (en), Active

Country Status (1)

CN: CN108810407B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109669264A * 2019-01-08 2019-04-23 Harbin University of Science and Technology Self-adapting automatic focus method based on shade of gray value
CN112329616B * 2020-11-04 2023-08-11 Beijing Baidu Netcom Science and Technology Co., Ltd. Target detection method, device, equipment and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004304526A (en) * 2003-03-31 2004-10-28 Konica Minolta Holdings Inc Digital camera
JP2008108024A (en) * 2006-10-25 2008-05-08 Matsushita Electric Ind Co Ltd Image processor and imaging device
CN103024165B * 2012-12-04 2015-01-28 Huawei Device Co., Ltd. Method and device for automatically setting shooting mode
CN106101536A * 2016-06-22 2016-11-09 Vivo Mobile Communications Co., Ltd. A kind of photographic method and mobile terminal
CN106131418A * 2016-07-19 2016-11-16 Tencent Technology (Shenzhen) Co., Ltd. A kind of composition control method, device and photographing device
CN107454315B * 2017-07-10 2019-08-02 Guangdong Oppo Mobile Telecommunications Corp., Ltd. The human face region treating method and apparatus of backlight scene
CN107742274A * 2017-10-31 2018-02-27 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method, device, computer-readable recording medium and electronic equipment

Also Published As

CN108810407A (en), published 2018-11-13


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant