CN113438420A - Image processing method, image processing device, electronic equipment and storage medium - Google Patents

Image processing method, image processing device, electronic equipment and storage medium

Info

Publication number
CN113438420A
Authority
CN
China
Prior art keywords
feature data
facial feature
preview image
image
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110728064.3A
Other languages
Chinese (zh)
Inventor
邓志成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Software Technology Co Ltd
Original Assignee
Vivo Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Software Technology Co Ltd filed Critical Vivo Software Technology Co Ltd
Priority to CN202110728064.3A priority Critical patent/CN113438420A/en
Publication of CN113438420A publication Critical patent/CN113438420A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image processing method, an image processing apparatus, an electronic device, and a storage medium, and belongs to the technical field of image processing. The image processing method includes the following steps: acquiring first facial feature data in a shooting preview image; in a case where second facial feature data that does not match preset facial feature data exists in the first facial feature data, hiding a second object corresponding to the second facial feature data; and shooting the shooting preview image after the hiding processing to obtain a target image. Because only the target object corresponding to the preset facial feature data is displayed in the shot target image, objects other than the target object can be prevented from appearing in the shot image, which can improve the user's satisfaction with the shot image and improve the user experience.

Description

Image processing method, image processing device, electronic equipment and storage medium
Technical Field
The present application belongs to the field of image processing technologies, and in particular relates to an image processing method and apparatus, an electronic device, and a storage medium.
Background
With the continuous development of electronic devices, most electronic devices have a shooting function, and the shooting function has become an important function frequently used in daily life of users.
At present, when a user uses an electronic device to photograph a target object, objects other than the target object often appear in the shooting interface. As a result, the captured image is often not the image the user wants, leading to a poor user experience.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing method, an image processing apparatus, an electronic device, and a storage medium, which can solve the problem that images captured by a user fail to meet the user's expectations, resulting in a poor user experience.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring first facial feature data in a shooting preview image;
when second facial feature data that does not match preset facial feature data exists in the first facial feature data, hiding a second object corresponding to the second facial feature data;
and shooting the shooting preview image after the hiding processing to obtain a target image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
an acquisition module, configured to acquire first facial feature data in a shooting preview image;
a hiding module, configured to hide a second object corresponding to second facial feature data when second facial feature data that does not match preset facial feature data exists in the first facial feature data;
and a shooting module, configured to shoot the shooting preview image after the hiding processing to obtain a target image.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, the program or instructions implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps of the method according to the first aspect.
In the embodiment of the application, the second facial feature data of the objects in the shooting preview image that do not match the preset facial feature data is determined, the second object corresponding to the second facial feature data is hidden, and the shooting preview image after the hiding processing is shot to obtain the target image. In this way, the second object corresponding to the second facial feature data that does not match the preset facial feature data is not displayed in the shot target image, and since different facial feature data generally correspond to different objects, only the target object corresponding to the preset facial feature data is displayed in the shot target image. Therefore, objects other than the target object can be prevented from appearing in the shot image, which can improve the user's satisfaction with the shot image and improve the user experience.
Drawings
Fig. 1 is a schematic flowchart of an image processing method provided in an embodiment of the present application;
Fig. 2 is a schematic diagram of the face data settings provided in an embodiment of the present application;
Fig. 3 is a schematic diagram of the add face data function according to an embodiment of the present application;
Fig. 4 is a schematic diagram of the set default face data function according to an embodiment of the present application;
Fig. 5 is a schematic flowchart of a scene embodiment of an image processing method according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be appreciated that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application may be practiced in sequences other than those illustrated or described herein. The terms "first", "second", and the like are generally used herein in a generic sense and do not limit the number of objects; for example, the first object may be one or more than one. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The image processing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application. The image processing method can be applied to electronic equipment. As shown in fig. 1, the image processing method may include:
s101, first face feature data in the shooting preview image is acquired.
The shooting preview image may be a preview image acquired by a shooting program of the electronic device before an image is captured. The first facial feature data may be facial feature data corresponding to the persons in the shooting preview image; each person in the shooting preview image may be a first object, there may be one or more first objects, and the first facial feature data corresponds to the first objects one to one.
As an example, the first facial feature data in the shooting preview image may be obtained as follows: after the shooting program of the electronic device obtains the shooting preview image, the persons (i.e., the first objects) in the shooting preview image are identified, and the facial feature data of all the persons in the shooting preview image are acquired to obtain the first facial feature data, that is, the first facial feature data of each first object in the shooting preview image. For example, if only an object A exists in the shooting preview image, the facial feature data a corresponding to the object A may be regarded as the first facial feature data; if an object A, an object B, and an object C are simultaneously present in the shooting preview image, the facial feature data a corresponding to the object A, the facial feature data b corresponding to the object B, and the facial feature data c corresponding to the object C may all be regarded as the first facial feature data.
It can be understood that the person identification in the shooting preview image can be realized through an existing person identification algorithm, and the extraction of the facial feature data can be realized through an existing facial feature recognition algorithm, which is not described herein again.
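To make step S101 concrete, the following is a minimal sketch of acquiring facial feature data from a preview frame, assuming OpenCV's Haar cascade as the face detector and a toy SIFT-based descriptor (SIFT is mentioned later in this description as one option). The method itself does not mandate any particular detector or descriptor, so every function and parameter below is illustrative.

```python
import cv2
import numpy as np

_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
_sift = cv2.SIFT_create()

def embed_face(face_bgr):
    # Toy descriptor: mean of the SIFT descriptors over the face crop.
    # A production system would use a trained CNN embedding instead.
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    _, desc = _sift.detectAndCompute(gray, None)
    return np.zeros(128) if desc is None else desc.mean(axis=0)

def acquire_first_facial_features(preview_bgr):
    # Returns one (bounding box, feature vector) pair per person detected in
    # the shooting preview image, i.e. the "first facial feature data" of S101.
    gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)
    boxes = _detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [((x, y, w, h), embed_face(preview_bgr[y:y + h, x:x + w]))
            for (x, y, w, h) in boxes]
```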
And S102, in a case where second facial feature data that does not match the preset facial feature data exists in the first facial feature data, the second object corresponding to the second facial feature data is hidden.
The preset facial feature data may be facial feature data entered into the electronic device in advance. Entering facial feature data in advance may be implemented through the face data settings, which, as shown in fig. 2, may include but are not limited to: adding face data, setting default face data, clearing face data, and instructions for use. The add face data function may be used to enter facial feature data; the set default face data function may be used to determine the preset facial feature data from the entered facial feature data; the clear face data function may be used to clear unwanted facial feature data from the electronic device; and the instructions for use may explain the relevant functions and specific operations within the face data settings.
As shown in fig. 3, with the add face data function, a person A may place his or her face in the face detection frame so that the electronic device starts to recognize and enter the facial feature data. After the entry succeeds, the facial feature data a of the person A may be directly marked and stored as preset facial feature data, or may be marked and stored as preset facial feature data after the user confirms it. As shown in fig. 4, a plurality of facial feature data may be entered in the electronic device; in this case, the set default face data function may be turned on, and the preset facial feature data may be determined from the plurality of entered facial feature data according to the user's selection.
As an example, after the first facial feature data in the shooting preview image is acquired, the first facial feature data may be matched against the preset facial feature data; any facial feature data in the first facial feature data that does not match the preset facial feature data may be determined as second facial feature data, and the object corresponding to the second facial feature data may be determined as a second object. When second facial feature data that does not match the preset facial feature data exists in the first facial feature data, the second object corresponding to the second facial feature data can be hidden; that is, the objects corresponding to facial feature data that does not match the preset facial feature data can be hidden.
And S103, shooting the shooting preview image after the hiding processing to obtain a target image.
As an example, after the second object corresponding to the second facial feature data is hidden, the shooting preview image after the hiding processing may be shot to obtain the target image. The target image displays the target object but does not display the second object that does not match the preset facial feature data, where the target object is a person whose facial feature data matches the preset facial feature data.
Take the case where the first facial feature data are the facial feature data a, b, c, and e as an example. Assuming that the preset facial feature data is only the facial feature data a, then among the first facial feature data, the facial feature data b, c, and e are recognized as second facial feature data, the corresponding person B, person C, and person E are recognized as second objects, and the shooting preview image is shot after the person B, person C, and person E are hidden, so that a target image displaying the person A but not the person B, person C, or person E is obtained. If there are a plurality of preset facial feature data, for example the facial feature data a, b, and c, then among the first facial feature data, the facial feature data e is regarded as second facial feature data, the corresponding person E is regarded as a second object, and the shooting preview image is shot after the person E is hidden, so that a target image displaying the person A, person B, and person C but not the person E is obtained.
In the embodiment of the application, the second facial feature data of the objects in the shooting preview image that do not match the preset facial feature data is determined, the second object corresponding to the second facial feature data is hidden, and the shooting preview image after the hiding processing is shot to obtain the target image. In this way, the second object corresponding to the second facial feature data that does not match the preset facial feature data is not displayed in the shot target image, and since different facial feature data generally correspond to different objects, only the target object corresponding to the preset facial feature data is displayed in the shot target image. Therefore, objects other than the target object can be prevented from appearing in the shot image, which can improve the user's satisfaction with the shot image and improve the user experience.

In some embodiments, before the step S101, the following step may be further performed:
determining that the shooting mode is the target mode and that the number of persons in the shooting preview image is greater than the number of persons corresponding to the target mode.
When the electronic device is in the target mode, the electronic device can hide the objects in the shooting preview image corresponding to second facial feature data that does not match the preset facial feature data. The target mode may be a single-person mode or a multi-person mode. In the single-person mode, the target object matching the preset facial feature data is one person; that is, the target image obtained in the single-person mode displays one target object but does not display any second object that does not match the preset facial feature data. In the multi-person mode, the target objects matching the preset facial feature data may be multiple persons; that is, the target image obtained in the multi-person mode may display a plurality of target objects matching the preset facial feature data but does not display any second object that does not match the preset facial feature data. The target mode may be manually turned on by the user before taking a picture.
The number of persons corresponding to the target mode is the number of persons that can be shot by the electronic device in that mode; for example, the number corresponding to the single-person mode is one, the number corresponding to the two-person mode is two, and the number corresponding to the five-person mode is five.
As an example, before acquiring the first facial feature data in the shooting preview image, the electronic device may determine whether the current shooting mode is the target mode and whether the number of persons in the shooting preview image is greater than the number of persons corresponding to the target mode. When the shooting mode is not the target mode, the electronic device shoots all the persons in the shooting preview image regardless of whether the number of persons in the shooting preview image is greater than the number corresponding to the target mode; that is, no object in the shooting preview image is hidden. When the shooting mode is the target mode and the number of persons in the shooting preview image is greater than the number corresponding to the target mode, after the first facial feature data in the shooting preview image is acquired, the first facial feature data can be matched against the preset facial feature data, the second object corresponding to the second facial feature data that does not match the preset facial feature data is hidden, and the shooting preview image after the hiding processing is then shot to obtain the target image.
In other words, in the target mode, the target image may display not all the person objects in the shooting preview image but only the target objects matching the preset facial feature data, without displaying the second objects corresponding to the second facial feature data.
It is to be understood that the preset facial feature data may be preset or may be set just before this shot is taken; once it is determined that the current shooting mode is the target mode, the electronic device may shoot based on the set preset facial feature data.
In this way, the second facial feature data in the shooting preview image is hidden only when the shooting mode is the target mode and the number of persons in the shooting preview image is greater than the number of persons corresponding to the target mode. A user can therefore control whether the electronic device executes the image processing method provided by the embodiment of the application by turning on different shooting modes, which improves the flexibility of the method, meets various shooting requirements of the user, and improves the user experience.
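As a small illustration of this gating step, the sketch below enables the hiding pipeline only when a target mode is on and the preview holds more people than the mode allows; the mode labels and capacities are assumptions for illustration, echoing the single-person, two-person, and five-person examples above.

```python
# Assumed mode labels and their person counts; the application names these
# modes only as examples and does not fix any identifiers.
MODE_CAPACITY = {"single": 1, "double": 2, "five": 5}

def should_hide_non_matching(shooting_mode, num_people_in_preview):
    # Hide second objects only in a target mode whose person count is
    # exceeded by the number of people currently in the preview image.
    capacity = MODE_CAPACITY.get(shooting_mode)
    return capacity is not None and num_people_in_preview > capacity
```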
In some embodiments, the second facial feature data may be determined based on the similarity between the first facial feature data and the preset facial feature data, and accordingly, before the step S102, the following steps may be further performed:
calculating the similarity of the first facial feature data and preset facial feature data;
determining that the first facial feature data does not match the preset facial feature data when the similarity is less than a preset threshold;
and determining the first face feature data corresponding to the similarity smaller than the preset threshold value as second face feature data.
As an example, the similarity between each first facial feature data and the preset facial feature data may be calculated, and then the similarity between each first facial feature data and the preset facial feature data may be compared with a preset threshold. If the similarity is smaller than the preset threshold, it may be determined that the first facial feature data corresponding to the similarity does not match the preset facial feature data, and thus the first facial feature data corresponding to the similarity smaller than the preset threshold may be determined as the second facial feature data. If the similarity is greater than or equal to the preset threshold, it may be determined that the first facial feature data corresponding to the similarity matches the preset facial feature data. The preset threshold value may be preset, and the specific value may be set according to actual requirements.
For example, when the preset facial feature data is the facial feature data a, and the first facial feature data are the facial feature data a, b, and c, the similarity between each first facial feature data and the preset facial feature data may be calculated respectively, yielding the similarity between the facial feature data a and the preset facial feature data, the similarity between the facial feature data b and the preset facial feature data, and the similarity between the facial feature data c and the preset facial feature data. Assume that the similarity between the facial feature data a and the preset facial feature data is greater than or equal to the preset threshold, while the similarities for the facial feature data b and the facial feature data c are both smaller than the preset threshold. It may then be determined that the facial feature data b and the facial feature data c do not match the preset facial feature data; that is, the facial feature data b and the facial feature data c may be determined as the second facial feature data.
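A minimal sketch of this similarity check follows, assuming the facial features are vectors compared by cosine similarity; the application leaves both the similarity measure and the threshold value open, so both are illustrative choices here.

```python
import numpy as np

def split_by_similarity(first_features, preset_features, threshold=0.6):
    # Partition the first facial feature data: a face whose best similarity
    # against any preset feature reaches the threshold is target data,
    # otherwise it is second facial feature data.
    target_data, second_data = [], []
    for box, feat in first_features:
        sims = [float(np.dot(feat, p) /
                      (np.linalg.norm(feat) * np.linalg.norm(p) + 1e-9))
                for p in preset_features]
        best = max(sims, default=-1.0)
        (target_data if best >= threshold else second_data).append((box, feat))
    return target_data, second_data
```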
In the embodiment of the application, given that facial feature data of the same object may differ somewhat under different conditions, determining the second facial feature data by calculating the similarity avoids mistakenly determining facial feature data that actually matches the preset facial feature data as second facial feature data. This improves the accuracy of the determined second facial feature data, so that the target object corresponding to the target facial feature data matching the preset facial feature data can be accurately displayed in the shot target image, the target image better meets the user's requirements, and the user experience is further improved.
In some embodiments, the specific implementation manner of step S101 may be as follows:
carrying out contour recognition on each object in the shooting preview image to obtain the contour of each object in the shooting preview image;
and performing facial feature recognition on the outline of each object in the shooting preview image, and recognizing facial feature data of each object in the shooting preview image, wherein the facial feature data comprise first facial feature data.
As an example, when the first facial feature data in the captured preview image is acquired, the contour recognition model may be used to perform contour recognition on each object in the captured preview image, so as to obtain the contour of each object in the captured preview image. And then, facial feature recognition is carried out on the outline of each object by adopting a facial feature recognition model, so that facial feature data of each object in the shot preview image are obtained. The face feature data in the captured preview image may include first face feature data.
As a specific example, the object contour recognition model can be implemented based on the Mask-RCNN semantic segmentation framework. Scenic figure images can be used as training samples, and a Mask-RCNN framework can be trained to recognize the person contours in the scenic figure images of the training samples, thereby obtaining the object contour recognition model. Specifically, when recognizing the contour of a person, the model scans the shooting preview image and generates proposals, where each proposal represents a region of the image that may contain an object; the proposals are then classified, and a bounding box and a mask are generated for each object, from which the object contour is obtained.
The facial feature recognition model can perform face recognition through a CNN (Convolutional Neural Network). Specifically, face labels may be prepared first, CNN face recognition training may be performed, and the trained neural network model, i.e., the facial feature recognition model, may be stored.
As an example, a Scale-invariant feature transform (SIFT) algorithm may be used to perform feature value extraction on a face image of the identified first object, so as to obtain first face feature data corresponding to the first object.
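The two-stage recognition can be sketched as follows. The sketch substitutes torchvision's pre-trained Mask R-CNN for the application's own contour model (which is trained on scenic figure images), and the SIFT-based embed_face() shown earlier can stand in for the CNN facial feature model; the score threshold and all other parameters are assumptions.

```python
import torch
import torchvision

# Pre-trained Mask R-CNN as a stand-in for the object contour recognition
# model; in the COCO label set used by this model, class 1 is "person".
# (weights="DEFAULT" requires torchvision >= 0.13.)
_model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
_model.eval()

def person_contours(preview_rgb):
    # preview_rgb: HxWx3 float array in [0, 1]. Returns one soft mask per
    # detected person, playing the role of the per-object contour.
    tensor = torch.from_numpy(preview_rgb).permute(2, 0, 1).float()
    with torch.no_grad():
        out = _model([tensor])[0]
    keep = (out["labels"] == 1) & (out["scores"] > 0.7)  # assumed threshold
    return out["masks"][keep].squeeze(1).numpy()
```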
In the embodiment of the application, contour recognition may be performed on each object in the shooting preview image, and facial feature recognition may then be performed on the contour of each object to obtain the facial feature data of each object, so that the facial feature data of each object corresponds to the contour of each object one to one. In this way, the contour of the second object corresponding to the second facial feature data can be accurately identified, the second object can be better hidden, the target image better meets the user's requirements, and the user experience is further improved.
In some embodiments, the specific implementation manner in the step S102 may be as follows:
blanking the area where the second object corresponding to the second facial feature data is located;
and performing background recovery on the blank area.
As an example, when the second object corresponding to the second facial feature data is hidden, the area of the second object in the shooting preview image may first be blanked, and the blanked area may then be subjected to background recovery. Blanking the area where the second object is located may mean, after the second facial feature data is determined, blanking the contour area of the second object based on the contours of the objects recognized in the shooting preview image. The background recovery can be implemented using an existing image restoration algorithm; for example, the existing BSCB image restoration algorithm can be used to perform background recovery on the blanked area, which is not described herein again.
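A hedged sketch of this blank-and-restore step is given below. It assumes a per-person mask from the contour stage and uses OpenCV's Navier-Stokes inpainting, a method in the same family as the BSCB algorithm cited above, as the background recovery.

```python
import cv2
import numpy as np

def hide_second_object(preview_bgr, person_mask):
    # preview_bgr: HxWx3 uint8 frame; person_mask: HxW soft mask of the
    # second object from the contour stage.
    blank = (person_mask > 0.5).astype(np.uint8) * 255
    # Dilate slightly so the restoration also covers the object's outline.
    blank = cv2.dilate(blank, np.ones((7, 7), np.uint8))
    # Blank the masked area and restore the background by inpainting.
    return cv2.inpaint(preview_bgr, blank, inpaintRadius=5,
                       flags=cv2.INPAINT_NS)
```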
Therefore, by performing background recovery on the area where the second object is located, the situation where the content of the target image is incomplete due to the hiding of the second object can be avoided, and the shot target image can display the target facial feature data matching the preset facial feature data together with the complete background.
In some embodiments, before the step S102, the following steps may also be performed:
marking and displaying second facial feature data and target facial feature data, wherein the target facial feature data are facial feature data matched with preset facial feature data in the first facial feature data;
receiving a first input of target facial feature data by a user;
hiding a second object corresponding to the second face feature data, comprising:
and responding to the first input, and hiding a second object corresponding to the second face characteristic data.
The target facial feature data may be facial feature data in the first facial feature data that matches the preset facial feature data, and the second facial feature data may be facial feature data in the first facial feature data that does not match the preset facial feature data.
When second facial feature data that does not match the preset facial feature data exists in the first facial feature data, the second facial feature data and the target facial feature data that matches the preset facial feature data may be marked and displayed before the second object corresponding to the second facial feature data is hidden. The user may then operate the electronic device as needed, for example by an operation on the display area of the target facial feature data, so that the electronic device receives the input corresponding to the operation, i.e., the first input. In this case, the electronic device may consider that the user wants the shot to display the target object corresponding to the target facial feature data but not the second object corresponding to the second facial feature data. Then, in response to the first input, the electronic device may hide the second object corresponding to the second facial feature data, so that only the target object corresponding to the preset facial feature data is displayed in the shot target image.
In some possible examples, if a second input of the second facial feature data by the user is received, for example the user taps the display area of the second facial feature data, it may be considered that the user wants to shoot both the target object corresponding to the target facial feature data and the second object corresponding to the second facial feature data. In this case, the electronic device may directly shoot the shooting preview image, so that all the person objects (including the target object corresponding to the target facial feature data and the second object corresponding to the second facial feature data) are displayed in the shot image.
In some possible examples, if a third input of the second facial feature data by the user is received, it may be considered that the second facial feature data was identified incorrectly and the user needs to manually confirm the target object to be shot. In this case, in response to the third input, the target facial feature data and the second facial feature data are re-classified, and the second object corresponding to the re-confirmed second facial feature data is then hidden, so that the target object corresponding to the preset facial feature data can be displayed in the shot target image.
In other possible examples, assuming that the user does not perform any operation within a preset time period, the user may be considered to determine the target facial feature data by default, that is, the user may be considered to capture the target object corresponding to the target facial feature data by default. At this time, the second object corresponding to the second facial feature data can be directly hidden, so that the target object corresponding to the preset facial feature data can be displayed in the shot target image. The preset time period may be preset, and a specific value thereof may be set according to an actual requirement, for example, the preset time period may be 20 s.
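The confirmation flow around these inputs can be summarized by the dispatch sketch below; the event names, the wait_for_input() helper, and the 20 s default are all hypothetical stand-ins for whatever event mechanism the shooting program actually uses.

```python
def confirm_then_process(wait_for_input, hide_second_objects, capture_all,
                         reclassify, timeout_s=20):
    # wait_for_input(timeout_s) is an assumed helper returning "first",
    # "second", or "third" for the corresponding user input, or None if the
    # preset time period elapses without any operation.
    event = wait_for_input(timeout_s)
    if event in (None, "first"):
        hide_second_objects()  # timeout defaults to hiding, like the first input
    elif event == "second":
        capture_all()          # shoot the preview as-is, hiding nothing
    elif event == "third":
        reclassify()           # re-confirm, then hide the new second objects
```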
In the embodiment of the application, the second object corresponding to the second facial feature data can be hidden in response to the user's first input on the target facial feature data; that is, whether the second object is hidden can be determined by the user's operation. The user can thus independently choose the content of the shot image, so that the shot target image better meets the user's requirements, which further improves the user's satisfaction with the shot image and improves the user experience.
In order to facilitate understanding of the image processing method provided by the above embodiment, the following describes the above image processing method with a specific scene embodiment. Fig. 5 is a schematic flowchart of a scene embodiment of an image processing method according to an embodiment of the present application.
The application scenario of the scenario embodiment may be as follows: the shooting mode of the electronic equipment is a target mode, and the shooting of the person image is started. The scenario embodiment may specifically include the following steps:
s501, the shooting mode of the electronic equipment is a target mode, and people images begin to be shot;
s502, the electronic equipment starts to acquire a shooting preview image;
s503, acquiring first face characteristic data in the shooting preview image;
s504, matching the first facial feature data with preset facial feature data;
s505, determining first facial feature data matched with preset facial feature data as target facial feature data;
s506, determining first face feature data which is not matched with preset face feature data as second face feature data;
s507, marking the displayed target face feature data and the second face feature data;
s508, receiving a first input of the target facial feature data by the user;
s509, in response to the first input, performs a hiding process on the second object corresponding to the second face feature data.
In this scene embodiment, the second object corresponding to the second facial feature data that does not match the preset facial feature data can be hidden according to the user's selection, so that only the target object corresponding to the preset facial feature data is displayed in the shot target image. Objects other than the target object can thus be prevented from appearing in the shot image, which can improve the user's satisfaction with the shot image and improve the user experience.
It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or a control module in the image processing apparatus for executing the image processing method. In the embodiment of the present application, the image processing method provided in the embodiment of the present application is described by taking an image processing apparatus executing the image processing method as an example.
Fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. The image processing apparatus 600 may include:
an obtaining module 601, configured to obtain first facial feature data in a shooting preview image;
a hiding module 602, configured to hide a second object corresponding to second facial feature data when second facial feature data that does not match the preset facial feature data exists in the first facial feature data;
and a shooting module 603, configured to shoot the shooting preview image after the hiding processing to obtain a target image.
In the embodiment of the application, the second facial feature data of the objects in the shooting preview image that do not match the preset facial feature data is determined, the second object corresponding to the second facial feature data is hidden, and the shooting preview image after the hiding processing is shot to obtain the target image. In this way, the second object corresponding to the second facial feature data that does not match the preset facial feature data is not displayed in the shot target image, and since different facial feature data generally correspond to different objects, only the target object corresponding to the preset facial feature data is displayed in the shot target image. Therefore, objects other than the target object can be prevented from appearing in the shot image, which can improve the user's satisfaction with the shot image and improve the user experience.
In some embodiments, the image processing apparatus 600 may further include:
the first determining module is used for determining that the shooting mode is the target mode, and the number of the people in the shot preview image is larger than the number of the people corresponding to the target mode.
In this way, the second facial feature data in the shooting preview image is hidden only when the shooting mode is the target mode, so that the user can control whether the electronic device executes the image processing method provided by the embodiment of the application by turning on different shooting modes. This improves the flexibility of the image processing method, meets various shooting requirements of the user, and improves the user experience.
In some embodiments, the image processing apparatus 600 may further include:
the calculating module is used for calculating the similarity of the first facial feature data and preset facial feature data;
the second determination module is used for determining that the first facial feature data is not matched with the preset facial feature data under the condition that the similarity is smaller than a preset threshold value;
and the third determining module is used for determining the first face feature data corresponding to the similarity smaller than the preset threshold as the second face feature data.
In the embodiment of the application, given that facial feature data of the same object may differ somewhat under different conditions, determining the second facial feature data by calculating the similarity avoids mistakenly determining facial feature data that actually matches the preset facial feature data as second facial feature data. This improves the accuracy of the determined second facial feature data, so that the target object corresponding to the target facial feature data matching the preset facial feature data can be accurately displayed in the shot target image, the target image better meets the user's requirements, and the user experience is further improved.
In some embodiments, the obtaining module 601 may further include:
a first identification unit, configured to perform contour identification on each object in the captured preview image to obtain a contour of each object in the captured preview image;
and a second recognition unit configured to perform facial feature recognition on the outline of each object in the captured preview image, and recognize facial feature data of each object in the captured preview image, the facial feature data including the first facial feature data.
In the embodiment of the application, the contour of each object in the shooting preview image can be obtained using the object contour recognition model, and facial feature recognition can then be performed on the contour of each object to obtain the facial feature data of each object, so that the facial feature data of each object corresponds to the contour of each object one to one. In this way, the contour of the second object corresponding to the second facial feature data can be accurately recognized, the second object can be better hidden, the target image better meets the user's requirements, and the user experience is further improved.
In some embodiments, the image processing apparatus 600 may further include:
the blank reserving unit is used for reserving a region where a second object corresponding to the second face feature is located;
and the background recovery unit is used for performing background recovery on the blank area.
Therefore, when the second object is hidden, the area where the second object is located can be filled in with the restored background, so that after the second object is hidden, the shot target image can display the target facial feature data matching the preset facial feature data together with the complete background.

The image processing apparatus 600 in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The apparatus may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), a teller machine, or a self-service machine; the embodiments of the present application are not particularly limited.
The image processing apparatus 600 in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The image processing apparatus 600 provided in this embodiment of the application can implement each process in the image processing method embodiments of fig. 1 to fig. 5, and is not described here again to avoid repetition.
Optionally, an electronic device is further provided in this embodiment of the present application, and includes a processor 710, a memory 709, and a program or an instruction stored in the memory 709 and capable of being executed on the processor 710, where the program or the instruction is executed by the processor 710 to implement each process of the above-mentioned embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not described here again.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, and a processor 710.
Those skilled in the art will appreciate that the electronic device 700 may also include a power supply (e.g., a battery) for powering the various components, and the power supply may be logically coupled to the processor 710 via a power management system, so that functions such as managing charging, discharging, and power consumption may be performed via the power management system. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device, and the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently, which is not described herein again.
Wherein, the processor 710 may be configured to:
acquiring first facial feature data in a shooting preview image;
in a case where second facial feature data that does not match the preset facial feature data exists in the first facial feature data, hiding a second object corresponding to the second facial feature data;
and shooting the shooting preview image after the hiding processing to obtain a target image.
In the embodiment of the application, the second facial feature data of the objects in the shooting preview image that do not match the preset facial feature data is determined, the second object corresponding to the second facial feature data is hidden, and the shooting preview image after the hiding processing is shot to obtain the target image. In this way, the second object corresponding to the second facial feature data that does not match the preset facial feature data is not displayed in the shot target image, and since different facial feature data generally correspond to different objects, only the target object corresponding to the preset facial feature data is displayed in the shot target image. Therefore, objects other than the target object can be prevented from appearing in the shot image, which can improve the user's satisfaction with the shot image and improve the user experience.
In some embodiments, the processor 710 may be further configured to:
determining that the shooting mode is the target mode and that the number of persons in the shooting preview image is greater than the number of persons corresponding to the target mode.
In the embodiment of the application, the second facial feature data in the shooting preview image is hidden only when the shooting mode is the target mode and the number of persons in the shooting preview image is greater than the number of persons corresponding to the target mode. The user can therefore control whether the electronic device executes the image processing method provided by the embodiment of the application by turning on different shooting modes, which improves the flexibility of the method, meets various shooting requirements of the user, and improves the user experience.
In some embodiments, the processor 710 may be further configured to:
calculating the similarity of the first facial feature data and preset facial feature data;
determining that the first facial feature data does not match the preset facial feature data when the similarity is less than a preset threshold;
and determining the first face feature data corresponding to the similarity smaller than the preset threshold value as second face feature data.
In the embodiment of the application, given that facial feature data of the same object may differ somewhat under different conditions, determining the second facial feature data by calculating the similarity avoids mistakenly determining facial feature data that actually matches the preset facial feature data as second facial feature data. This improves the accuracy of the determined second facial feature data, so that the target object corresponding to the target facial feature data matching the preset facial feature data can be accurately displayed in the shot target image, the target image better meets the user's requirements, and the user experience is further improved.
In some embodiments, the processor 710 may be further configured to:
carrying out contour recognition on each object in the shooting preview image to obtain the contour of each object in the shooting preview image;
and performing facial feature recognition on the outline of each object in the shooting preview image, and recognizing facial feature data of each object in the shooting preview image, wherein the facial feature data comprise first facial feature data.
In the embodiment of the application, the contour of each object in the shooting preview image can be obtained using the object contour recognition model, and facial feature recognition can then be performed on the contour of each object to obtain the facial feature data of each object, so that the facial feature data of each object corresponds to the contour of each object one to one. In this way, the contour of the second object corresponding to the second facial feature data can be accurately recognized, the second object can be better hidden, the target image better meets the user's requirements, and the user experience is further improved.
In some embodiments, the processor 710 may be further configured to:
blanking the area where the second object corresponding to the second facial feature data is located;
and performing background recovery on the blank area.
Therefore, by performing background recovery on the area where the second object is located, the situation where the content of the target image is incomplete due to the hiding of the second object can be avoided, and the shot target image can display the target facial feature data matching the preset facial feature data together with the complete background.
It should be understood that, in the embodiment of the present application, the input unit 704 may include a graphics processing unit (GPU) 7041 and a microphone 7042, and the graphics processing unit 7041 processes image data of still pictures or videos obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a liquid crystal display, an organic light-emitting diode, or the like. The user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071 is also referred to as a touch screen and may include a touch detection device and a touch controller. Other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 709 may be used to store software programs as well as various data, including but not limited to applications and an operating system. The processor 710 may integrate an application processor, which primarily handles the operating system, user interfaces, and applications, and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor may alternatively not be integrated into the processor 710.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the embodiment of the image processing method, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. An image processing method, characterized in that the method comprises:
acquiring first facial feature data from a shooting preview image;
in a case where second facial feature data that does not match preset facial feature data exists in the first facial feature data, performing hiding processing on a second object corresponding to the second facial feature data; and
capturing the shooting preview image after the hiding processing to obtain a target image.
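As an illustrative sketch only (claim 1 recites behaviour, not a concrete algorithm), the three claimed steps could be organised as follows in Python; the helper names extract_features, similarity, hide_object, and capture are hypothetical stand-ins for the components refined by the later claims, and the threshold value is an assumption:

```python
SIMILARITY_THRESHOLD = 0.6  # assumed value; the claims only require "a preset threshold"

def process_preview(preview_frame, preset_features,
                    extract_features, similarity, hide_object, capture):
    """Sketch of claim 1: acquire facial features from the preview image,
    hide subjects whose features match no preset entry, then capture."""
    # Step 1: acquire first facial feature data from the shooting preview image.
    detections = extract_features(preview_frame)  # [(region, feature_vector), ...]

    # Step 2: hide each object whose features match no preset facial feature data.
    for region, features in detections:
        matched = any(similarity(features, preset) >= SIMILARITY_THRESHOLD
                      for preset in preset_features)
        if not matched:  # this is the "second facial feature data" case
            preview_frame = hide_object(preview_frame, region)

    # Step 3: capture the preview image after the hiding processing as the target image.
    return capture(preview_frame)
```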
2. The method according to claim 1, wherein before the acquiring first facial feature data from the shooting preview image, the method further comprises:
determining that the shooting mode is a target mode, wherein the number of people in the shooting preview image is greater than the number of people corresponding to the target mode.
3. The method according to claim 1, wherein before the performing hiding processing on the second object corresponding to the second facial feature data in the case where second facial feature data that does not match the preset facial feature data exists in the first facial feature data, the method further comprises:
calculating a similarity between the first facial feature data and the preset facial feature data;
determining that the first facial feature data does not match the preset facial feature data in a case where the similarity is less than a preset threshold; and
determining the first facial feature data whose similarity is less than the preset threshold as the second facial feature data.
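A minimal sketch of one plausible reading of claim 3, with cosine similarity standing in for the unspecified similarity measure and 0.6 as an assumed preset threshold:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # One common facial-feature similarity measure; the claim does not
    # name a specific metric.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_second_features(first_features, preset_features, threshold=0.6):
    """Return the subset of first_features that matches no preset entry,
    i.e. the "second facial feature data" of claim 3."""
    second = []
    for feat in first_features:
        best = max(cosine_similarity(feat, p) for p in preset_features)
        if best < threshold:  # similarity below the preset threshold: no match
            second.append(feat)
    return second
```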
4. The method according to claim 1, wherein the acquiring first facial feature data from the shooting preview image comprises:
performing contour recognition on each object in the shooting preview image to obtain a contour of each object in the shooting preview image; and
performing facial feature recognition on the contour of each object in the shooting preview image to recognize facial feature data of each object in the shooting preview image, the facial feature data comprising the first facial feature data.
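Claim 4 does not name concrete detectors; the sketch below uses Canny contours and an OpenCV Haar cascade purely as familiar stand-ins for contour recognition and facial feature recognition:

```python
import cv2

def detect_faces_in_contours(preview_bgr):
    """Illustrative stand-in for claim 4: find object contours first,
    then run face detection inside each contour's bounding box."""
    gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w < 40 or h < 40:
            continue  # skip contours too small to contain a face
        roi = gray[y:y + h, x:x + w]
        for (fx, fy, fw, fh) in cascade.detectMultiScale(roi, 1.1, 5):
            faces.append((x + fx, y + fy, fw, fh))  # preview-image coordinates
    return faces
```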
5. The method according to claim 1, wherein the performing hiding processing on the second object corresponding to the second facial feature data comprises:
blanking out an area where the second object corresponding to the second facial feature data is located; and
performing background recovery on the blanked-out area.
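The blanking and background-recovery steps of claim 5 amount to an image inpainting problem; the sketch below uses OpenCV's Telea inpainting as one plausible, but not claimed, background recovery technique:

```python
import cv2
import numpy as np

def hide_and_recover(preview_bgr, region):
    """Claim 5 as a sketch: blank out the region occupied by the unmatched
    subject, then recover the background in that region by inpainting."""
    x, y, w, h = region
    mask = np.zeros(preview_bgr.shape[:2], dtype=np.uint8)
    mask[y:y + h, x:x + w] = 255  # mark the blanked-out area
    return cv2.inpaint(preview_bgr, mask, inpaintRadius=3,
                       flags=cv2.INPAINT_TELEA)
```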
6. An image processing apparatus, characterized in that the apparatus comprises:
an acquisition module, configured to acquire first facial feature data from a shooting preview image;
a hiding module, configured to perform hiding processing on a second object corresponding to second facial feature data in a case where the second facial feature data that does not match preset facial feature data exists in the first facial feature data; and
a shooting module, configured to capture the shooting preview image after the hiding processing to obtain a target image.
7. The apparatus according to claim 6, further comprising:
a first determining module, configured to determine that the shooting mode is a target mode, wherein the number of people in the shooting preview image is greater than the number of people corresponding to the target mode.
8. The apparatus according to claim 6, further comprising:
a calculating module, configured to calculate a similarity between the first facial feature data and the preset facial feature data;
a second determining module, configured to determine that the first facial feature data does not match the preset facial feature data in a case where the similarity is less than a preset threshold; and
a third determining module, configured to determine the first facial feature data whose similarity is less than the preset threshold as the second facial feature data.
9. The apparatus according to claim 6, wherein the acquisition module further comprises:
a first recognition unit, configured to perform contour recognition on each object in the shooting preview image to obtain a contour of each object in the shooting preview image; and
a second recognition unit, configured to perform facial feature recognition on the contour of each object in the shooting preview image to recognize facial feature data of each object in the shooting preview image, the facial feature data comprising the first facial feature data.
10. The apparatus according to claim 6, wherein the hiding module comprises:
a blanking unit, configured to blank out an area where the second object corresponding to the second facial feature data is located; and
a background recovery unit, configured to perform background recovery on the blanked-out area.
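Claims 6 to 10 decompose the same method into modules and units; as a purely hypothetical mapping, the decomposition could be wired together like this, with each injected callable playing the role of one claimed module:

```python
class ImageProcessingApparatus:
    """Hypothetical sketch of the apparatus claims: one attribute per
    claimed module; the patent specifies behaviour, not algorithms."""

    def __init__(self, preset_features, extractor, matcher, concealer, camera):
        self.preset_features = preset_features
        self.extractor = extractor    # acquisition module (claim 6) with its recognition units (claim 9)
        self.matcher = matcher        # calculating/determining modules (claim 8)
        self.concealer = concealer    # hiding module: blanking + background recovery (claim 10)
        self.camera = camera          # shooting module (claim 6)

    def shoot(self, preview):
        for region, features in self.extractor(preview):           # acquire
            if not self.matcher(features, self.preset_features):   # match against presets
                preview = self.concealer(preview, region)          # hide unmatched subject
        return self.camera(preview)                                # capture target image
```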
11. An electronic device, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the image processing method according to any one of claims 1 to 5.
12. A readable storage medium, having a program or instructions stored thereon, wherein the program or instructions, when executed by a processor, implement the steps of the image processing method according to any one of claims 1 to 5.
CN202110728064.3A 2021-06-29 2021-06-29 Image processing method, image processing device, electronic equipment and storage medium Pending CN113438420A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110728064.3A CN113438420A (en) 2021-06-29 2021-06-29 Image processing method, image processing device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110728064.3A CN113438420A (en) 2021-06-29 2021-06-29 Image processing method, image processing device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113438420A true CN113438420A (en) 2021-09-24

Family

ID=77757670

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110728064.3A Pending CN113438420A (en) 2021-06-29 2021-06-29 Image processing method, image processing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113438420A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008004578A1 (en) * 2006-07-05 2008-01-10 Panasonic Corporation Monitoring system, monitoring device and monitoring method
CN106991395A (en) * 2017-03-31 2017-07-28 联想(北京)有限公司 Information processing method, device and electronic equipment
CN108052883A (en) * 2017-11-30 2018-05-18 广东欧珀移动通信有限公司 User's photographic method, device and equipment
US20180204053A1 (en) * 2017-01-19 2018-07-19 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium

Similar Documents

Publication Publication Date Title
CN107370942B (en) Photographing method, photographing device, storage medium and terminal
CN112135046B (en) Video shooting method, video shooting device and electronic equipment
CN112422817B (en) Image processing method and device
CN110928411B (en) AR-based interaction method and device, storage medium and electronic equipment
US20090169108A1 (en) System and method for recognizing smiling faces captured by a mobile electronic device
CN112532885B (en) Anti-shake method and device and electronic equipment
CN112492201B (en) Photographing method and device and electronic equipment
CN112099704A (en) Information display method and device, electronic equipment and readable storage medium
CN112887615B (en) Shooting method and device
CN112788244B (en) Shooting method, shooting device and electronic equipment
CN111800574B (en) Imaging method and device and electronic equipment
CN112511743B (en) Video shooting method and device
CN112437231A (en) Image shooting method and device, electronic equipment and storage medium
CN112532884A (en) Identification method and device and electronic equipment
CN112150444A (en) Method and device for identifying attribute features of face image and electronic equipment
CN113766130B (en) Video shooting method, electronic equipment and device
CN113438420A (en) Image processing method, image processing device, electronic equipment and storage medium
CN114466140A (en) Image shooting method and device
CN110991307B (en) Face recognition method, device, equipment and storage medium
CN113676734A (en) Image compression method and image compression device
CN113537127A (en) Film matching method, device, equipment and storage medium
CN113271379A (en) Image processing method and device and electronic equipment
CN111079662A (en) Figure identification method and device, machine readable medium and equipment
CN112764700A (en) Image display processing method, device, electronic equipment and storage medium
CN112565605A (en) Image display method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination