CN112419143A - Image processing method, special effect parameter setting method, device, equipment and medium

Image processing method, special effect parameter setting method, device, equipment and medium

Info

Publication number
CN112419143A
CN112419143A
Authority
CN
China
Prior art keywords
face
special effect
region
image
feature
Prior art date
Legal status
Granted
Application number
CN202011315161.1A
Other languages
Chinese (zh)
Other versions
CN112419143B (en)
Inventor
陈文琼
谢欢
Current Assignee
Guangzhou Fanxing Huyu IT Co Ltd
Original Assignee
Guangzhou Fanxing Huyu IT Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Fanxing Huyu IT Co Ltd
Priority to CN202011315161.1A
Publication of CN112419143A
Application granted
Publication of CN112419143B
Legal status: Active

Classifications

    • G06T 3/04: Geometric image transformations in the plane of the image; context-preserving transformations, e.g. by using an importance map
    • G06T 7/11: Image analysis; segmentation; region-based segmentation
    • G06V 10/44: Extraction of image or video features; local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/464: Salient features, e.g. scale invariant feature transforms [SIFT], using a plurality of salient features, e.g. bag-of-words [BoW] representations
    • G06V 40/165: Human faces; detection, localisation, normalisation using facial parts and geometric relationships
    • G06V 40/171: Human faces; feature extraction, face representation; local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
    • G06V 40/172: Human faces; classification, e.g. identification
    • G06T 2207/10004: Image acquisition modality; still image, photographic image
    • G06T 2207/20132: Special algorithmic details; image segmentation details; image cropping
    • G06T 2207/30201: Subject of image; human being, person; face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the application disclose an image processing method, a special effect parameter setting method, an apparatus, a device and a medium, belonging to the technical field of image processing. The method comprises the following steps: identifying at least one object region in a first image, wherein each object region is a region where a target object is located; acquiring a special effect parameter corresponding to each object region based on the object feature of that object region, wherein the object feature indicates the appearance of the target object in the object region; and carrying out special effect processing on each object region according to the corresponding special effect parameter to obtain a second image. When the first image is subjected to special effect processing, personalized special effects can be added to different target objects in the first image, improving the flexibility of special effect processing.

Description

Image processing method, special effect parameter setting method, device, equipment and medium
Technical Field
The embodiments of the application relate to the technical field of image processing, and in particular to an image processing method, a special effect parameter setting method, an apparatus, a device and a medium.
Background
With the rapid development of image processing technology, various ways of processing images have gradually emerged. One common processing method is special effect processing, for example beautifying an image or adding special effect marks. Performing special effect processing on an image can increase its aesthetic appeal or interest.
In the related art, a terminal usually determines the object region where a target object in a first image is located, and performs special effect processing on that object region according to a special effect parameter set by the user to obtain a second image.
However, in some cases an image may include a plurality of target objects, and the above method results in the same special effect being added to all of them; the flexibility of the special effect processing is therefore poor.
Disclosure of Invention
The embodiments of the application provide an image processing method, a special effect parameter setting method, an apparatus, a device and a medium, which improve the flexibility of special effect processing. The technical solution is as follows:
in one aspect, an image processing method is provided, and the method includes:
identifying at least one object area in the first image, wherein the object area is an area where a target object is located;
acquiring a special effect parameter corresponding to each object region based on the object feature of each object region, wherein the object feature of each object region indicates the appearance of the target object in the object region;
and carrying out special effect processing on each object region according to the special effect parameter corresponding to each object region to obtain a second image.
In a possible implementation manner, the object regions are human face regions, and before the obtaining of the special effect parameter corresponding to each object region based on the object feature of each object region, the method further includes:
for each face region, acquiring a plurality of face key points of the face region, wherein the face key points comprise at least one of face edge points or face organ edge points;
and acquiring a first face feature of the face region according to the positions of the face key points, wherein the first face feature indicates the relative positions of the face key points in the face region.
In one possible implementation manner, the obtaining, according to the positions of the plurality of face key points, of a first face feature of the face region includes at least one of:
acquiring a first face sub-feature of the face region according to the abscissas (horizontal coordinates) of the plurality of face key points, wherein the first face sub-feature represents the lateral relative positions of the plurality of face key points in the face;
and acquiring a second face sub-feature of the face region according to the ordinates (vertical coordinates) of the plurality of face key points, wherein the second face sub-feature represents the longitudinal relative positions of the plurality of face key points in the face.
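Relative positions of this kind can be made independent of the face's size in the image by normalizing each coordinate against the extent of the face region. The following Python sketch illustrates one way to compute such sub-features; the normalization scheme is an illustrative assumption, not something the application prescribes.

```python
# A minimal sketch of the lateral/longitudinal sub-features described above,
# assuming face key points are given as (x, y) pixel coordinates. The
# normalization to the face region's bounding extent is an assumption.
from typing import List, Tuple

def relative_position_features(
    keypoints: List[Tuple[float, float]],
) -> Tuple[List[float], List[float]]:
    """Return (first_sub_features, second_sub_features): each key point's
    lateral and longitudinal position relative to the face region's extent."""
    xs = [x for x, _ in keypoints]
    ys = [y for _, y in keypoints]
    left, right = min(xs), max(xs)
    top, bottom = min(ys), max(ys)
    width = max(right - left, 1e-6)   # guard against degenerate extents
    height = max(bottom - top, 1e-6)
    # First sub-feature: lateral relative positions from the abscissas.
    first = [(x - left) / width for x in xs]
    # Second sub-feature: longitudinal relative positions from the ordinates.
    second = [(y - top) / height for y in ys]
    return first, second
```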
In one possible implementation manner, the obtaining, according to the positions of the plurality of face key points, a first face feature of the face region includes:
acquiring a first distance between the first face key point and the second face key point, and a second distance between the first face key point and the third face key point, according to the position of the first face key point, the position of the second face key point and the position of the third face key point;
determining a ratio of the first distance to the second distance as the first face feature;
the first face key point, the second face key point and the third face key point are any face key points in the plurality of face key points respectively.
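As a concrete illustration, the ratio described above can be computed directly from three landmark positions; because it is a ratio of two within-face distances, it does not change with the scale of the face in the image. The helper below is a minimal sketch, with the choice of landmarks left to the caller (for example, two eye-corner points and a face edge point, as in the selection rule described next):

```python
# Hedged sketch of the distance-ratio feature: the ratio of the distance
# between key points 1 and 2 to the distance between key points 1 and 3.
import math
from typing import Tuple

Point = Tuple[float, float]

def distance(p: Point, q: Point) -> float:
    return math.hypot(p[0] - q[0], p[1] - q[1])

def distance_ratio_feature(kp1: Point, kp2: Point, kp3: Point) -> float:
    """First face feature: |kp1-kp2| / |kp1-kp3|.

    Being a ratio of distances within the same face, the feature is
    invariant to the size of the face in the image.
    """
    first_distance = distance(kp1, kp2)
    second_distance = distance(kp1, kp3)
    return first_distance / max(second_distance, 1e-6)
```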
In one possible implementation manner, the obtaining the first face feature of the face region according to the positions of the plurality of face key points includes:
selecting, from the plurality of face key points, face key points located in a first face subregion or a second face subregion, wherein the first face subregion is the region containing the eyes, and the second face subregion is the region containing the face midline;
and acquiring the first face features of the face area according to the positions of the selected face key points.
In one possible implementation manner, the selecting, from the plurality of face key points, a face key point located in a first face sub-region or a second face sub-region includes:
selecting, from the plurality of face key points, a first canthus key point, a second canthus key point and a first face edge key point that is at the same height as a lower eyelid key point; or,
and selecting a first nose bridge key point, a second nose bridge key point and a third nose bridge key point from the plurality of face key points.
In a possible implementation manner, the object regions are human face regions, and before the obtaining of the special effect parameter corresponding to each object region based on the object feature of each object region, the method further includes:
and for each face region, carrying out recognition processing on the face region to obtain a second face feature of the face region, wherein the second face feature indicates the face shape of the face in the face region.
In a possible implementation manner, the recognizing the face region to obtain the second face feature of the face region includes:
and determining a second face feature of the face region based on the face shape parameter of the face region.
In one possible implementation manner, the determining a second face feature of the face region based on the face shape parameter of the face region includes:
and acquiring the face shape characteristics corresponding to the face shape parameters of the face region according to the first corresponding relation between the face shape parameters and the face shape characteristics, and taking the face shape characteristics as second face characteristics of the face region.
In one possible implementation manner, the obtaining, as the second face feature of the face region, a face feature corresponding to the face shape parameter of the face region according to the first corresponding relationship between the face shape parameter and the face feature includes:
acquiring a difference value between the face shape parameter of the face area and each face shape parameter in the first corresponding relation;
and acquiring the face shape characteristic corresponding to the face shape parameter with the minimum difference value from the first corresponding relation, and taking the face shape characteristic as a second face characteristic of the face region.
In one possible implementation manner, the obtaining, as the second face feature of the face region, a face feature corresponding to the face shape parameter of the face region according to the first corresponding relationship between the face shape parameter and the face feature includes:
acquiring a difference value between the face shape parameter of the face area and any face shape parameter in the first corresponding relation;
and if the difference value does not exceed a reference threshold value, acquiring a face shape feature corresponding to any one face shape parameter as a second face feature of the face region.
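Both lookup variants above amount to a nearest-match search over the first correspondence. Below is a minimal sketch, with a hypothetical table mapping a scalar face shape parameter (for example a length-to-width ratio) to a face shape label; the table contents and threshold are assumptions:

```python
# Illustrative lookup of a face shape feature from a scalar face shape
# parameter via the first correspondence; both the minimum-difference
# variant and the threshold-gated variant are shown.
from typing import Dict, Optional

FIRST_CORRESPONDENCE: Dict[float, str] = {
    1.30: "round",    # hypothetical: face length / face width -> face shape
    1.45: "oval",
    1.60: "long",
}

def face_shape_nearest(param: float) -> str:
    """Return the face shape whose stored parameter differs least from param."""
    best = min(FIRST_CORRESPONDENCE, key=lambda p: abs(p - param))
    return FIRST_CORRESPONDENCE[best]

def face_shape_thresholded(param: float, threshold: float = 0.05) -> Optional[str]:
    """Return a face shape only if some stored parameter is within threshold."""
    for stored, shape in FIRST_CORRESPONDENCE.items():
        if abs(stored - param) <= threshold:
            return shape
    return None  # no entry close enough
```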
In one possible implementation manner, before the obtaining, according to the first corresponding relationship between the face shape parameter and the face shape feature, the face shape feature corresponding to the face shape parameter of the face region as the second face feature of the face region, the method further includes:
acquiring a plurality of sample object regions with the same facial features;
acquiring the face shape parameters of each sample object region;
carrying out statistical processing on the obtained multiple face shape parameters to obtain processed face shape parameters;
establishing a first correspondence between the processed face shape parameters and the face shape feature.
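One plausible reading of the "statistical processing" step is averaging: the face shape parameters of all sample regions sharing a face shape are reduced to their mean, which then serves as that shape's stored parameter. The sketch below assumes scalar parameters and uses hypothetical sample values:

```python
# Building the first correspondence by averaging each shape's sample
# parameters; the choice of the mean as the statistic is an assumption.
from statistics import mean
from typing import Dict, List

def build_first_correspondence(
    samples_by_shape: Dict[str, List[float]],
) -> Dict[float, str]:
    """Map a processed (here: averaged) face shape parameter to its feature."""
    return {mean(params): shape for shape, params in samples_by_shape.items()}

# Hypothetical parameters measured from sample regions with known shapes.
correspondence = build_first_correspondence({
    "round": [1.28, 1.31, 1.33],
    "oval":  [1.43, 1.46, 1.44],
})
```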
In one possible implementation, the face shape parameters include at least one of:
the length and width of the human face;
the forehead width and chin width of the human face;
the jaw round-tip coefficient of the human face;
the chin apex coefficient of the human face.
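As a rough illustration only: the first two parameters can be measured directly from face key points, while the jaw round-tip and chin apex coefficients are described only by name, so the sketch below models just the measurable pair. All landmark names are hypothetical placeholders.

```python
# Measuring the length/width and forehead/chin-width parameters from
# named face key points; the key-point names are assumptions.
from typing import Dict, Tuple

Point = Tuple[float, float]

def face_shape_parameters(kp: Dict[str, Point]) -> Dict[str, float]:
    face_length = kp["chin"][1] - kp["forehead_top"][1]
    face_width = kp["right_cheek"][0] - kp["left_cheek"][0]
    forehead_width = kp["right_temple"][0] - kp["left_temple"][0]
    chin_width = kp["right_jaw"][0] - kp["left_jaw"][0]
    return {
        "face_length": face_length,
        "face_width": face_width,
        "forehead_width": forehead_width,
        "chin_width": chin_width,
        "length_width_ratio": face_length / max(face_width, 1e-6),
    }
```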
In a possible implementation manner, the object regions are human face regions, and the obtaining of the special effect parameter corresponding to each object region based on the object feature of each object region includes:
calling a face recognition model, and carrying out recognition processing on each face area to obtain a user identifier corresponding to each face area;
and acquiring a special effect parameter corresponding to each user identifier based on each acquired user identifier.
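In this branch the per-region appearance feature is replaced by an identity: a recognition model maps each face region to a user identifier, and parameters are looked up per user. A sketch follows, in which `recognize_user` and the parameter store are stand-ins rather than APIs named by the application:

```python
# Per-user special effect parameters keyed by the identifier returned by
# a face recognition model; the model and store are placeholder assumptions.
from typing import Any, Callable, Dict

USER_EFFECT_PARAMS: Dict[str, Dict[str, Any]] = {
    "user_001": {"smoothing": 0.7, "eye_enlarge": 0.2},
    "user_002": {"smoothing": 0.3, "face_slim": 0.5},
}
DEFAULT_PARAMS: Dict[str, Any] = {"smoothing": 0.5}

def effect_params_for_region(face_region,
                             recognize_user: Callable) -> Dict[str, Any]:
    user_id = recognize_user(face_region)  # e.g. embedding + gallery match
    return USER_EFFECT_PARAMS.get(user_id, DEFAULT_PARAMS)
```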
In a possible implementation manner, the obtaining, based on the object feature of each object region, a special effect parameter corresponding to each object region includes:
and acquiring a special effect parameter corresponding to each object characteristic according to a second corresponding relation between the object characteristics and the special effect parameters.
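A minimal sketch of that lookup treats the object feature as a tuple of ratios keying into stored parameters; the rounding step (so that nearly equal measurements share an entry), the precision, and the table contents are all assumptions:

```python
# Second-correspondence lookup: object feature -> special effect parameters.
from typing import Dict, Tuple

SECOND_CORRESPONDENCE: Dict[Tuple[float, float], dict] = {
    (0.42, 1.45): {"face_slim": 0.3, "eye_enlarge": 0.2},
    (0.48, 1.30): {"face_slim": 0.1, "smoothing": 0.6},
}

def params_for_feature(feature: Tuple[float, float], ndigits: int = 2) -> dict:
    # Round so nearly-equal feature measurements map to the same entry.
    key = tuple(round(v, ndigits) for v in feature)
    return SECOND_CORRESPONDENCE.get(key, {})
```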
In a possible implementation manner, before obtaining the special effect parameter corresponding to each object feature according to the second corresponding relationship between the object feature and the special effect parameter, the method further includes:
obtaining special effect parameters corresponding to a plurality of reference object areas with the same object characteristics;
carrying out statistical processing on the obtained multiple special effect parameters to obtain processed special effect parameters;
and establishing a second corresponding relation between the object characteristics and the processed special effect parameters.
In a possible implementation manner, before obtaining the special effect parameter corresponding to each object feature according to the second corresponding relationship between the object feature and the special effect parameter, the method further includes:
responding to a special effect setting operation on any currently displayed object area, and acquiring a special effect parameter set for the object area;
acquiring object features of the object region;
and establishing a second corresponding relation between the object characteristics and the special effect parameters.
In one possible implementation manner, the obtaining, in response to a special effect setting operation on any currently displayed object region, a special effect parameter set for the object region includes:
responding to the triggering operation of the special effect setting option of the object area, and displaying at least one candidate special effect parameter;
and responding to the triggering operation of any candidate special effect parameter, and determining the candidate special effect parameter as the special effect parameter corresponding to the object area.
In a possible implementation manner, after performing special effect processing on each object region according to the special effect parameter corresponding to each object region to obtain a second image, the method further includes:
responding to a special effect setting operation on any object area in the second image, and acquiring a target special effect parameter set for the object area;
and updating the second corresponding relation according to the object characteristics of the object area and the target special effect parameters.
In a possible implementation manner, the performing, according to the special effect parameter corresponding to each object region, a special effect process on each object region to obtain a second image includes:
cropping each said object region from the first image;
for each object area, carrying out special effect processing on the object area by adopting a special effect parameter corresponding to the object area;
and backfilling each object region after the special effect processing into the first image to obtain the second image.
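With axis-aligned regions, the crop/process/backfill loop reduces to array slicing. The sketch below assumes (x, y, w, h) boxes and substitutes a Gaussian blur for the unspecified special effect; `apply_effect`, `params_for`, and the parameter names are placeholders, not APIs defined by the application:

```python
# Crop each object region, apply its per-region effect, backfill the result.
import cv2
import numpy as np

def process_regions(first_image: np.ndarray, regions, params_for) -> np.ndarray:
    second_image = first_image.copy()
    for (x, y, w, h) in regions:
        patch = second_image[y:y + h, x:x + w]      # crop the object region
        params = params_for(patch)                  # per-region parameters
        processed = apply_effect(patch, params)     # special effect step
        second_image[y:y + h, x:x + w] = processed  # backfill into the image
    return second_image

def apply_effect(patch: np.ndarray, params) -> np.ndarray:
    # Placeholder effect: blur strength driven by a per-region parameter.
    k = int(params.get("blur", 3)) | 1              # kernel size must be odd
    return cv2.GaussianBlur(patch, (k, k), 0)
```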
In a possible implementation manner, after the identifying at least one object region in the first image, before the obtaining, based on the object feature of each object region, the special effect parameter corresponding to each object region, the method further includes:
cropping each said object region from the first image;
the performing of special effect processing on each object region according to the special effect parameter corresponding to each object region, to obtain a second image after processing the first image, includes:
for each object area, carrying out special effect processing on the object area by adopting a special effect parameter corresponding to the object area;
and backfilling each object region after the special effect processing into the first image to obtain the second image.
In one possible implementation, after the cropping of each said object region from the first image, the method further includes:
sequentially putting each object region into a buffer queue;
the obtaining of the special effect parameter corresponding to each object region based on the object feature of each object region includes:
extracting an object region from the buffer queue, and acquiring the special effect parameter corresponding to that object region based on its object feature;
and extracting the next object region from the buffer queue, and acquiring the special effect parameter corresponding to that next object region based on its object feature, until the special effect parameter corresponding to the last object region in the buffer queue is acquired.
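The queue-driven variant above is a plain FIFO traversal, sketched here with collections.deque; `object_feature_of` and `params_for_feature` are placeholder callables standing in for the steps described earlier:

```python
# Buffer-queue processing: enqueue cropped regions in order, then fetch the
# special effect parameters one region at a time until the queue is drained.
from collections import deque

def fetch_params_via_queue(cropped_regions, object_feature_of,
                           params_for_feature):
    queue = deque(cropped_regions)       # put each object region in order
    params = []
    while queue:                         # stop after the last region
        region = queue.popleft()         # extract the next object region
        feature = object_feature_of(region)
        params.append(params_for_feature(feature))
    return params
```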
In a possible implementation manner, the first image is an image obtained by shooting in a live broadcast process, and after the special effect processing is performed on each object region according to the special effect parameter corresponding to each object region to obtain a second image, the method further includes at least one of the following:
displaying the second image in a live interface;
and sending the second image to a live broadcast server, which sends the second image to viewer clients watching the live broadcast.
In another aspect, a special effect parameter setting method is provided, and the method includes:
displaying an image, wherein the image comprises at least one object area, and the object area is an area where a target object is located;
displaying the special effect setting identification of each object area;
and responding to the trigger operation of the special effect setting identification of any object area, and acquiring special effect parameters set for the any object area.
In one possible implementation, the target object is a human face, and the method further includes:
when the camera is started, displaying first prompt information, wherein the first prompt information is used to prompt the user to shoot a frontal face;
or displaying second prompt information, wherein the second prompt information is used to prompt the user to shoot the face from multiple directions.
In one possible implementation manner, the displaying the special effect setting identifier of each object region includes:
carrying out object recognition on the image, and determining at least one object area in the image;
displaying at least one recognition frame based on the determined at least one object region, wherein each object region is positioned in the corresponding recognition frame.
In one possible implementation, after the displaying of the at least one recognition frame based on the determined at least one object region, wherein each object region is located within its corresponding recognition frame, the method further includes:
and displaying third prompt information, wherein the third prompt information is used for prompting the trigger operation executed on the identification frame.
In a possible implementation manner, the obtaining, in response to a trigger operation on the special effect setting identifier of any object region, of the special effect parameter set for the any object region includes:
responding to the trigger operation of the special effect setting identification of any object area, and displaying a special effect setting interface, wherein the special effect setting interface is used for displaying at least one candidate special effect parameter;
in response to a trigger operation on any candidate special effect parameter, determining the candidate special effect parameter as the special effect parameter of any object area.
In a possible implementation manner, after the obtaining, in response to the trigger operation on the special effect setting identifier of any object region, of the special effect parameter set for the any object region, the method further includes:
acquiring an object feature of any object region, wherein the object feature indicates the appearance of the target object in any object region;
and establishing a second corresponding relation between the object characteristics and the special effect parameters set for any object region.
In a possible implementation manner, after the establishing of the second corresponding relationship between the object feature and the special effect parameter set for any one of the object regions, the method further includes:
and storing the second corresponding relation in a configuration file.
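One straightforward realization is serializing the correspondence to JSON; the file name and schema below are assumptions, as the application does not specify a configuration format:

```python
# Persisting the second correspondence to a configuration file as JSON.
# Features are serialized to string keys; schema and path are assumptions.
import json

def save_second_correspondence(correspondence: dict,
                               path: str = "effect_config.json") -> None:
    serializable = {str(feature): params
                    for feature, params in correspondence.items()}
    with open(path, "w", encoding="utf-8") as f:
        json.dump(serializable, f, ensure_ascii=False, indent=2)

def load_second_correspondence(path: str = "effect_config.json") -> dict:
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)
```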
In another aspect, there is provided an image processing apparatus, the apparatus including:
the image acquisition module is used for identifying at least one object area in the first image, wherein the object area is an area where a target object is located;
the special effect acquisition module is used for acquiring a special effect parameter corresponding to each object region based on the object feature of each object region, wherein the object feature of each object region indicates the appearance of the target object in the object region;
and the special effect processing module is used for carrying out special effect processing on each object region according to the special effect parameter corresponding to each object region to obtain a second image after processing the first image.
In one possible implementation, the object region is a human face region, and the apparatus further includes:
a key point obtaining module, configured to obtain, for each face region, a plurality of face key points of the face region, where the face key points include at least one of face edge points or face organ edge points;
the first feature acquisition module is configured to acquire a first face feature of the face region according to the positions of the plurality of face key points, where the first face feature indicates relative positions between the plurality of face key points in the face region.
In one possible implementation, the first feature obtaining module is configured to perform at least one of:
acquiring first face sub-features of the face region according to the abscissa of the plurality of face key points, wherein the first face sub-features represent the transverse relative positions of the plurality of face key points in the face;
and acquiring a second face sub-feature of the face region according to the vertical coordinates of the face key points, wherein the second face sub-feature represents the longitudinal relative positions of the face key points in the face.
In one possible implementation manner, the first feature obtaining module includes:
the distance acquisition unit is used for acquiring a first distance between the first face key point and the second face key point, and a second distance between the first face key point and the third face key point, according to the position of the first face key point, the position of the second face key point and the position of the third face key point;
a ratio determination unit configured to determine a ratio of the first distance to the second distance as the first face feature;
the first face key point, the second face key point and the third face key point are any face key points in the plurality of face key points respectively.
In one possible implementation manner, the plurality of face key points include face edge points and face organ edge points, and the first feature obtaining module includes:
the key point selecting unit is used for selecting face key points positioned in a first face subregion or a second face subregion from the plurality of face key points, wherein the first face subregion is a region to which eyes belong, and the second face subregion is a region to which a face central line belongs;
and the first feature acquisition unit is used for acquiring first face features of the face area according to the positions of the selected face key points.
In a possible implementation manner, the key point selecting unit is configured to select, from the plurality of face key points, a first canthus key point, a second canthus key point and a first face edge key point that is at the same height as a lower eyelid key point; or,
the key point selecting unit is used for selecting a first nose bridge key point, a second nose bridge key point and a third nose bridge key point from the face key points.
In one possible implementation, the object region is a human face region, and the apparatus further includes:
the second feature acquisition module is used for identifying each face region to obtain a second face feature of the face region, wherein the second face feature indicates the face shape of the face in the face region.
In a possible implementation manner, the second feature obtaining module is configured to determine a second face feature of the face region based on the face shape parameter of the face region.
In a possible implementation manner, the second feature obtaining module is configured to obtain, according to a first corresponding relationship between a face shape parameter and a face shape feature, the face shape feature corresponding to the face shape parameter of the face region as a second face feature of the face region.
In one possible implementation manner, the second feature obtaining module includes:
a difference value acquisition unit, configured to acquire a difference value between the face shape parameter of the face area and each of the face shape parameters in the first corresponding relationship;
and the second feature acquisition unit is used for acquiring the face feature corresponding to the face shape parameter with the minimum difference value from the first corresponding relation, and the face feature is used as a second face feature of the face region.
In one possible implementation manner, the second feature obtaining module includes:
a difference value obtaining unit, configured to obtain a difference value between a face shape parameter of the face region and any face shape parameter in the first corresponding relationship;
and the second feature acquisition unit is used for acquiring the face feature corresponding to any one face shape parameter as a second face feature of the face region if the difference value does not exceed a reference threshold value.
In one possible implementation, the apparatus further includes:
a sample acquisition module, configured to acquire a plurality of sample object regions having the same face shape feature;
the shape acquisition module is used for acquiring the face shape parameters of each sample object region;
the first statistical module is used for performing statistical processing on the plurality of acquired face shape parameters to obtain processed face shape parameters;
a first establishing module, configured to establish a first corresponding relationship between the processed face shape parameters and the face shape feature.
In one possible implementation, the face shape parameters include at least one of:
the length and width of the human face;
the forehead width and chin width of the human face;
the jaw round-tip coefficient of the human face;
the chin apex coefficient of the human face.
In one possible implementation manner, the object region is a human face region, and the special effect obtaining module includes:
the identification unit is used for calling a face identification model and carrying out identification processing on each face area to obtain a user identifier corresponding to each face area;
and the first special effect acquisition unit is used for acquiring a special effect parameter corresponding to each user identifier based on each acquired user identifier.
In a possible implementation manner, the special effect obtaining module is configured to obtain a special effect parameter corresponding to each object feature according to a second corresponding relationship between the object feature and the special effect parameter.
In one possible implementation, the apparatus further includes:
the reference special effect acquisition module is used for acquiring special effect parameters corresponding to a plurality of reference object regions with the same object characteristics;
the second statistical module is used for performing statistical processing on the obtained multiple special effect parameters to obtain processed special effect parameters;
and the second establishing module is used for establishing a second corresponding relation between the object characteristics and the processed special effect parameters.
In one possible implementation, the apparatus further includes:
the special effect acquisition module is used for responding to the special effect setting operation of any currently displayed object area and acquiring special effect parameters set for the object area;
a third feature obtaining module, configured to obtain an object feature of the object region;
and the third establishing module is used for establishing a second corresponding relation between the object characteristics and the special effect parameters.
In one possible implementation manner, the special effect obtaining module includes:
the display unit is used for responding to the triggering operation of the special effect setting option of the object area and displaying at least one candidate special effect parameter;
and the determining unit is used for responding to the triggering operation of any candidate special effect parameter and determining the candidate special effect parameter as the special effect parameter corresponding to the object area.
In one possible implementation, the apparatus further includes:
the special effect acquisition module is used for responding to a special effect setting operation on any object area in the second image and acquiring a target special effect parameter set for the object area;
and the updating module is used for updating the second corresponding relation according to the object characteristics of the object area and the target special effect parameters.
In one possible implementation manner, the special effect processing module includes:
a cropping unit, configured to crop each of the object regions from the first image;
the special effect processing unit is used for carrying out special effect processing on each object area by adopting the special effect parameters corresponding to the object areas;
and the backfilling unit is used for backfilling each object region after the special effect processing into the first image to obtain the second image.
In one possible implementation, the apparatus further includes:
a cropping module for cropping said each object region from said first image;
the special effect processing module comprises:
the special effect processing unit is used for carrying out special effect processing on each object area by adopting the special effect parameters corresponding to the object areas;
and the backfilling unit is used for backfilling each object region after the special effect processing into the first image to obtain the second image.
In one possible implementation, the apparatus further includes:
the cache module is used for sequentially placing each object region into a cache queue;
the special effect acquisition module is used for extracting an object region from the cache queue and acquiring a special effect parameter corresponding to the object region based on the object characteristics of the object region; and extracting a next object region from the cache queue, and acquiring the special effect parameters corresponding to the next object region based on the object characteristics of the next object region until the special effect parameters corresponding to the last object region in the cache queue are acquired.
In a possible implementation manner, the first image is an image captured during live broadcasting, and the apparatus further includes at least one of the following modules:
the display module is used for displaying the second image in a live interface;
and the sending module is used for sending the second image to a live broadcast server, and the live broadcast server sends the second image to a viewer client for watching live broadcast.
In another aspect, a special effect parameter setting apparatus is provided, the apparatus including:
the display module is used for displaying an image, wherein the image comprises at least one object area, and the object area is an area where a target object is located;
the display module is also used for displaying the special effect setting identification of each object area;
and the parameter acquisition module is used for responding to the trigger operation of the special effect setting identifier of any object area and acquiring the special effect parameters set for the any object area.
In a possible implementation manner, the target object is a human face, and the display module is configured to display first prompt information when the camera is started, wherein the first prompt information is used to prompt the user to shoot a frontal face; or to display second prompt information, wherein the second prompt information is used to prompt the user to shoot the face from multiple directions.
In a possible implementation manner, the display module is configured to perform object recognition on the image, and determine at least one object region in the image; displaying at least one recognition frame based on the determined at least one object region, wherein each object region is positioned in the corresponding recognition frame.
In a possible implementation manner, the display module is further configured to display third prompt information, where the third prompt information is used to prompt a trigger operation performed on the recognition box.
In a possible implementation manner, the parameter obtaining module is configured to display a special effect setting interface in response to a trigger operation on a special effect setting identifier of any one of the object regions, where the special effect setting interface is configured to display at least one candidate special effect parameter; in response to a trigger operation on any candidate special effect parameter, determining the candidate special effect parameter as the special effect parameter of any object area.
In one possible implementation, the apparatus further includes:
a feature obtaining module, configured to obtain the object feature of the any object region, wherein the object feature indicates the appearance of the target object in the any object region;
and the establishing module is used for establishing a second corresponding relation between the object characteristics and the special effect parameters set for any object area.
In one possible implementation, the apparatus further includes:
and the storage module is used for storing the second corresponding relation in a configuration file.
In another aspect, a computer device is provided, which includes a processor and a memory, wherein the memory stores at least one program code, and the at least one program code is loaded and executed by the processor to implement the operations performed in the image processing method according to the above aspect; or to implement the operations performed in the special effects parameter setting method as described in the above aspect.
In another aspect, there is provided a computer-readable storage medium having at least one program code stored therein, the at least one program code being loaded and executed by a processor to implement the operations performed in the image processing method according to the above aspect; or to implement the operations performed in the special effects parameter setting method as described in the above aspect.
In yet another aspect, a computer program product or a computer program is provided. The computer program product or the computer program comprises computer program code stored in a computer-readable storage medium. A processor of a computer device reads the computer program code from the computer-readable storage medium and executes it, causing the computer device to implement the operations performed in the image processing method according to the above aspect; or causing the computer device to implement the operations performed in the special effect parameter setting method according to the above aspect.
According to the image processing method, apparatus, device and medium provided by the embodiments of the application, when an image undergoes special effect processing, the special effect parameter of each target object in the image can be obtained according to the appearance of that target object, and the special effect processing is performed on the target object with that parameter. Special effect processing thus follows each target object's appearance, personalized special effects can be added to different target objects in the same image, and the flexibility of special effect processing is improved.
The special effect parameter setting method, apparatus, device and medium provide users with special effect setting identifiers for setting the special effect parameters of different object regions. A special effect can be set for any object region in an image through a trigger operation on its special effect setting identifier, so that personalized special effects are added to different object regions and the flexibility of special effect processing is improved.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application; those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic structural diagram of an implementation environment provided in an embodiment of the present application.
Fig. 2 is a flowchart of an image processing method according to an embodiment of the present application.
Fig. 3 is a flowchart of an image processing method according to an embodiment of the present application.
Fig. 4 is a schematic diagram of a face key point according to an embodiment of the present application.
Fig. 5 is a schematic diagram of a face key point according to an embodiment of the present application.
Fig. 6 is a flowchart of an image processing method according to an embodiment of the present application.
Fig. 7 is a schematic view of a face shape according to an embodiment of the present application.
Fig. 8 is a schematic view of a human face according to an embodiment of the present application.
Fig. 9 is a schematic view of another human face provided in the embodiment of the present application.
Fig. 10 is a schematic view of another human face provided in the embodiment of the present application.
Fig. 11 is a schematic view of another human face provided in the embodiment of the present application.
Fig. 12 is a flowchart of a special effect parameter setting method according to an embodiment of the present application.
Fig. 13 is a flowchart of a special effect parameter setting method according to an embodiment of the present application.
Fig. 14 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Fig. 15 is a schematic structural diagram of another image processing apparatus according to an embodiment of the present application.
Fig. 16 is a schematic structural diagram of a special effect parameter setting device according to an embodiment of the present application.
Fig. 17 is a schematic structural diagram of another special effect parameter setting device according to an embodiment of the present application.
Fig. 18 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Fig. 19 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application more clear, the embodiments of the present application will be further described in detail with reference to the accompanying drawings.
It will be understood that the terms "first," "second," "third," "fourth," "fifth," "sixth," and the like, as used herein, may be used to describe various concepts, but the concepts are not limited by these terms unless otherwise specified. These terms are only used to distinguish one concept from another. For example, a first face key point may be referred to as a second face key point, and a second face key point may be referred to as a first face key point, without departing from the scope of the present application.
As used herein, "at least one" includes one, two, or more than two; "a plurality" includes two or more than two; "each" refers to every one of the corresponding plurality; and "any" refers to any one of the plurality. For example, if a plurality of face key points includes 3 face key points, "each face key point" refers to every one of the 3 face key points, and "any face key point" refers to any one of the 3 face key points, which may be the first face key point, the second face key point, or the third face key point.
The image processing method provided by the embodiments of the application is applied to a computer device. In one possible implementation, the computer device is a terminal, e.g., a mobile phone, a tablet, or a computer. In another possible implementation, the computer device is a server: a single server, a server cluster composed of multiple servers, or a cloud computing service center. In another possible implementation, the computer device includes a terminal and a server.
Fig. 1 is a schematic diagram of an implementation environment provided by an embodiment of the present application. Referring to fig. 1, the implementation environment includes at least one terminal 101 and a server 102. The terminal 101 and the server 102 are connected via a wireless or wired network.
The terminal 101 has installed thereon a target application served by the server 102, through which the terminal 101 can implement functions such as data transmission and message interaction. Optionally, the target application is a target application in the operating system of the terminal 101, or one provided by a third party. For example, the target application is an image processing application with image processing functions, such as performing beauty processing on an image or adding special effect marks. Of course, the image processing application can also have other functions, such as an image sharing function, a comment function, and the like. Optionally, the target application is any image processing application such as a live streaming application or a special effect processing application.
Optionally, the terminal 101 is configured to log in to the target application based on a user identifier and upload a first image to the server 102 through the target application; the server 102 performs special effect processing on the first image to obtain a second image and returns the second image to the terminal 101. Optionally, the terminal 101 displays the second image after receiving it.
Optionally, the target application is a live streaming application; the terminal 101 includes an anchor terminal 101 and a viewer terminal 101. The anchor terminal 101 obtains a first image obtained by shooting, performs special effect processing on the first image to obtain a second image, and sends the second image to the live broadcast server 102, which sends the second image to the viewer terminals 101 watching the live broadcast.
The image processing method provided by the embodiments of the application can be applied to various image processing scenarios, for example:
For example, in a live streaming scenario.
During live streaming, the anchor client obtains a first image obtained by shooting, performs special effect processing on the first image using the image processing method provided by the embodiments of the application to obtain a second image, and sends the second image to the live broadcast server, which sends it to the viewer clients watching the live broadcast.
For example, in a photographing scenario.
The terminal obtains the captured first image through a beauty application, and performs special effect processing on the first image using the image processing method provided by the embodiments of the application to obtain a second image. The terminal displays the second image and, if a save instruction is received, stores the second image locally.
It should be noted that the embodiments of the present application describe image processing scenarios by taking the live streaming scenario and the photographing scenario as examples, without limiting the image processing scenario; optionally, the image processing method provided in the embodiments of the present application can also be applied to any other image processing scenario, such as film post-production, video recording, and the like.
Fig. 2 is a flowchart of an image processing method according to an embodiment of the present application. The execution subject of the embodiment of the application is computer equipment. Referring to fig. 2, the method includes:
201. At least one object region in the first image is identified, wherein the object region is a region where the target object is located.
202. A special effect parameter corresponding to each object region is acquired based on the object feature of each object region, wherein the object feature of each object region indicates the appearance of the target object in the object region.
203. Special effect processing is performed on each object region according to the special effect parameter corresponding to each object region to obtain a second image.
According to the image processing method provided by the embodiments of the application, when an image undergoes special effect processing, the special effect parameter of each target object in the image can be obtained according to the appearance of that target object, and the special effect processing is performed on the target object with that parameter. Special effect processing thus follows each target object's appearance, personalized special effects can be added to different target objects in the same image, and the flexibility of special effect processing is improved.
In a possible implementation manner, the object regions are human face regions, and before obtaining the special effect parameter corresponding to each object region based on the object feature of each object region, the method further includes:
for each face region, acquiring a plurality of face key points of the face region, wherein the face key points comprise at least one of face edge points or face organ edge points;
according to the positions of the face key points, first face features of the face area are obtained, and the first face features indicate the relative positions of the face key points in the face area.
In one possible implementation manner, the obtaining the first face feature of the face region according to the positions of the plurality of face key points includes at least one of the following:
acquiring first face sub-features of a face area according to the horizontal coordinates of the face key points, wherein the first face sub-features represent the horizontal relative positions of the face key points;
and acquiring a second face sub-feature of the face region according to the vertical coordinates of the face key points, wherein the second face sub-feature represents the vertical relative position of the face key points.
In one possible implementation manner, obtaining a first face feature of a face region according to positions of a plurality of face key points includes:
acquiring a first distance between the first face key point and the second face key point and a second distance between the first face key point and the third face key point according to the position of the first face key point, the position of the second face key point and the position of the third face key point;
determining the ratio of the first distance to the second distance as a first face feature;
the first face key point, the second face key point and the third face key point are any face key points in the plurality of face key points respectively.
In one possible implementation manner, the obtaining a first face feature of the face region according to the positions of the plurality of face key points includes:
selecting, from the plurality of face key points, face key points located in a first face subregion or a second face subregion, wherein the first face subregion is the region containing the eyes, and the second face subregion is the region containing the face midline;
and acquiring the first face features of the face area according to the positions of the selected face key points.
In one possible implementation, selecting a face keypoint located in the first face subregion or the second face subregion from a plurality of face keypoints includes:
selecting, from the plurality of face key points, a first canthus key point, a second canthus key point and a first face edge key point that is at the same height as a lower eyelid key point; or,
and selecting a first nose bridge key point, a second nose bridge key point and a third nose bridge key point from the plurality of face key points.
In a possible implementation manner, the object regions are human face regions, and before obtaining the special effect parameter corresponding to each object region based on the object feature of each object region, the method further includes:
and for each face area, carrying out recognition processing on the face area to obtain a second face characteristic of the face area, wherein the second face characteristic indicates the face shape of the face in the face area.
In a possible implementation manner, the recognizing the face region to obtain the second face feature of the face region includes:
and determining a second face characteristic of the face area based on the face shape parameter of the face area.
In one possible implementation manner, determining a second face feature of the face region based on the face shape parameter of the face region includes:
and acquiring the face shape characteristic corresponding to the face shape parameter of the face area according to the first corresponding relation between the face shape parameter and the face shape characteristic, and taking the face shape characteristic as a second face characteristic of the face area.
In one possible implementation manner, obtaining a face shape feature corresponding to the face shape parameter of the face region according to a first corresponding relationship between the face shape parameter and the face shape feature, as a second face feature of the face region, includes:
acquiring a difference value between the face shape parameter of the face area and each face shape parameter in the first corresponding relation;
and acquiring the face shape characteristic corresponding to the face shape parameter with the minimum difference value from the first corresponding relation, and taking the face shape characteristic as a second face characteristic of the face region.
In one possible implementation manner, obtaining a face shape feature corresponding to the face shape parameter of the face region according to a first corresponding relationship between the face shape parameter and the face shape feature, as a second face feature of the face region, includes:
acquiring a difference value between the face shape parameter of the face area and any face shape parameter in the first corresponding relation;
and if the difference value does not exceed the reference threshold value, acquiring the face shape characteristic corresponding to any face shape parameter as a second face characteristic of the face area.
In one possible implementation manner, before obtaining the face shape feature corresponding to the face shape parameter of the face region according to the first corresponding relationship between the face shape parameter and the face shape feature, as the second face feature of the face region, the method further includes:
acquiring a plurality of sample object regions with the same face shape feature;
acquiring the face shape parameters of each sample object region;
carrying out statistical processing on the obtained multiple face shape parameters to obtain processed face shape parameters;
and establishing a first corresponding relation between the processed face shape parameters and the face shape feature.
In one possible implementation, the face shape parameters include at least one of:
the length and width of the face;
forehead width and chin width of a human face;
the jaw round-tip coefficient of the human face;
the chin round-tip coefficient of the human face.
In one possible implementation manner, the obtaining of the special effect parameter corresponding to each object region based on the object feature of each object region includes:
calling a face recognition model, and carrying out recognition processing on each face area to obtain a user identifier corresponding to each face area;
and acquiring a special effect parameter corresponding to each user identifier based on each acquired user identifier.
In one possible implementation manner, acquiring a special effect parameter corresponding to each object region based on an object feature of each object region includes:
and acquiring a special effect parameter corresponding to each object characteristic according to the second corresponding relation between the object characteristics and the special effect parameters.
In a possible implementation manner, before obtaining a special effect parameter corresponding to each object feature according to the second corresponding relationship between the object feature and the special effect parameter, the method further includes:
obtaining special effect parameters corresponding to a plurality of reference object areas with the same object characteristics;
carrying out statistical processing on the obtained multiple special effect parameters to obtain processed special effect parameters;
and establishing a second corresponding relation between the object characteristics and the processed special effect parameters.
In a possible implementation manner, before obtaining a special effect parameter corresponding to each object feature according to the second corresponding relationship between the object feature and the special effect parameter, the method further includes:
responding to the special effect setting operation of any currently displayed object area, and acquiring special effect parameters set for the object area;
acquiring object characteristics of an object area;
and establishing a second corresponding relation between the object characteristics and the special effect parameters.
In one possible implementation manner, in response to a special effect setting operation on any currently displayed object region, obtaining a special effect parameter set for the object region includes:
displaying at least one candidate special effect parameter in response to a trigger operation on a special effect setting option of the object region;
and responding to the trigger operation of any candidate special effect parameter, and determining the candidate special effect parameter as the special effect parameter corresponding to the object area.
In a possible implementation manner, after performing special effect processing on each object region according to the special effect parameter corresponding to each object region to obtain a second image, the method further includes:
responding to a special effect setting operation of any object area in the second image, and acquiring a target special effect parameter set for the object area;
and updating the second corresponding relation according to the object characteristics of the object area and the target special effect parameters.
In a possible implementation manner, performing special effect processing on each object region according to a special effect parameter corresponding to each object region to obtain a second image includes:
cutting out each object area from the first image;
for each object area, carrying out special effect processing on the object area by adopting special effect parameters corresponding to the object area;
and backfilling each object region after the special effect processing into the first image to obtain a second image.
In a possible implementation manner, after identifying at least one object region in the first image, before acquiring a special effect parameter corresponding to each object region based on an object feature of each object region, the method further includes:
cutting out each object area from the first image;
according to the special effect parameter corresponding to each object area, carrying out special effect processing on each object area to obtain a second image, wherein the method comprises the following steps:
for each object area, carrying out special effect processing on the object area by adopting special effect parameters corresponding to the object area;
and backfilling each object region after the special effect processing into the first image to obtain a second image.
In one possible implementation, after cropping each object region from the first image, the method further comprises:
sequentially putting each object region into a cache queue;
acquiring a special effect parameter corresponding to each object region based on the object feature of each object region, wherein the special effect parameter comprises the following steps:
extracting an object region from the cache queue, and acquiring a special effect parameter corresponding to the object region based on the object characteristics of the object region;
and extracting a next object region from the cache queue, and acquiring a special effect parameter corresponding to the next object region based on the object characteristics of the next object region until acquiring a special effect parameter corresponding to the last object region in the cache queue.
In a possible implementation manner, the first image is an image obtained by shooting in a live broadcast process, and for each object region, a special effect parameter matched with an object feature of the object region is used to perform special effect processing on the object region, so as to obtain the second image, and after that, the method further includes at least one of the following steps:
displaying a second image in a live interface;
and sending the second image to a live broadcast server, and sending the second image to a viewer client for watching the live broadcast by the live broadcast server.
The image processing method provided by the embodiment of the present application is suitable for performing special effect processing on a target object in a first image. The target object is any object such as a human face, an animal, or a plant, and is not limited in the embodiment of the present application. The following takes the target object as a human face as an example to exemplarily describe the image processing method provided by the embodiment of the present application:
fig. 3 is a flowchart of an image processing method according to an embodiment of the present application. Referring to fig. 3, the method is applied to a computer device, and the method comprises the following steps:
301. at least one face region in the first image is identified.
The first image comprises at least one face area, namely the first image comprises at least one face.
Optionally, the first image is an image which is currently acquired by the terminal and comprises at least one face area; or the first image is an image which is stored in the terminal and comprises at least one face area; alternatively, the first image is an image acquired by other means.
Optionally, the first image is an image acquired by the target application. The target application has a special effect processing function, and optionally, the target application is a live broadcast application, a special effect processing application, or other applications. For example, the target application is a live broadcast application, and the first image is an image captured in a live broadcast process, where the first image includes a plurality of anchor broadcasts.
Optionally, identifying at least one face region in the first image comprises: and performing face recognition on the first image, and determining at least one face area in the first image.
302. And acquiring a plurality of face key points of each face area.
The plurality of face key points of the face region are a plurality of face key points of the face in the face region. Wherein the face key points comprise at least one of face edge points or face organ edge points. Optionally, the number of face key points of a face is 5, 21, 49, 68, or 100, and the like, and the number of face key points of each face is not limited in the present application. As shown in fig. 4, the number of face keypoints is 68.
In a possible implementation manner, the face region is determined by detecting face key points in the first image, for example, the face key points in the first image are detected to obtain a plurality of face key points in the first image, and a region where the plurality of face key points belonging to the same face are located is determined as the face region. Then, when each face region is processed, since a plurality of face key points of each face region are obtained when the face region is determined, a plurality of face key points of each face region can be directly obtained.
In another possible implementation manner, the face region is determined by other methods, for example, a face recognition model is used to perform recognition processing on the first image to determine a face region where each face in the first image is located, and the face recognition model is used to recognize the face in the first image. And then, carrying out face key point detection on each face area to obtain a plurality of face key points of each face area.
Optionally, a face key point detection algorithm is adopted to detect face key points; or, detecting the face key points by adopting a face key point detection model; alternatively, the detection is performed in other ways.
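As an illustration only (the embodiment does not prescribe a particular detector or model), the following is a minimal sketch of face key point detection using the publicly available dlib 68-landmark model; the model file name and the upsampling setting are assumptions:

```python
# A sketch of face key point detection, assuming the dlib library and its
# pre-trained 68-landmark model file are available locally.
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_face_keypoints(image):
    """Return a list of 68 (x, y) face key points for each face region found."""
    faces = detector(image, 1)  # upsample once to help find smaller faces
    keypoints_per_face = []
    for face in faces:
        shape = predictor(image, face)
        keypoints_per_face.append(
            [(shape.part(i).x, shape.part(i).y) for i in range(68)])
    return keypoints_per_face
```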
303. And acquiring a first face feature of the face region according to the positions of the face key points, wherein the first face feature indicates the relative positions of the face key points of the face in the face region.
The shape of different faces may be different, wherein the different shapes of faces include: the shape of the face is different, the shape of the face organ is different, the relative position of the face organ is different, or the relative position of the face organ and the edge of the face is different, and the difference can be represented by the relative position among a plurality of face key points of the face. For example, the shapes of human faces are different, and the relative positions of a plurality of face edge points of the human faces are also different; the shapes of the facial organs are different, and the relative positions of the edge points of the facial organs are also different. Therefore, according to the positions of the plurality of face key points, the obtained first face features can better distinguish faces with different appearances.
When an image is shot through the camera device, if the distance between the face and the camera device is short, the distances between the face key points in the shot image are long, and if the distance between the face and the camera device is long, the distances between the face key points in the shot image are short. But the distance between the face and the camera does not influence the ratio of the distances between the key points of the face. In order to avoid that the distance between the human face and the camera device affects the relative positions of the plurality of human face key points, in one possible implementation, the relative positions of the plurality of human face key points at least include the relative positions of 3 human face key points, and the relative positions of at least 3 human face key points are determined by the ratio of the distances between the at least 3 human face key points.
For example, the relative position between the plurality of face key points is the relative position between 3 face key points. According to the positions of a plurality of face key points, acquiring first face features of a face region, and the method comprises the following steps: acquiring a first distance between the first face key point and the second face key point and a second distance between the first face key point and the third face key point according to the position of the first face key point, the position of the second face key point and the position of the third face key point; determining the ratio of the first distance to the second distance as a first face feature; the first face key point, the second face key point and the third face key point are any face key points in the plurality of face key points respectively.
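As a minimal sketch of the computation just described (the three key point positions passed in are arbitrary, since each may be any of the plurality of face key points):

```python
import math

def first_face_feature(first_kp, second_kp, third_kp):
    """Each argument is the (x, y) position of one face key point.

    Returns the ratio of the first distance (first to second key point)
    to the second distance (first to third key point); this ratio is not
    affected by the distance between the face and the camera device."""
    first_distance = math.dist(first_kp, second_kp)
    second_distance = math.dist(first_kp, third_kp)
    return first_distance / second_distance
```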
It should be noted that obtaining the first face features of the face region according to the positions of 3 face key points is only one example of obtaining the first face features according to the positions of a plurality of face key points. In another embodiment, the first face features of the face region are obtained according to the positions of 4 face key points, or according to the positions of 6 face key points. The number of face key points is not limited in the embodiment of the application.
In addition, since the relative positions include a horizontal relative position and a vertical relative position, the relative positions of the plurality of face key points include: at least one of a lateral relative position or a longitudinal relative position of the plurality of face keypoints. In one possible implementation manner, the obtaining the first face feature of the face region according to the positions of the plurality of face key points includes at least one of the following:
(1) acquiring a first face sub-feature of the face region according to the abscissas of the plurality of face key points, wherein the first face sub-feature represents the transverse relative positions of the plurality of face key points in the face;
(2) acquiring a second face sub-feature of the face region according to the ordinates of the plurality of face key points, wherein the second face sub-feature represents the longitudinal relative positions of the plurality of face key points.
Note that the first face feature is described above only by way of example as including a first face sub-feature and a second face sub-feature; the numbers of first face sub-features and second face sub-features are not limited in the embodiments of the present application, that is, the first face feature includes at least one first face sub-feature and at least one second face sub-feature.
For example, according to the abscissa of the key points 1, 2 and 3 of the human face, obtaining a first human face sub-feature 1 of the human face region; and acquiring a first face sub-feature 2 of the face region according to the abscissa of the face key points 4, 5 and 6. The first face features include a first face sub-feature 1 and a first face sub-feature 2.
The face in the first image may not be a front face but a side face. If the face in the first image is a side face, some face key points may be undetectable; for example, if the face in the first image is an anchor's left side face, the computer device may not be able to detect the face key points of the anchor's right side face. However, whether the face in the first image is a left side face or a right side face, it includes an eye and the nose bridge: the left side face includes the left eye, the right side face includes the right eye, and both include the nose bridge. Since the nose bridge is located on the face midline, the face key points on the face midline can be detected no matter whether a left side face or a right side face is detected.
In order to reduce the influence of the side face on the acquisition of the first face feature and improve the accuracy of the acquisition of the first face feature of the face region, the face key points of the region to which the eyes belong or the face key points of the region to which the center line of the face belongs are adopted in the embodiment of the application to acquire the first face feature of the face region.
In one possible implementation manner, the obtaining a first face feature of the face region according to the positions of the plurality of face key points includes: selecting face key points positioned in a first face subregion or a second face subregion from the plurality of face key points, wherein the first face subregion is a region to which eyes belong, and the second face subregion is a region to which a face central line belongs; and acquiring the first face features of the face area according to the positions of the selected face key points.
It should be noted that obtaining the first face features of the face region according to the positions of a plurality of face key points in the region to which the eyes belong, or according to the positions of a plurality of face key points in the region to which the face centerline belongs, is only an example. In another embodiment, the computer device obtaining the first face features of the face region according to the positions of the plurality of face key points includes: selecting first face key points located in the first face sub-region from the plurality of face key points, and selecting second face key points located in the second face sub-region from the plurality of face key points; and acquiring the first face features of the face region according to the positions of the selected first face key points and the positions of the selected second face key points. The number of selected first face key points and the number of selected second face key points are each any integer greater than or equal to 1, and neither number is limited in the embodiment of the application.
Optionally, selecting a face keypoint located in the first face subregion or the second face subregion from the plurality of face keypoints includes: selecting a first canthus key point, a second canthus key point and a first face edge key point which is at the same height with a lower eyelid key point from a plurality of face key points; or selecting a first nose bridge key point, a second nose bridge key point and a third nose bridge key point from the plurality of face key points.
The first canthus key point and the second canthus key point belong to the same eye, or belong to different eyes. In order to acquire the first face feature of the face region more accurately, optionally, the first canthus key point and the second canthus key point belong to the same eye, and the first face edge key point at the same height as the lower eyelid key point is located on the same side as the first canthus key point and the second canthus key point; that is, if the first canthus key point and the second canthus key point belong to the left eye, the first face edge key point is a left face edge point, and if they belong to the right eye, the first face edge key point is a right face edge point.
It should be noted that, in the embodiment of the present application, only the first corner key point, the second corner key point, and the first face edge key point are taken as examples to exemplarily describe the face key points selected from the first face sub-region, and in another embodiment, the face key points selected from the first face sub-region are the first corner key point, the second corner key point, and the second face edge key point at the same height as the upper eyelid key point; or the face key points selected from the first face sub-region are the first canthus key point, the pupil key point and the second canthus key point. The embodiment of the application does not limit the key points of the face selected from the first face subregion.
For example, the face key points are selected from the 9 face key points A to I shown in fig. 5. If the face in the image is a right side face, the 6 face key points A, B, C, G, H, and I are taken, and r1 = AB/BC and r2 = GH/HI are taken as the first face features of the face region. Here AB is the lateral distance between face key point A and face key point B, BC is the lateral distance between face key point B and face key point C, and r1, the ratio of AB to BC, represents a first face sub-feature of the face region. GH is the longitudinal distance between face key point G and face key point H, HI is the longitudinal distance between face key point H and face key point I, and r2, the ratio of GH to HI, represents a second face sub-feature of the face region.
If the face in the image is a left side face, the 6 face key points D, E, F, G, H, and I are taken, and r1 = EF/DE and r2 = GH/HI are taken as the first face features of the face region, where EF is the lateral distance between face key point E and face key point F, and DE is the lateral distance between face key point D and face key point E.
If the face in the image is a front face, the 6 face key points A, B, C, G, H, and I are taken; alternatively, the 6 face key points D, E, F, G, H, and I are taken; or all 9 face key points A to I are taken. If the 9 face key points A to I are selected, r1 = AB/BC, r2 = GH/HI, and r3 = EF/DE are taken as the first face features of the face region.
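A sketch of this selection logic under the labelling of fig. 5; the dictionary-based interface and the pose argument are assumptions for illustration:

```python
def eye_and_nose_features(pts, pose):
    """pts maps the labels 'A'..'I' of fig. 5 to (x, y) positions;
    pose is 'left', 'right', or 'front'.

    Lateral distances use x coordinates and longitudinal distances use y,
    matching r1 = AB/BC and r2 = GH/HI above."""
    def dx(a, b):  # lateral distance between two labelled key points
        return abs(pts[a][0] - pts[b][0])

    def dy(a, b):  # longitudinal distance between two labelled key points
        return abs(pts[a][1] - pts[b][1])

    r2 = dy('G', 'H') / dy('H', 'I')  # nose-bridge ratio, visible in any pose
    if pose == 'right':
        return {'r1': dx('A', 'B') / dx('B', 'C'), 'r2': r2}
    if pose == 'left':
        return {'r1': dx('E', 'F') / dx('D', 'E'), 'r2': r2}
    # front face: both eye-side ratios are available
    return {'r1': dx('A', 'B') / dx('B', 'C'),
            'r2': r2,
            'r3': dx('E', 'F') / dx('D', 'E')}
```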
Whether the face is a front face or a side face, and whether or not the face is making an expression, the 9 face key points A to I are relatively fixed, so the face sub-features computed from them are a reliable basis for distinguishing the faces in an image: the faces can be distinguished accurately, and the processing amount required for distinguishing them is small.
304. And acquiring special effect parameters corresponding to each face area based on the first face characteristics of each face area.
The special effect parameter is used to indicate a special effect added to the face, and optionally, the special effect parameter is a special effect name, a rendering parameter used for special effect processing, and the like. For example, the special effect parameter is a face-thinning special effect, or is a face-thinning special effect with a face-thinning grade of 7, and the like, where the face-thinning grade indicates a degree of change between the face processed by the face-thinning special effect and the real face. Alternatively, the higher the lean face level, the higher the degree of change.
And (3) carrying out special effect processing on the face region in the image by adopting a certain special effect parameter, namely adding a corresponding special effect to the face region in the image. Optionally, the special effect is any one of a face-thinning special effect, a whitening special effect, a filter special effect, an added special effect mark and the like. For example, the face thinning special effect parameter is used to perform special effect processing on the initial face region to obtain a target face region, and the face in the target face region is thinner than the face cheek in the initial face region.
The shapes of different faces may differ, and if the same special effect is added to every face, the special effect may suit some of the faces but not others. For example, in a live broadcast scene, three anchors are live in one live broadcast room; two of the anchors have small eyes and one has large eyes. If a large-eye special effect is added to all three anchors, the effect on the large-eyed anchor in the live picture will be poor.
The image processing method provided by the embodiment of the application can set special effect parameters for each face, and when image processing is carried out, the special effect parameters set for the face in the face area are determined according to the first face characteristics of the face area.
Optionally, the computer device stores a second correspondence between the first face feature and the special effect parameter, and obtains the special effect parameter corresponding to each face region based on the first face feature of each face region, including: and acquiring a special effect parameter corresponding to each first face characteristic according to a second corresponding relation between the first face characteristic and the special effect parameter.
The following is a description of the process of establishing the second corresponding relationship and the process of obtaining the special effect parameter from the second corresponding relationship, respectively:
(1) and the establishment process of the second corresponding relation:
and the second corresponding relation is established by the computer equipment according to the operation of setting special effect parameters for the face area by the user. Before processing the first image, the user can set different special effect parameters for each face region.
Optionally, the computer device can display any face region for the user to set special effect parameters for the face region. In one possible implementation manner, in response to a special effect setting operation on any currently displayed face region, the computer device acquires a special effect parameter set for the face region; acquiring a first face feature of the face region; and establishing a second corresponding relation between the first face characteristic and the special effect parameter.
After the user sets the special effect parameters for any face region, the computer device establishes a second corresponding relationship between the first face features of the face region and the special effect parameters, so that the subsequent computer device can automatically perform special effect processing on any face region with the first face features according to the second corresponding relationship.
For example, in a live broadcast scene, after the anchor sets a special effect parameter for the face of the anchor, in the live broadcast process, the live broadcast client can perform special effect processing on the face of the anchor in each shot image according to the special effect parameter.
In addition, the image processing method provided by the embodiment of the application can set special effect parameters for different face regions in the same image, optionally, the computer device provides corresponding special effect setting options for each face region, and can set special effect parameters for different face regions by triggering different special effect setting options. The trigger operation is any one or a combination of multiple operations such as a click operation, a double click operation, a slide operation, and the like, and the trigger operation is not limited in the embodiment of the present application.
Optionally, the obtaining, by the computer device, special effect parameters set for any currently displayed face region in response to a special effect setting operation on the face region includes: the computer equipment responds to the triggering operation of the special effect setting option of the face area and displays at least one candidate special effect parameter; and responding to the triggering operation of any candidate special effect parameter, and determining the candidate special effect parameter as the special effect parameter corresponding to the face area.
For example, in a live broadcast scene, before going live, the anchor client calls the camera device to shoot an image containing the anchors, identifies the shot image to determine whether it includes faces, displays the shot image in the current interface, and displays a plurality of boxes in the image, where each box encloses the face area where an anchor's face is located and indicates that the face has been recognized. An anchor triggers the box of any face area, a special effect setting interface pops up displaying a plurality of special effects, and the anchor sets at least one special effect for that face area in the special effect setting interface. After the setting is finished, a special effect can be set for the next face area, until the last face area has been given a special effect. The live broadcast client records the special effect parameters of each anchor based on the first face feature of that anchor.
Optionally, before displaying at least one candidate special effect parameter in response to a triggering operation on the special effect setting option of the face region, the method further includes: displaying special effect setting reminder information, where the reminder information is used to remind the user to perform special effect setting. Optionally, the reminder information also indicates the specific way to perform the special effect setting. For example, after the anchor client displays a shot image in the current interface and displays a plurality of boxes in the image, it displays the reminder message "please double-click a box to perform special effect setting".
Optionally, the second corresponding relationship is established by the computer device according to an operation of a reference user setting special effect parameters for a face region. A user may be unable to determine which special effect suits his or her appearance, so the special effect the user sets may not achieve a good result; a reference user is a user who sets special effect parameters well. For example, in a live broadcast scene, the reference user is an anchor with high popularity. By obtaining the special effect parameters set by the reference user to process images, the embodiment of the application gives the processed images a good special effect.
In another possible implementation manner, special effect parameters corresponding to a plurality of reference face regions with the same first face features are obtained; carrying out statistical processing on the obtained multiple special effect parameters to obtain processed special effect parameters; and establishing a second corresponding relation between the first face characteristic and the processed special effect parameter. Optionally, the statistical process is an averaging process.
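A minimal sketch of this averaging step, assuming each reference face region is represented by a hashable first-face-feature tuple paired with a dict of numeric special effect parameters:

```python
from statistics import mean

def build_second_correspondence(reference_regions):
    """reference_regions: list of (first_face_feature, effect_params) pairs,
    where effect_params is a dict of numeric parameters sharing the same keys
    for regions with the same feature. Returns {feature: averaged params}."""
    grouped = {}
    for feature, params in reference_regions:
        grouped.setdefault(feature, []).append(params)
    return {feature: {key: mean(p[key] for p in param_list)
                      for key in param_list[0]}
            for feature, param_list in grouped.items()}
```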
For example, in a live broadcast scene, the live broadcast server periodically acquires special effect parameters used by the high popularity anchor, and establishes a second correspondence or updates the second correspondence according to the first face characteristic value of the high popularity anchor and the used special effect parameters.
It should be noted that the special effect parameter in the second corresponding relationship may also be a parameter set by default by the computer device; the embodiment of the present application does not limit the source of the special effect parameters in the second corresponding relationship. In a possible implementation manner, the special effect parameters in the second corresponding relationship include special effect parameters set by a reference user, special effect parameters set by the local-end user, and default special effect parameters of the device. If the special effect parameters corresponding to the face feature include special effect parameters set by the local-end user, those are used preferentially for the special effect processing; if not, the special effect parameters set by the reference user are used preferentially; and if neither the local-end user's nor the reference user's special effect parameters are included, the default special effect parameters of the device are used.
It should be noted that the special effect parameter corresponding to the first face feature in the second corresponding relationship is a parameter set, and the parameter set includes a special effect parameter of at least one special effect, so that after the special effect parameter is obtained from the second corresponding relationship, the special effect processing is performed on the face region according to the obtained special effect parameter, and at least one special effect can be added to the face in the face region.
(2) A process of obtaining special effect parameters from the second corresponding relationship:
if the similarity between the first face feature of the face region and any first face feature in the second corresponding relationship is high, the face in the face region and the face represented by that first face feature are highly similar, or are the same face; the computer device then obtains the special effect parameter corresponding to that first face feature as the special effect parameter corresponding to the face region.
In one possible implementation manner, the special effect parameter corresponding to the face with the highest similarity with the current face is obtained from the second corresponding relation. For example, the obtaining, by the computer device, a special effect parameter corresponding to each first face feature according to a second correspondence between the first face feature and the special effect parameter includes: the computer equipment obtains a difference value between the first face feature of any face region and each first face feature in the second corresponding relationship, and obtains a special effect parameter corresponding to the first face feature with the minimum difference value from the second corresponding relationship to serve as the special effect parameter corresponding to the face region.
Here, the difference value between a first face feature of the face region and a first face feature in the second corresponding relationship means: a difference, a squared difference, a standard deviation, or the like. For example, if the first face features in the second corresponding relationship are r1 and r2, and the first face features of the face region are r1′ and r2′, the difference value between them is (r1 − r1′)² + (r2 − r2′)².
In a possible implementation manner, the special effect parameter corresponding to the current face is obtained from the second corresponding relationship. The computer equipment acquires the special effect parameters corresponding to each first face feature according to the second corresponding relation between the first face features and the special effect parameters, and the method comprises the following steps: the computer equipment acquires a difference value between a first face feature of any face region and any first face feature in a second corresponding relation; if the difference value does not exceed the reference threshold value, obtaining a special effect parameter corresponding to any first face feature as a special effect parameter corresponding to the face area.
The difference value not exceeding the reference threshold indicates that the face in the face region and the face represented by the first face feature in the second corresponding relationship may be the same face.
For example, the difference value (r1 − r1′)² + (r2 − r2′)² between the first face features in the second corresponding relation and the first face features of the face area does not exceed the reference threshold a, where a is any numerical value; optionally, a is a value set by default by the computer device or a value set by the user, which is not limited in the embodiment of the present application.
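The two lookup variants above can be sketched together as follows; the tuple feature representation and the squared-difference metric follow the example above, and the threshold fallback behaviour is an assumption:

```python
def lookup_effect_params(face_feature, second_correspondence, threshold=None):
    """face_feature: tuple such as (r1, r2); second_correspondence maps
    feature tuples to special effect parameters. Returns the parameters of
    the stored feature with the smallest difference value, or None if a
    threshold is given and no stored feature is close enough."""
    best_feature, best_diff = None, float('inf')
    for stored_feature in second_correspondence:
        diff = sum((a - b) ** 2 for a, b in zip(face_feature, stored_feature))
        if diff < best_diff:
            best_feature, best_diff = stored_feature, diff
    if best_feature is None or (threshold is not None and best_diff > threshold):
        return None  # caller may fall back to default special effect parameters
    return second_correspondence[best_feature]
```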
305. And carrying out special effect processing on each face area according to the special effect parameters corresponding to each face area to obtain a second image.
The image processing method provided by the embodiment of the application can flexibly set the special effect parameters of each face, and the adopted special effect parameters may be different when the special effect processing is performed on each face region, so the embodiment of the application also provides a method for performing different special effect processing on different regions of the same image: each face area is cut from the first image, special effect processing is carried out on each face area, each face area after the special effect processing is backfilled into the first image, different special effect processing is carried out on different face areas of the same image, the special effect processing of different face areas is not affected mutually, and the accuracy of the special effect processing is guaranteed.
After the special effect processing is performed on the face region, the face in the face region may be deformed, so that the size of the face is changed, and in order to make the deformed face still located in the face region, optionally, the face region cropped by the computer device is larger than the real face region, for example, the cropped face region has 10 more pixels than the real face region in width and height. Optionally, the shape of the cut face region is rectangular, circular, or matched with the shape of the face, and the like, and the shape of the face region is not limited in the embodiment of the application.
In addition, when each face region after special effect processing is backfilled to the first image, for each face region, the face region after special effect processing is backfilled to the position according to the position of the face region in the first image to obtain a second image, and the face regions before and after special effect processing are ensured to be located at the same position of the first image.
It should be noted that the time for cutting each face region from the first image is as follows: after step 304 and before step 305; alternatively, after step 301 and before step 302; or at other times, the time for cropping the first image is not limited in the embodiment of the present application.
In one possible implementation manner, each face region is cropped from the first image after step 304 and before step 305. In this case, the computer device performing special effect processing on each face region according to the special effect parameter corresponding to each face region, to obtain the second image after processing the first image, includes: cropping each face region from the first image; for each face region, performing special effect processing on the face region using the special effect parameters corresponding to it; and backfilling each face region after the special effect processing into the first image to obtain the second image.
In another possible implementation manner, the time for cropping each face region from the first image is as follows: after step 301 and before step 302. The computer device cuts out each face region from the first image. According to the special effect parameters corresponding to each face area, carrying out special effect processing on each face area to obtain a second image after the first image processing, wherein the method comprises the following steps: for each face area, adopting special effect parameters corresponding to the face area to carry out special effect processing on the face area; and backfilling each face area after the special effect processing into the first image to obtain a second image.
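A minimal sketch of the crop–process–backfill flow, assuming NumPy image arrays and axis-aligned rectangular face regions that have already been padded as described above:

```python
def apply_special_effects(first_image, face_boxes, params_per_box, process_region):
    """first_image: an H x W x C NumPy array; face_boxes: list of (x, y, w, h)
    rectangles padded so a deformed face still fits (e.g. 10 extra pixels in
    width and height); process_region(region, params) is the special effect
    routine and must return a region of the same size so it backfills into
    the same position. Returns the second image."""
    second_image = first_image.copy()
    for (x, y, w, h), params in zip(face_boxes, params_per_box):
        region = second_image[y:y + h, x:x + w].copy()   # crop the face region
        processed = process_region(region, params)       # special effect step
        second_image[y:y + h, x:x + w] = processed       # backfill in place
    return second_image
```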
Optionally, the computer device processes the cut face regions through a plurality of threads, so that parallel processing of at least one face region is realized, and the image processing speed is increased; or the computer device puts the cut human face regions into a cache queue and sequentially carries out special effect processing on the human face regions in the cache queue.
In one possible implementation, after cropping each face region from the first image, the method further comprises: sequentially putting each face area into a cache queue; based on the face features of each face region, obtaining special effect parameters corresponding to each face region, including: extracting a face region from the cache queue, and acquiring special effect parameters corresponding to the face region based on the face features of the face region; and extracting a next face region from the cache queue, and acquiring special effect parameters corresponding to the next face region based on the face features of the next face region until the special effect parameters corresponding to the last face region in the cache queue are acquired. Optionally, when step 305 is executed, the computer device similarly performs special effect processing on the face regions in the buffer queue in sequence; alternatively, for each face region in the buffer queue, steps 302 to 305 are performed in sequence.
For example, after cutting out each face region from the first image, sequentially putting each face region into a cache queue, acquiring the face characteristics of the face region for a first face region in the cache queue, acquiring special effect parameters corresponding to the face region based on the face characteristics of the face region, performing special effect processing on the face region according to the special effect parameters corresponding to the face region to obtain a processed face region, and replacing the corresponding face region with the processed face region in the cache queue; for a next face region in the cache queue, acquiring face features of the next face region, acquiring special effect parameters corresponding to the next face region based on the face features of the next face region, performing special effect processing on the next face region according to the special effect parameters corresponding to the next face region to obtain a processed face region, and replacing the corresponding face region with the processed face region in the cache queue; and performing special effect processing on the last face region in the cache queue to obtain a processed face region, and replacing the corresponding face region with the processed face region in the cache queue. And finally, backfilling the processed face region in the cache queue into the first image to obtain a second image.
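A sketch of the cache-queue variant; the three callables stand in for the feature extraction, parameter lookup, and special effect steps described above:

```python
from collections import deque

def process_via_cache_queue(face_regions, get_feature, get_params, apply_effect):
    """Sequentially process cropped face regions through a cache queue,
    replacing each region with its processed version as described above;
    the processed regions are then backfilled into the first image."""
    cache_queue = deque(face_regions)
    processed_regions = []
    while cache_queue:
        region = cache_queue.popleft()
        feature = get_feature(region)          # face feature of this region
        params = get_params(feature)           # special effect parameters
        processed_regions.append(apply_effect(region, params))
    return processed_regions
```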
Optionally, the special effect processing is performed on the face area, and includes: and performing at least one of beautifying processing on the face area, adding special effect identification in the face area and the like.
It should be noted that, after the first image is subjected to the special effect processing according to the second corresponding relationship, the user may not satisfy the processing effect, and therefore, the embodiment of the present application further provides a method for the user to change the special effect parameter. In a possible implementation manner, after step 305, the computer device displays the second image, obtains a target special effect parameter set for any face region in the second image in response to a special effect setting operation on the face region, and updates the second corresponding relationship according to the face feature of the face region and the target special effect parameter.
In a possible implementation manner, the image processing method provided in the embodiment of the present application is applied to a live broadcast scene, where a first image is an image obtained by shooting in a live broadcast process, and after performing special effect processing on each face region according to a special effect parameter corresponding to each face region to obtain a second image after the first image processing, the method further includes at least one of the following steps: (1) displaying a second image in a live interface; (2) and sending the second image to a live broadcast server, and sending the second image to a viewer client for watching the live broadcast by the live broadcast server.
For example, before live broadcasting, a plurality of anchors set their respective beauty parameters at the live broadcast client and can start broadcasting once the beauty parameters are set. During the live broadcast, the live broadcast client performs beauty processing on the plurality of anchor faces in each shot image using the respectively set beauty parameters, displays the processed image in the live broadcast interface, and sends the processed image to the live broadcast server, which distributes it to the viewer clients watching the live broadcast.
The process by which a plurality of anchors set their respective beauty parameters at the live broadcast client includes: the live broadcast client calls the camera device to shoot an image, displays the image in the current interface, and prompts the anchors to face the camera so that their front faces appear in the displayed image. The live broadcast client performs face recognition on the image in the current interface and, after the faces are recognized, displays a plurality of boxes, each box indicating that a face has been recognized in the corresponding anchor face area. The live broadcast client then prompts each anchor to double-click the box corresponding to his or her own face, a beauty setting interface pops up, and the anchors set their beauty parameters in turn. The live broadcast client records the beauty parameters of each anchor based on that anchor's face features, and stores the face features and beauty parameters correspondingly in a local configuration file.
The process by which the live broadcast client performs beauty processing on the plurality of anchor faces in each image shot during the live broadcast includes: the live broadcast client performs face recognition on each shot image and determines the face area where each face is located; it crops each face area (a rectangular area, a circular area, or the like) and puts the cropped face areas into a cache queue; for the plurality of anchor face areas in the cache queue, it obtains the face features one by one and uses them to look up the corresponding beauty parameters in the locally stored configuration file, thereby determining the beauty parameters of each face; it performs beauty processing on each face area with the corresponding beauty parameters and replaces the face area in the cache queue with the processed one; and after every face area has been processed with its respective beauty parameters, it backfills the processed face areas into the original image, thereby achieving personalized beauty processing of a plurality of faces.
It should be noted that when an anchor broadcasts next time, the anchor does not need to set the beauty parameters again; after the live broadcast client performs face recognition, if no corresponding beauty parameters are found according to the face features of a face, a beauty setting interface pops up for that anchor to set them.
According to the image processing method provided by the embodiment of the application, when the image is subjected to special effect processing, the special effect parameters of the face can be obtained according to the appearance of each face in the image, the special effect parameters are adopted to carry out special effect processing on the face, the special effect processing on the face according to the appearance of the face is realized, personalized special effects can be added to different faces in the same image, and the flexibility of the special effect processing is improved.
In addition, in the embodiment of the application, the corresponding relation between the face characteristics and the special effect parameters is also stored, the special effect parameters of different faces can be obtained from the corresponding relation, in a live broadcast scene, the special effect processing can be performed on the face in each image shot in the live broadcast process, and the effect that the special effect of the anchor moves along with the movement of the anchor is achieved.
In addition, in the embodiment of the application, the ratio of the distances between the plurality of face key points is determined as the face feature, so that the face feature can better represent the appearance of the face and is not influenced by the distance between the real face and the camera device, the accuracy of the face feature is improved, the special effect parameters can be acquired more accurately, and the accuracy of special effect processing is also improved.
Fig. 6 is a flowchart of another image processing method according to an embodiment of the present application. Referring to fig. 6, the method is applied to a computer device, and the method includes:
601. at least one face region in the first image is identified.
Step 601 is similar to step 301, and is not described in detail here.
602. And for each face region, carrying out recognition processing on the face region to obtain a second face feature of the face region, wherein the second face feature indicates the face shape of the face in the face region.
The face shape indicates the shape of the face; because the shapes of faces differ, faces are divided into a plurality of face shapes. The face shape includes at least one of an oval shape, a diamond shape, a square shape, a heart shape, a long shape, or a circular shape, as shown in fig. 7.
Because the face shape is divided according to the shape of the face, when the face shape of the face is determined, the face shape can be determined according to the face shape parameters of the face, and in a possible implementation manner, the face region is identified to obtain a second face feature of the face region, including: and determining a second face characteristic of the face area based on the face shape parameter of the face area.
Optionally, the face shape parameters include at least one of: the length and width of the face; forehead width and chin width of a human face; the round tip coefficient of the lower jaw of the human face; the chin dome coefficient of the face.
In order to determine the face shape more accurately according to the face shape parameters, optionally, the ratio of the length to the width of the face and the ratio of the forehead width to the chin width of the face are obtained, and the second face feature corresponding to the face region is determined according to the ratio of the length to the width of the face, the ratio of the forehead width to the chin width of the face, the jaw round-tip coefficient of the face, and the chin round-tip coefficient of the face.
Since the face shape parameter has a correspondence relationship with the face shape, the second face feature of the face region can be determined based on the correspondence relationship. Optionally, determining a second face feature of the face region based on the face shape parameter of the face region includes: and acquiring the face shape characteristic corresponding to the face shape parameter of the face area according to the first corresponding relation between the face shape parameter and the face shape characteristic, and taking the face shape characteristic as a second face characteristic of the face area.
Optionally, the first corresponding relationship is obtained by analyzing a plurality of sample images. In a possible implementation manner, before the face shape feature corresponding to the face shape parameter of the face region is obtained according to the first corresponding relationship between the face shape parameter and the face shape feature and used as the second face feature of the face region, the method further includes: acquiring a plurality of sample face regions with the same face shape feature; acquiring the face shape parameters of each sample face region; carrying out statistical processing on the obtained multiple face shape parameters to obtain processed face shape parameters; and establishing a first corresponding relation between the processed face shape parameters and the face shape feature. Optionally, the statistical processing is averaging.
For example, for each face type, m (m is any integer greater than or equal to 1) sample face images are selected, the face shape parameter of each sample face image is obtained, the average value of a plurality of face shape parameters is obtained, and the average value is used as the face shape parameter corresponding to the face type.
For example, for each sample face image, the face length-width ratio r1, the forehead-to-chin width ratio r2, the jaw round-tip coefficient r3, and the chin round-tip coefficient r4 are determined. As shown in fig. 8, the length-width ratio r1 = W/H, where W is the maximum width of the face and H is the maximum length of the face. As shown in fig. 9, the forehead-to-chin width ratio r2 = L1/L2, where L1 is the maximum width of the forehead area and L2 is the maximum width of the chin area. As shown in fig. 10, the jaw round-tip coefficient r3 = tan∠ABC, where A and B are the left and right face edge points level with the mouth corners and C is the lowest point of the face. As shown in fig. 11, the chin round-tip coefficient r4 = tan∠FGC, where D and E are the left and right mouth-corner key points respectively, F and G are face edge points of the chin area such that DF and EG are both perpendicular to DE, and C is the lowest point of the face.
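For illustration, the four face shape parameters can be assembled from the measured quantities of figs. 8 to 11 as follows; the function name and the convention of supplying angles in radians are assumptions:

```python
import math

def face_shape_params(W, H, L1, L2, angle_ABC, angle_FGC):
    """W, H: maximum width and length of the face (fig. 8);
    L1, L2: maximum widths of the forehead and chin areas (fig. 9);
    angle_ABC, angle_FGC: the jaw and chin angles of figs. 10 and 11,
    in radians. Returns (r1, r2, r3, r4)."""
    r1 = W / H                 # face length-width ratio
    r2 = L1 / L2               # forehead-to-chin width ratio
    r3 = math.tan(angle_ABC)   # jaw round-tip coefficient
    r4 = math.tan(angle_FGC)   # chin round-tip coefficient
    return (r1, r2, r3, r4)
```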
The values r1, r2, r3, and r4 of the m sample face images are each summed and averaged to obtain R1 = (1/m)·Σr1, R2 = (1/m)·Σr2, R3 = (1/m)·Σr3, and R4 = (1/m)·Σr4, and R1 to R4 are taken as the face shape parameters corresponding to the face shape.
If the similarity between the face shape parameters of the face region and any face shape parameters in the first corresponding relationship is high, this indicates that the face shape feature of the face in the face region is the same as the face shape feature corresponding to those face shape parameters; the computer device then obtains the face shape feature corresponding to those face shape parameters from the first corresponding relationship as the second face feature of the face region.
In one possible implementation manner, the face shape parameter with the highest similarity to the current face shape parameter is obtained from the first corresponding relationship. Optionally, obtaining a face shape feature corresponding to the face shape parameter of the face region according to the first corresponding relationship between the face shape parameter and the face shape feature, as a second face feature of the face region, including: acquiring a difference value between the face shape parameter of the face area and each face shape parameter in the first corresponding relation; and acquiring the face shape characteristic corresponding to the face shape parameter with the minimum difference value from the first corresponding relation, and taking the face shape characteristic as a second face characteristic of the face region.
Alternatively, if the difference between two face shape parameters is small, the face shapes corresponding to the two parameters may be the same. In another possible implementation manner, acquiring, according to the first corresponding relationship between the face shape parameter and the face shape feature, the face shape feature corresponding to the face shape parameter of the face region as the second face feature of the face region includes: acquiring a difference value between the face shape parameter of the face region and any face shape parameter in the first corresponding relationship; and if the difference value does not exceed a reference threshold, acquiring the face shape feature corresponding to that face shape parameter as the second face feature of the face region. Here, the difference value not exceeding the reference threshold indicates that the face shape of the face in the face region is the same as the face shape corresponding to that face shape parameter.
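The two lookup strategies above (minimum difference, and difference under a reference threshold) can be illustrated with the following Python sketch; the difference metric over parameter tuples (sum of absolute component differences) is an assumption, not part of the embodiment:

    def match_face_shape(params, correspondence, threshold=None):
        # correspondence: hypothetical list of (shape_params, face_shape_label)
        def diff(a, b):
            # difference between two parameter tuples (metric is an assumption)
            return sum(abs(x - y) for x, y in zip(a, b))

        best_label, best_diff = None, float("inf")
        for ref_params, label in correspondence:
            d = diff(params, ref_params)
            if d < best_diff:
                best_diff, best_label = d, label
        if threshold is not None and best_diff > threshold:
            return None  # no face shape matched within the reference threshold
        return best_label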
603. And acquiring special effect parameters corresponding to each face region based on the second face features of each face region.
At present, the variety of special effects keeps growing, and for some users, selecting a special effect that suits them is a difficult problem requiring continual trial. Since some special effect processing changes the shape of the face, and some special effects are only suitable for certain face shapes, the embodiment of the present application provides a method for acquiring special effect parameters according to the face shape.
It should be noted that step 603 is similar to step 304, and is not described in detail here.
604. And carrying out special effect processing on each face area according to the special effect parameters corresponding to each face area to obtain a second image.
It should be noted that step 604 is similar to step 305, and is not described in detail here.
Taking a live scene as an example, the embodiment shown in fig. 6 is exemplarily explained:
the live broadcast server stores a second corresponding relationship, which includes each face shape and the special effect parameter corresponding to it; the special effect parameter corresponding to each face shape is a default set by the live broadcast server. The anchor starts the live broadcast client, which acquires from the live broadcast server a first corresponding relationship, including each face shape and the face shape parameters corresponding to it, as well as the second corresponding relationship. The live broadcast client acquires the shot image, acquires the face shape parameters of the anchor's face in the image, and determines the anchor's face shape according to the first corresponding relationship. The corresponding special effect parameters are then acquired according to the anchor's face shape, and special effect processing is performed on the shot image according to those parameters.
The live broadcast client also supports custom special effect parameters: if the anchor is not satisfied with the image after special effect processing, the current special effect parameters can be revised, and the local second corresponding relationship is updated according to the revised parameters, so that the special effect parameters set by the anchor continue to be used for special effect processing during the live broadcast.
Optionally, the live broadcast client uploads the anchor's face shape, the special effect parameters used by the anchor during live broadcast, the number of followers of the anchor, and the like to the live broadcast server. Based on these parameters, the live broadcast server determines, for each face shape, a target number of anchors with the largest follower counts, acquires the special effect parameters used by these anchors during live broadcast, averages them to obtain new special effect parameters, and updates the new special effect parameters into the second corresponding relationship. In this way, the live broadcast server can update the second corresponding relationship according to the special effect parameters of the most popular anchors, which ensures the processing effect of special effect processing performed through the second corresponding relationship.
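A minimal illustrative sketch of this server-side refresh follows; the record fields ("face_shape", "effect_params", "followers") are hypothetical names, and the effect parameters are assumed to be numeric sequences so they can be averaged component-wise:

    def refresh_correspondence(anchors, correspondence, target_count):
        # anchors: hypothetical records {"face_shape", "effect_params", "followers"}
        by_shape = {}
        for a in anchors:
            by_shape.setdefault(a["face_shape"], []).append(a)
        for shape, group in by_shape.items():
            # keep the target number of anchors with the largest follower counts
            top = sorted(group, key=lambda a: a["followers"], reverse=True)[:target_count]
            params = [a["effect_params"] for a in top]
            n = len(params)
            # component-wise average of the selected anchors' effect parameters
            correspondence[shape] = [sum(col) / n for col in zip(*params)]
        return correspondence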
Optionally, the live broadcast server sets a version number for the second corresponding relationship. The live broadcast client can determine whether the local second corresponding relationship is the latest one according to the version number: if the version number of the local second corresponding relationship is smaller than the version number of the second corresponding relationship in the live broadcast server, it is determined that the server has updated the second corresponding relationship, and the second corresponding relationship is acquired from the server. Optionally, after updating the second corresponding relationship, the live broadcast client presents an update identifier to the anchor.
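A minimal sketch of this version check, assuming a hypothetical server client exposing get_version() and get_correspondence() (both names are illustrative):

    def sync_correspondence(local_version, local_table, server):
        # server: hypothetical client object; method names are assumptions
        remote_version = server.get_version()
        if local_version < remote_version:
            # the server-side table is newer; fetch the updated correspondence
            local_table = server.get_correspondence()
            local_version = remote_version
        return local_version, local_table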
According to the image processing method provided by the embodiment of the application, when special effect processing is performed on an image, the special effect parameters of each face can be acquired according to the face shape of that face, and special effect processing is performed on the face with those parameters. That is, each face is processed according to its face shape, personalized special effects are added to different faces in the same image, and the flexibility of special effect processing is improved.
Moreover, since the face shape is determined by the shape of the face, the face shape can be accurately determined from the acquired face shape parameters. On the basis of accurately adding personalized special effects to faces of different shapes, the user does not need to set the face shape manually, which simplifies the user's operation.
In addition, in the embodiment of the application, the live broadcast server can also periodically update the corresponding relation between the face shape and the special effect parameters, and update the special effect parameters in the corresponding relation by selecting the special effect parameters used by the most popular anchor, so that the special effect parameters in the corresponding relation are ensured to be approved by the public, and the special effect processing effect is ensured.
It should be noted that the embodiments shown in fig. 3 and fig. 6 are only exemplary descriptions of acquiring the special effect parameter corresponding to each face region based on the face feature of each face region. In another embodiment, acquiring the special effect parameter corresponding to each face region based on the face feature of each face region includes: invoking a face recognition model, performing recognition processing on each face region to obtain the user identifier corresponding to each face region, and acquiring, based on each obtained user identifier, the special effect parameter corresponding to that user identifier. The user identifier identifies a unique user, e.g., the user's name, the user's nickname, or the user's account number.
That is to say, in the process of processing the image, the computer device can identify the user to which each face belongs in the image through the face recognition model, so as to obtain the special effect parameters set by the user, perform special effect processing on the face, realize different special effect processing for each user, and improve the flexibility of the special effect processing.
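For illustration, a sketch of this user-identifier variant; recognize is a hypothetical stand-in for the face recognition model, and user_params is an assumed mapping from user identifier to that user's special effect parameters:

    def params_by_identity(face_regions, recognize, user_params, default_params):
        # recognize: hypothetical model mapping a face region to a user identifier
        result = []
        for region in face_regions:
            user_id = recognize(region)
            # fall back to default parameters for unrecognized users
            result.append(user_params.get(user_id, default_params))
        return result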
Fig. 12 is a flowchart of a special effect parameter setting method provided in an embodiment of the present application, and referring to fig. 12, when applied to a terminal, the method includes:
1201. and displaying an image, wherein the image comprises at least one object area, and the object area is an area where the target object is located.
The target object is any object, for example, the target object is a human face, an animal, a plant, and the like, and the target object is not limited in the embodiment of the present application.
1202. And displaying the special effect setting identification of each object area.
The special effect setting identifier is an identifier in any form, for example, the special effect setting identifier is a square identifier, a circular identifier, or the like.
1203. And responding to the trigger operation of the special effect setting identification of any object area, and acquiring special effect parameters set for the any object area.
The trigger operation is any one or a combination of multiple operations, such as a click operation, a slide operation, a double click operation, a long press operation, and the like, and the trigger operation is not limited in the embodiment of the present application.
According to the special effect parameter setting method provided by the embodiment of the application, the special effect setting identification for setting the special effect parameters of different object areas is provided for the user, the special effect setting can be performed on any object area in the image through the trigger operation of the special effect setting identification, the personalized special effect is added to the different object areas, and the flexibility of special effect processing is improved.
In one possible implementation, the target object is a human face, and the method further includes:
when the camera is started, displaying first prompt information, wherein the first prompt information is used for prompting shooting of a front face;
or displaying second prompt information, wherein the second prompt information is used for prompting that the human faces in multiple directions are shot.
In one possible implementation, displaying the special effect setting identifier of each object region includes:
carrying out object recognition on the image, and determining at least one object area in the image;
and displaying at least one identification frame based on the determined at least one object area, wherein each object area is positioned in the corresponding identification frame.
In one possible implementation, after the at least one recognition frame is displayed based on the determined at least one object region, with each object region located in its corresponding recognition frame, the method further includes:
and displaying third prompt information, wherein the third prompt information is used for prompting the trigger operation executed on the identification frame.
In one possible implementation manner, in response to a setting operation of the special effect setting flag for any object region, acquiring a special effect parameter set for the any object region includes:
responding to the trigger operation of the special effect setting identification of any object area, and displaying a special effect setting interface, wherein the special effect setting interface is used for displaying at least one candidate special effect parameter;
in response to a trigger operation on any candidate special effect parameter, the candidate special effect parameter is determined as the special effect parameter of any object area.
In one possible implementation manner, after the special effect parameter set for any object region is acquired in response to the trigger operation on the special effect setting identifier of that object region, the method further includes:
acquiring object characteristics of any object area, wherein the object characteristics indicate the appearance of a target object in the object area;
and establishing a second corresponding relation between the object characteristics and the special effect parameters set for any object region.
In a possible implementation manner, after establishing the second corresponding relationship between the object feature and the special effect parameter set for any one of the object regions, the method further includes:
the second correspondence is stored in a configuration file.
The special effect parameter setting method provided by the embodiment of the application is suitable for setting special effects for a target object in any image. The target object is any object, such as a human face, an animal, or a plant, and is not limited in the embodiment of the application. The method is exemplarily described below by taking a human face as the target object:
fig. 13 is a flowchart of a special effect parameter setting method provided in an embodiment of the present application, and referring to fig. 13, when applied to a terminal, the method includes:
1301. and displaying an image, wherein the image comprises at least one face area, and the face area is an area where a face is located.
The image in step 1301 is any image. Optionally, the image is an image currently acquired by the terminal and including at least one face area, and taking a live broadcast scene as an example, the image is an image acquired by the terminal in a live broadcast process; or the image is an image acquired by a terminal before live broadcasting. For example, before live broadcast, the respective special effect parameters are set by the anchor.
In one possible implementation, displaying an image includes: and starting the camera, and displaying a shooting picture of the camera, wherein the shooting picture comprises an image shot by the camera, and the image comprises at least one face area. Optionally, in order to ensure that the shot picture includes the face region, when the camera is started, the terminal may further display fourth prompt information, where the fourth prompt information is used to prompt to shoot the face.
Optionally, in order to ensure that the face features can be accurately acquired from the face region subsequently, the terminal may further display other prompt information. In one possible implementation, the method further comprises: when the camera is started, displaying first prompt information, wherein the first prompt information is used for prompting shooting of a front face; or displaying second prompt information, wherein the second prompt information is used for prompting that the human faces in multiple directions are shot.
Optionally, the image in step 1301 is an image stored in the terminal and including at least one face region; or the image is an image acquired by other methods, and the image acquisition method is not limited in the embodiment of the present application.
Optionally, displaying an image, comprising: the image is displayed by the target application. The target application has a special effect processing function, and optionally, the target application is a live broadcast application, a special effect processing application, or another application. Displaying the image by the target application means: the image is displayed in an interface provided by the target application.
1302. And displaying the special effect setting identification of each face area.
Each face area corresponds to a special effect setting identifier, and the special effect setting identifier is used for setting special effect parameters of the corresponding face area. Optionally, the special effect setting identifier is a recognition frame, and special effect parameters corresponding to the face region are set through the recognition frame. Alternatively, the identification frame is a frame of any shape, e.g., a square frame, a circular frame, etc.
In one possible implementation manner, displaying the special effect setting identifier of each face region includes: carrying out face recognition on the image, and determining at least one face area in the image; and displaying at least one recognition frame based on the determined at least one face area, wherein each face area is positioned in the corresponding recognition frame.
In order to avoid that the user does not know how to set the special effect parameters based on the recognition frames, the embodiment of the present application further displays prompt information. Optionally, after the at least one recognition frame is displayed based on the determined at least one face region, with each face region located in its corresponding recognition frame, the method further includes: displaying third prompt information, where the third prompt information is used to prompt the trigger operation to be performed on the recognition frame. By displaying the third prompt information, the user can be instructed to perform the operation of setting the special effect parameters. For example, the third prompt message is "please double-click the recognition frame".
1303. And responding to the trigger operation of the special effect setting identification of any face area, and acquiring special effect parameters set for any face area.
The special effect parameters in step 1303 are the same as the special effect parameters in step 304, and reference may be made to the description of the special effect parameters in step 304, which is not described in detail here.
In one possible implementation manner, in response to a setting operation of a special effect setting identifier for any face region, acquiring a special effect parameter set for the any face region includes: responding to the triggering operation of the special effect setting identification of any one face area, and displaying a special effect setting interface which is used for displaying at least one candidate special effect parameter; and in response to the triggering operation of any candidate special effect parameter, determining the candidate special effect parameter as the special effect parameter of any human face area.
If the image further includes other face regions for which no special effect parameters are set, these can also be set. For example, in response to a trigger operation on the special effect setting identifier of another face region, the special effect parameters set for that face region are acquired, where the other face region is any face region of the at least one face region of the image other than the one already set.
In one possible implementation manner, in response to a trigger operation on a special effect setting identifier of another face region, a special effect setting interface is displayed, the special effect setting interface is used for displaying at least one candidate special effect parameter, and in response to a trigger operation on any one candidate special effect parameter, the candidate special effect parameter is determined as the special effect parameter of the other face region.
1304. And acquiring the face characteristics of any face region, wherein the face characteristics indicate the appearance of the face in the face region.
Step 1304 is the same as step 304 and step 602, and is not described in detail here.
1305. And establishing a second corresponding relation between the face characteristics and the special effect parameters set for any face area.
1306. The second correspondence is stored in a configuration file.
The second corresponding relationship includes the face feature and the corresponding special effect parameter. Establishing the second corresponding relationship between the face feature and the special effect parameter set for any face region and storing it in the configuration file means: storing the face feature and the special effect parameter correspondingly into the configuration file. In other words, after step 1304, the face feature is stored in the configuration file in correspondence with the special effect parameter set for that face region.
Optionally, the configuration file is a file local to the terminal; optionally, the configuration file is a file corresponding to the target application and is used for storing data generated by the target application. The embodiment of the present application does not limit the configuration file.
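As a purely illustrative sketch (not part of the claimed method), the following Python fragment shows one way such a correspondence could be persisted to a local configuration file; the JSON layout, the file name, and the string serialization of the face feature are all assumptions:

    import json

    def save_correspondence(face_feature, effect_params, path="effects_config.json"):
        # face_feature: hypothetical serializable feature, e.g. a face shape
        # label or a tuple of face shape parameters converted to a string key
        try:
            with open(path, "r", encoding="utf-8") as f:
                table = json.load(f)
        except FileNotFoundError:
            table = {}
        table[str(face_feature)] = effect_params
        with open(path, "w", encoding="utf-8") as f:
            json.dump(table, f, ensure_ascii=False, indent=2)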
It should be noted that, after the second corresponding relationship is stored in the configuration file, the personalized special effect processing can be subsequently performed on the face region according to the face features of the face region based on the configuration file. In one possible implementation manner, after the second corresponding relationship is stored in the configuration file, the first image is acquired, and at least one face area of the first image is identified; acquiring the face characteristics of each face region, and acquiring special effect parameters corresponding to the face characteristics from a configuration file as the special effect parameters corresponding to the face regions based on the face characteristics of each face region; and carrying out special effect processing on each face area according to the special effect parameters corresponding to each face area to obtain a second image. The method for processing the first image is the same as the embodiment shown in fig. 3 and fig. 6, and is not described in detail here.
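Likewise, a minimal sketch of the subsequent processing just described (reading the stored correspondence and applying the matching special effect parameters to each recognized face region) might look as follows; detect_faces, get_feature and process are hypothetical stand-ins for the detection, feature extraction and special effect steps, and first_image is assumed to expose a copy() method:

    def apply_effects(first_image, detect_faces, get_feature, load_table, process):
        table = load_table()              # the stored second correspondence
        second_image = first_image.copy()
        for region in detect_faces(first_image):
            feature = get_feature(region)
            params = table.get(str(feature))
            if params is not None:
                # per-region special effect processing with the stored parameters
                second_image = process(second_image, region, params)
        return second_image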
It should be noted that, in a possible implementation manner, before image processing, a special effect parameter of each face region is set in advance, so that in the process of image processing, processing can be performed according to the special effect parameter corresponding to each object region.
For example, taking a live broadcast scene as an example: before starting the live broadcast, the anchor starts the camera through the live broadcast application, and the application prompts the anchor to present a front face in the camera picture. The application performs face recognition on the camera picture and, after the faces are recognized, displays a number of frames equal to the number of faces recognized in the picture, each frame enclosing one anchor's face region to indicate that the face has been recognized. The application then prompts the anchors to double-click their respective frames, a beauty setting interface pops up, the anchors set their beauty parameters in turn, and the live broadcast client records each anchor's beauty parameters in association with that anchor's face features and stores them in a local file.
It should be noted that, in a possible implementation manner, in the image processing process, the special effect parameter of any face region is updated, so that the special effect of the face region is changed, and the flexibility of special effect processing is improved.
For example, taking a live broadcast scene as an example: during the live broadcast, the live broadcast application displays the image after special effect processing in the live broadcast interface. If the anchor is not satisfied with a special effect in the image, a trigger operation is performed on the special effect setting identifier of the face region to which that special effect belongs, the special effect parameters set for that face region are acquired again, and the special effect parameters corresponding to the face region in the configuration file are updated to the newly set parameters, so that subsequent special effect processing of the face region uses the newly set parameters.
According to the special effect parameter setting method provided by the embodiment of the application, the special effect setting identification for setting the special effect parameters of different face areas is provided for the user, the special effect setting can be performed on any face area in the image through the trigger operation of the special effect setting identification, the personalized special effect is added to the different face areas, and the flexibility of special effect processing is improved.
In addition, prompt information can be displayed for a user, so that the user can complete the setting of the special effect parameters in sequence when using the special effect parameter setting function for the first time, the success rate of the setting of the special effect parameters is improved, and the user experience is also improved.
In addition, after the special effect parameters are set for the face, the face characteristics of the face area and the special effect parameters set for the face area are correspondingly stored in the configuration file, and then if the face characteristics of any face area are detected to be matched with the stored face characteristics, the corresponding special effect parameters can be obtained from the configuration file to carry out special effect processing on any face area, so that the personalized special effects are added to different faces according to the face appearance, and the flexibility of the special effect processing is improved.
Fig. 14 is a schematic structural diagram of an image processing apparatus provided in the present application. Referring to fig. 14, the apparatus includes:
an image obtaining module 1401, configured to identify at least one object region in the first image, where the object region is an area where a target object is located;
a special effect obtaining module 1402, configured to obtain a special effect parameter corresponding to each object region based on an object feature of the object region, where the object feature of the object region indicates an outline of the target object in the object region;
a special effect processing module 1403, configured to perform special effect processing on each object region according to the special effect parameter corresponding to each object region, so as to obtain a second image.
The image processing device provided by the embodiment of the application can acquire the special effect parameters of each target object according to the appearance of the target object in the image when the image is subjected to special effect processing, and the special effect parameters are adopted to perform the special effect processing on the target object, namely, the special effect processing is performed on the target object according to the appearance of the target object, so that the purpose of adding personalized special effects to different target objects in the same image is realized, and the flexibility of the special effect processing is improved.
As shown in fig. 15, in one possible implementation, the object region is a human face region, and the apparatus further includes:
a key point obtaining module 1404, configured to obtain, for each face region, a plurality of face key points of the face region, where the face key points include at least one of face edge points or face organ edge points;
a first feature obtaining module 1405, configured to obtain a first face feature of the face region according to the positions of the plurality of face key points, where the first face feature indicates a relative position between the plurality of face key points in the face region.
In one possible implementation, the first feature acquisition module 1405 is configured to perform at least one of:
acquiring a first face sub-feature of the face region according to the abscissa of the plurality of face key points, wherein the first face sub-feature represents the transverse relative position of the plurality of face key points in the face;
and acquiring a second face sub-feature of the face region according to the vertical coordinates of the face key points, wherein the second face sub-feature represents the longitudinal relative positions of the face key points in the face.
In one possible implementation, the first feature obtaining module 1405 includes:
a distance obtaining unit 14051, configured to acquire, according to the position of the first face key point, the position of the second face key point, and the position of the third face key point, a first distance between the first face key point and the second face key point and a second distance between the first face key point and the third face key point;
a ratio determining unit 14052, configured to determine the ratio of the first distance to the second distance as the first face feature;
the first face key point, the second face key point and the third face key point are any face key points in the plurality of face key points respectively.
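For illustration, a sketch of this distance-ratio feature; the three key points are arbitrary points from the detected set, and the ratio is invariant to the scale of the face in the image, which is what makes it usable as a face feature:

    import math

    def distance_ratio_feature(p1, p2, p3):
        # p1, p2, p3: any three face key points as (x, y) tuples
        d12 = math.hypot(p2[0] - p1[0], p2[1] - p1[1])  # first distance
        d13 = math.hypot(p3[0] - p1[0], p3[1] - p1[1])  # second distance
        return d12 / d13                                # scale-invariant ratio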
In one possible implementation, the plurality of face key points includes face edge points and face organ edge points, and the first feature acquisition module 1405 includes:
a key point selecting unit 14053, configured to select a face key point located in a first face sub-region or a second face sub-region from the plurality of face key points, where the first face sub-region is a region to which eyes belong, and the second face sub-region is a region to which a face centerline belongs;
the first feature obtaining unit 14054 is configured to obtain first face features of the face region according to the positions of the plurality of face key points.
In a possible implementation manner, the key point selecting unit 14053 is configured to select, from the plurality of face key points, a first eye corner key point, a second eye corner key point, and a first face edge key point at the same height as a lower eyelid key point; or,
the key point selecting unit 14053 is configured to select a first nose bridge key point, a second nose bridge key point, and a third nose bridge key point from the plurality of face key points.
In one possible implementation, the object region is a human face region, and the apparatus further includes:
the second feature obtaining module 1406 is configured to, for each face region, perform recognition processing on the face region to obtain a second face feature of the face region, where the second face feature indicates a face shape of a face in the face region.
In one possible implementation manner, the second feature obtaining module 1406 is configured to determine a second face feature of the face region based on the face shape parameter of the face region.
In a possible implementation manner, the second feature obtaining module 1406 is configured to obtain, according to the first corresponding relationship between the face shape parameter and the face shape feature, the face shape feature corresponding to the face shape parameter of the face region as the second face feature of the face region.
In one possible implementation, the second feature obtaining module 1406 includes:
a difference value obtaining unit 14061, configured to obtain a difference value between the face shape parameter of the face area and each face shape parameter in the first corresponding relationship;
the second feature obtaining unit 14062 is configured to obtain, from the first corresponding relationship, a face feature corresponding to the face shape parameter with the smallest difference value as a second face feature of the face area.
In one possible implementation manner, the second feature obtaining module 1406 includes:
a difference value obtaining unit 14061, configured to obtain a difference value between the face shape parameter of the face area and any one of the face shape parameters in the first corresponding relationship;
the second feature obtaining unit 14062 is configured to obtain a face feature corresponding to the any one of the face shape parameters as a second face feature of the face region if the difference value does not exceed the reference threshold.
In one possible implementation, the apparatus further includes:
a sample acquiring module 1407, configured to acquire a plurality of sample object regions having the same facial features;
a shape acquisition module 1408 for acquiring face shape parameters for each sample object region;
a first statistical module 1409, configured to perform statistical processing on the obtained multiple face shape parameters to obtain processed face shape parameters;
a first establishing module 1410, configured to establish a first corresponding relationship between the processed shape parameter and the facial feature.
In one possible implementation, the face shape parameters include at least one of:
the length and width of the face;
the forehead width and the chin width of the face;
the round tip coefficient of the lower jaw of the human face;
the chin apex coefficient of the face.
In one possible implementation, the object region is a face region, and the special effect obtaining module 1402 includes:
the identifying unit 14021 is configured to invoke a face identification model, and perform identification processing on each face area to obtain a user identifier corresponding to each face area;
a first special effect obtaining unit 14022, configured to obtain, based on each obtained user identifier, a special effect parameter corresponding to each obtained user identifier.
In a possible implementation manner, the special effect obtaining module 1402 is configured to obtain a special effect parameter corresponding to each object feature according to a second corresponding relationship between the object feature and the special effect parameter.
In one possible implementation, the apparatus further includes:
a reference special effect obtaining module 1411, configured to obtain special effect parameters corresponding to multiple reference object regions with the same object characteristics;
a second statistical module 1412, configured to perform statistical processing on the obtained multiple special effect parameters to obtain processed special effect parameters;
a second establishing module 1413, configured to establish a second corresponding relationship between the object feature and the processed special effect parameter.
In one possible implementation, the apparatus further includes:
the special effect obtaining module 1402, configured to obtain a special effect parameter set for any currently displayed object region in response to a special effect setting operation on the object region;
a third feature obtaining module 1414, configured to obtain a feature of the object in the object region;
a third establishing module 1415, configured to establish a second correspondence between the object feature and the special effect parameter.
In one possible implementation, the special effect obtaining module 1402 includes:
a display unit 14023, configured to display at least one candidate special effect parameter in response to a trigger operation of the special effect setting option for the object area;
a determining unit 14024, configured to, in response to a trigger operation on any candidate special effect parameter, determine the candidate special effect parameter as the special effect parameter corresponding to the object region.
In one possible implementation, the apparatus further includes:
the special effect obtaining module 1402, configured to obtain a target special effect parameter set for any object region in the second image in response to a special effect setting operation on the object region;
an updating module 1416, configured to update the second corresponding relationship according to the object feature of the object region and the target special effect parameter.
In one possible implementation, the special effect processing module 1403 includes:
a cropping unit 14031, configured to crop each object region from the first image;
a special effect processing unit 14032, configured to perform special effect processing on each object region by using the special effect parameter corresponding to the object region;
a backfilling unit 14033, configured to backfill each object region after the special effect processing into the first image to obtain the second image.
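A minimal sketch of this crop / process / backfill flow, assuming the images are numpy arrays and the object regions are axis-aligned (x, y, w, h) boxes; both assumptions and the apply_effect callable are illustrative, not part of the embodiment:

    import numpy as np

    def effect_by_region(first_image, regions, params_list, apply_effect):
        # apply_effect: stand-in for the special effect processing of one patch;
        # it must return a patch of the same shape as its input
        second_image = first_image.copy()
        for (x, y, w, h), params in zip(regions, params_list):
            patch = second_image[y:y + h, x:x + w].copy()  # crop the object region
            patch = apply_effect(patch, params)            # per-region special effect
            second_image[y:y + h, x:x + w] = patch         # backfill into the image
        return second_image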
In one possible implementation, the apparatus further includes:
a cropping module 1417, configured to crop each object region from the first image;
the special effect processing module 1403 includes:
a special effect processing unit 14032, configured to perform special effect processing on each object region by using the special effect parameter corresponding to the object region;
a backfilling unit 14033, configured to backfill each object region after the special effect processing into the first image to obtain the second image.
In one possible implementation, the apparatus further includes:
a buffer module 1418, configured to sequentially place each object region into a buffer queue;
the special effect obtaining module 1402 is configured to extract an object region from the cache queue, and obtain a special effect parameter corresponding to the object region based on an object feature of the object region; and extracting a next object region from the cache queue, and acquiring the special effect parameter corresponding to the next object region based on the object characteristics of the next object region until acquiring the special effect parameter corresponding to the last object region in the cache queue.
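For illustration, a sketch of this buffer-queue traversal using a FIFO queue; get_feature and lookup are hypothetical stand-ins for the feature extraction and parameter acquisition steps:

    from collections import deque

    def queue_effect_params(regions, get_feature, lookup):
        queue = deque(regions)        # place each object region into the queue
        params = []
        while queue:
            region = queue.popleft()  # extract the next object region
            # acquire the special effect parameter based on the object feature
            params.append(lookup(get_feature(region)))
        return params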
In a possible implementation manner, the first image is an image captured during live broadcasting, and the apparatus further includes at least one of the following modules:
a display module 1419, configured to display the second image in the live interface;
a sending module 1420, configured to send the second image to a live server, where the live server sends the second image to a viewer client that watches live broadcast.
Fig. 16 is a schematic structural diagram of a special effect parameter setting device according to an embodiment of the present application, and referring to fig. 16, the device includes:
a display module 1601, configured to display an image, where the image includes at least one object area, and the object area is an area where a target object is located;
the display module 1601 is further configured to display a special effect setting identifier of each object region;
a parameter obtaining module 1602, configured to obtain a special effect parameter set for any object region in response to a trigger operation on a special effect setting identifier of the object region.
As shown in fig. 17, in a possible implementation manner, the target object is a human face, and the display module 1601 is configured to display first prompt information when the camera is started, where the first prompt information is used to prompt shooting of a front human face; or displaying second prompt information, wherein the second prompt information is used for prompting the shooting of the faces in multiple directions.
In one possible implementation, the display module 1601 is configured to perform object recognition on an image, and determine at least one object region in the image; and displaying at least one identification frame based on the determined at least one object area, wherein each object area is positioned in the corresponding identification frame.
In a possible implementation manner, the display module 1601 is further configured to display a third prompting message, where the third prompting message is used to prompt a triggering operation performed on the recognition box.
In a possible implementation manner, the parameter obtaining module 1602 is configured to display a special effect setting interface in response to a trigger operation on a special effect setting identifier of any one of the object regions, where the special effect setting interface is configured to display at least one candidate special effect parameter; in response to a trigger operation on any candidate special effect parameter, the candidate special effect parameter is determined as the special effect parameter of any object area.
In one possible implementation, the apparatus further includes:
a characteristic obtaining module 1603, configured to obtain an object characteristic of the any object region, where the object characteristic indicates an outline of a target object in the object region;
the establishing module 1604 is configured to establish a second corresponding relationship between the object feature and the special effect parameter set for any one of the object regions.
In one possible implementation, the apparatus further includes:
a storage module 1605, configured to store the second corresponding relationship in the configuration file.
An embodiment of the present application further provides a computer device, where the computer device includes a processor and a memory, where the memory stores at least one program code, and the at least one program code is loaded and executed by the processor to implement the operations performed in the image processing method according to the foregoing embodiment; or to implement the operations performed in the special effect parameter setting method as in the above-described embodiment.
Optionally, the computer device is provided as a terminal. Fig. 18 is a schematic structural diagram of a terminal according to an embodiment of the present application. The terminal 1800 may be a portable mobile terminal such as: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 1800 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
The terminal 1800 includes: a processor 1801 and a memory 1802.
The processor 1801 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 1801 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1801 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1801 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 1801 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1802 may include one or more computer-readable storage media, which may be non-transitory. Memory 1802 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1802 is used to store at least one program code for execution by the processor 1801 to implement the image processing methods provided by the method embodiments herein or to implement the special effect parameter setting methods provided by the method embodiments herein.
In some embodiments, the terminal 1800 may further optionally include: a peripheral interface 1803 and at least one peripheral. The processor 1801, memory 1802, and peripheral interface 1803 may be connected by a bus or signal line. Each peripheral device may be connected to the peripheral device interface 1803 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1804, display 1805, camera assembly 1806, audio circuitry 1807, positioning assembly 1808, and power supply 1809.
The peripheral interface 1803 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1801 and the memory 1802. In some embodiments, the processor 1801, memory 1802, and peripheral interface 1803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1801, the memory 1802, and the peripheral device interface 1803 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The Radio Frequency circuit 1804 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1804 communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission and converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 1804 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1804 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 1804 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 1805 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1805 is a touch display screen, the display screen 1805 also has the ability to capture touch signals on or over the surface of the display screen 1805. The touch signal may be input to the processor 1801 as a control signal for processing. At this point, the display 1805 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 1805 may be one, disposed on a front panel of the terminal 1800; in other embodiments, the number of the display screens 1805 may be at least two, and each of the display screens is disposed on a different surface of the terminal 1800 or is in a foldable design; in other embodiments, the display 1805 may be a flexible display disposed on a curved surface or a folded surface of the terminal 1800. Even more, the display 1805 may be arranged in a non-rectangular irregular figure, i.e. a shaped screen. The Display 1805 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 1806 is used to capture images or video. Optionally, the camera assembly 1806 includes a front camera and a rear camera. The front camera is arranged on the front panel of the terminal, and the rear camera is arranged on the back of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1806 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1807 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1801 for processing or inputting the electric signals to the radio frequency circuit 1804 to achieve voice communication. The microphones may be provided in a plurality, respectively, at different positions of the terminal 1800 for the purpose of stereo sound collection or noise reduction. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1801 or the radio frequency circuitry 1804 to sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 1807 may also include a headphone jack.
The positioning component 1808 is used to locate the current geographic position of the terminal 1800 for navigation or LBS (Location Based Service). The positioning component 1808 may be based on the Global Positioning System (GPS) of the United States, the BeiDou System of China, the GLONASS System of Russia, or the Galileo System of the European Union.
The power supply 1809 is used to power various components within the terminal 1800. The power supply 1809 may be ac, dc, disposable or rechargeable. When the power supply 1809 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 1800 also includes one or more sensors 1810. The one or more sensors 1810 include, but are not limited to: acceleration sensor 1811, gyro sensor 1812, pressure sensor 1813, fingerprint sensor 1814, optical sensor 1815, and proximity sensor 1816.
The acceleration sensor 1811 may detect the magnitude of acceleration on the three coordinate axes of a coordinate system established with the terminal 1800. For example, the acceleration sensor 1811 may be used to detect the components of gravitational acceleration on the three coordinate axes. The processor 1801 may control the display 1805 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1811. The acceleration sensor 1811 may also be used to acquire motion data of a game or a user.
The gyro sensor 1812 may detect a body direction and a rotation angle of the terminal 1800, and the gyro sensor 1812 may cooperate with the acceleration sensor 1811 to collect a 3D motion of the user on the terminal 1800. The processor 1801 may implement the following functions according to the data collected by the gyro sensor 1812: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 1813 may be disposed on a side frame of the terminal 1800 and/or at the lower layer of the display 1805. When the pressure sensor 1813 is disposed on a side frame of the terminal 1800, the user's grip signal on the terminal 1800 can be detected, and the processor 1801 performs left/right hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1813. When the pressure sensor 1813 is disposed at the lower layer of the display 1805, the processor 1801 controls operability controls on the UI according to the user's pressure operation on the display 1805. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1814 is used to collect the fingerprint of the user, and the processor 1801 identifies the user according to the fingerprint collected by the fingerprint sensor 1814, or the fingerprint sensor 1814 identifies the user according to the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the processor 1801 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 1814 may be disposed at the front, rear, or side of the terminal 1800. When a physical key or vendor Logo is provided on the terminal 1800, the fingerprint sensor 1814 may be integrated with the physical key or vendor Logo.
The optical sensor 1815 is used to collect the ambient light intensity. In one embodiment, the processor 1801 may control the display brightness of the display screen 1805 based on the ambient light intensity collected by the optical sensor 1815. Specifically, when the ambient light intensity is high, the display brightness of the display screen 1805 is increased; when the ambient light intensity is low, the display brightness of the display 1805 is reduced. In another embodiment, the processor 1801 may also dynamically adjust the shooting parameters of the camera assembly 1806 according to the intensity of the ambient light collected by the optical sensor 1815.
The proximity sensor 1816, also called a distance sensor, is provided at the front panel of the terminal 1800 and is used to collect the distance between the user and the front surface of the terminal 1800. In one embodiment, when the proximity sensor 1816 detects that this distance gradually decreases, the processor 1801 controls the display 1805 to switch from the screen-on state to the screen-off state; when the proximity sensor 1816 detects that the distance gradually increases, the processor 1801 controls the display 1805 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 18 is not intended to be limiting of terminal 1800 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
Optionally, the computer device is provided as a server. Fig. 19 is a schematic structural diagram of a server according to an exemplary embodiment, where the server 1900 may have a relatively large difference due to different configurations or performances, and may include one or more processors (CPUs) 1901 and one or more memories 1902, where the memory 1902 stores at least one program code, and the at least one program code is loaded and executed by the processors 1901 to implement the methods provided by the above method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface, so as to perform input/output, and the server may also include other components for implementing the functions of the device, which are not described herein again.
An embodiment of the present application further provides a computer-readable storage medium having at least one program code stored therein, the at least one program code being loaded and executed by a processor to implement the operations performed in the image processing method of the foregoing embodiments, or to implement the operations performed in the special effect parameter setting method of the foregoing embodiments.
An embodiment of the present application further provides a computer program comprising at least one program code, the at least one program code being loaded and executed by a processor to implement the operations performed in the image processing method of the foregoing embodiments, or to implement the operations performed in the special effect parameter setting method of the foregoing embodiments.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium such as a read-only memory, a magnetic disk, or an optical disc.
The above description covers only optional embodiments of the present application and is not intended to limit the present application; any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within the protection scope of the present application.

Claims (34)

1. An image processing method, characterized in that the method comprises:
identifying at least one object region in a first image, wherein the object region is a region in which a target object is located;
acquiring a special effect parameter corresponding to each object region based on an object feature of the object region, wherein the object feature of each object region indicates the appearance of the target object in the object region;
and performing special effect processing on each object region according to the special effect parameter corresponding to the object region to obtain a second image.
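For illustration only, the three steps of claim 1 could be chained as in the following Python sketch; detect_object_regions, extract_object_feature, lookup_effect_params, and apply_special_effect are hypothetical helpers standing in for the recognition, feature extraction, parameter acquisition, and rendering stages, and the image is assumed to be a copyable array:

    def process_image(first_image, detect_object_regions, extract_object_feature,
                      lookup_effect_params, apply_special_effect):
        second_image = first_image.copy()
        for region in detect_object_regions(first_image):       # identify object regions
            feature = extract_object_feature(first_image, region)
            params = lookup_effect_params(feature)               # per-feature parameters
            second_image = apply_special_effect(second_image, region, params)
        return second_image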
2. The method according to claim 1, wherein the object regions are human face regions, and before the obtaining of the special effect parameter corresponding to each object region based on the object feature of each object region, the method further comprises:
for each face region, acquiring a plurality of face key points of the face region, wherein the face key points comprise at least one of face edge points or face organ edge points;
and acquiring a first face feature of the face region according to the positions of the face key points, wherein the first face feature indicates the relative positions of the face key points in the face region.
3. The method according to claim 2, wherein the obtaining the first face feature of the face region according to the positions of the plurality of face key points comprises at least one of:
acquiring a first face sub-feature of the face region according to the abscissas of the plurality of face key points, wherein the first face sub-feature represents the transverse relative positions of the plurality of face key points in the face;
and acquiring a second face sub-feature of the face region according to the ordinates of the plurality of face key points, wherein the second face sub-feature represents the longitudinal relative positions of the plurality of face key points in the face.
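A sketch of one possible reading of claim 3, normalising abscissas and ordinates against the key points' bounding box; the normalisation scheme and function name are assumptions, and the key points are assumed to form an (N, 2) array:

    import numpy as np

    def relative_position_features(keypoints):
        # keypoints: (N, 2) array of (x, y) face key point coordinates.
        xs, ys = keypoints[:, 0], keypoints[:, 1]
        first_sub = (xs - xs.min()) / (xs.max() - xs.min())    # transverse positions
        second_sub = (ys - ys.min()) / (ys.max() - ys.min())   # longitudinal positions
        return first_sub, second_sub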
4. The method according to claim 2, wherein the obtaining the first face feature of the face region according to the positions of the plurality of face key points comprises:
acquiring a first distance between a first face key point and a second face key point, and a second distance between the first face key point and a third face key point, according to the position of the first face key point, the position of the second face key point, and the position of the third face key point;
determining a ratio of the first distance to the second distance as the first face feature;
wherein the first face key point, the second face key point, and the third face key point are each any one of the plurality of face key points.
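Claim 4 reduces to a ratio of two Euclidean distances; a minimal Python sketch, assuming each point is an (x, y) pair (the function name is illustrative):

    import math

    def distance_ratio_feature(first_point, second_point, third_point):
        first_distance = math.dist(first_point, second_point)
        second_distance = math.dist(first_point, third_point)
        return first_distance / second_distance   # the first face feature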
5. The method according to claim 2, wherein the plurality of face key points comprise face edge points and face organ edge points, and the obtaining the first face feature of the face region according to the positions of the plurality of face key points comprises:
selecting face key points positioned in a first face subregion or a second face subregion from the plurality of face key points, wherein the first face subregion is a region to which eyes belong, and the second face subregion is a region to which a face central line belongs;
and acquiring the first face features of the face area according to the positions of the selected face key points.
6. The method according to claim 5, wherein the selecting, from the plurality of face key points, face key points located in the first face subregion or the second face subregion comprises:
selecting, from the plurality of face key points, a first canthus key point, a second canthus key point, and a first face edge key point at the same height as a lower eyelid key point; or,
selecting, from the plurality of face key points, a first nose bridge key point, a second nose bridge key point, and a third nose bridge key point.
7. The method according to claim 1, wherein the object regions are human face regions, and before the obtaining of the special effect parameter corresponding to each object region based on the object feature of each object region, the method further comprises:
and for each face region, carrying out recognition processing on the face region to obtain a second face feature of the face region, wherein the second face feature indicates the face shape of the face in the face region.
8. The method according to claim 7, wherein the recognizing the face region to obtain a second face feature of the face region comprises:
and determining a second face feature of the face region based on the face shape parameter of the face region.
9. The method of claim 8, wherein the determining the second face feature of the face region based on the face shape parameter of the face region comprises:
and acquiring, according to a first corresponding relation between face shape parameters and face shape features, the face shape feature corresponding to the face shape parameter of the face region as the second face feature of the face region.
10. The method according to claim 9, wherein the acquiring, according to the first corresponding relation between face shape parameters and face shape features, the face shape feature corresponding to the face shape parameter of the face region as the second face feature of the face region comprises:
acquiring a difference value between the face shape parameter of the face region and each face shape parameter in the first corresponding relation;
and acquiring, from the first corresponding relation, the face shape feature corresponding to the face shape parameter with the minimum difference value as the second face feature of the face region.
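A sketch of the minimum-difference lookup of claim 10, assuming the face shape parameter is a single scalar; for vector-valued parameters, abs would be replaced by a vector distance (names are illustrative):

    def match_face_shape(face_param, first_correspondence):
        # first_correspondence: list of (face_shape_parameter, face_shape_feature) pairs.
        _, best_feature = min(first_correspondence,
                              key=lambda pair: abs(pair[0] - face_param))
        return best_feature   # feature whose stored parameter differs least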
11. The method according to claim 9, wherein the acquiring, according to the first corresponding relation between face shape parameters and face shape features, the face shape feature corresponding to the face shape parameter of the face region as the second face feature of the face region comprises:
acquiring a difference value between the face shape parameter of the face region and any face shape parameter in the first corresponding relation;
and if the difference value does not exceed a reference threshold, acquiring the face shape feature corresponding to the any face shape parameter as the second face feature of the face region.
12. The method according to claim 9, wherein before the acquiring, according to the first corresponding relation between face shape parameters and face shape features, the face shape feature corresponding to the face shape parameter of the face region as the second face feature of the face region, the method further comprises:
acquiring a plurality of sample object regions having the same face shape feature;
acquiring a face shape parameter of each sample object region;
performing statistical processing on the acquired plurality of face shape parameters to obtain a processed face shape parameter;
and establishing the first corresponding relation between the processed face shape parameter and the face shape feature.
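Claim 12 can be read as aggregating the parameters of samples that share a face shape feature; the sketch below assumes scalar parameters and uses the mean as the statistical processing (a median would qualify equally):

    import statistics

    def build_first_correspondence(samples):
        # samples: {face_shape_feature: [face shape parameter of each sample region]}
        return {feature: statistics.mean(params)
                for feature, params in samples.items()}

For example, build_first_correspondence({"round": [0.92, 0.95, 0.90]}) maps "round" to approximately 0.923.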
13. The method of claim 8, wherein the face shape parameters comprise at least one of:
the length and width of the face;
a forehead width and a chin width of the human face;
the round tip coefficient of the lower jaw of the human face;
the chin apex coefficient of the face.
14. The method according to claim 1, wherein the object regions are human face regions, and the acquiring the special effect parameter corresponding to each object region based on the object feature of each object region comprises:
calling a face recognition model to perform recognition processing on each face region to obtain a user identifier corresponding to each face region;
and acquiring, based on each acquired user identifier, the special effect parameter corresponding to the user identifier.
15. The method according to claim 1, wherein the acquiring the special effect parameter corresponding to each object region based on the object feature of each object region comprises:
and acquiring the special effect parameter corresponding to each object feature according to a second corresponding relation between object features and special effect parameters.
16. The method according to claim 15, wherein before the acquiring the special effect parameter corresponding to each object feature according to the second corresponding relation between object features and special effect parameters, the method further comprises:
acquiring special effect parameters corresponding to a plurality of reference object regions having the same object feature;
performing statistical processing on the acquired plurality of special effect parameters to obtain a processed special effect parameter;
and establishing the second corresponding relation between the object feature and the processed special effect parameter.
17. The method according to claim 15, wherein before the acquiring the special effect parameter corresponding to each object feature according to the second corresponding relation between object features and special effect parameters, the method further comprises:
acquiring, in response to a special effect setting operation on any currently displayed object region, a special effect parameter set for the object region;
acquiring an object feature of the object region;
and establishing the second corresponding relation between the object feature and the special effect parameter.
18. The method according to claim 17, wherein the acquiring, in response to the special effect setting operation on any currently displayed object region, the special effect parameter set for the object region comprises:
displaying at least one candidate special effect parameter in response to a trigger operation on a special effect setting option of the object region;
and determining, in response to a trigger operation on any candidate special effect parameter, the candidate special effect parameter as the special effect parameter of the object region.
19. The method according to claim 15, wherein after the performing the special effect processing on each object region according to the special effect parameter corresponding to each object region to obtain the second image, the method further comprises:
acquiring, in response to a special effect setting operation on any object region in the second image, a target special effect parameter set for the object region;
and updating the second corresponding relation according to the object feature of the object region and the target special effect parameter.
20. The method according to claim 1, wherein the performing special effect processing on each object region according to the special effect parameter corresponding to each object region to obtain the second image comprises:
cropping each object region from the first image;
for each object region, performing special effect processing on the object region by using the special effect parameter corresponding to the object region;
and backfilling each object region after the special effect processing into the first image to obtain the second image.
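The crop/process/backfill flow of claims 20 and 21 maps naturally onto array slicing. A sketch assuming the first image is a NumPy-style H×W×C array, regions are axis-aligned (x, y, w, h) rectangles, and the hypothetical apply_effect preserves the patch size:

    def crop_process_backfill(first_image, regions, params_per_region, apply_effect):
        second_image = first_image.copy()
        for (x, y, w, h), params in zip(regions, params_per_region):
            patch = first_image[y:y + h, x:x + w].copy()   # crop the object region
            patch = apply_effect(patch, params)            # per-region special effect
            second_image[y:y + h, x:x + w] = patch         # backfill into the image
        return second_image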
21. The method according to claim 1, wherein after the identifying the at least one object region in the first image and before the acquiring the special effect parameter corresponding to each object region based on the object feature of each object region, the method further comprises:
cropping each object region from the first image;
and the performing special effect processing on each object region according to the special effect parameter corresponding to each object region to obtain the second image comprises:
for each object region, performing special effect processing on the object region by using the special effect parameter corresponding to the object region;
and backfilling each object region after the special effect processing into the first image to obtain the second image.
22. The method according to claim 21, wherein after the cropping each object region from the first image, the method further comprises:
sequentially putting each object region into a buffer queue;
and the acquiring the special effect parameter corresponding to each object region based on the object feature of each object region comprises:
extracting an object region from the buffer queue, and acquiring the special effect parameter corresponding to the object region based on the object feature of the object region;
and extracting a next object region from the buffer queue, and acquiring the special effect parameter corresponding to the next object region based on the object feature of the next object region, until the special effect parameter corresponding to the last object region in the buffer queue is acquired.
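A sketch of the buffer-queue traversal of claim 22; extract_feature and lookup_params are hypothetical stand-ins for the feature extraction and parameter acquisition steps:

    from collections import deque

    def process_region_queue(cropped_regions, extract_feature, lookup_params):
        buffer_queue = deque(cropped_regions)   # regions enqueued in order after cropping
        params_per_region = []
        while buffer_queue:                     # until the last region is handled
            region = buffer_queue.popleft()
            params_per_region.append(lookup_params(extract_feature(region)))
        return params_per_region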
23. The method according to claim 1, wherein the first image is an image captured during a live broadcast, and after the performing special effect processing on each object region according to the special effect parameter corresponding to each object region to obtain the second image, the method further comprises at least one of:
displaying the second image in a live interface;
and sending the second image to a live broadcast server, the live broadcast server sending the second image to a viewer client watching the live broadcast.
24. A special effect parameter setting method, the method comprising:
displaying an image, wherein the image comprises at least one object region, and the object region is a region in which a target object is located;
displaying a special effect setting identifier of each object region;
and acquiring, in response to a trigger operation on the special effect setting identifier of any object region, a special effect parameter set for the object region.
25. The method according to claim 24, wherein the target object is a human face, and the method further comprises:
displaying, when the camera is started, first prompt information for prompting the user to shoot a front face;
or displaying second prompt information for prompting the user to shoot faces in multiple orientations.
26. The method according to claim 24, wherein the displaying the special effect setting identifier of each object region comprises:
performing object recognition on the image, and determining at least one object region in the image;
and displaying at least one identification frame based on the determined at least one object region, wherein each object region is located within its corresponding identification frame.
27. The method according to claim 26, wherein after the displaying the at least one identification frame based on the determined at least one object region, the method further comprises:
displaying third prompt information for prompting the trigger operation to be performed on the identification frame.
28. The method according to claim 24, wherein the acquiring, in response to the trigger operation on the special effect setting identifier of any object region, the special effect parameter set for the object region comprises:
displaying, in response to the trigger operation on the special effect setting identifier of the object region, a special effect setting interface for displaying at least one candidate special effect parameter;
and determining, in response to a trigger operation on any candidate special effect parameter, the candidate special effect parameter as the special effect parameter of the object region.
29. The method according to claim 24, wherein after the acquiring, in response to the trigger operation on the special effect setting identifier of any object region, the special effect parameter set for the object region, the method further comprises:
acquiring an object feature of the object region, wherein the object feature indicates the appearance of the target object in the object region;
and establishing a second corresponding relation between the object feature and the special effect parameter set for the object region.
30. The method according to claim 29, wherein after the establishing the second corresponding relation between the object feature and the special effect parameter set for the object region, the method further comprises:
storing the second corresponding relation in a configuration file.
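Claim 30 only requires persisting the second corresponding relation; the sketch below assumes JSON as the configuration format (the claim does not fix one) and string-serialisable feature keys, with an illustrative file name:

    import json

    def save_second_correspondence(correspondence, path="effect_config.json"):
        # correspondence: {object feature key: special effect parameters}
        with open(path, "w", encoding="utf-8") as f:
            json.dump(correspondence, f, ensure_ascii=False, indent=2)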
31. An image processing apparatus, characterized in that the apparatus comprises:
the image acquisition module is used for acquiring a first image comprising at least one object region, wherein the object region is a region in which a target object is located;
the special effect acquisition module is used for acquiring a special effect parameter corresponding to each object region based on an object feature of the object region, wherein the object feature of the object region indicates the appearance of the target object in the object region;
and the special effect processing module is used for performing special effect processing on each object region according to the special effect parameter corresponding to the object region to obtain a second image resulting from processing of the first image.
32. A special effect parameter setting apparatus, characterized in that the apparatus comprises:
the display module is used for displaying an image, wherein the image comprises at least one object region, and the object region is a region in which a target object is located;
the display module is further used for displaying a special effect setting identifier of each object region;
and the parameter acquisition module is used for acquiring, in response to a trigger operation on the special effect setting identifier of any object region, a special effect parameter set for the object region.
33. A computer device comprising a processor and a memory, the memory having stored therein at least one program code, the at least one program code being loaded into and executed by the processor to perform operations carried out in the image processing method according to any one of claims 1 to 23; or to implement the operations performed in the special effects parameter setting method of any of claims 24 to 30.
34. A computer-readable storage medium having at least one program code stored therein, the at least one program code being loaded and executed by a processor to perform the operations performed in the image processing method according to any one of claims 1 to 23; or to implement the operations performed in the special effects parameter setting method of any of claims 24 to 30.
CN202011315161.1A 2020-11-20 2020-11-20 Image processing method, special effect parameter setting method, device, equipment and medium Active CN112419143B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011315161.1A CN112419143B (en) 2020-11-20 2020-11-20 Image processing method, special effect parameter setting method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011315161.1A CN112419143B (en) 2020-11-20 2020-11-20 Image processing method, special effect parameter setting method, device, equipment and medium

Publications (2)

Publication Number Publication Date
CN112419143A (en) 2021-02-26
CN112419143B CN112419143B (en) 2024-08-06

Family

ID=74777238

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011315161.1A Active CN112419143B (en) 2020-11-20 2020-11-20 Image processing method, special effect parameter setting method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN112419143B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023061461A1 (en) * 2021-10-14 2023-04-20 北京字跳网络技术有限公司 Special effect playback method and system for live broadcast room, and device
WO2023142474A1 (en) * 2022-01-28 2023-08-03 上海商汤智能科技有限公司 Image processing method and apparatus, electronic device, storage medium, and computer program product

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107995415A (en) * 2017-11-09 2018-05-04 深圳市金立通信设备有限公司 A kind of image processing method, terminal and computer-readable medium
CN108958610A (en) * 2018-07-27 2018-12-07 北京微播视界科技有限公司 Special efficacy generation method, device and electronic equipment based on face
CN109614902A (en) * 2018-11-30 2019-04-12 深圳市脸萌科技有限公司 Face image processing process, device, electronic equipment and computer storage medium
CN111753784A (en) * 2020-06-30 2020-10-09 广州酷狗计算机科技有限公司 Video special effect processing method and device, terminal and storage medium

Also Published As

Publication number Publication date
CN112419143B (en) 2024-08-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant