CN109302628B - Live broadcast-based face processing method, device, equipment and storage medium


Info

Publication number
CN109302628B
Authority
CN
China
Prior art keywords
face
target
data
contour
target face
Prior art date
Legal status
Active
Application number
CN201811241860.9A
Other languages
Chinese (zh)
Other versions
CN109302628A (en)
Inventor
Hua Luyan (华路延)
Current Assignee
Guangzhou Huya Technology Co Ltd
Original Assignee
Guangzhou Huya Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Huya Technology Co Ltd filed Critical Guangzhou Huya Technology Co Ltd
Priority to CN202110213455.1A (divisional application, published as CN113329252B)
Priority to CN201811241860.9A (published as CN109302628B)
Publication of CN109302628A
Application granted
Publication of CN109302628B
Legal status: Active

Classifications

    • H04N 21/4223 — Client input peripherals for selective content distribution: cameras
    • G06F 18/22 — Pattern recognition, analysing: matching criteria, e.g. proximity measures
    • G06T 3/04 — Geometric image transformations in the plane of the image: context-preserving transformations, e.g. by using an importance map
    • G06V 40/161 — Recognition of human faces: detection; localisation; normalisation
    • G06V 40/168 — Recognition of human faces: feature extraction; face representation
    • H04N 21/4312 — Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N 21/44008 — Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream


Abstract

The invention discloses a live broadcast-based face processing method, apparatus, device and storage medium. The method comprises the following steps: when a live broadcast room is started, collecting image data; performing face detection on the image data to obtain target face data and the target face features within it; comparing the target face features with preset standard face features and performing image processing on the target face data according to the comparison result; and generating a live broadcast data stream of the live broadcast room from the image-processed target face data. The method solves the problems of existing live video technology that automatic beautification is excessive and unnatural, while manual beautification costs the user a great deal of time, with troublesome debugging steps and complex parameters.

Description

Live broadcast-based face processing method, device, equipment and storage medium
Technical Field
The embodiments of the invention relate to image processing technology, and in particular to a live broadcast-based face processing method, apparatus, device and storage medium.
Background
Live broadcast has become a widely popular form of entertainment. To present a satisfactory image, an anchor generally needs live broadcast software with a video retouching function. As mobile live broadcast software spreads, users' expectations of its beautification features keep rising: the beautified result should stay close to the real face while still concealing flaws well, and common effects such as skin beautifying, skin smoothing and face slimming are in especially high demand.
Existing beautification methods for face images apply one fixed set of beautification templates (whitening, skin smoothing and the like) to every face once it has been recognised, as common retouching applications do, so the effect cannot adapt to the characteristics of different faces and remains uniform. More comprehensive beautification requires manual adjustment by the user, which suffers from a long learning curve, troublesome steps, complex parameters, low program running efficiency, and an excessive, unnatural result.
Disclosure of Invention
The invention provides a live broadcast-based face processing method, apparatus, device and storage medium, solving the problems of existing live video technology that automatic beautification is excessive and unnatural, while manual beautification costs the user a great deal of time, with troublesome debugging steps and complex parameters.
In a first aspect, an embodiment of the present invention provides a live broadcast-based face processing method, including:
when a live broadcast room is started, collecting image data;
performing face detection in the image data to obtain target face data and target face features in the target face data;
comparing the target face features with preset standard face features, and carrying out image processing on the target face data according to the comparison result;
and generating a live broadcast data stream of the live broadcast room according to the target face data after the image processing.
In a second aspect, an embodiment of the present invention further provides a live broadcast-based face processing apparatus, including:
the image acquisition module is used for acquiring image data when the live broadcast room is started;
the characteristic extraction module is used for carrying out face detection in the image data to obtain target face data and target face characteristics in the target face data;
the characteristic comparison module is used for comparing the target face characteristic with a preset standard face characteristic and carrying out image processing on the target face data according to a comparison result;
and the data stream generation module is used for generating the live broadcast data stream of the live broadcast room according to the target face data after the image processing.
In a third aspect, an embodiment of the present invention further provides an electronic device that includes a central processing unit and a graphics processing unit; the central processing unit comprises an image acquisition module, a feature extraction module and a data stream generation module, and the graphics processing unit comprises a feature comparison module;
the image acquisition module is used for acquiring image data when the live broadcast room is started;
the feature extraction module is used for carrying out face detection in the image data to obtain target face data and target face features in the target face data;
the characteristic comparison module is used for comparing the target face characteristic with a preset standard face characteristic and carrying out image processing on the target face data according to a comparison result;
and the data stream generation module is used for generating the live broadcast data stream of the live broadcast room according to the target face data after image processing.
In a fourth aspect, an embodiment of the present invention further provides an electronic device, including:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the live broadcast-based face processing method of any embodiment.
In a fifth aspect, the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a live broadcast-based face processing method according to any embodiment.
In the invention, target face data is acquired and the target face features are determined from it; the target face features are compared with the standard face features, the target face data is image-processed according to the comparison result, and the result is finally applied to the live broadcast data stream. This solves the problems of existing live video technology that automatic beautification is excessive and unnatural while manual beautification costs the user a great deal of time with troublesome debugging steps and complex parameters, and realises automatically optimised beautification of the face during live video according to information such as the face contour and the size and spacing of the eyes. The time the user spends on parameter tuning is reduced, high program running efficiency, low power consumption and quick response are achieved, and user experience is ultimately improved.
Drawings
Fig. 1 is a flowchart of a live broadcast-based face processing method according to an embodiment of the present invention;
fig. 2A is a flowchart of a live broadcast-based face processing method according to a second embodiment of the present invention;
FIG. 2B is a schematic diagram of obtaining target image data from image data according to a second embodiment of the present invention;
fig. 3 is a structural diagram of a face processing apparatus according to a third embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a live broadcast-based face processing method according to an embodiment of the present invention. The technical solution of this embodiment is optionally applicable to scenarios in which video information is generated by a camera device while an anchor broadcasts live; it can also be applied to other scenarios wherever video information needs beautification. The method is executed by a live broadcast-based face processing apparatus, which can be implemented in software and/or hardware and is generally integrated in an electronic device. The electronic device ordinarily has both a CPU (Central Processing Unit) and a GPU (Graphics Processing Unit), although a device with only a CPU can also perform the operations.
The scheme mainly targets anchors broadcasting through camera equipment on a live platform. A live platform comprises multiple live broadcast rooms, each having a Uniform Resource Locator (URL), a room number, a current state (in use or idle) and live content; the platform can cluster the rooms according to their live content. The platform's users divide into viewer users and anchor users, whose different roles give them different permissions and data processing modes. To broadcast, an anchor needs live broadcast software working together with hardware, and may broadcast via a computer, camera equipment, a mobile terminal and the like.
Referring to fig. 1, the method includes:
s101, when a live broadcast room is started, image data are collected.
Here, starting the live broadcast room means the anchor launches the live broadcast software. The image data is the set of per-pixel grayscale values expressed numerically. Collecting image data means capturing the anchor's live pictures through the camera device. Since this embodiment concerns a live broadcast scenario, audio data should be captured at the same time as the image data.
Specifically, when the anchor starts the live broadcast room, the face processing apparatus captures the anchor's live pictures through the camera device, obtaining image data frame by frame.
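As an illustration of this step (an assumption for clarity, not part of the patent), per-frame image data can be collected with OpenCV roughly as follows; the camera index is a placeholder:

```python
# Illustrative sketch only (not from the patent): collecting per-frame
# image data from the camera device once the live broadcast room starts.
import cv2

def capture_frames(camera_index=0):
    """Yield BGR frames from the camera device, one per loop iteration."""
    cap = cv2.VideoCapture(camera_index)
    if not cap.isOpened():
        raise RuntimeError("camera device could not be opened")
    try:
        while True:
            ok, frame = cap.read()  # frame: H x W x 3 uint8 array
            if not ok:
                break
            yield frame
    finally:
        cap.release()
```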
S102, carrying out face detection in the image data to obtain target face data and target face features in the target face data.
Face detection means using a face detection method to determine whether a face image exists in the image data and, if so, information such as its specific position. The target face data is the face data obtained from the image data. The target face features are specific parts within the target face data, such as the target face contour feature and the target eye contour feature.
Specifically, the face processing apparatus performs face detection on the image data via the CPU, abstracts the detected face image into target face data, and processes the target face data to obtain the target face features.
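A minimal sketch of this step, assuming OpenCV's bundled Haar cascade as the detector (the patent does not name a detection method, and the landmark model that would yield the contour features is left abstract here):

```python
# Sketch under stated assumptions: locate the target face with OpenCV's
# bundled Haar cascade; the largest detection is taken as the target face.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_target_face(frame):
    """Return the (x, y, w, h) box of the largest detected face, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(boxes) == 0:
        return None
    return max(boxes, key=lambda b: b[2] * b[3])  # largest box = target face
```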
S103, comparing the target face features with preset standard face features, and carrying out image processing on the target face data according to the comparison result.
The standard face is the beautification target; it can be a widely acknowledged attractive face derived from big data, or an attractive face configured by the user. The preset standard face features are the features obtained by processing the standard face.
Specifically, the face processing apparatus compares, via the GPU, the obtained target face features with the standard face features derived from the standard face, and adjusts the target face data according to the comparison result so that it fits the standard face data. Alternatively, the CPU can extract the target face features and send them to the GPU for comparison, or the comparison can be performed entirely on the CPU.
And S104, generating a live broadcast data stream of the live broadcast room according to the target face data after the image processing.
The live data stream comprises a data stream for local playback and a data stream delivered to viewer user clients. The audio and video are packaged into video files and uploaded to a live broadcast server as a stream, and the server can then provide them to viewers.
Specifically, the face processing apparatus uses the CPU to generate the live data stream of the live broadcast room from the image-processed target face data (the target face data adjusted according to the comparison result to fit the standard face data); the stream can be used for video playback and for data distribution (e.g. streaming over a content delivery network).
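By way of illustration only (the patent does not prescribe a packaging tool), processed frames could be pushed to an ingest point by piping raw video into FFmpeg; the RTMP URL and encoder settings below are placeholders:

```python
# Assumed sketch: encode processed frames and push them as a live stream
# by piping raw BGR video into FFmpeg. URL and settings are placeholders.
import subprocess

def open_stream(width, height, fps, rtmp_url):
    cmd = [
        "ffmpeg",
        "-f", "rawvideo", "-pix_fmt", "bgr24",   # raw frames on stdin
        "-s", f"{width}x{height}", "-r", str(fps),
        "-i", "-",
        "-c:v", "libx264", "-preset", "veryfast",
        "-f", "flv", rtmp_url,                   # FLV container for RTMP
    ]
    return subprocess.Popen(cmd, stdin=subprocess.PIPE)

# Usage: stream = open_stream(1280, 720, 30, "rtmp://example.invalid/live")
#        then stream.stdin.write(frame.tobytes()) for each processed frame.
```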
The embodiment of the invention determines the target face features by acquiring the target face data, compares the target face features with the standard face features, performs image processing on the target face data according to the comparison result, and finally generates the live broadcast data stream. This solves the problems of existing live video technology that automatic beautification is excessive and unnatural while manual beautification costs the user a great deal of time with troublesome debugging steps and complex parameters, and realises automatically optimised beautification of the face during live video according to information such as the face contour and the size and spacing of the eyes. The time the user spends on parameter tuning is reduced, high program running efficiency, low power consumption and quick response are achieved, and user experience is ultimately improved.
Example two
Fig. 2A is a flowchart of a live broadcast-based face processing method according to a second embodiment of the present invention. This embodiment refines the first embodiment and mainly describes how the target face features are fitted to the standard face features when they are, respectively, the target face contour feature and the target eye contour feature.
Referring to fig. 2A, the present embodiment specifically includes the following steps:
s201, when the live broadcast room is started, image data are collected.
Specifically, when the anchor starts the live broadcast room, the face processing apparatus captures the anchor's live pictures through the camera device, obtaining image data frame by frame.
S202, carrying out face detection in the image data to obtain target face data and target face features in the target face data.
Specifically, the face processing apparatus performs face detection on the image data via the CPU, abstracts the detected face image into target face data, and processes the target face data to obtain the target face features.
S203, comparing the target face contour feature with the standard face contour feature, and carrying out image processing on the face contour in the target face data according to the comparison result.
Wherein the target face contour feature is a face contour portion in the target face feature. The standard facial contour feature refers to a facial contour part in the standard facial feature.
Specifically, the face processing apparatus compares the face contour part of the target face features with the face contour part of the standard face, and adjusts the target face contour feature within an appropriate range, taking the standard face contour feature as the fitting target, so that the target face data fits the standard face data.
Optionally, step S203 may be subdivided into the following steps:
determining a first target bending and stretching coefficient through a gradient difference between a first gradient value of the target face contour feature and a second gradient value of the standard face contour feature;
and processing the image according to the first target bending and stretching coefficient on the basis of the face contour in the target face data.
Specifically, a first gradient value of the target face contour feature is calculated; the standard face contour feature is stored in the server in advance, so that the second gradient value of the standard face contour feature can be directly obtained from the server; calculating a gradient difference between the first gradient value and the second gradient value; calculating the gradient difference through a first bending and stretching function to obtain a first target bending and stretching coefficient; and processing the image according to the first target bending and stretching coefficient on the basis of the face contour in the target face data.
Wherein, performing image processing according to the first target bending and stretching coefficient on the basis of the face contour in the target face data specifically comprises:
determining an adjustment reference value;
selecting a point to be adjusted from a face contour in target face data, and determining an adjustment coefficient corresponding to the point to be adjusted; wherein, the number of the points to be adjusted is two or more;
determining an adjustment range by taking the point to be adjusted as a circle center and the product of the adjustment reference value and the adjustment coefficient as a radius;
carrying out image processing on the face contour in the target face data in the adjustment range according to the first target bending and stretching coefficient to obtain a middle face contour;
and mixing the middle face contour corresponding to each point to be adjusted to obtain the face contour after image processing.
The adjustment reference value is the parameter that determines the radius of the adjustment range; preferably, it is set to the distance from the nose tip to the chin in the target face data. The points to be adjusted are points on the face contour in the target face data; selecting more of them yields a finer face contour. The adjustment coefficient corrects the radius of the adjustment range, may be chosen between 0.8 and 1.2, and differs between points to be adjusted. Several blending modes are possible; preferably, the four intermediate face contours are superimposed and, where they overlap, the point closest to the nose tip is taken, after which the line is smoothed. A code sketch of this adjustment loop follows the formula below.
The formula for processing the face contour in the target face data within the adjustment range according to the first target bending and stretching coefficient, obtaining the intermediate face contour, can be written as:

$$\mathrm{Image\_face}' = \Big[\, f_{1}\!\left(\sigma-\sigma'\right) \Big]_{(\alpha\times R)}\!\left(\mathrm{Image\_face}\right)$$

where Image_face′ denotes the intermediate face contour corresponding to a given point to be adjusted; α is the adjustment coefficient of that point; R is the adjustment reference value, so that (α × R) is the radius of the adjustment range centred on the point to be adjusted; σ is the first gradient value and σ′ the second gradient value; f₁(σ − σ′) denotes substituting the gradient difference into the first bending and stretching function to obtain the first target bending and stretching coefficient; the subscript (α × R) indicates that the processing is confined to the adjustment range; and Image_face denotes the target face contour feature, on the basis of which the processing is performed.
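A minimal sketch of the adjustment loop, assuming contours are given as point arrays and assuming tanh as the first bending and stretching function f₁ (the patent gives no closed form); line smoothing after blending is omitted for brevity:

```python
# Sketch under stated assumptions: per-point circular adjustment ranges,
# displacement by the first target bending/stretching coefficient, and
# blending overlaps by keeping the point closest to the nose tip.
import numpy as np

def f1(gradient_diff):
    """Assumed first bending/stretching function (not specified by the patent)."""
    return np.tanh(gradient_diff)

def adjust_face_contour(contour, standard_contour, nose_tip, chin, alphas):
    """contour, standard_contour: (N, 2) point arrays; alphas: (N,) in [0.8, 1.2]."""
    R = np.linalg.norm(nose_tip - chin)                # adjustment reference value
    sigma = np.gradient(contour, axis=0)               # first gradient value
    sigma_std = np.gradient(standard_contour, axis=0)  # second gradient value
    coeff = f1(sigma - sigma_std)                      # first target coefficient
    intermediates = []
    for p, alpha in zip(contour, alphas):              # each point to be adjusted
        radius = alpha * R                             # adjustment-range radius
        inside = np.linalg.norm(contour - p, axis=1) <= radius
        moved = contour.astype(float).copy()
        moved[inside] += coeff[inside] * radius        # intermediate face contour
        intermediates.append(moved)
    stacked = np.stack(intermediates)                  # (K, N, 2)
    nearest = np.argmin(np.linalg.norm(stacked - nose_tip, axis=2), axis=0)
    return stacked[nearest, np.arange(contour.shape[0])]  # blended contour
```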
And S204, in the image data, covering the target face data after the image processing with the target face data before the image processing to obtain target image data.
Here, covering means that the target face feature part of the target face data before image processing is filled with a solid colour and, once the filling is complete, the image-processed target face data is loaded into the filled part. The target image data is the beautified image data used to generate the data stream.
Specifically, after the target face features (face contour feature and/or eye contour feature) are determined, the corresponding part of the image data is cut out or filled with a solid colour, and the image-processed target face data is overlaid on the target face data before image processing; the image data obtained at this point is taken as the target image data. In this way, accurate beautification of the target face feature region is achieved without distorting the rest of the background.
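A sketch of this covering step, with the mask built from the face contour (an assumed representation) and black as the solid fill colour:

```python
# Assumed sketch: fill the original face region with a solid colour, then
# load the image-processed face data into the filled region.
import cv2
import numpy as np

def cover_face(image, processed, contour):
    """image, processed: same-shape H x W x 3 arrays; contour: (N, 2) points."""
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [contour.astype(np.int32)], 255)
    target = image.copy()
    target[mask == 255] = 0                       # solid-colour fill first
    target[mask == 255] = processed[mask == 255]  # then load the processed face
    return target                                 # target image data
```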
S205, generating a live broadcast data stream of the live broadcast room based on the target image data.
Specifically, the face processing apparatus uses the CPU to generate the live data stream of the live broadcast room from the target image data; the stream can be used for video playback and data distribution (e.g. streaming over a content delivery network).
On the basis of the above, step S203 describes the case where the target face feature is the target face contour feature. It may be replaced by the case where the target face feature is the target eye contour feature, denoted step S206. Steps S203 and S206 may be performed alternatively or together; preferably, the face contour is processed first and the eye contour second.
Step S206 is to compare the target eye contour feature with the standard eye contour feature, and perform image processing on the eye contour in the target face data according to the comparison result.
The target eye contour feature is an eye contour part in the target human face feature. The standard eye contour feature refers to an eye contour part in the standard human face feature.
Specifically, the face processing device compares an eye contour part in the target face feature with an eye contour part in a standard face, and adjusts the target eye contour feature in an appropriate range by taking the standard eye contour feature as a fitting target, so that the target face data fits the standard face data.
Optionally, step S206 may be subdivided into the following steps:
step one, calculating the distance between the target eye contour features to obtain the target eye size and the target eye distance in the target face data;
step two, obtaining the standard eye size and the standard eye distance in the standard face data;
step three, calculating the size difference between the target eye size and the standard eye size;
step four, calculating the distance difference between the target eye distance and the standard eye distance;
step five, calculating the size difference through an amplification and reduction function to obtain a target amplification and reduction coefficient;
step six, calculating the distance difference through a second bending and stretching function to obtain a second target bending and stretching coefficient;
and seventhly, performing image processing according to the target magnification and reduction coefficient and the second target bending and stretching coefficient on the basis of the eye contour in the target face data.
Optionally, the image processing in step S206 can be expressed by the following formula (a code sketch of the steps above follows it):

$$\mathrm{Image}'_{\mathrm{eye}} = \Big[\, g\!\left(\Delta s\right),\; f_{2}\!\left(\Delta d\right) \Big]\!\left(\mathrm{Image\_eye}\right)$$

where Image′_eye denotes the eye contour after image processing; Δs denotes the size difference between the target eye size and the standard eye size, and Δd the distance difference between the target eye spacing and the standard eye spacing; g(Δs) denotes substituting the size difference into the magnification and reduction function to obtain the target magnification and reduction coefficient; f₂(Δd) denotes substituting the distance difference into the second bending and stretching function to obtain the second target bending and stretching coefficient; the bracket indicates that the two coefficients are applied simultaneously; and Image_eye denotes the target eye contour, on the basis of which the processing is performed.
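A sketch of steps one to six; the magnification and reduction function g and the second bending and stretching function f₂ are not given in closed form by the patent, so simple bounded stand-ins are assumed, and step seven then applies both coefficients per the formula above:

```python
# Sketch under stated assumptions: derive eye size/spacing from contour
# points, then compute the two coefficients from the differences.
import numpy as np

def eye_metrics(left_eye, right_eye):
    """left_eye, right_eye: (M, 2) eye-contour point arrays."""
    size = (np.ptp(left_eye, axis=0).max() + np.ptp(right_eye, axis=0).max()) / 2.0
    spacing = np.linalg.norm(left_eye.mean(axis=0) - right_eye.mean(axis=0))
    return size, spacing

def eye_coefficients(target_eyes, standard_eyes):
    t_size, t_dist = eye_metrics(*target_eyes)    # step one
    s_size, s_dist = eye_metrics(*standard_eyes)  # step two
    size_diff = t_size - s_size                   # step three
    dist_diff = t_dist - s_dist                   # step four
    g = lambda d: 1.0 - 0.1 * np.tanh(d)          # assumed magnify/reduce function
    f2 = np.tanh                                  # assumed second bend/stretch
    return g(size_diff), f2(dist_diff)            # steps five and six
```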
Fig. 2B is a schematic diagram of obtaining target image data through image data according to the second embodiment of the present invention. Referring to fig. 2B, the target face contour feature 23 in the image data 20 is subjected to the processing of step S203, obtaining a face contour 24 after the image processing; the target eye contour feature 21 in the image data 20 is processed in step S206 to obtain an eye contour 22 after image processing; the eye contour 22 and the face contour 24 after the image processing are combined to obtain target image data 25.
The embodiment of the invention determines the target face features by acquiring the target face data, compares the target face features with the standard face features, performs image processing on the target face data according to the comparison result, and finally applies the result to the live broadcast data stream. This embodiment further discloses how the target face features are fitted to the standard face features when they are, respectively, the target face contour feature and the target eye contour feature. This solves the problems of existing live video technology that automatic beautification is excessive and unnatural while manual beautification costs the user a great deal of time with troublesome debugging steps and complex parameters, and realises automatically optimised beautification of the face during live video according to information such as the face contour and the size and spacing of the eyes. The time the user spends on parameter tuning is reduced, high program running efficiency, low power consumption and quick response are achieved, and user experience is ultimately improved.
EXAMPLE III
Fig. 3 is a structural diagram of a face processing device according to a third embodiment of the present invention. The device includes: the system comprises an image acquisition module 31, a feature extraction module 32, a feature comparison module 33 and a data stream generation module 34. Wherein:
the image acquisition module 31 is used for acquiring image data when the live broadcast room is started;
a feature extraction module 32, configured to perform face detection on the image data to obtain target face data and target face features in the target face data;
the feature comparison module 33 is configured to compare the target face features with preset standard face features, and perform image processing on the target face data according to a comparison result;
and the data stream generating module 34 is configured to generate a live data stream of the live broadcast room according to the target face data after the image processing.
The embodiment of the invention determines the target face features by acquiring the target face data, compares the target face features with the standard face features, performs image processing on the target face data according to the comparison result, and finally applies the result to the live broadcast data stream. This solves the problems of existing live video technology that automatic beautification is excessive and unnatural while manual beautification costs the user a great deal of time with troublesome debugging steps and complex parameters, and realises automatically optimised beautification of the face during live video according to information such as the face contour and the size and spacing of the eyes. The time the user spends on parameter tuning is reduced, high program running efficiency, low power consumption and quick response are achieved, and user experience is ultimately improved.
On the basis of the above embodiment, the target face features include target face contour features, and the standard face features include standard face contour features in standard face data; the feature comparison module is then configured to:
and comparing the target face contour feature with the standard face contour feature, and carrying out image processing on the face contour in the target face data according to the comparison result.
On the basis of the above embodiment, the comparing the target face contour feature with the standard face contour feature, and performing image processing on the face contour in the target face data according to the comparison result includes:
determining a first target bending and stretching coefficient through a gradient difference between a first gradient value of the target face contour feature and a second gradient value of the standard face contour feature;
and processing the image according to the first target bending and stretching coefficient on the basis of the face contour in the target face data.
On the basis of the above implementation, the image processing according to the first target bending and stretching coefficient on the basis of the face contour in the target face data specifically includes:
determining an adjustment reference value;
selecting a point to be adjusted from a face contour in target face data, and determining an adjustment coefficient corresponding to the point to be adjusted; wherein, the number of the points to be adjusted is two or more;
determining an adjustment range by taking the point to be adjusted as a circle center and the product of the adjustment reference value and the adjustment coefficient as a radius;
carrying out image processing on the face contour in the target face data in the adjustment range according to the first target bending and stretching coefficient to obtain a middle face contour;
and mixing the middle face contour corresponding to each point to be adjusted to obtain the face contour after image processing.
On the basis of the implementation, the target human face features comprise target eye contour features, and the standard human face features comprise standard eye contour features of a standard human face; the feature comparison module is then configured to:
and comparing the target eye contour feature with the standard eye contour feature, and carrying out image processing on the eye contour in the target face data according to the comparison result.
On the basis of the implementation, comparing the target eye contour feature with the standard eye contour feature, and performing image processing on the eye contour in the target face data according to the comparison result to fit the eye contour in the standard face data, includes:
calculating the distance between the target eye contour features to obtain the size of a target eye and the distance between the target eyes in the target face data;
obtaining standard eye size and standard eye distance in the standard face data;
calculating a size difference between the target eye size and the standard eye size;
calculating a distance difference between the target eye separation and the standard eye separation;
calculating the size difference through an amplification and reduction extension function to obtain a target amplification and reduction coefficient;
calculating the distance difference through a second bending and stretching function to obtain a second target bending and stretching coefficient;
and performing image processing according to the target magnification and reduction coefficient and the second target bending and stretching coefficient on the basis of the eye contour in the target face data.
On the basis of the above implementation, the data stream generation module is specifically configured to:
in the image data, covering the target face data after the image processing with the target face data before the image processing as target image data;
and generating a live broadcast data stream of the live broadcast room based on the target image data.
The live broadcast-based face processing device provided by the embodiment can be used for executing the live broadcast-based face processing method provided by any one of the embodiments, and has corresponding functions and beneficial effects.
Example four
Fig. 4 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention. As shown in fig. 4, the electronic device includes a processor 40, a memory 41, a communication module 42, an input device 43 and an output device 44; the number of processors 40 in the electronic device may be one or more, and they are generally configured to include a central processing unit and a graphics processing unit; the central processing unit comprises the image acquisition module 31, the feature extraction module 32 and the data stream generation module 34, and the graphics processing unit comprises the feature comparison module 33; one processor 40 is illustrated in fig. 4. The processor 40, memory 41, communication module 42, input device 43 and output device 44 in the electronic device may be connected by a bus or other means; connection by a bus is exemplified in fig. 4.
The memory 41 is used as a computer-readable storage medium for storing software programs, computer-executable programs, and modules, such as the modules corresponding to the live broadcast-based face processing method in the embodiment (for example, the image acquisition module 31, the feature extraction module 32, the feature comparison module 33, and the data stream generation module 34 in the live broadcast-based face processing apparatus). The processor 40 executes various functional applications and data processing of the electronic device by running software programs, instructions and modules stored in the memory 41, that is, implements the above-mentioned live broadcast-based face processing method.
The memory 41 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device, and the like. Further, the memory 41 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, memory 41 may further include memory located remotely from processor 40, which may be connected to the electronic device through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
And the communication module 42 is used for establishing connection with the display screen and realizing data interaction with the display screen. The input device 43 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function controls of the electronic apparatus.
The electronic device provided by this embodiment of the present invention can execute the live broadcast-based face processing method provided by any embodiment of the present invention, and has corresponding functions and advantages.
EXAMPLE five
Fig. 5 is an electronic device according to a fifth embodiment of the present invention. As shown in fig. 5, the electronic device includes a central processing unit 51 and a graphics processing unit 52; the central processing unit 51 comprises the image acquisition module 31, the feature extraction module 32 and the data stream generation module 34, and the graphics processing unit 52 comprises the feature comparison module 33;
the image acquisition module is used for acquiring image data when the live broadcast room is started;
the feature extraction module is used for carrying out face detection in the image data to obtain target face data and target face features in the target face data;
the characteristic comparison module is used for comparing the target face characteristic with a preset standard face characteristic and carrying out image processing on the target face data according to a comparison result;
and the data stream generation module is used for generating the live broadcast data stream of the live broadcast room according to the target face data after image processing.
The electronic device provided by this embodiment of the present invention can execute the live broadcast-based face processing method provided by any embodiment of the present invention, and has corresponding functions and advantages.
EXAMPLE six
An embodiment of the present invention further provides a storage medium containing computer-executable instructions, where the computer-executable instructions are executed by a computer processor to perform a live broadcast-based face processing method, where the method includes:
when a live broadcast room is started, collecting image data;
performing face detection in the image data to obtain target face data and target face features in the target face data;
comparing the target face features with preset standard face features, and carrying out image processing on the target face data according to the comparison result;
and generating a live broadcast data stream of the live broadcast room according to the target face data after the image processing.
Of course, the storage medium provided in the embodiments of the present invention includes computer-executable instructions, and the computer-executable instructions are not limited to the above-described method operations, and may also perform related operations in the live broadcast-based face processing method provided in any embodiment of the present invention.
From the above description of the embodiments, it will be clear to those skilled in the art that the present invention can be implemented by software plus the necessary general-purpose hardware, or entirely by hardware, though the former is in many cases the better embodiment. Based on this understanding, the technical solutions of the present invention may be embodied in the form of a software product stored on a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH memory, a hard disk or an optical disc of a computer, including instructions that enable a computer device (a personal computer, a server or a network device) to execute the methods of the embodiments of the present invention.
It should be noted that, in the embodiment of the above live broadcast-based face processing apparatus, the included units and modules are only divided according to functional logic, but are not limited to the above division, as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (8)

1. A live broadcast-based face processing method is characterized by comprising the following steps:
when a live broadcast room is started, collecting image data;
performing face detection in the image data to obtain target face data and target face features in the target face data;
comparing the target face features with preset standard face features, and carrying out image processing on the target face data according to the comparison result;
generating a live broadcast data stream of the live broadcast room according to the target face data after image processing;
the target face features comprise target face contour features, and the standard face features comprise standard face contour features in standard face data;
the comparing the target face features with preset standard face features and performing image processing on the target face data according to the comparison result comprises the following steps:
comparing the target face contour feature with the standard face contour feature, and carrying out image processing on the face contour in the target face data according to the comparison result;
the comparing the target face contour feature with the standard face contour feature and performing image processing on the face contour in the target face data according to the comparison result includes:
determining a first target bending and stretching coefficient through a gradient difference between a first gradient value of the target face contour feature and a second gradient value of the standard face contour feature;
performing image processing according to the first target bending and stretching coefficient on the basis of the face contour in the target face data;
performing image processing on the basis of the face contour in the target face data according to the first target bending and stretching coefficient, specifically including:
determining an adjustment reference value;
selecting a point to be adjusted from a face contour in target face data, and determining an adjustment coefficient corresponding to the point to be adjusted; wherein, the number of the points to be adjusted is two or more;
determining an adjustment range by taking the point to be adjusted as a circle center and the product of the adjustment reference value and the adjustment coefficient as a radius;
carrying out image processing on the face contour in the target face data in the adjustment range according to the first target bending and stretching coefficient to obtain a middle face contour;
and mixing the middle face contour corresponding to each point to be adjusted to obtain the face contour after image processing.
2. The method of claim 1, wherein the target face features comprise target eye contour features and the standard face features comprise standard eye contour features of a standard face;
the comparing the target face features with preset standard face features and performing image processing on the target face data according to the comparison result comprises the following steps:
and comparing the target eye contour feature with the standard eye contour feature, and carrying out image processing on the eye contour in the target face data according to the comparison result.
3. The method of claim 2, wherein comparing the target eye contour feature with the standard eye contour feature and performing image processing on the eye contour in the target face data according to the comparison result to fit the eye contour in the standard face data comprises:
calculating the distance between the target eye contour features to obtain the size of a target eye and the distance between the target eyes in the target face data;
obtaining standard eye size and standard eye distance in the standard face data;
calculating a size difference between the target eye size and the standard eye size;
calculating a distance difference between the target eye separation and the standard eye separation;
calculating the size difference through an amplification and reduction extension function to obtain a target amplification and reduction coefficient;
calculating the distance difference through a second bending and stretching function to obtain a second target bending and stretching coefficient;
and performing image processing according to the target magnification and reduction coefficient and the second target bending and stretching coefficient on the basis of the eye contour in the target face data.
4. The method according to claim 1, wherein the generating a live data stream of the live broadcast room according to the target face data after the image processing specifically includes:
in the image data, covering the target face data after the image processing with the target face data before the image processing as target image data;
and generating a live broadcast data stream of the live broadcast room based on the target image data.
5. A live broadcast-based face processing device is characterized by comprising:
the image acquisition module is used for acquiring image data when the live broadcast room is started;
the characteristic extraction module is used for carrying out face detection in the image data to obtain target face data and target face characteristics in the target face data;
the characteristic comparison module is used for comparing the target face characteristic with a preset standard face characteristic and carrying out image processing on the target face data according to a comparison result;
the data stream generation module is used for generating a live broadcast data stream of the live broadcast room according to the target face data after image processing;
the target face features comprise target face contour features, and the standard face features comprise standard face contour features in standard face data;
the feature comparison module is configured to: comparing the target face contour feature with the standard face contour feature, and carrying out image processing on the face contour in the target face data according to the comparison result;
the comparing the target face contour feature with the standard face contour feature and performing image processing on the face contour in the target face data according to the comparison result includes:
determining a first target bending and stretching coefficient through a gradient difference between a first gradient value of the target face contour feature and a second gradient value of the standard face contour feature;
performing image processing according to the first target bending and stretching coefficient on the basis of the face contour in the target face data;
performing image processing on the basis of the face contour in the target face data according to the first target bending and stretching coefficient, specifically including:
determining an adjustment reference value;
selecting a point to be adjusted from a face contour in target face data, and determining an adjustment coefficient corresponding to the point to be adjusted; wherein, the number of the points to be adjusted is two or more;
determining an adjustment range by taking the point to be adjusted as a circle center and the product of the adjustment reference value and the adjustment coefficient as a radius;
carrying out image processing on the face contour in the target face data in the adjustment range according to the first target bending and stretching coefficient to obtain a middle face contour;
and mixing the middle face contour corresponding to each point to be adjusted to obtain the face contour after image processing.
6. An electronic device, comprising:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the live broadcast-based face processing method of any one of claims 1-4.
7. An electronic device, comprising a central processing unit and a graphics processing unit; the central processing unit comprises an image acquisition module, a feature extraction module and a data stream generation module, and the graphics processing unit comprises a feature comparison module;
the image acquisition module is used for acquiring image data when the live broadcast room is started;
the feature extraction module is used for performing face detection on the image data to obtain target face data and target face features in the target face data;
the feature comparison module is used for comparing the target face features with preset standard face features and performing image processing on the target face data according to the comparison result;
the data stream generation module is used for generating a live broadcast data stream of the live broadcast room according to the target face data after image processing;
wherein the target face features comprise a target face contour feature, and the standard face features comprise a standard face contour feature in standard face data;
the feature comparison module is configured to: compare the target face contour feature with the standard face contour feature, and perform image processing on the face contour in the target face data according to the comparison result;
the comparing the target face contour feature with the standard face contour feature and performing image processing on the face contour in the target face data according to the comparison result comprises:
determining a first target bending and stretching coefficient from the gradient difference between a first gradient value of the target face contour feature and a second gradient value of the standard face contour feature;
performing image processing on the face contour in the target face data according to the first target bending and stretching coefficient;
the performing image processing on the face contour in the target face data according to the first target bending and stretching coefficient specifically comprises:
determining an adjustment reference value;
selecting points to be adjusted from the face contour in the target face data, and determining an adjustment coefficient corresponding to each point to be adjusted, wherein there are two or more points to be adjusted;
determining an adjustment range with each point to be adjusted as the center of a circle and the product of the adjustment reference value and the corresponding adjustment coefficient as the radius;
performing image processing on the face contour in the target face data within the adjustment range according to the first target bending and stretching coefficient to obtain an intermediate face contour;
and blending the intermediate face contours corresponding to the points to be adjusted to obtain the face contour after image processing (a sketch of the CPU/GPU division of labor follows this claim).
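Claim 7 differs from claim 5 only in where the modules run: acquisition, feature extraction and stream generation stay on the CPU, while the comparison-driven contour warp, which is per-pixel work, is pushed to the GPU. A minimal sketch of that division of labor follows; all callables (`detect`, `gpu_compare_and_warp`) are hypothetical placeholders rather than APIs from the patent.

```python
import numpy as np

def process_frame(frame: np.ndarray,
                  detect,
                  gpu_compare_and_warp,
                  standard_features) -> np.ndarray:
    """One frame through the claim-7 pipeline."""
    box, face, features = detect(frame)              # CPU: feature extraction module
    warped = gpu_compare_and_warp(face, features,    # GPU: feature comparison module
                                  standard_features) #      (comparison + contour warp)
    x, y, w, h = box
    frame[y:y + h, x:x + w] = warped                 # CPU: compose target image data
    return frame                                     # CPU: feed stream generation
```

Splitting this way keeps the heavy per-pixel warp on the GPU while the CPU handles I/O-bound capture and encoding; in practice the two sides would be decoupled with a frame queue rather than called synchronously as shown here.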
8. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the live broadcast-based face processing method according to any one of claims 1 to 4.
CN201811241860.9A 2018-10-24 2018-10-24 Live broadcast-based face processing method, device, equipment and storage medium Active CN109302628B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110213455.1A CN113329252B (en) 2018-10-24 2018-10-24 Live broadcast-based face processing method, device, equipment and storage medium
CN201811241860.9A CN109302628B (en) 2018-10-24 2018-10-24 Live broadcast-based face processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811241860.9A CN109302628B (en) 2018-10-24 2018-10-24 Live broadcast-based face processing method, device, equipment and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202110213455.1A Division CN113329252B (en) 2018-10-24 2018-10-24 Live broadcast-based face processing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109302628A CN109302628A (en) 2019-02-01
CN109302628B (en) 2021-03-23

Family

ID=65158666

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110213455.1A Active CN113329252B (en) 2018-10-24 2018-10-24 Live broadcast-based face processing method, device, equipment and storage medium
CN201811241860.9A Active CN109302628B (en) 2018-10-24 2018-10-24 Live broadcast-based face processing method, device, equipment and storage medium

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202110213455.1A Active CN113329252B (en) 2018-10-24 2018-10-24 Live broadcast-based face processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (2) CN113329252B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112188234B (en) * 2019-07-03 2023-01-06 广州虎牙科技有限公司 Image processing and live broadcasting method and related devices
CN110490828B (en) * 2019-09-10 2022-07-08 广州方硅信息技术有限公司 Image processing method and system in video live broadcast
CN110706169A (en) * 2019-09-26 2020-01-17 深圳市半冬科技有限公司 Star portrait optimization method and device and storage device
CN111402352B (en) * 2020-03-11 2024-03-05 广州虎牙科技有限公司 Face reconstruction method, device, computer equipment and storage medium
CN111797754A (en) * 2020-06-30 2020-10-20 上海掌门科技有限公司 Image detection method, device, electronic equipment and medium
CN114760492B (en) * 2022-04-22 2023-10-20 咪咕视讯科技有限公司 Live special effect generation method, device and system and computer readable storage medium
CN116109479B (en) * 2023-04-17 2023-07-18 广州趣丸网络科技有限公司 Face adjusting method, device, computer equipment and storage medium for virtual image

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2633474A4 (en) * 2010-10-28 2017-03-22 Telefonaktiebolaget LM Ericsson (publ) A face data acquirer, end user video conference device, server, method, computer program and computer program product for extracting face data
CN103605975B (en) * 2013-11-28 2018-10-19 小米科技有限责任公司 A kind of method, apparatus and terminal device of image procossing
CN108021308A (en) * 2016-10-28 2018-05-11 中兴通讯股份有限公司 Image processing method, device and terminal
CN107835367A (en) * 2017-11-14 2018-03-23 维沃移动通信有限公司 A kind of image processing method, device and mobile terminal

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108205795A (en) * 2016-12-16 2018-06-26 北京酷我科技有限公司 Face image processing process and system during a kind of live streaming
CN107680033A (en) * 2017-09-08 2018-02-09 北京小米移动软件有限公司 Image processing method and device
CN107730445A (en) * 2017-10-31 2018-02-23 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN107818543A (en) * 2017-11-09 2018-03-20 北京小米移动软件有限公司 Image processing method and device
CN108492247A (en) * 2018-03-23 2018-09-04 成都品果科技有限公司 A kind of eye make-up chart pasting method based on distortion of the mesh

Also Published As

Publication number Publication date
CN113329252A (en) 2021-08-31
CN109302628A (en) 2019-02-01
CN113329252B (en) 2023-01-06

Similar Documents

Publication Publication Date Title
CN109302628B (en) Live broadcast-based face processing method, device, equipment and storage medium
CN110536151B (en) Virtual gift special effect synthesis method and device and live broadcast system
CN110475150B (en) Rendering method and device for special effect of virtual gift and live broadcast system
CN110493630B (en) Processing method and device for special effect of virtual gift and live broadcast system
CN106331850B (en) Browser live broadcast client, browser live broadcast system and browser live broadcast method
CN111369644A (en) Face image makeup trial processing method and device, computer equipment and storage medium
CN106817596B (en) Special effect processing method and device acting on media acquisition device
CN111583415B (en) Information processing method and device and electronic equipment
CN111405339B (en) Split screen display method, electronic equipment and storage medium
CN106530309A (en) Video matting method and system based on mobile platform
CN112532882B (en) Image display method and device
CN105898395A (en) Network video playing method, device and system
CN113132696A (en) Image tone mapping method, device, electronic equipment and storage medium
CN112785488A (en) Image processing method and device, storage medium and terminal
CN105100870A (en) Screenshot method and terminal equipment
CN113327316A (en) Image processing method, device, equipment and storage medium
CN112435173A (en) Image processing and live broadcasting method, device, equipment and storage medium
CN113542909A (en) Video processing method and device, electronic equipment and computer storage medium
CN109089158B (en) Human face image quality parameter processing system for smart television and implementation method thereof
CN111652792A (en) Image local processing method, image live broadcasting method, image local processing device, image live broadcasting equipment and storage medium
CN116600190A (en) Photographing control method and device for mobile phone and computer readable storage medium
CN116962744A (en) Live webcast link interaction method, device and live broadcast system
CN113382276A (en) Picture processing method and system
CN112860941A (en) Cover recommendation method, device, equipment and medium
CN113408452A (en) Expression redirection training method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant