WO2021047433A1 - Image processing method and system in live streaming - Google Patents

Image processing method and system in live streaming

Info

Publication number
WO2021047433A1
WO2021047433A1 (PCT/CN2020/112970)
Authority
WO
WIPO (PCT)
Prior art keywords
image
video
contour
live video
contour information
Prior art date
Application number
PCT/CN2020/112970
Other languages
French (fr)
Chinese (zh)
Inventor
王云 (Wang Yun)
Original Assignee
广州华多网络科技有限公司 (Guangzhou Huaduo Network Technology Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 广州华多网络科技有限公司 (Guangzhou Huaduo Network Technology Co., Ltd.)
Publication of WO2021047433A1 publication Critical patent/WO2021047433A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/77Retouching; Inpainting; Scratch removal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • This application relates to the technical field of image processing. Specifically, this application relates to an image processing method in live video broadcasting, an image processing system in live video broadcasting, computer equipment, storage media, and terminals.
  • In the prior art, beauty-makeup special effects are typically realized by pasting pre-designed translucent effect pictures onto the corresponding parts of the image.
  • An image processing method in live video broadcast includes the following steps:
  • the second image is used to replace the video image in the live video to obtain and output the target live video.
  • the step of extracting a video image from a live video includes:
  • the video image is an image containing a human face
  • the target area includes a human face area
  • the image contour information is a human face feature point
  • the step of recognizing the target area of the video image to obtain first image contour information, and shaping the video image according to the first image contour information to generate the first image includes:
  • the step of adjusting the contour of the shaping part includes:
  • the step of performing secondary recognition on the target area in the first image to obtain second image contour information of the first image, and superimposing a beauty texture on the first image according to the second image contour information to generate a second image, includes:
  • detecting the reshaped face area to obtain second face feature points; retrieving a texture image matching the second face feature points; and fusing the texture image into the target area so that the contour of the texture image coincides with the shaping contour.
  • the method further includes:
  • the skin area to be whitened and smoothed in the first image is identified, and whitening and dermabrasion are applied to that area of the first image on which the beauty texture has been superimposed, to generate the second image.
  • the plastic surgery includes any one or more of face thinning, nose reduction, lip augmentation, eye enlargement, plumping apple muscles, and smiling lips.
  • the texture includes any one or more of foundation, nose shadow, lips, eyebrows, eye shadow, cosmetic contact lenses, lying silkworm (aegyo-sal), and blush.
  • An image processing system in live video broadcasting including:
  • Extraction module for extracting video images from live video
  • a shaping module configured to identify the target area of the video image, obtain first image contour information, and shape the video image according to the first image contour information to generate a first image
  • a mapping module, used to perform secondary recognition on the target area in the first image, obtain second image contour information of the first image, and superimpose the beauty texture on the first image according to the second image contour information to generate a second image;
  • the video module is used to replace the video image in the live video with the second image to obtain and output the target live video.
  • a computer device including a memory, a processor, and a computer program stored on the memory and capable of running on the processor.
  • the processor, when executing the computer program, implements the steps of the image processing method in live video broadcasting of any one of the above embodiments.
  • a computer-readable storage medium has a computer program stored thereon, and when the computer program is executed by a processor, the steps of the image processing method in the live video broadcast of any one of the above embodiments are realized.
  • a terminal which includes:
  • one or more processors; a memory;
  • one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs are configured to execute the image processing method in live video broadcasting described in any one of the above embodiments.
  • the above image processing method, system, computer equipment, storage medium, and terminal extract the video image from the live video; after the first recognition of the target area of the video image, the video image is reshaped, and the target area is then recognized a second time. The second image contour information obtained from the second recognition can accurately describe the contours in the reshaped image, so a beauty texture matching the first image can be applied. Because the texture matches the contours of the first image, mismatched strange graphics near the contours are avoided, and the coordination between plastic shaping and beauty special effects in the live video is improved.
  • FIG. 1 is an implementation environment diagram of an image processing method in a live video broadcast provided in an embodiment
  • Fig. 2 is a flowchart of an image processing method in live video broadcasting in an embodiment
  • FIG. 3 is an effect diagram of beauty processing in which the texture is applied first and shaping is performed afterwards, in an embodiment
  • Fig. 4 is a flowchart of an image processing method in a live video broadcast in another embodiment
  • FIG. 5 is a schematic diagram of the principle of facial feature points in an embodiment
  • Fig. 6 is a schematic structural diagram of an image processing system in a live video broadcast in an embodiment
  • Figure 7 is a schematic diagram of the internal structure of a computer device in an embodiment
  • Fig. 8 is a schematic diagram of the internal structure of a terminal in an embodiment.
  • FIG. 1 is an implementation environment diagram of an image beauty processing method provided in an embodiment.
  • the implementation environment includes an anchor terminal 110, a live broadcast platform 120, and a viewer terminal 130.
  • the anchor captures live video through the camera of the anchor terminal 110, or captures the anchor's screen, and uploads the live video to the live room 121 of the live broadcast platform 120.
  • the live broadcast platform 120 transmits the live video of the live room 121 to the viewer terminal 130 according to the viewing needs of users in the live room 121.
  • the anchor terminal 110 or the viewer terminal 130 may be a smart phone, a tablet computer, a notebook computer, or a desktop computer.
  • the live broadcast platform 120 may run on a computer device, a server device, or a group of server devices.
  • the anchor terminal 110 and the live broadcast platform 120, and the viewer terminal 130 and the live broadcast platform 120, can be connected via a network, which is not limited in this application.
  • FIG. 2 is a flowchart of an image beauty processing method in an embodiment.
  • an image beauty processing method is proposed.
  • the method can be applied to the anchor terminal 110 or the live broadcast platform 120 described above, with the processor in the anchor terminal 110 or the live broadcast platform 120 executing the steps of the image beauty processing method.
  • the anchor terminal 110 can collect live video and perform beauty processing on the images in the collected video, or the live broadcast platform 120 can perform beauty processing on the images in the live video after receiving the live video uploaded by the anchor terminal 110.
  • the image beauty processing method may specifically include the following steps:
  • Step S210 Extract a video image from the live video.
  • the live video contains multiple picture frames
  • the processor can retrieve the picture frames of the live video as video images.
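The frame-extraction step above can be sketched as follows; this is a minimal illustration in which a live video is modelled as a sequence of frames and each frame as a nested list of pixel values (the `extract_video_image` helper and this frame model are illustrative assumptions, not part of the patent):

```python
from collections import deque

def extract_video_image(live_frames):
    """Return a picture frame of the live video as the video image to
    be processed (here the most recent frame; the patent does not fix
    which frame is chosen)."""
    if not live_frames:
        raise ValueError("live video contains no picture frames")
    return live_frames[-1]

# Model the live video as a bounded buffer of the last 30 frames.
live_video = deque(maxlen=30)
for t in range(3):
    live_video.append([[t, t], [t, t]])   # toy 2x2 frames

frame = extract_video_image(live_video)
```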
  • Step S220 Recognizing the target area of the video image to obtain first image contour information, and shaping the video image according to the first image contour information to generate a first image.
  • the processor may have the function of identifying a specific target area in the video image.
  • the processor may detect image features from the video image, identify the target area based on the image features, and generate first image contour information.
  • the image contour information can be used to record image features, and the image contour information can also characterize contour lines in the image.
  • the processor can perform image processing on the pixels in the target area of the video image, adjusting the specified contour lines in the target area according to the shaping characteristics, so that adjusting those contour lines produces the shaping effect in the target area.
  • the video image can be an image that includes the face and/or the torso of the body.
  • the target area in the video image may be a face area or a human body area, such as a human face, torso, limbs, and other parts.
  • the video image is an image containing a human face
  • the target area includes a human face area
  • the image contour information is a human face feature point.
  • the processor detects the facial feature points in the video image, i.e. in the original image.
  • the processor may call a face recognition algorithm to perform face detection and output the coordinates of 106 facial feature points.
  • the shaping of the face area can include any one or more types of face thinning, nose reduction, lip augmentation, enlarged eyes, plump apple muscle, and smiling lips.
  • the texture of the face area may include any one or more makeup special effects from foundation, nose shadow, lips, eyebrows, eye shadow, cosmetic contact lenses, lying silkworm (aegyo-sal), and blush.
  • the processor can also identify the limbs and torso of the human body in the video image and use them as the target area to be processed. For example, if a leg area exists in the video image, the processor can detect the contour lines in the video image and record them in the image contour information; it can then identify the characteristic leg regions from the contour lines and determine the leg area in the video image. Image processing of the leg area changes the leg contour lines, so that the legs are reshaped to be slimmer or longer, or, conversely, thicker or fuller.
  • the processor can also detect the skin area in the video image, for example by filtering out smooth skin regions in the image with a filter, identifying the contour lines of the skin area, and further determining from those contour lines whether the skin area is a leg area, which helps to identify the leg area accurately.
  • the target area is the face area
  • the chin part to be reshaped in the face area can be identified through the image contour information.
  • the image pixels of the chin part are "squeezed" from both sides toward the middle chin contour; the closer to the chin contour, the stronger the squeeze, and finally the chin contour is adjusted to the target shaping contour.
  • Step S230 Perform secondary recognition on the target area in the first image to obtain second image contour information of the first image, and superimpose the beauty texture on the first image according to the second image contour information to generate a second image.
  • the second image contour information of the first image is extracted, the target area is recognized again, and the reshaped target area is updated.
  • the recognition mode of the secondary recognition in this step can be the same as the recognition mode in step S220.
  • the processor can detect image features from the first image, recognize the target area based on the image features, and generate second image contour information.
  • the processor can select a beauty texture map matching the target area according to the second image contour information, determine the precise area to be superimposed, and superimpose the beauty texture map on the first image to generate a second image with accurate beauty-makeup effects.
  • Step S240 Use the second image to replace the video image in the live video to obtain and output the target live video.
  • the video image of the live video is updated to the processed second image, the reshaping of the live video and the processing of the beauty texture map are realized, and the target live video is obtained and output.
  • the image processing method in the above live video broadcast extracts the video image from the live video, reshapes the video image after the first recognition of its target area, and then performs a second recognition of the target area after reshaping. The second image contour information obtained from the second recognition can accurately describe the contours in the reshaped image, so a beauty texture matching the first image can be applied. Because the texture matches the contours of the first image, mismatched strange graphics near the contours are avoided, which enhances the coordination between plastic shaping and beauty special effects in the live video.
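The ordering this summary emphasises (recognise, reshape, recognise again, then apply the texture) can be shown with a toy one-dimensional "frame"; all helper names and the stand-in detect/reshape/overlay functions below are hypothetical illustrations, not the patent's algorithms:

```python
def process_frame(frame, detect, reshape, overlay):
    """Pipeline order from the method: first recognition (S220),
    shaping (S220), SECOND recognition on the shaped image (S230),
    then the beauty texture at the re-detected contour (S230)."""
    first_contour = detect(frame)
    shaped = reshape(frame, first_contour)
    second_contour = detect(shaped)          # re-detect after shaping
    return overlay(shaped, second_contour)   # texture on the new contour

# Toy 1-D stand-ins: the "contour" is the index of the brightest pixel,
# "shaping" reverses the row, and the "texture" writes 9 at the contour.
detect = lambda img: img.index(max(img))
reshape = lambda img, c: list(reversed(img))
overlay = lambda img, c: img[:c] + [9] + img[c + 1:]

result = process_frame([1, 2, 3, 4], detect, reshape, overlay)
# The shaped row is [4, 3, 2, 1], so the texture belongs at index 0;
# applying it at the OLD contour (index 3) would have mismatched.
```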
  • the step of extracting a video image from a live video may include:
  • Step S241 Obtain the live video, and determine whether the video image in the live video has been subjected to texture processing; if so, determine whether the target area of the texture processing overlaps with the target area to be shaped.
  • Step S242 If they overlap, retrieve the live video before the texture, and obtain the video image from the original live video.
  • In this step, if the live video has been subjected to texture processing and the textured area overlaps the target area to be reshaped, the original live video before texturing must be retrieved, and a picture frame of the original live video is taken as the video image.
  • the original live video may refer to the original video captured by the host's camera or captured by the host's screen.
  • the original live video may also be the live video that was initially uploaded to the live broadcast platform by the host. At this time, the host does not perform image processing on the original live video.
  • the image processing method in the above live video broadcast determines whether the video image in the live video has been subjected to texture processing, and whether the textured area overlaps the shaping target area. If they overlap, the original image without texture processing must be retrieved; this prevents the mismatch between the original texture and the pre-shaping contour from being enlarged by the reshaping, and improves the matching between shaping and beauty special effects.
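The overlap check described above can be sketched with axis-aligned bounding boxes; the box representation is a simplifying assumption, since the patent does not specify how the regions are represented:

```python
def regions_overlap(texture_region, shaping_region):
    """Each region is an axis-aligned box (left, top, right, bottom).
    Returns True when the already-textured area intersects the area to
    be reshaped, in which case the original (pre-texture) live video
    must be retrieved instead."""
    l1, t1, r1, b1 = texture_region
    l2, t2, r2, b2 = shaping_region
    return l1 < r2 and l2 < r1 and t1 < b2 and t2 < b1

# e.g. a lip-texture box overlapping a chin-shaping box:
lips = (40, 60, 80, 75)
chin = (35, 70, 85, 95)
```

Here `regions_overlap(lips, chin)` is true, so the pre-texture frame would be fetched before shaping.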
  • Face-thinning is taken as an example of overlapping target areas. The target area of face-thinning is not limited to the chin area: face-thinning also changes the shape and position of the lips, so its target area also overlaps with the target area of the lip texture.
  • the foregoing embodiments describe the process of extracting video images.
  • the following embodiments will take the shaping of the face area as an example to illustrate the shaping of the face area.
  • the video image is an image containing a human face
  • the target area includes a human face area
  • the image contour information is a human face feature point
  • step S220 the step of identifying the target area of the video image to obtain first image contour information, and shaping the video image according to the first image contour information to generate the first image may include:
  • Step S221 Identify the face area in the video image, detect the face area, and obtain the first face feature point.
  • the face area of the video image is recognized, and the face feature points are detected on the face area to obtain the first face feature point.
  • the processor may call a face recognition algorithm to perform face detection and output the coordinates of 106 facial feature points.
  • Step S222 Determine a plastic part corresponding to the plastic type from the face area according to the first facial feature point.
  • the region to be shaped can be determined according to the first facial feature points.
  • the type of plastic surgery for the face area may include any one or more types of face-lifting, reduced nose, enlarged lips, enlarged eyes, plumped apple muscles, and smiling lips.
  • the chin area can be determined according to the contour of the facial feature points in the chin area, and then the chin area in the original image can be reshaped.
  • the contours of various parts can be determined with the help of facial feature points.
  • facial feature points can determine the contours of the face shape, nose shape, lip shape, and eye shape.
  • Step S223 Adjust the contour of the plastic part to obtain a primary beauty image.
  • the image processing method in the above live video broadcast identifies the plastic part corresponding to the plastic type and adjusts the contour of the plastic part, thereby realizing the shaping of the target area.
  • the adjustment of the contour of the reshaping part can be realized by means of image processing.
  • the contour of the reshaping part can be adjusted by locally “squeezing” and locally “stretching” the image pixels of the shaping part.
  • the image pixels of the chin part are "squeezed" from both sides toward the middle chin contour, while the image pixels outside the chin part are "stretched" outward; the closer an area is to the chin contour, the stronger the squeezing and stretching. The chin contour is finally adjusted to the target shaping contour, and the outward stretching avoids obvious deformation in other areas of the image.
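The squeeze-toward-the-contour warp can be illustrated on a single row of pixels. The linear falloff and nearest-neighbour resampling below are illustrative simplifications of the unspecified warp:

```python
def squeeze_row(row, contour_x, target_x, radius):
    """1-D sketch of the 'squeeze' warp: resample pixels so that the
    point at contour_x moves to target_x, with the displacement
    strongest at the contour and falling off linearly to zero at
    distance `radius` (nearest-neighbour sampling)."""
    out = []
    shift = contour_x - target_x
    for x in range(len(row)):
        d = abs(x - target_x)
        w = max(0.0, 1.0 - d / radius)    # strongest near the new contour
        src = int(round(x + shift * w))   # pull pixels from the old side
        src = min(max(src, 0), len(row) - 1)
        out.append(row[src])
    return out

row = list(range(10))                      # pixel "values" = positions
shaped = squeeze_row(row, contour_x=6, target_x=4, radius=4)
```

After the warp, the pixel that was at the old contour (position 6) sits at the target position 4, while pixels at distance `radius` or more are untouched.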
  • Fig. 3 is an effect drawing of the beauty treatment after applying the texture first and then shaping.
  • the foundation texture covers the entire face with a layer of translucent face skin; under face-thinning, the gap between the chin foundation texture and the original chin contour is stretched and enlarged, and weird graphics appear at the chin contour, as indicated by the arrow in Figure 3. The matching between the foundation effect and the face-thinning is therefore poor.
  • the step of adjusting the contour of the shaping part in step S223 may include:
  • Step S2231: Extract the current contour of the plastic part according to the first face feature points.
  • Step S2232: Adjust the current contour to the plastic contour corresponding to the plastic type.
  • the current contour in the original image is adjusted to the plastic contour to obtain a primary beauty image.
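Adjusting the current contour toward the plastic contour (steps S2231–S2232) amounts to moving each feature point toward its target position; a minimal linear-interpolation sketch, where the point-list representation and the `strength` parameter are illustrative assumptions:

```python
def adjust_contour(current, target, strength=1.0):
    """Move each feature point of the current contour toward the
    corresponding point of the plastic-type contour: strength 0 keeps
    the original contour, 1 reaches the target contour."""
    return [(x1 + (x2 - x1) * strength, y1 + (y2 - y1) * strength)
            for (x1, y1), (x2, y2) in zip(current, target)]

# Halfway adjustment of a two-point contour toward its target.
halfway = adjust_contour([(0, 0), (2, 2)], [(2, 0), (2, 4)], strength=0.5)
```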
  • In step S230, the step of performing secondary recognition on the target area in the first image to obtain second image contour information of the first image, and superimposing a beauty texture on the first image according to the second image contour information to generate a second image, may include:
  • Step S231 Detect the face area after shaping to obtain a second face feature point
  • the second face recognition is performed to obtain the second face feature points in the primary beauty image.
  • Step S232 retrieve a texture image matching the second face feature point;
  • Step S233 Fusion the texture image in the target area, so that the contour of the texture image coincides with the shaping contour.
  • the second face feature points obtained by the secondary face recognition fit the plastic contour, so that the contour of the texture acquired according to the second face feature points coincides with the plastic contour.
  • the above-mentioned image processing method in the live video broadcast can make the texture image and the plastic contour overlap, and improve the matching effect of the image processing between the special effects of the beauty makeup and the plastic surgery.
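Fusing a translucent texture into the target area is essentially alpha blending. The sketch below assumes greyscale images as nested lists, with the patch placed at the contour found by the second recognition; the function name and representation are illustrative, not the patent's implementation:

```python
def blend_texture(image, texture, alpha, top_left):
    """Fuse a translucent texture patch into the target area by alpha
    blending. Placing the patch at the position derived from the SECOND
    recognition keeps its edge coincident with the shaped contour."""
    r0, c0 = top_left
    out = [row[:] for row in image]            # copy, keep input intact
    for r, trow in enumerate(texture):
        for c, tval in enumerate(trow):
            y, x = r0 + r, c0 + c
            if 0 <= y < len(out) and 0 <= x < len(out[0]):
                out[y][x] = (1 - alpha) * out[y][x] + alpha * tval
    return out

blended = blend_texture([[0, 0], [0, 0]], [[100]], alpha=0.5, top_left=(0, 0))
```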
  • the method may further include:
  • Step S260: Identify, according to the second image contour information, the skin area to be whitened and smoothed in the first image, and apply whitening and dermabrasion to that area of the first image on which the beauty texture map has been superimposed, to generate the second image.
  • Because the image contours do not change between the first image and the second image, the dermabrasion area identified by reusing the second image contour information remains accurate, and reusing it speeds up the recognition of the dermabrasion area.
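Whitening and dermabrasion restricted to the identified skin area can be sketched as a brightness gain plus a small box blur applied under a mask; both choices are illustrative stand-ins, since the patent does not specify the filters:

```python
def whiten_and_smooth(image, mask, gain=1.25, k=1):
    """'Whitening' as a brightness gain and 'dermabrasion' as a small
    box blur of radius k, applied only where mask is truthy (the
    identified skin area). Greyscale nested-list images assumed."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for r in range(h):
        for c in range(w):
            if not mask[r][c]:
                continue                      # leave non-skin pixels alone
            vals = [image[rr][cc]
                    for rr in range(max(0, r - k), min(h, r + k + 1))
                    for cc in range(max(0, c - k), min(w, c + k + 1))]
            out[r][c] = min(255.0, gain * sum(vals) / len(vals))
    return out

smoothed = whiten_and_smooth([[100, 100], [100, 100]],
                             [[1, 1], [1, 0]])   # bottom-right is not skin
```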
  • FIG. 4 is a flowchart of an image processing method in live video in another embodiment, and FIG. 5 is a schematic diagram of the principle of facial feature points in an embodiment.
  • the image processing method in live video broadcasting provided by this embodiment includes: collecting live video through a USB external camera device and extracting a video image from the live video; performing a first face recognition on the video image and detecting the first face feature points, shown as the black feature points in Figure 5; determining the face area by face recognition and, after the first face recognition, performing face-thinning on the face area to obtain the first image; then performing a second face recognition, detecting the second face feature points on the first image, shown as the white feature points in Figure 5, to obtain more accurate face feature points after shaping; re-determining the face area for the first image; and applying the beauty texture according to the second face recognition to obtain the target beauty image, enhancing the fit of the beauty texture at the edges.
  • In this way, the foundation texture is not stretched and deformed during face-thinning, which avoids the gap between the foundation texture and the original chin contour being magnified by the shaping, and improves the coordination of image processing between the foundation effect and the face-thinning.
  • FIG. 6 is a schematic structural diagram of an image processing system in a live video broadcast in an embodiment.
  • an image beauty processing system is provided, which may specifically include an extraction module 610, a shaping module 620, a texture module 630, and a video module 640, wherein:
  • the extraction module 610 is used to extract video images from the live video.
  • the live video contains multiple picture frames, and the processor can retrieve the picture frames of the live video as video images.
  • the shaping module 620 is configured to identify the target area of the video image, obtain first image contour information, and shape the video image according to the first image contour information to generate a first image.
  • the processor may have a function of identifying a specific target area in the video image, and the processor may detect image features from the video image, identify the target area based on the image features, and generate first image contour information.
  • the image contour information can be used to record image features, and the image contour information can also characterize contour lines in the image.
  • the processor can perform image processing on the pixels in the target area of the video image, adjusting the specified contour lines in the target area according to the shaping characteristics, so that adjusting those contour lines produces the shaping effect in the target area.
  • the video image can be an image that includes the face and/or the torso of the body.
  • the target area in the video image can be the face area or the human body area, such as the human face, torso, limbs and other parts of the area.
  • the video image is an image containing a human face
  • the target area includes a human face area
  • the image contour information is a human face feature point.
  • the processor detects the facial feature points in the video image, i.e. in the original image.
  • the processor may call a face recognition algorithm to perform face detection and output the coordinates of 106 facial feature points.
  • the shaping of the face area can include any one or more types of face thinning, nose reduction, lip augmentation, enlarged eyes, plump apple muscle, and smiling lips.
  • the texture of the face area may include any one or more makeup special effects from foundation, nose shadow, lips, eyebrows, eye shadow, cosmetic contact lenses, lying silkworm (aegyo-sal), and blush.
  • the processor can also identify the limbs and torso of the human body in the video image and use them as the target area to be processed. For example, if a leg area exists in the video image, the processor can detect the contour lines in the video image and record them in the image contour information; it can then identify the characteristic leg regions from the contour lines and determine the leg area in the video image. Image processing of the leg area changes the leg contour lines, so that the legs are reshaped to be slimmer or longer, or, conversely, thicker or fuller.
  • the processor can also detect the skin area in the video image, for example by filtering out smooth skin regions in the image with a filter, identifying the contour lines of the skin area, and further determining from those contour lines whether the skin area is a leg area, which helps to identify the leg area accurately.
  • the target area is the face area
  • the chin part to be reshaped in the face area can be identified through the image contour information.
  • the image pixels of the chin part are "squeezed" from both sides toward the middle chin contour; the closer to the chin contour, the stronger the squeeze, and finally the chin contour is adjusted to the target shaping contour.
  • the mapping module 630 is configured to perform secondary recognition on the target area in the first image, obtain second image contour information of the first image, and superimpose the beauty texture on the first image according to the second image contour information to generate a second image.
  • the contour information of the second image of the first image is extracted, the target area is recognized again, and the reshaped target area is updated.
  • the recognition method of the secondary recognition in this step can be the same as the recognition method in step S220.
  • the processor can detect image features from the first image, identify the target area based on the image features, and generate second image contour information.
  • the processor can select the beauty texture map matching the target area according to the contour information of the second image, and determine the accurate area to be superimposed, superimpose the beauty texture map on the first image and generate a second image, thereby obtaining accurate beauty makeup effects. The second image together.
  • the video module 640 is configured to replace the video image in the live video with the second image to obtain and output the target live video.
  • The video image of the live video is updated to the processed second image, completing the reshaping and beauty-sticker processing of the live video, and the target live video is obtained and output.
  • The image processing system for live video described above extracts a video image from the live video, reshapes the video image after a first recognition of its target area, and then performs a second recognition of the target area after reshaping. The second image contour information obtained from this second recognition accurately describes the contours in the reshaped image, so the beauty sticker can be matched to the contours of the first image, avoiding mismatched, strange-looking graphics near the contours and improving how well reshaping and makeup special effects work together in the live video.
  • The various modules in the above live-video image processing system can be implemented in whole or in part by software, hardware, or a combination of the two.
  • The above modules may be embedded, in hardware form, in or independently of the processor in the computer device, or stored in software form in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
  • FIG. 7 is a schematic diagram of the internal structure of a computer device in an embodiment.
  • the computer device includes a processor, a non-volatile storage medium, a memory, and a network interface connected through a system bus.
  • the non-volatile storage medium of the computer device stores an operating system, a database, and a computer program.
  • the database may store control information sequences.
  • When executed by the processor, the computer program can implement an image processing method in live video streaming. The processor of the computer device provides computing and control capabilities and supports the operation of the entire device.
  • A computer program can be stored in the memory of the computer device, and when it is executed by the processor, the processor performs an image processing method in live video streaming.
  • the network interface of the computer device is used to connect and communicate with the terminal.
  • FIG. 7 is only a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution can be applied; a specific computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
  • In one embodiment, a computer device includes a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, the steps of the image processing method in live video streaming of any of the above embodiments are carried out.
  • A computer-readable storage medium stores a computer program which, when executed by a processor, carries out the steps of the image processing method in live video streaming of any of the above embodiments.
  • A terminal includes: one or more processors; a memory; and one or more application programs, where the application programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs are configured to perform the image processing method in live video streaming of any of the foregoing embodiments.
  • FIG. 8 is a schematic diagram of the internal structure of the terminal in an embodiment.
  • The terminal can be any terminal device, including a mobile phone, tablet computer, PDA (Personal Digital Assistant), POS (Point of Sales) terminal, or in-car computer. The following takes a mobile phone as the example terminal:
  • FIG. 8 shows a block diagram of part of the structure of a mobile phone related to the terminal provided in an embodiment of the present application.
  • The mobile phone includes components such as a radio frequency (RF) circuit 810, a memory 820, an input unit 830, a display unit 840, a sensor 850, an audio circuit 860, a wireless fidelity (Wi-Fi) module 870, a processor 880, and a power supply 890.
  • The processor 880 included in the terminal also has the following functions: extracting a video image from the live video; recognizing the target area of the video image to obtain first image contour information, and reshaping the video image according to the first image contour information to generate a first image; performing secondary recognition on the target area in the first image to obtain second image contour information of the first image, and superimposing the beauty sticker on the first image according to the second image contour information to generate a second image; and replacing the video image in the live video with the second image to obtain and output the target live video. That is, the processor 880 can execute the image processing method in live video streaming of any of the foregoing embodiments, and the details are not repeated here.
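Taken together, these four functions form the per-frame pipeline of steps S210 to S240. Below is a minimal sketch of that ordering only; `detect_landmarks`, `reshape`, and `overlay_sticker` are hypothetical placeholder callables, since the patent does not fix their implementations:

```python
def process_frame(frame, detect_landmarks, reshape, overlay_sticker):
    """Order matters: detect -> reshape -> detect AGAIN -> overlay.

    detect_landmarks, reshape and overlay_sticker are placeholders
    standing in for the patent's face-detection, warping and
    sticker-blending stages.
    """
    # First recognition: contour information of the raw frame.
    first_contour = detect_landmarks(frame)
    # Reshape the frame according to the first contour information.
    first_image = reshape(frame, first_contour)
    # Second recognition: the contours have moved, so detect again.
    second_contour = detect_landmarks(first_image)
    # The sticker is aligned to the post-reshaping contours.
    return overlay_sticker(first_image, second_contour)
```

The point of the sketch is the double call to `detect_landmarks`: the sticker is positioned from contours measured after the warp, which is the core claim of the method.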

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

An image processing method and system in live streaming. The method comprises: extracting a video image from a live video; recognizing a target area of the video image to obtain first image contour information, and shaping the video image according to the first image contour information to generate a first image; performing secondary recognition on the target area in the first image to obtain second image contour information of the first image, and superposing a beauty makeup sticker on the first image according to the second image contour information to generate a second image; and replacing the video image in the live video with the second image to obtain a target live video, and outputting the target live video. According to the method above, the secondary recognition of the target area is performed after shaping, the second image contour information obtained after the secondary recognition can accurately describe the contour in the shaped image, and image contour matching between the beauty makeup sticker and the first image is performed according to the second image contour information, thereby improving the matching effect between shaping and beauty makeup special effects in the live video.

Description

Image processing method and system in live video streaming
This application claims priority to Chinese patent application No. 201910854350.7, filed with the Chinese Patent Office on September 10, 2019 and entitled "Image Processing Method and System in Live Video Streaming", the entire content of which is incorporated herein by reference.
Technical field
This application relates to the technical field of image processing, and in particular to an image processing method in live video streaming, an image processing system in live video streaming, a computer device, a storage medium, and a terminal.
Background
In image processing, typical beauty makeup special effects are implemented mainly by pasting pre-designed translucent effect images onto the corresponding parts of an image.
As the pursuit of beauty effects in live video advances, better results are obtained by stacking multiple beauty image-processing steps; for example, after makeup special effects are applied to a person in an image, reshaping is applied as a further image-processing step to optimize the person's appearance. Here too, the makeup special effects are implemented mainly by pasting pre-designed translucent effect images onto the corresponding parts of the image.
However, when makeup special effects and reshaping are applied one after the other, obvious strange-looking graphics often appear, and the two kinds of image processing cooperate poorly.
Summary of the invention
Based on this, it is necessary to address the above technical defects, in particular the poor cooperation of image processing between makeup special effects and reshaping, by providing an image processing method, an image processing system, a computer device, a storage medium, and a terminal for live video streaming.
An image processing method in live video streaming includes the following steps:
extracting a video image from a live video;
recognizing a target area of the video image to obtain first image contour information, and reshaping the video image according to the first image contour information to generate a first image;
performing secondary recognition on the target area in the first image to obtain second image contour information of the first image, and superimposing a beauty sticker on the first image according to the second image contour information to generate a second image; and
replacing the video image in the live video with the second image to obtain and output the target live video.
In one embodiment, the step of extracting a video image from the live video includes:
obtaining the live video and judging whether the video image in the live video has undergone sticker processing; if so, judging whether the target area of the sticker processing overlaps the target area to be reshaped; and if they overlap, retrieving the original live video from before the sticker was applied and extracting the video image from that original live video.
In one embodiment, the video image is an image containing a human face, the target area includes a face area, and the image contour information consists of facial feature points.
The step of recognizing the target area of the video image to obtain first image contour information, and reshaping the video image according to the first image contour information to generate a first image, includes:
identifying the face area in the video image and detecting the face area to obtain first facial feature points; determining, from the face area and according to the first facial feature points, the part to be reshaped corresponding to the reshaping type; and adjusting the contour of that part to obtain the primary beauty image.
In one embodiment, the step of adjusting the contour of the part to be reshaped includes:
extracting the current contour of the reshaping object according to the first facial feature points, and adjusting the current contour to the reshaped contour corresponding to the reshaping type.
The step of performing secondary recognition on the target area in the first image to obtain second image contour information of the first image, and superimposing a beauty sticker on the first image according to the second image contour information to generate a second image, includes:
detecting the reshaped face area to obtain second facial feature points; retrieving a sticker image matching the second facial feature points; and blending the sticker image into the target area so that the contour of the sticker image coincides with the reshaped contour.
In one embodiment, after the step of superimposing the beauty sticker on the first image according to the second image contour information, the method further includes:
identifying, according to the second image contour information, the skin region of the first image to be whitened and smoothed, and applying whitening and skin smoothing to that region of the sticker-superimposed first image to generate the second image.
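The patent does not specify which filters implement the whitening and skin smoothing. As an illustrative sketch under that assumption, the following applies a 3x3 box blur (standing in for the "dermabrasion" smoothing) and a brightness lift (standing in for the "whitening") restricted to a skin mask derived from the second image contour information; the blend strength and lift values are arbitrary:

```python
import numpy as np

def whiten_and_smooth(image, mask, strength=0.6, lift=20):
    """Toy whitening + smoothing on a grayscale image.

    image: 2-D uint8 array; mask: boolean array marking the skin
    region identified from the second contour information. The 3x3
    box blur and fixed brightness lift are illustrative stand-ins
    for the filters left unspecified by the patent.
    """
    img = image.astype(np.float32)
    # 3x3 box blur via a padded neighbourhood average ("dermabrasion").
    padded = np.pad(img, 1, mode="edge")
    blurred = sum(
        padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        for dy in range(3) for dx in range(3)
    ) / 9.0
    out = img.copy()
    # Blend toward the blurred version and lift brightness ("whitening"),
    # but only inside the detected skin region.
    out[mask] = (1 - strength) * img[mask] + strength * blurred[mask] + lift
    return np.clip(out, 0, 255).astype(np.uint8)
```

Restricting the edit to the mask is what the step requires: the sticker pixels outside the skin region must not be blurred or brightened.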
In one embodiment, the reshaping includes any one or more of face thinning, nose reduction, lip augmentation, eye enlargement, apple-cheek plumping, and smiling lips.
In one embodiment, the sticker includes any one or more of foundation, nose shadow, lips, eyebrows, eye shadow, cosmetic contact lenses, lying silkworm (under-eye highlight), and blush.
An image processing system in live video streaming includes:
an extraction module, configured to extract a video image from a live video;
a reshaping module, configured to recognize a target area of the video image to obtain first image contour information, and to reshape the video image according to the first image contour information to generate a first image;
a sticker module, configured to perform secondary recognition on the target area in the first image to obtain second image contour information of the first image, and to superimpose a beauty sticker on the first image according to the second image contour information to generate a second image; and
a video module, configured to replace the video image in the live video with the second image to obtain and output the target live video.
A computer device includes a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, the steps of the image processing method in live video streaming of any of the above embodiments are carried out.
A computer-readable storage medium stores a computer program which, when executed by a processor, carries out the steps of the image processing method in live video streaming of any of the above embodiments.
A terminal includes:
one or more processors;
a memory; and
one or more application programs, where the one or more application programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs are configured to perform the image processing method in live video streaming of any of the above embodiments.
With the above image processing method, system, computer device, storage medium, and terminal, a video image is extracted from the live video, the video image is reshaped after a first recognition of its target area, and a second recognition of the target area is performed after reshaping. The second image contour information obtained from the second recognition accurately describes the contours in the reshaped image, so the beauty sticker processing can be matched to the first image: the sticker matches the image contours of the first image, mismatched strange-looking graphics near the contours are avoided, and reshaping and makeup special effects cooperate better in the live video.
Additional aspects and advantages of this application will be given in part in the following description, from which they will become obvious, or will be learned through practice.
Brief description of the drawings
The above and/or additional aspects and advantages will become obvious and easy to understand from the following description of the embodiments in conjunction with the accompanying drawings, in which:
FIG. 1 is a diagram of an implementation environment for the image processing method in live video streaming provided in an embodiment;
FIG. 2 is a flowchart of the image processing method in live video streaming in an embodiment;
FIG. 3 shows the beauty-processing result of applying the sticker before reshaping, in an embodiment;
FIG. 4 is a flowchart of the image processing method in live video streaming in another embodiment;
FIG. 5 is a schematic diagram of facial feature points in an embodiment;
FIG. 6 is a schematic structural diagram of the image processing system in live video streaming in an embodiment;
FIG. 7 is a schematic diagram of the internal structure of a computer device in an embodiment;
FIG. 8 is a schematic diagram of the internal structure of a terminal in an embodiment.
Detailed description
The embodiments of this application are described in detail below. Examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals denote the same or similar elements, or elements with the same or similar functions, throughout. The embodiments described below with reference to the drawings are exemplary; they are only used to explain this application and cannot be construed as limiting it.
Those skilled in the art will understand that, unless specifically stated otherwise, the singular forms "a", "an", "said", and "the" used here may also include the plural. It should further be understood that the word "include" as used in this specification refers to the presence of the described features, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups of them. When an element is said to be "connected" or "coupled" to another element, it can be directly connected or coupled to that element, or intervening elements may be present; moreover, "connected" or "coupled" as used here may include wireless connection or coupling. The term "and/or" includes all or any units and all combinations of one or more of the associated listed items.
Those skilled in the art will understand that, unless otherwise defined, all terms used here (including technical and scientific terms) have the same meanings as commonly understood by those of ordinary skill in the relevant art. It should also be understood that terms such as those defined in general dictionaries should be understood to have meanings consistent with their meanings in the context of the prior art and, unless specifically defined as here, will not be interpreted in an idealized or overly formal sense.
As shown in FIG. 1, FIG. 1 is a diagram of the implementation environment of the image processing method provided in an embodiment. The environment includes a host terminal 110, a live broadcast platform 120, and an audience terminal 130. The host captures live video through the camera of the host terminal 110 or by capturing the host terminal's screen, and uploads the live video to a live room 121 on the live broadcast platform 120; the platform 120 can then transmit the live video of the live room 121 to audience terminals 130 according to the viewing needs of the users in that room.
It should be noted that the host terminal 110 or the audience terminal 130 can be installed on a smartphone, tablet computer, notebook computer, or desktop computer. The live broadcast platform 120 may run on a computer device, a server device, or a server cluster. The host terminal 110 and the live broadcast platform 120, and the audience terminal 130 and the live broadcast platform 120, can be connected via a network; this application places no limitation on this.
In one embodiment, as shown in FIG. 2, FIG. 2 is a flowchart of the image processing method in an embodiment. This embodiment proposes an image processing method that can be applied to the host terminal 110 or the live broadcast platform 120 described above, with the processor in the host terminal 110 or the live broadcast platform 120 executing its steps. The host terminal 110 can collect live video and apply the beauty processing to images in the collected video; alternatively, after the live broadcast platform 120 receives the live video uploaded by the host terminal 110, the platform can apply the beauty processing to images in the live video. The method can specifically include the following steps:
Step S210: extract a video image from the live video.
In this step, the live video contains multiple picture frames, and the processor can retrieve a picture frame of the live video as the video image.
Step S220: recognize the target area of the video image to obtain first image contour information, and reshape the video image according to the first image contour information to generate a first image.
In this step, the processor can recognize a specific target area in the video image: it can detect image features in the video image, identify the target area from those features, and generate the first image contour information. Image contour information can record image features, and it can also represent the contour lines in the image. According to the first image contour information, the processor can process the pixels in the target-area part of the video image and adjust the specified contour lines in the target area according to the characteristics of the reshaping; adjusting the contour lines in the target area produces the reshaping effect.
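As an illustrative sketch of turning a recognized target region into contour information, the following extracts the boundary pixels of a binary region mask with NumPy. This is a toy stand-in for the detector implied here; a production system would use a feature-point model or a routine such as OpenCV's findContours:

```python
import numpy as np

def contour_info(mask):
    """Return the boundary pixels of a binary target-region mask.

    A pixel is on the contour if it belongs to the region but at
    least one of its 4-neighbours does not. Toy stand-in for the
    patent's contour-line detection.
    """
    m = mask.astype(bool)
    padded = np.pad(m, 1, mode="constant", constant_values=False)
    # A pixel is interior when all four 4-neighbours are in the region.
    interior = (
        padded[:-2, 1:-1] & padded[2:, 1:-1] &
        padded[1:-1, :-2] & padded[1:-1, 2:]
    )
    boundary = m & ~interior
    ys, xs = np.nonzero(boundary)
    return list(zip(ys.tolist(), xs.tolist()))
```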
The video image can be an image containing a face and/or a body. The target area in the video image can be a face area or a human-body area, such as the face, torso, or limbs.
Taking a video image containing a face as an example: in one embodiment, the video image is an image containing a human face, the target area includes the face area, and the image contour information consists of facial feature points. The processor detects the facial feature points of the video image, finding the feature points in the original image; for example, the processor may call a face recognition algorithm to perform face detection and output the coordinates of 106 facial feature points. Reshaping of the face area can include any one or more of face thinning, nose reduction, lip augmentation, eye enlargement, apple-cheek plumping, and smiling lips. Stickers for the face area can include any one or more of foundation, nose shadow, lips, eyebrows, eye shadow, cosmetic contact lenses, lying silkworm (under-eye highlight), and blush.
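For illustration only, the following shows how a reshaping type might select its landmark subset from a 106-point detection result. The index ranges below are hypothetical; real 106-point detectors each define their own index layout:

```python
# Hypothetical index ranges into a 106-point landmark array; real
# 106-point models define their own layouts, so these numbers are
# illustrative only.
PART_INDICES = {
    "face_thinning":    range(0, 33),    # jaw / cheek outline
    "nose_reduction":   range(43, 52),
    "eye_enlargement":  range(52, 74),
    "lip_augmentation": range(84, 104),
}

def points_for_reshaping(landmarks, reshaping_type):
    """Select the landmark coordinates relevant to one reshaping type.

    landmarks: sequence of 106 (x, y) tuples as output by the face
    detector. Raises KeyError for unsupported reshaping types.
    """
    return [landmarks[i] for i in PART_INDICES[reshaping_type]]
```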
The processor can also identify the limbs and torso of a human body in the video image and treat them as the target area to be processed. For example, if the video image contains a leg region, the processor can detect the contour lines in the video image and record them as image contour information; it can then identify the characteristic leg regions from those contour lines, determine the leg area in the video image, and process the leg area to change its contour lines, slimming or lengthening the legs, or alternatively making them thicker or fuller. The processor can also detect skin areas in the video image, for example by using a filter to isolate the smooth skin regions of the image, identifying the contour lines of each skin area, and then judging from those contour lines whether the area is a leg area, which helps to identify the leg area accurately.
As for processing the pixels of the video image to adjust contour lines, take face thinning applied to the chin contour as an example: the target area is the face area, and the chin part to be reshaped can be identified from the image contour information. The image pixels of the chin part are "squeezed" from both sides toward the central chin contour, with the squeeze growing stronger the closer a region is to the contour, until the chin contour reaches the target reshaped contour.
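A one-dimensional toy version of this "squeeze" can be written as a backward warp: each output pixel near the contour samples a source pixel farther from it, with a pull that is strongest at the contour and fades to zero at a chosen radius. The linear falloff and the parameter values are assumptions, since the patent only states that the squeeze strengthens near the contour:

```python
import numpy as np

def squeeze_row(row, center, strength=0.5, radius=4.0):
    """Squeeze one pixel row toward a central contour column.

    Backward mapping: each output pixel within `radius` of `center`
    samples a source position farther from the centre, compressing
    the surrounding content toward the contour. The pull equals
    `strength` at the contour and fades linearly to 0 at `radius`
    (an illustrative falloff choice).
    """
    row = np.asarray(row, dtype=float)
    xs = np.arange(len(row), dtype=float)
    d = xs - center
    pull = np.where(np.abs(d) < radius,
                    strength * (1 - np.abs(d) / radius), 0.0)
    src = xs + d * pull            # sample farther from the centre
    return np.interp(src, xs, row)  # linear interpolation
```

On a ramp image the effect is easy to see: columns just inside the radius take their values from columns farther out, while the contour column and everything beyond the radius stay fixed.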
Step S230: perform secondary recognition on the target area in the first image to obtain second image contour information of the first image, and superimpose the beauty sticker on the first image according to the second image contour information to generate a second image.
In this step, the second image contour information of the first image is extracted, the target area is recognized again, and the reshaped target area is updated. The recognition method of this secondary recognition can be the same as that of step S220: the processor detects image features in the first image, identifies the target area from those features, and generates the second image contour information. The processor can then select the beauty sticker matching the target area according to the second image contour information, determine the exact region to superimpose it on, and superimpose the sticker on the first image to generate a second image in which the makeup effect fits accurately.
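Superimposing the translucent sticker at the position given by the second image contour information reduces, in the simplest case, to an alpha blend at an anchor point. A hedged sketch follows; the landmark-to-anchor mapping is detector-specific, so `top_left` is taken here as an already-computed coordinate pair:

```python
import numpy as np

def overlay_sticker(image, sticker_rgb, sticker_alpha, top_left):
    """Alpha-blend a translucent sticker patch onto an image.

    image: HxWx3 array in [0, 255]; sticker_rgb: hxwx3 patch;
    sticker_alpha: hxw opacity values in [0, 1]. top_left is the
    (row, col) anchor derived from the second image contour
    information.
    """
    out = image.astype(float).copy()
    h, w = sticker_alpha.shape
    r, c = top_left
    region = out[r:r + h, c:c + w]
    a = sticker_alpha[..., None]   # broadcast alpha over colour channels
    out[r:r + h, c:c + w] = a * sticker_rgb + (1 - a) * region
    return out
```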
Step S240: replace the video image in the live video with the second image to obtain and output the target live video.
In this step, the video image of the live video is updated to the processed second image, completing the reshaping and beauty-sticker processing of the live video, and the target live video is obtained and output.
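In stream terms, step S240 is a per-frame substitution that sits between decoding and re-encoding. A minimal sketch, with `process` standing in for the whole S220 to S230 pipeline of a single frame:

```python
def replace_frames(live_stream, process):
    """Produce the target live stream: every frame is replaced by its
    processed version before output.

    live_stream is any iterable of frames; process is the per-frame
    pipeline (first recognition -> reshape -> second recognition ->
    sticker overlay). In production this loop would sit between the
    capture/decode stage and the encoder/uploader.
    """
    for frame in live_stream:
        yield process(frame)
```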
With the above image processing method, a video image is extracted from the live video, the video image is reshaped after a first recognition of its target area, and a second recognition of the target area is performed after reshaping. The second image contour information obtained from the second recognition accurately describes the contours in the reshaped image, so the beauty sticker processing can be matched to the first image: the sticker matches the first image's contours, mismatched strange-looking graphics near the contours are avoided, and reshaping and makeup special effects cooperate better in the live video.
在一个实施例中,从直播视频中提取视频图像的步骤,可以包括:In an embodiment, the step of extracting a video image from a live video may include:
步骤S241：获取直播视频，判断直播视频中的视频图像是否进行了贴图处理；若是，判断贴图处理的目标区域和待整形的目标区域之间是否重叠。Step S241: Obtain the live video and determine whether the video image in the live video has already undergone texture processing; if so, determine whether the texture-processed target area overlaps the target area to be reshaped.
步骤S242:若重叠,则调取贴图前的直播视频,并从原始直播视频中获取视频图像。Step S242: If they overlap, retrieve the live video before the texture, and obtain the video image from the original live video.
本步骤中，若直播视频进行了贴图处理，且贴图区域与待整形的目标区域之间存在重叠，则需要调取贴图前的原始直播视频，并调取原始直播图像中的图片帧作为视频图像。In this step, if the live video has undergone texture processing and the textured area overlaps the target area to be reshaped, the original live video from before the texture was applied is retrieved, and a picture frame of that original live video is taken as the video image.
原始直播视频可以指的是主播端摄像头拍摄或者通过抓取主播端屏幕等方式采集直播视频的原始视频。原始直播视频也可以是主播端最初上传至直播平台的直播视频,此时的主播端并未对原始直播视频进行图像处理。The original live video may refer to the original video captured by the host's camera or captured by the host's screen. The original live video may also be the live video that was initially uploaded to the live broadcast platform by the host. At this time, the host does not perform image processing on the original live video.
上述视频直播中图像处理方法，判断直播视频中的视频图像是否进行过贴图处理，并且贴图处理的区域是否与整形的目标区域之间存在重叠部分，若存在重叠部分，则需要调取未进行贴图处理的原始图像，可以防止原来的贴图与整形前轮廓之间不匹配区域在整形后被放大，从而提升整形和美妆特效之间的配合效果。The image processing method above checks whether the video image in the live video has undergone texture processing and whether the textured area overlaps the reshaping target area. If there is an overlap, the original, untextured image is retrieved. This prevents any mismatch between the original texture and the pre-reshaping contour from being magnified by the reshaping, improving how well the reshaping and the beauty effects work together.
以瘦脸整形中目标区域为例来阐述目标区域的重叠。例如，粉底贴图的目标区域与瘦脸整形的目标区域之间明显存在重叠。另外，瘦脸整形的目标区域不仅仅在下巴区域，瘦脸整形的目标区域还会影响唇型及其位置的变化，因此，瘦脸整形的目标区域还与嘴唇贴图的目标区域之间存在重叠。The overlap of target areas is illustrated with face-thinning as an example. For instance, the target area of a foundation texture clearly overlaps the target area of face-thinning. Moreover, the face-thinning target area is not limited to the chin: face-thinning also changes the shape and position of the lips, so its target area also overlaps the target area of a lip texture.
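The overlap test in step S241 can be sketched as a simple bounding-box intersection check; the rectangles and coordinates below are illustrative, not values from the patent.

```python
def regions_overlap(a, b):
    # a and b are (x_min, y_min, x_max, y_max) bounding boxes of the
    # texture-processed region and the region to be reshaped.
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

chin_reshape = (40, 80, 120, 140)  # hypothetical face-thinning region
foundation = (20, 20, 140, 150)    # foundation texture covering the whole face
eyebrow = (50, 30, 70, 40)         # eyebrow texture, away from the chin

# True -> the pre-texture original frame must be retrieved (step S242).
use_original = regions_overlap(foundation, chin_reshape)
```

A non-overlapping texture such as the eyebrow example would not force a fallback to the original frame, since reshaping the chin cannot stretch it.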
上述实施例中阐述了提取视频图像的过程,下述实施例将以人脸区域的整形为例,阐述人脸区域的整形。The foregoing embodiments describe the process of extracting video images. The following embodiments will take the shaping of the face area as an example to illustrate the shaping of the face area.
在一个实施例中,视频图像为包含人脸的图像,目标区域包括人脸区域,图像轮廓信息为人脸特征点。In one embodiment, the video image is an image containing a human face, the target area includes a human face area, and the image contour information is a human face feature point.
步骤S220中对所述视频图像的目标区域进行识别,获得第一图像轮廓信息,根据所述第一图像轮廓信息对所述视频图像进行整形,生成第一图像的步骤,可以包括:In step S220, the step of identifying the target area of the video image to obtain first image contour information, and shaping the video image according to the first image contour information to generate the first image may include:
步骤S221:识别视频图像中的人脸区域,检测人脸区域并获得第一人脸特征点。Step S221: Identify the face area in the video image, detect the face area, and obtain the first face feature point.
识别视频图像的人脸区域,对人脸区域进行人脸特征点的检测,得到第一人脸特征点。例如,处理器可以调用人脸识别算法进行人脸检测并输出106个人脸特征点的坐标。The face area of the video image is recognized, and the face feature points are detected on the face area to obtain the first face feature point. For example, the processor may call a face recognition algorithm to perform face detection and output the coordinates of 106 facial feature points.
步骤S222:根据第一人脸特征点从人脸区域中确定整形类型对应的整形部位。Step S222: Determine a plastic part corresponding to the plastic type from the face area according to the first facial feature point.
根据人脸特征点与整形类型对应的整形部位之间的关联关系,可以根据第一人脸特征点确定待整形的区域。According to the relationship between the facial feature points and the plastic parts corresponding to the plastic surgery type, the region to be shaped can be determined according to the first facial feature points.
人脸区域的整形类型可以包括瘦脸、缩鼻、丰唇、放大眼睛、丰满苹果肌、微笑唇中任意一种或多种的类型。在进行瘦脸的整形时，可以根据人脸特征点在下巴区域的轮廓确定下巴部位，后续可以对原始图像中下巴部位进行整形。另外，借助人脸特征点还能确定各个部位的轮廓，例如人脸特征点可以确定脸型、鼻型、唇形、眼型等部位的轮廓。The reshaping types for the face area may include any one or more of face-thinning, nose reduction, lip plumping, eye enlargement, apple-cheek plumping, and smiling lips. When performing face-thinning, the chin part can be determined from the contour formed by the facial feature points in the chin region, and the chin part of the original image can then be reshaped. The facial feature points can also determine the contours of various parts, for example the contours of the face shape, nose shape, lip shape, and eye shape.
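The mapping from a reshaping type to the landmarks that outline its part can be sketched as a lookup table. The index ranges below are hypothetical groupings for a 106-point model; the real grouping depends on the specific face-recognition algorithm used.

```python
# Hypothetical landmark index ranges for a 106-point face model.
PART_LANDMARKS = {
    "jaw":   range(0, 33),
    "brows": range(33, 51),
    "nose":  range(51, 66),
    "eyes":  range(66, 86),
    "lips":  range(86, 106),
}

# Hypothetical association between reshaping types and face parts.
RESHAPE_PARTS = {
    "face_thin":   ["jaw"],
    "lip_plump":   ["lips"],
    "eye_enlarge": ["eyes"],
}

def landmarks_for(reshape_type):
    # Collect the landmark indices whose contour the chosen reshape adjusts.
    indices = []
    for part in RESHAPE_PARTS[reshape_type]:
        indices.extend(PART_LANDMARKS[part])
    return indices

jaw_points = landmarks_for("face_thin")
```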
步骤S223:对整形部位的轮廓进行调整,获得初级美妆图像。Step S223: Adjust the contour of the plastic part to obtain a primary beauty image.
上述视频直播中图像处理方法，识别整形类型对应的整形部位，通过对整形部位的轮廓进行调整，可以实现对目标区域中部分位置进行整形。The image processing method above identifies the reshaping part corresponding to the reshaping type; by adjusting the contour of that part, reshaping can be applied to specific positions within the target area.
整形部位轮廓的调整可以借助图像处理来实现，通过对整形部位的图像像素进行局部“挤压”以及局部“拉伸”的图像调整整形部位的轮廓。以瘦脸的整形类型为例，下巴部分的图像像素会被从两边向中间的下巴轮廓进行“挤压”，下巴部分外部的图像像素会向外“拉伸”，特别是越靠近下巴轮廓的区域挤压的强度越强，最终将下巴轮廓调整至整形的目标轮廓，越靠近下巴轮廓的区域拉伸的强度也越强，向外拉伸可以避免图像其他区域出现明显的变形。The contour of the reshaping part can be adjusted through image processing, by locally "squeezing" and locally "stretching" the image pixels of the part. Taking face-thinning as an example, the image pixels of the chin are "squeezed" from both sides toward the central chin contour, while pixels outside the chin are "stretched" outward; the closer a region is to the chin contour, the stronger the squeezing or stretching, until the chin contour reaches the target reshaped contour. Stretching outward prevents obvious distortion in other regions of the image.
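The distance-dependent "squeeze" can be sketched as a displacement with a linear falloff; the contour position, target position, and radius below are illustrative numbers, not values from the patent.

```python
def thin_displacement(x, contour_x, target_x, radius):
    # Shift an x-coordinate toward the thinned chin contour: the shift is
    # full strength on the contour itself and falls off linearly within
    # `radius`, so regions far from the contour are left untouched.
    shift = target_x - contour_x
    weight = max(0.0, 1.0 - abs(x - contour_x) / radius)
    return x + shift * weight

# Hypothetical numbers: contour at x=100 is thinned to x=95, radius 20.
moved = [thin_displacement(x, 100.0, 95.0, 20.0) for x in (80.0, 90.0, 100.0)]
```

The contour point moves the full 5 pixels, the point 10 pixels away moves half as far, and the point at the edge of the radius does not move, which matches the "stronger the closer to the contour" behavior described above.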
正是由于越靠近轮廓的区域，图像处理中变形的强度越强。在先贴图后整形的操作时，贴图边沿与轮廓之间的缝隙会在变形时得到放大，这就导致了人物轮廓附近出现明显的奇异图形。如图3所示，图3为一个实施例中先贴图后整形的美妆处理效果图，粉底贴图是在整个脸部覆盖一层半透明的人脸蒙皮，下巴部位的粉底贴图与原下巴轮廓之间的缝隙在瘦脸的整形下被拉伸而放大，在下巴轮廓处出现怪异的图形，如图3中箭头所指，此时粉底特效和瘦脸整形之间图像处理的配合效果差。Precisely because deformation is strongest near the contour, in a texture-first-then-reshape operation the gap between the texture edge and the contour is magnified by the deformation, producing obvious odd-looking artifacts near the subject's contour. As shown in Fig. 3, which illustrates the beauty-processing result of texturing before reshaping in one embodiment, the foundation texture covers the whole face with a semi-transparent skin layer; the gap between the chin foundation texture and the original chin contour is stretched and magnified by the face-thinning, and strange shapes appear at the chin contour, as indicated by the arrow in Fig. 3. In this case the foundation effect and the face-thinning cooperate poorly.
在一个实施例中,步骤S223中对整形部位的轮廓进行调整的步骤,可以包括:In an embodiment, the step of adjusting the contour of the shaping part in step S223 may include:
步骤S2231:根据第一人脸特征点提取整形对象的当前轮廓;步骤S2232:将当前轮廓调整至整形类型对应的整形轮廓。Step S2231: Extract the current contour of the plastic object according to the first face feature point; Step S2232: Adjust the current contour to the plastic contour corresponding to the plastic type.
本步骤中,将原始图像中的当前轮廓调整至整形轮廓,获得初级美妆图像。In this step, the current contour in the original image is adjusted to the plastic contour to obtain a primary beauty image.
步骤S230中对所述第一图像中的所述目标区域进行二次识别，获得所述第一图像的第二图像轮廓信息，根据所述第二图像轮廓信息将美妆贴图叠加在所述第一图像上并生成第二图像的步骤，可以包括：In step S230, the step of performing secondary recognition on the target area in the first image to obtain second image contour information of the first image, and superimposing a beauty texture map on the first image according to the second image contour information to generate a second image, may include:
步骤S231:检测整形后的人脸区域,获得第二人脸特征点;Step S231: Detect the face area after shaping to obtain a second face feature point;
本步骤中,进行二次人脸识别,获得初级美妆图像中的第二人脸特征点。In this step, the second face recognition is performed to obtain the second face feature points in the primary beauty image.
步骤S232:调取与第二人脸特征点匹配的贴图图像;步骤S233:在目标区域中融合贴图图像,使得贴图图像的轮廓与整形轮廓重合。Step S232: Retrieve a texture image matching the second face feature point; Step S233: Fusion the texture image in the target area, so that the contour of the texture image coincides with the shaping contour.
二次人脸识别后的第二人脸特征点与整形轮廓贴合，使得根据第二人脸特征点获取的贴图的轮廓与整形轮廓重合。After the second face recognition, the second facial feature points fit the reshaped contour, so that the contour of the texture retrieved according to the second facial feature points coincides with the reshaped contour.
上述视频直播中图像处理方法，可以使得贴图图像与整形轮廓之间可以重合，提高美妆特效和整形之间图像处理的配合效果。The image processing method above allows the texture image to coincide with the reshaped contour, improving how well the beauty effects and the reshaping cooperate.
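The fusion of step S233 can be sketched as per-pixel alpha blending, with alpha set to zero outside the texture so pixels beyond the reshaped contour stay untouched. The pixel values below are illustrative.

```python
def blend_pixel(base, texture, alpha):
    # Alpha-blend one beauty-texture pixel (RGB tuple) onto the reshaped
    # image; alpha is 0 outside the texture, so pixels beyond the new
    # contour are left unchanged.
    return tuple(round(b * (1.0 - alpha) + t * alpha)
                 for b, t in zip(base, texture))

skin = (180, 150, 130)        # hypothetical skin pixel of the reshaped image
foundation = (230, 200, 180)  # hypothetical semi-transparent foundation pixel

inside = blend_pixel(skin, foundation, 0.5)   # inside the reshaped contour
outside = blend_pixel(skin, foundation, 0.0)  # outside: image unchanged
```

Because the alpha mask is laid out using the second-pass landmarks, its zero region lines up with the reshaped contour rather than the original one.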
在一个实施例中,在根据所述第二图像轮廓信息将美妆贴图叠加在所述第一图像上的步骤之后,还可以包括:In an embodiment, after the step of superimposing a beauty texture map on the first image according to the second image contour information, the method may further include:
步骤S260：根据第二图像轮廓信息识别第一图像中进行美白磨皮的磨皮区域，并对叠加美妆贴图的第一图像中磨皮区域进行美白磨皮，生成第二图像。Step S260: Identify, according to the second image contour information, the region of the first image where whitening and skin smoothing are to be performed, and perform whitening and skin smoothing on that region of the first image with the beauty texture superimposed, to generate the second image.
上述视频直播中图像处理方法，美妆贴图后，第二图像中的图像轮廓信息并未发生改变，继续沿用第二图像轮廓信息所识别的磨皮区域也是准确的，此时可以加快磨皮区域的识别效率。In the image processing method above, the image contour information does not change when the beauty texture is applied, so the skin-smoothing region identified from the second image contour information remains accurate; reusing it speeds up identification of the smoothing region.
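Reusing the second-pass contour information for smoothing can be sketched as a masked mean filter: the mask is derived once from the second recognition and applied after the texture step, with no third recognition pass. The filter and image values are illustrative; a real implementation would typically use an edge-preserving filter.

```python
def smooth_region(image, mask):
    # Mean-filter (3x3) only the pixels inside `mask`; the mask comes from
    # the second-pass contour information, which is still valid after the
    # texture was applied, so no further recognition pass is needed.
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            vals = [image[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

img = [[10, 10, 10], [10, 100, 10], [10, 10, 10]]
mask = [[False, False, False], [False, True, False], [False, False, False]]
smoothed = smooth_region(img, mask)  # only the masked pixel is smoothed
```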
在一个实施示例中，以瘦脸和粉底贴图为例，如图4和图5所示，图4为又一个实施例中视频直播中图像处理方法的流程图，图5为一个实施例中人脸特征点的原理示意图。本实施示例提供的视频直播中图像处理方法包括：通过USB外接的摄像设备采集直播视频，从直播视频中提取视频图像；对视频图像进行第一次人脸识别，检测第一人脸特征点，如图5中的黑色特征点所示，通过人脸识别确定人脸区域，在第一次人脸识别后进行人脸区域的瘦脸整形，获得第一图像；再进行第二次人脸识别，对第一图像检测第二人脸特征点，如图5中的白色特征点所示，获得整形后更加准确的人脸特征点，对第一图像重新确定人脸区域，根据第二次人脸识别进行美妆贴图，得到目标美妆图像，增强美妆贴图在边缘部分的贴合效果，粉底贴图不会在瘦脸整形过程中被拉伸变形，避免粉底贴图与原下巴轮廓之间的缝隙在瘦脸整形下被放大，提升粉底特效和瘦脸整形之间图像处理的配合效果。在美妆贴图之后，还可以继续根据第二次人脸识别对目标美妆图像进行美白磨皮，加快磨皮区域的识别效率。In an implementation example, taking face-thinning and a foundation texture as examples, Fig. 4 is a flowchart of the image processing method for live video in another embodiment, and Fig. 5 is a schematic diagram of facial feature points in one embodiment. The image processing method provided by this example includes: capturing the live video through a USB-connected camera and extracting a video image from it; performing a first face recognition on the video image and detecting the first facial feature points, shown as the black feature points in Fig. 5; determining the face area through face recognition and, after the first recognition, performing face-thinning on the face area to obtain the first image; then performing a second face recognition to detect the second facial feature points on the first image, shown as the white feature points in Fig. 5, which are more accurate after reshaping; re-determining the face area of the first image and applying the beauty texture according to the second recognition to obtain the target beauty image, improving how well the texture fits at the edges. The foundation texture is not stretched or deformed during face-thinning, so the gap between the foundation texture and the original chin contour is not magnified by the reshaping, and the foundation effect and the face-thinning cooperate better. After the beauty texture is applied, whitening and skin smoothing can further be performed on the target beauty image according to the second face recognition, speeding up identification of the smoothing region.
在一个实施例中，如图6所示，图6为一个实施例中视频直播中图像处理系统的结构示意图，本实施例中提供一种图像的美妆处理系统，具体可以包括提取模块610、整形模块620、贴图模块630和视频模块640，其中：In one embodiment, as shown in FIG. 6, which is a schematic structural diagram of an image processing system for live video in one embodiment, an image beauty processing system is provided, which may specifically include an extraction module 610, a shaping module 620, a texture module 630, and a video module 640, where:
提取模块610,用于从直播视频中提取视频图像。The extraction module 610 is used to extract video images from the live video.
提取模块610中,直播视频包含有多个图片帧,处理器可以调取直播视频的图片帧作为视频图像。In the extraction module 610, the live video contains multiple picture frames, and the processor can retrieve the picture frames of the live video as video images.
整形模块620,用于对所述视频图像的目标区域进行识别,获得第一图像轮廓信息,根据所述第一图像轮廓信息对所述视频图像进行整形,生成第一图像。The shaping module 620 is configured to identify the target area of the video image, obtain first image contour information, and shape the video image according to the first image contour information to generate a first image.
处理器可以具有识别视频图像中特定目标区域的功能，处理器可以从视频图像中检测图像特征，依据图像特征识别目标区域，以及生成第一图像轮廓信息。图像轮廓信息可以是用于记录图像特征，图像轮廓信息还可以表征图像中轮廓的线条等。根据所述第一图像轮廓信息，处理器可以对视频图像中目标区域部分的像素进行图像处理，按照整形特点来调整目标区域中指定的轮廓线条，通过目标区域中轮廓线条的调整可以展现整形的效果。The processor may have the function of recognizing a specific target area in the video image: it can detect image features from the video image, recognize the target area based on those features, and generate the first image contour information. The image contour information may record image features and may also represent contour lines in the image. Based on the first image contour information, the processor can perform image processing on the pixels of the target area in the video image and adjust specified contour lines in the target area according to the reshaping characteristics; the reshaping effect is shown through the adjustment of those contour lines.
视频图像可以是包含了脸部和/或者身体躯干的图像。视频图像中的目标区域可以是脸部区域或者人体区域，例如人脸、躯干、四肢等部位区域等。The video image may be an image containing a face and/or a body. The target area in the video image may be a face area or a human-body area, such as the face, torso, or limbs.
以包含人脸的视频图像为例,在一个实施例中,视频图像为包含人脸的图像,目标区域包括人脸区域,图像轮廓信息为人脸特征点。处理器对视频图像进行人脸特征点的检测,检测出原始图像中的人脸特征点。例如,处理器可以调用人脸识别算法进行人脸检测并输出106个人脸特征点的坐标。人脸区域的整形可以包括瘦脸、缩鼻、丰唇、放大眼睛、丰满苹果肌、微笑唇中任意一种或多种类型。人脸区域的贴图可以包括粉底、鼻影、嘴唇、眉毛、眼影、美瞳、卧蚕、腮红中任意一种或多种美妆特效。Taking a video image containing a human face as an example, in one embodiment, the video image is an image containing a human face, the target area includes a human face area, and the image contour information is a human face feature point. The processor detects the facial feature points of the video image, and detects the facial feature points in the original image. For example, the processor may call a face recognition algorithm to perform face detection and output the coordinates of 106 facial feature points. The shaping of the face area can include any one or more types of face thinning, nose reduction, lip augmentation, enlarged eyes, plump apple muscle, and smiling lips. The texture of the face area may include any one or more of makeup special effects from foundation, nose shadow, lips, eyebrows, eye shadow, cosmetic contact lenses, silkworm, and blush.
另外处理器也可以识别出视频图像中人体的四肢和躯干，将其作为待处理的目标区域。例如，视频图像中存在腿部区域，处理器可以检测出视频图像中的轮廓线条，通过图像轮廓信息记录所检测的轮廓线条；处理器可以根据轮廓线条辨别出腿部的特征区域，确定视频图像中的腿部区域；对腿部区域进行图像处理，改变腿部的轮廓线条，该区域的腿部实现瘦腿或拉伸腿部的整形，或者使得腿部实现变粗或变胖的整形。而且，处理器还可以具有检测出视频图像中皮肤区域的功能，如借助滤波器滤除出图像中具有平滑特性的皮肤区域，识别皮肤区域的轮廓线条，根据轮廓线条可以进一步判断该皮肤区域是否为腿部区域，有利于准确识别腿部区域。The processor can also recognize the limbs and torso of a human body in the video image and treat them as the target area to be processed. For example, if a leg region exists in the video image, the processor can detect the contour lines in the image and record them in the image contour information; it can then identify the characteristic region of the legs from those contour lines, determine the leg region in the video image, and process that region to change the contour lines of the legs, making the legs slimmer or longer, or thicker and fuller. Moreover, the processor may have the function of detecting skin regions in the video image, for example by using a filter to pick out regions with smooth characteristics, recognizing the contour lines of the skin region, and further judging from those contour lines whether the skin region is a leg region, which helps identify the leg region accurately.
在对视频图像中像素进行图像处理，实现轮廓线条的调整中，以作用于下巴轮廓的瘦脸整形为例，目标区域为人脸区域，通过图像轮廓信息可以识别人脸区域的待整形的下巴部分，对于下巴部分的图像像素会被从两边向中间的下巴轮廓进行“挤压”，特别是越靠近下巴轮廓的区域挤压的强度越强，最终将下巴轮廓调整至整形的目标轮廓。In performing image processing on the pixels of the video image to adjust the contour lines, take face-thinning acting on the chin contour as an example: the target area is the face area, and the chin part to be reshaped can be identified from the image contour information. The image pixels of the chin part are "squeezed" from both sides toward the central chin contour; the closer to the chin contour, the stronger the squeezing, until the chin contour is adjusted to the target reshaped contour.
贴图模块630，用于对所述第一图像中的所述目标区域进行二次识别，获得所述第一图像的第二图像轮廓信息，根据所述第二图像轮廓信息将美妆贴图叠加在所述第一图像上并生成第二图像。The texture module 630 is configured to perform secondary recognition on the target area in the first image, obtain second image contour information of the first image, and superimpose a beauty texture map on the first image according to the second image contour information to generate a second image.
贴图模块630中，提取第一图像的第二图像轮廓信息，再次识别目标区域，更新整形后的目标区域。二次识别的识别方式可以与步骤S220中的识别方式相同，处理器可以从第一图像中检测图像特征，依据图像特征识别目标区域，以及生成第二图像轮廓信息。处理器可以根据第二图像轮廓信息选择与目标区域匹配的美妆贴图，以及确定进行叠加的准确区域，将美妆贴图叠加在第一图像上并生成第二图像，从而获得美妆特效准确贴合的第二图像。In the texture module 630, the second image contour information of the first image is extracted, the target area is recognized again, and the reshaped target area is updated. The secondary recognition may use the same recognition method as step S220: the processor can detect image features from the first image, recognize the target area based on those features, and generate the second image contour information. Based on the second image contour information, the processor can select a beauty texture map matching the target area, determine the exact region for superimposition, and superimpose the beauty texture map on the first image to generate a second image in which the beauty effect fits the contours accurately.
视频模块640,用于利用所述第二图像替换所述直播视频中的视频图像,得到目标直播视频并输出。The video module 640 is configured to replace the video image in the live video with the second image to obtain and output the target live video.
视频模块640中，实现直播视频的视频图像更新为经过处理后的第二图像，实现对直播视频的整形和美妆贴图处理，得到和输出目标直播视频。In the video module 640, the video image of the live video is updated to the processed second image, so that both the reshaping and the beauty-texture processing are applied to the live video, and the target live video is obtained and output.
上述视频直播中图像处理系统，提取直播视频中的视频图像，对视频图像的目标区域首次识别后进行视频图像的整形，整形后再进行目标区域的二次识别，二次识别后得到的第二图像轮廓信息可以准确描述整形后图像中的轮廓，根据第二图像轮廓信息可以进行与第一图像匹配的美妆贴图处理，美妆贴图与第一图像的图像轮廓匹配，避免轮廓附近出现不匹配的奇异图形，提升直播视频中整形和美妆特效之间的配合效果。In the image processing system for live video described above, a video image is extracted from the live video; the target area of the video image is recognized a first time and the video image is reshaped; the target area is then recognized a second time after reshaping. The second image contour information obtained from the second recognition accurately describes the contours of the reshaped image, so the beauty texture map applied according to it matches the image contours of the first image. This avoids mismatched, odd-looking artifacts near the contours and improves how well the reshaping and the beauty effects work together in the live video.
关于视频直播中图像处理系统的具体限定可以参见上文中对于视频直播中图像处理方法的限定,在此不再赘述。上述视频直播中图像处理系统中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。For the specific definition of the image processing system in the live video broadcast, please refer to the above definition of the image processing method in the live video broadcast, which will not be repeated here. The various modules in the image processing system in the above-mentioned live video broadcast can be implemented in whole or in part by software, hardware, and a combination thereof. The above-mentioned modules may be embedded in the form of hardware or independent of the processor in the computer equipment, or may be stored in the memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above-mentioned modules.
如图7所示，图7为一个实施例中计算机设备的内部结构示意图。该计算机设备包括通过系统总线连接的处理器、非易失性存储介质、存储器和网络接口。其中，该计算机设备的非易失性存储介质存储有操作系统、数据库和计算机程序，数据库中可存储有控件信息序列，该计算机程序被处理器执行时，可使得处理器实现一种视频直播中图像处理方法。该计算机设备的处理器用于提供计算和控制能力，支撑整个计算机设备的运行。该计算机设备的存储器中可存储有计算机程序，该计算机程序被处理器执行时，可使得处理器执行一种视频直播中图像处理方法。该计算机设备的网络接口用于与终端连接通信。本领域技术人员可以理解，图7中示出的结构，仅仅是与本申请方案相关的部分结构的框图，并不构成对本申请方案所应用于其上的计算机设备的限定，具体的计算机设备可以包括比图中所示更多或更少的部件，或者组合某些部件，或者具有不同的部件布置。As shown in FIG. 7, FIG. 7 is a schematic diagram of the internal structure of a computer device in an embodiment. The computer device includes a processor, a non-volatile storage medium, a memory, and a network interface connected through a system bus. The non-volatile storage medium of the computer device stores an operating system, a database, and a computer program, and the database may store control information sequences; when the computer program is executed by the processor, the processor implements an image processing method in live video. The processor of the computer device provides computing and control capabilities and supports the operation of the entire computer device. The memory of the computer device may store a computer program which, when executed by the processor, causes the processor to execute an image processing method in live video. The network interface of the computer device is used to connect and communicate with a terminal. Those skilled in the art can understand that the structure shown in FIG. 7 is only a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; the specific computer device may include more or fewer components than shown, combine some components, or use a different component arrangement.
在一个实施例中，提出了一种计算机设备，计算机设备包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序，处理器执行计算机程序时实现上述任一实施例的视频直播中图像处理方法的步骤。In one embodiment, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of the image processing method in live video of any of the above embodiments are implemented.
在一个实施例中,提供了一种计算机可读存储介质,其上存储有计算机程序,计算机程序被处理器执行时实现上述任一实施例的视频直播中图像处理方法的步骤。In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, and when the computer program is executed by a processor, the steps of the image processing method in the live video broadcast of any of the above embodiments are realized.
在一个实施例中，提供了一种终端，其包括：一个或多个处理器；存储器；一个或多个应用程序，其中一个或多个应用程序被存储在存储器中并被配置为由一个或多个处理器执行，一个或多个程序配置用于：执行根据上述任一实施例中的视频直播中图像处理方法。In one embodiment, a terminal is provided, including: one or more processors; a memory; and one or more application programs, where the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to execute the image processing method in live video according to any of the above embodiments.
本申请实施例还提供了移动终端,如图8所示,图8为一个实施例中终端的内部结构示意图。为了便于说明,仅示出了与本申请实施例相关的部分,具体技术细节未揭示的,请参照本申请实施例方法部分。该终端可以为包括手机、平板电脑、PDA(Personal Digital Assistant,个人数字助理)、POS(Point of Sales,销售终端)、车载电脑等任意终端设备,以终端为手机为例:An embodiment of the present application also provides a mobile terminal, as shown in FIG. 8, which is a schematic diagram of the internal structure of the terminal in an embodiment. For ease of description, only the parts related to the embodiments of the present application are shown. For specific technical details that are not disclosed, please refer to the method part of the embodiments of the present application. The terminal can be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales, sales terminal), a car computer, etc. Take the terminal as a mobile phone as an example:
图8示出的是与本申请实施例提供的终端相关的手机的部分结构的框图。参考图8，手机包括：射频(Radio Frequency,RF)电路810、存储器820、输入单元830、显示单元840、传感器850、音频电路860、无线保真(wireless fidelity,Wi-Fi)模块870、处理器880、以及电源890等部件。本领域技术人员可以理解，图8中示出的手机结构并不构成对手机的限定，可以包括比图示更多或更少的部件，或者组合某些部件，或者不同的部件布置。FIG. 8 shows a block diagram of part of the structure of a mobile phone related to the terminal provided by an embodiment of the present application. Referring to FIG. 8, the mobile phone includes components such as a radio frequency (RF) circuit 810, a memory 820, an input unit 830, a display unit 840, a sensor 850, an audio circuit 860, a wireless fidelity (Wi-Fi) module 870, a processor 880, and a power supply 890. Those skilled in the art can understand that the mobile phone structure shown in FIG. 8 does not constitute a limitation on the mobile phone, which may include more or fewer components than shown, combine some components, or use a different component arrangement.
在本申请实施例中，该终端所包括的处理器880还具有以下功能：从直播视频中提取视频图像；对所述视频图像的目标区域进行识别，获得第一图像轮廓信息，根据所述第一图像轮廓信息对所述视频图像进行整形，生成第一图像；对所述第一图像中的所述目标区域进行二次识别，获得所述第一图像的第二图像轮廓信息，根据所述第二图像轮廓信息将美妆贴图叠加在所述第一图像上并生成第二图像；利用所述第二图像替换所述直播视频中的视频图像，得到目标直播视频并输出。也即处理器880具备执行上述的任一实施例视频直播中图像处理方法的功能，在此不再赘述。In this embodiment of the present application, the processor 880 included in the terminal also has the following functions: extracting a video image from a live video; recognizing the target area of the video image to obtain first image contour information, and reshaping the video image according to the first image contour information to generate a first image; performing secondary recognition on the target area in the first image to obtain second image contour information of the first image, and superimposing a beauty texture map on the first image according to the second image contour information to generate a second image; and replacing the video image in the live video with the second image to obtain and output the target live video. That is, the processor 880 is capable of executing the image processing method in live video of any of the above embodiments, which will not be repeated here.
应该理解的是,虽然附图的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,其可以以其他的顺序执行。而且,附图的流程图中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,其执行顺序也不必然是依次进行,而是可以与其他步骤或者其他步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。It should be understood that although the various steps in the flowchart of the drawings are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in sequence in the order indicated by the arrows. Unless explicitly stated in this article, the execution of these steps is not strictly limited in order, and they can be executed in other orders. Moreover, at least part of the steps in the flowchart of the drawings may include multiple sub-steps or multiple stages. These sub-steps or stages are not necessarily executed at the same time, but can be executed at different times, and the order of execution is also It is not necessarily performed sequentially, but may be performed alternately or alternately with at least a part of other steps or sub-steps or stages of other steps.
以上所述仅是本申请的部分实施方式，应当指出，对于本技术领域的普通技术人员来说，在不脱离本申请原理的前提下，还可以做出若干改进和润饰，这些改进和润饰也应视为本申请的保护范围。The above are only some of the implementations of the present application. It should be pointed out that those of ordinary skill in the art can make several improvements and refinements without departing from the principles of the present application, and such improvements and refinements should also be regarded as falling within the protection scope of the present application.

Claims (7)

  1. 一种视频直播中图像处理方法,其特征在于,包括如下步骤:An image processing method in live video broadcast, characterized in that it comprises the following steps:
    从直播视频中提取视频图像;Extract video images from live video;
    对所述视频图像的目标区域进行识别,获得第一图像轮廓信息,根据所述第一图像轮廓信息对所述视频图像进行整形,生成第一图像;Recognizing the target area of the video image to obtain first image contour information, and shaping the video image according to the first image contour information to generate a first image;
    对所述第一图像中的所述目标区域进行二次识别,获得所述第一图像的第二图像轮廓信息,根据所述第二图像轮廓信息将美妆贴图叠加在所述第一图像上并生成第二图像;Perform secondary recognition on the target area in the first image to obtain second image contour information of the first image, and superimpose a beauty map on the first image according to the second image contour information And generate a second image;
    利用所述第二图像替换所述直播视频中的视频图像,得到目标直播视频并输出。The second image is used to replace the video image in the live video to obtain and output the target live video.
  2. 根据权利要求1所述的视频直播中图像处理方法,其特征在于,所述从直播视频中提取视频图像的步骤,包括:The method for image processing in a live video broadcast according to claim 1, wherein the step of extracting a video image from the live video comprises:
    获取所述直播视频,判断所述直播视频中的视频图像是否进行了贴图处理;若是,判断所述贴图处理的目标区域和待整形的目标区域之间是否重叠;Obtain the live video, and determine whether the video image in the live video has been subjected to texture processing; if so, determine whether the target area of the texture processing overlaps with the target area to be reshaped;
    若重叠,则调取贴图前的原始直播视频,并从所述原始直播视频中提取所述视频图像。If they overlap, the original live video before the texture is retrieved, and the video image is extracted from the original live video.
  3. The image processing method in live video streaming according to claim 1, wherein the video image is an image containing a human face, the target area comprises a face area, and the image contour information comprises facial feature points;
    the step of recognizing the target area of the video image to obtain the first image contour information, and reshaping the video image according to the first image contour information to generate the first image, comprises:
    recognizing the face area in the video image, and detecting the face area to obtain first facial feature points;
    determining, from the face area according to the first facial feature points, the reshaping part corresponding to a reshaping type; and
    adjusting the contour of the reshaping part to obtain a primary beauty makeup image.
  4. The image processing method in live video streaming according to claim 3, wherein the step of adjusting the contour of the reshaping part comprises:
    extracting the current contour of the reshaping object according to the first facial feature points; and
    adjusting the current contour to the reshaped contour corresponding to the reshaping type;
    and wherein the step of performing secondary recognition on the target area in the first image to obtain the second image contour information of the first image, and superimposing a beauty makeup texture on the first image according to the second image contour information to generate the second image, comprises:
    detecting the reshaped face area to obtain second facial feature points;
    retrieving a texture image matching the second facial feature points; and
    fusing the texture image into the target area so that the contour of the texture image coincides with the reshaped contour.
  5. The image processing method in live video streaming according to claim 1, further comprising, after the step of superimposing a beauty makeup texture on the first image according to the second image contour information:
    recognizing, according to the second image contour information, a skin area in the first image to be whitened and smoothed, and performing whitening and skin smoothing on that area of the first image on which the beauty makeup texture has been superimposed, to generate the second image.
  6. The image processing method in live video streaming according to claim 3, wherein the reshaping comprises any one or more of face slimming, nose reduction, lip plumping, eye enlargement, apple-cheek plumping, and smiling lips; and/or the texture comprises any one or more of foundation, nose shadow, lips, eyebrows, eye shadow, cosmetic contact lenses, aegyo sal (under-eye fullness), and blush.
  7. An image processing system in live video streaming, comprising:
    an extraction module, configured to extract a video image from a live video;
    a reshaping module, configured to recognize a target area of the video image, obtain first image contour information, and reshape the video image according to the first image contour information to generate a first image; and
    a texture mapping module, configured to perform secondary recognition on the target area in the first image, obtain second image contour information of the first image, and superimpose a beauty makeup texture on the first image according to the second image contour information to generate a second image;
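Taken together, the steps of claim 1 (extract a frame, reshape it from a first contour pass, re-detect the contour, overlay the makeup texture, and substitute the processed frame back into the stream) can be sketched as below. This is an illustrative skeleton only, not the patented implementation: `detect_contour`, `reshape_image`, and `overlay_makeup` are hypothetical stand-ins for a real facial-landmark detector, warp, and blend.

```python
import numpy as np

def detect_contour(image):
    # Hypothetical landmark detector: a real system would run a facial
    # landmark model; here two fixed points are returned so the sketch runs.
    h, w = image.shape[:2]
    return [(w // 3, h // 2), (2 * w // 3, h // 2)]

def reshape_image(image, contour):
    # Placeholder for the warp that moves pixels toward a target contour.
    return image.copy()

def overlay_makeup(image, contour, colour):
    # Paint the makeup colour at each re-detected landmark position.
    out = image.copy()
    for x, y in contour:
        out[y, x] = colour
    return out

def process_frame(frame):
    first_contour = detect_contour(frame)               # first recognition
    first_image = reshape_image(frame, first_contour)   # generate first image
    second_contour = detect_contour(first_image)        # secondary recognition
    return overlay_makeup(first_image, second_contour, (255, 0, 0))

frame = np.zeros((120, 160, 3), dtype=np.uint8)
second_image = process_frame(frame)  # this frame replaces the original in the stream
```

The two-pass structure matters: because reshaping moves the face contour, the texture is placed using landmarks re-detected on the already-reshaped image, so it lands on the new contour rather than the old one.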
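The overlap check in claim 2, which decides whether to fall back to the pre-texture original video, reduces to an intersection test between the two target regions. A minimal sketch, assuming each region is an axis-aligned `(x, y, w, h)` rectangle (the representation is an assumption; the claim does not fix one):

```python
def regions_overlap(a, b):
    """True if two (x, y, w, h) rectangles intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

# If the textured region and the region to be reshaped overlap,
# retrieve the original (pre-texture) frame instead.
textured = (10, 10, 50, 50)
to_reshape = (40, 40, 30, 30)
use_original_frame = regions_overlap(textured, to_reshape)
```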
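The contour adjustment of claims 3 and 4 — moving the detected current contour onto the reshaped contour for the chosen reshaping type — is, at its core, a displacement of landmark points. A hedged numpy sketch; `strength` is an assumed tuning parameter (1.0 snaps fully onto the target contour), and a real system would additionally warp the surrounding pixels:

```python
import numpy as np

def adjust_contour(current, target, strength=1.0):
    # Linear interpolation between the detected contour and the
    # reshaping type's target contour; strength in [0, 1].
    current = np.asarray(current, dtype=float)
    target = np.asarray(target, dtype=float)
    return current + strength * (target - current)

chin = [(40.0, 90.0), (60.0, 95.0)]   # detected "current" contour points
slim = [(42.0, 88.0), (58.0, 93.0)]   # target contour for, e.g., face slimming
adjusted = adjust_contour(chin, slim, strength=0.5)
```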
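Claim 5's whitening-and-smoothing step is commonly realised as a local blur blended back into the identified skin region plus a brightness lift. A minimal numpy sketch under those assumptions (a production pipeline would use an edge-preserving filter such as a bilateral filter rather than this crude neighbour average):

```python
import numpy as np

def whiten_and_smooth(image, mask, whiten=20, smooth=0.5):
    img = image.astype(float)
    # Crude smoothing: average each pixel with its 4 axis neighbours.
    blurred = (img
               + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
               + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1)) / 5.0
    out = img.copy()
    region = mask.astype(bool)
    out[region] = (1 - smooth) * img[region] + smooth * blurred[region]
    out[region] += whiten  # brightness lift stands in for "whitening"
    return np.clip(out, 0, 255).astype(np.uint8)

image = np.full((8, 8, 3), 100, dtype=np.uint8)  # uniform grey test frame
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True  # skin region identified from the contour information
second = whiten_and_smooth(image, mask)
```

Restricting the operation to the mask is the point of the claim: only the skin area derived from the second image contour information is touched, so the superimposed makeup textures and the background keep their detail.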
PCT/CN2020/112970 2019-09-10 2020-09-02 Image processing method and system in live streaming WO2021047433A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910854350.7 2019-09-10
CN201910854350.7A CN110490828B (en) 2019-09-10 2019-09-10 Image processing method and system in video live broadcast

Publications (1)

Publication Number Publication Date
WO2021047433A1 true WO2021047433A1 (en) 2021-03-18

Family

ID=68557252

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/112970 WO2021047433A1 (en) 2019-09-10 2020-09-02 Image processing method and system in live streaming

Country Status (2)

Country Link
CN (1) CN110490828B (en)
WO (1) WO2021047433A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110490828B (en) * 2019-09-10 2022-07-08 广州方硅信息技术有限公司 Image processing method and system in video live broadcast
CN111127352B (en) * 2019-12-13 2020-12-01 北京达佳互联信息技术有限公司 Image processing method, device, terminal and storage medium
CN111091512B (en) * 2019-12-18 2024-03-01 广州酷狗计算机科技有限公司 Image processing method and device and computer readable storage medium
CN111179156B (en) * 2019-12-23 2023-09-19 北京中广上洋科技股份有限公司 Video beautifying method based on face detection
CN111597984B (en) * 2020-05-15 2023-09-26 北京百度网讯科技有限公司 Label paper testing method, device, electronic equipment and computer readable storage medium
CN112218107B (en) * 2020-09-18 2022-07-08 广州虎牙科技有限公司 Live broadcast rendering method and device, electronic equipment and storage medium
CN112218111A (en) * 2020-09-30 2021-01-12 珠海格力电器股份有限公司 Image display method and device, storage medium and electronic equipment
CN113382275B (en) * 2021-06-07 2023-03-07 广州博冠信息科技有限公司 Live broadcast data generation method and device, storage medium and electronic equipment
CN113490009B (en) * 2021-07-06 2023-04-21 广州虎牙科技有限公司 Content information implantation method, device, server and storage medium
CN113709519B (en) * 2021-08-27 2023-11-17 上海掌门科技有限公司 Method and equipment for determining live broadcast shielding area

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106341720A (en) * 2016-08-18 2017-01-18 北京奇虎科技有限公司 Method for adding face effects in live video and device thereof
CN107705248A (en) * 2017-10-31 2018-02-16 广东欧珀移动通信有限公司 Image processing method, device, electronic equipment and computer-readable recording medium
US20180075524A1 (en) * 2016-09-15 2018-03-15 GlamST LLC Applying virtual makeup products
CN107945188A (en) * 2017-11-20 2018-04-20 北京奇虎科技有限公司 Personage based on scene cut dresss up method and device, computing device
CN110490828A (en) * 2019-09-10 2019-11-22 广州华多网络科技有限公司 Image processing method and system in net cast

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109120850B (en) * 2018-09-20 2021-10-22 维沃移动通信有限公司 Image processing method and mobile terminal
CN113329252B (en) * 2018-10-24 2023-01-06 广州虎牙科技有限公司 Live broadcast-based face processing method, device, equipment and storage medium
CN109614902A (en) * 2018-11-30 2019-04-12 深圳市脸萌科技有限公司 Face image processing process, device, electronic equipment and computer storage medium
CN109754375B (en) * 2018-12-25 2021-05-14 广州方硅信息技术有限公司 Image processing method, system, computer device, storage medium and terminal
CN110012352B (en) * 2019-04-17 2020-07-24 广州华多网络科技有限公司 Image special effect processing method and device and video live broadcast terminal

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113628132A (en) * 2021-07-26 2021-11-09 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN117314794A (en) * 2023-11-30 2023-12-29 深圳市美高电子设备有限公司 Live broadcast beautifying method and device, electronic equipment and storage medium
CN117314794B (en) * 2023-11-30 2024-03-01 深圳市美高电子设备有限公司 Live broadcast beautifying method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN110490828A (en) 2019-11-22
CN110490828B (en) 2022-07-08

Similar Documents

Publication Publication Date Title
WO2021047433A1 (en) Image processing method and system in live streaming
US11043011B2 (en) Image processing method, apparatus, terminal, and storage medium for fusing images of two objects
González-Briones et al. A multi-agent system for the classification of gender and age from images
WO2019128508A1 (en) Method and apparatus for processing image, storage medium, and electronic device
WO2019100282A1 (en) Face skin color recognition method, device and intelligent terminal
WO2015074476A1 (en) Image processing method, apparatus, and storage medium
CN108012081B (en) Intelligent beautifying method, device, terminal and computer readable storage medium
KR101141643B1 (en) Apparatus and Method for caricature function in mobile terminal using basis of detection feature-point
CN106682632B (en) Method and device for processing face image
WO2017000764A1 (en) Gesture detection and recognition method and system
JP2006012062A (en) Image processor and its method, program and imaging device
JP2022548915A (en) Human body attribute recognition method, device, electronic device and computer program
WO2021197186A1 (en) Auxiliary makeup method, terminal device, storage medium and program product
WO2022135574A1 (en) Skin color detection method and apparatus, and mobile terminal and storage medium
WO2022135579A1 (en) Skin color detection method and device, mobile terminal, and storage medium
CN109242760B (en) Face image processing method and device and electronic equipment
JP2014186505A (en) Visual line detection device and imaging device
CN113344837B (en) Face image processing method and device, computer readable storage medium and terminal
Rahim et al. Dynamic hand gesture based sign word recognition using convolutional neural network with feature fusion
Zhang et al. A skin color model based on modified GLHS space for face detection
Blumrosen et al. Towards automated recognition of facial expressions in animal models
Chen et al. Fast eye detection using different color spaces
US20220207917A1 (en) Facial expression image processing method and apparatus, and electronic device
WO2022258013A1 (en) Image processing method and apparatus, electronic device and readable storage medium
US20230186425A1 (en) Face image processing method and apparatus, device, and computer readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20862816

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20862816

Country of ref document: EP

Kind code of ref document: A1