CN110956106B - Live broadcast on-demand processing method, device, storage medium and equipment - Google Patents


Info

Publication number
CN110956106B
CN110956106B (application number CN201911144390.9A)
Authority
CN
China
Prior art keywords
image
rotated
face feature
original image
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911144390.9A
Other languages
Chinese (zh)
Other versions
CN110956106A (en
Inventor
王云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Cubesili Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Cubesili Information Technology Co Ltd filed Critical Guangzhou Cubesili Information Technology Co Ltd
Priority to CN201911144390.9A priority Critical patent/CN110956106B/en
Publication of CN110956106A publication Critical patent/CN110956106A/en
Application granted granted Critical
Publication of CN110956106B publication Critical patent/CN110956106B/en


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06V40/165: Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/24: Aligning, centring, orientation detection or correction of the image
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60: Network streaming of media packets
    • H04L65/75: Media network packet handling
    • H04L65/762: Media network packet handling at the source
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41: Structure of client; Structure of client peripherals
    • H04N21/422: Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223: Cameras

Abstract

The specification provides a live-broadcast processing method, apparatus, storage medium, and device. In the method, the current placement pose of the camera is obtained and the original image is rotated accordingly; the coordinates of the face feature points in the rotated image are obtained by recognition and converted into the corresponding coordinates in the original image; the original image is then beautified, and the beautified image is sent to a live push-streaming tool to be rotated back and push-streamed, thereby realizing the live broadcast. Because the image is first rotated to the correct orientation before face recognition, and the recognized feature-point coordinates are converted into correct original-image coordinates, the method solves the problem that the face cannot be beautified in the scenario where a USB camera is placed vertically for portrait-mode live broadcasting.

Description

Live broadcast on-demand processing method, device, storage medium and equipment
Technical Field
The present disclosure relates to the field of live broadcasting, and in particular to a live-broadcast processing method, apparatus, storage medium, and device.
Background
In general, live video can be presented in either a landscape (horizontal-screen) state or a portrait (vertical-screen) state. Some anchors use a USB camera on a computer to achieve a live effect similar to the vertical screen of a mobile phone. Since most USB cameras present a landscape effect in their normal placement pose, such an anchor realizes portrait live broadcasting as follows: the USB camera is placed vertically, the camera is opened in the live beautification software and various special effects are added, and the picture is then rotated 90 degrees in the live push-streaming tool to achieve the portrait effect.
However, this approach has a drawback: the image captured by the vertically placed camera shows a 'lying' figure, and, owing to the limitations of existing live beautification software, its face recognition module cannot automatically correct and recognize the 'lying' figure. As a result, no face feature-point data is output, and many face-recognition-based functions in the live beautification software, such as expression special effects and makeup, cannot be used.
Naturally, a similar problem occurs when a camera that presents a portrait effect in its normal placement pose is used to achieve a landscape effect. In other words, existing schemes for switching between landscape and portrait effects are defective and inconvenient for users.
Disclosure of Invention
To overcome the problems in the related art, the present specification provides a live-broadcast processing method, apparatus, storage medium, and device.
According to a first aspect of the embodiments of the present disclosure, there is provided a live-broadcast processing method, the method including:
acquiring user input information through a UI interaction component on the live broadcast on-demand interface, so as to determine the current placement pose of a camera;
acquiring an original image captured by the camera;
determining, according to the current placement pose of the camera, whether to rotate the original image, and if so, rotating the image to a first angle, where the first angle is an angle at which face feature points can be recognized;
obtaining face feature points in the rotated image and coordinates of the face feature points in the rotated image;
converting the coordinates of the face feature points in the rotated image into coordinates of the face feature points in the original image;
carrying out image beautifying treatment on the original image according to the coordinates of the face feature points in the original image;
and sending the beautified image to a live push-streaming tool, so that the live push-streaming tool rotates the beautified image back to the first angle and push-streams it.
In some examples, options are provided in the UI interaction component, where the options display parameters related to the placement pose of the camera; the user input information is the option selected by the user; and the parameters related to the camera placement pose include: normal pose, rotated 90 degrees clockwise, rotated 180 degrees, and rotated 90 degrees counterclockwise.
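The four pose options above map directly to rotation angles. A minimal sketch in Python (the option labels and the function name are illustrative assumptions, not part of the specification):

```python
# Hypothetical mapping from the four UI pose options to the clockwise
# rotation angle (in degrees) describing the camera's placement pose.
POSE_OPTIONS = {
    "normal": 0,
    "clockwise_90": 90,
    "rotated_180": 180,
    "counterclockwise_90": 270,  # 90 deg counter-clockwise == 270 deg clockwise
}

def pose_angle(option: str) -> int:
    """Return the rotation angle associated with the selected UI option."""
    if option not in POSE_OPTIONS:
        raise ValueError(f"unknown pose option: {option!r}")
    return POSE_OPTIONS[option]
```

Representing a counterclockwise quarter turn as 270 degrees clockwise keeps every pose in a single clockwise convention, which simplifies the later rotation and coordinate-conversion steps.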
In some examples, the image beautification processing includes one or more of the following: expression special effects, makeup, and lens rhythm.
In some examples, the push-streaming step includes:
packaging the image rotated back to the first angle to form stream data, and distributing the stream data to each client through a server.
In some examples, the original image is a landscape-effect image and the rotated image is a portrait-effect image.
In some examples, the number of face feature points is 106.
According to a second aspect of the embodiments of the present specification, there is provided a live-broadcast processing apparatus, the apparatus comprising:
an acquisition module, configured to acquire user input information on the live broadcast on-demand interface through a UI interaction component, so as to determine the current placement pose of the camera, and to acquire the original image captured by the camera;
a determining module, configured to determine whether to rotate the original image according to the current placement pose of the camera;
a rotating module, configured to rotate the image to a first angle when the determination result is yes, where the first angle is an angle at which face feature points can be recognized;
a recognition module, configured to obtain the face feature points in the rotated image and the coordinates of each face feature point in the rotated image;
a computing module, configured to convert the coordinates of the face feature points in the rotated image into the coordinates of the face feature points in the original image;
a beautification module, configured to perform image beautification processing on the original image according to the coordinates of each face feature point in the original image;
and a live push-streaming tool, configured to rotate the beautified image back to the first angle and push-stream it.
According to a third aspect of embodiments of the present specification, there is provided a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements any of the methods of the embodiments of the present specification when the program is executed.
According to a fourth aspect of embodiments of the present description, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements any of the methods of the embodiments of the present description.
The technical scheme provided by the embodiment of the specification can comprise the following beneficial effects:
The embodiments of the specification disclose a live-broadcast processing method, apparatus, storage medium, and device. In the method, the rotation angle of the camera is acquired and the original image is rotated accordingly; the coordinates of each face feature point in the rotated image are obtained by recognition and converted into the coordinates of each face feature point in the original image; the original image is then beautified, and the beautified image is sent to a live push-streaming tool to be rotated back and push-streamed, thereby realizing the live broadcast. Because the image is first rotated to the correct orientation before face recognition, and the recognized feature-point coordinates are converted into correct original-image coordinates, the method solves the problem that the face cannot be beautified in the scenario where a USB camera is placed vertically for portrait-mode live broadcasting.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the specification and together with the description, serve to explain the principles of the specification.
FIG. 1 is a flow chart of a method of processing live broadcasts according to an exemplary embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a UI interaction component interface shown in accordance with an exemplary embodiment of the present description;
fig. 3a, 3b, 3c, 3d are schematic views of four common placement poses of a camera;
fig. 4 is a schematic diagram of a method for converting coordinates of a face feature point in an original image according to an exemplary embodiment of the present specification;
fig. 5 is a hardware configuration diagram of a computer device where the processing apparatus for live broadcast in the embodiment of the present disclosure is located;
fig. 6 is a block diagram of a processing device for live streaming according to an exemplary embodiment of the present description.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the present specification. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present description as detailed in the accompanying claims.
The terminology used in the description presented herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the description. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this specification to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, the first information may also be referred to as second information, and similarly, the second information may also be referred to as first information, without departing from the scope of the present description. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining", depending on the context.
The effects presented by live video can generally be divided into two states: landscape and portrait. In general, video in the landscape state has a relatively wide field of view, rich spatial layering, and a strong sense of depth, while video in the portrait state focuses more on presenting specific objects, is suitable for displaying simple and intuitive scenes, and can provide viewers with a more immersive experience. Some anchors use a USB camera on a computer to achieve a live effect similar to the vertical screen of a mobile phone. Since most USB cameras present a landscape effect in their normal placement pose, such an anchor realizes portrait live broadcasting as follows: the USB camera is placed vertically, the camera is opened in the live beautification software and various special effects are added, and the picture is then rotated 90 degrees in the live push-streaming tool to achieve the portrait effect.
However, this approach has a drawback: the image captured by the vertically placed camera shows a 'lying' figure, and, owing to the limitations of existing live beautification software, its face recognition module cannot automatically correct and recognize the 'lying' figure. As a result, no face feature-point data is output, and many face-recognition-based functions in the live beautification software, such as expression special effects and makeup, cannot be used.
Naturally, a similar problem occurs when a camera that presents a portrait effect in its normal placement pose is used to achieve a landscape effect. In other words, existing schemes for switching between landscape and portrait effects are defective and inconvenient for users.
Next, embodiments of the present specification will be described in detail.
As shown in fig. 1, fig. 1 is a flowchart illustrating a processing method of live broadcasting according to an exemplary embodiment, including the following steps:
in step 101, acquiring user input information by using a UI interaction component on a live broadcast on-demand interface so as to determine the current placement posture of a camera;
in some examples, an option is set in the UI (User Interface) interaction component mentioned in this step, where the option is used to display parameters related to the pose of the camera, the User input information is one of the options determined by the User, and the parameters related to the pose of the camera may include: normal pose, clockwise rotation by 90 degrees, rotation by 180 degrees and counterclockwise rotation by 90 degrees. As shown in fig. 2, fig. 2 is a schematic diagram of a UI interaction component interface shown in the embodiments of the present description, wherein the UI interaction component provides options for user input.
In other embodiments, the current placement pose of the camera may be obtained in other ways; for example, it may be detected by a detection device, which may be a detection device built into the camera itself or an external detection device.
102, acquiring an original image captured by the camera;
step 103, determining, according to the current placement pose of the camera, whether to rotate the original image, and if so, rotating the image to a first angle, where the first angle is an angle at which face feature points can be recognized;
in some examples, determining whether to rotate the original image according to the current pose of the camera mentioned in this step may refer to: and acquiring a rotation angle of the camera according to the current placement posture of the camera, and determining whether to rotate the original image according to the rotation angle, wherein the rotation angle can be a rotation angle representing the placement posture of the camera when the current placement posture of the camera presents a complete front posture relative to the captured image. When the face image captured by the camera presents a complete frontal posture, the face recognition module in the face beautifying software for live broadcasting is used for supporting the recognized image, and the recognition rate is high. As shown in fig. 3a, 3b, 3c, and 3d, fig. 3a, 3b, 3c, and 3d are schematic views of four common placement postures of the camera. The camera pose shown in fig. 3a is a normal pose that is generally considered, and may be considered as a pose when the captured face image presents a full frontal pose, where the rotation angle is 0 degrees; the rotation angle corresponding to the placement posture of the camera shown in fig. 3b is 180 degrees; the cameras shown in fig. 3c and 3d are the main players, and normally realize the camera placing gesture when the vertical screen live broadcast, and the rotation angles are 270 degrees clockwise and 90 degrees clockwise at the moment respectively. In some examples, when the rotation angle is 0, the result of the determination is no, the original image may not be rotated, and the original image may directly identify the face feature point, where the angle of the image is the first angle; when the rotation angle is not 0, the original image can be rotated to an angle capable of identifying the face feature point, namely, the first angle, if the determination result is that the original image is rotated. Taking fig. 
3c as an example, the rotation angle obtained by the current placement posture of the camera in step 101 is 270 degrees clockwise, and then the original image is rotated 270 degrees clockwise according to the rotation angle, so as to obtain an image with a completely front posture after rotation.
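Since all four placement poses of figs. 3a to 3d are quarter-turn multiples, the rotation in step 103 can be sketched with a plain NumPy array rotation (the function name is an assumption; frames are assumed to arrive as arrays):

```python
import numpy as np

def rotate_upright(original: np.ndarray, rotation_deg: int) -> np.ndarray:
    """Rotate the captured frame clockwise by rotation_deg so that the
    face appears in a fully frontal (upright) orientation.

    rotation_deg must be one of 0, 90, 180, 270, matching the four
    placement poses of figs. 3a-3d.
    """
    if rotation_deg not in (0, 90, 180, 270):
        raise ValueError("rotation must be a multiple of 90 degrees")
    # np.rot90 rotates counter-clockwise for positive k, so negate k
    # to obtain a clockwise rotation.
    return np.rot90(original, k=-(rotation_deg // 90))
```

For the fig. 3c case, `rotate_upright(frame, 270)` would produce the upright image on which the face recognition module can operate.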
In some examples, the original image is a landscape-effect image and the rotated image is a portrait-effect image. In the application scenario where a USB camera is placed vertically at a PC to realize portrait live broadcasting, the original image is a landscape-effect image in which the anchor's face presents a 'lying' pose; the original image is rotated, according to the current placement pose of the camera, to an angle at which face feature points can be recognized, and the rotated image is a portrait-effect image.
104, obtaining face feature points in the rotated image and coordinates of each face feature point in the rotated image;
In some examples, in this step, faces are automatically detected and tracked in the image based on a face recognition algorithm, and the face feature points in the image are obtained together with the coordinates of each face feature point relative to the image. The face feature points may be predefined key points of a designated face and may include: eyes, eyebrows, nose, mouth, facial contours, and the like. This step can use any one of the following face recognition algorithms: a recognition algorithm based on face feature points, a recognition algorithm based on the whole face image, a template-based recognition algorithm, or a neural-network recognition algorithm.
In some examples, the number of face feature points in this step is 106. Some common face libraries provide the location of the corresponding face frame and the coordinates of the face feature points. An algorithm based on 106 face feature points achieves a high recognition rate and a close fit to the face. In other embodiments, algorithms with other numbers of points may be used for face recognition, depending on the application scenario.
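The rest of the pipeline only needs something that takes an image and returns feature-point coordinates; no specific face library is mandated. A minimal, entirely hypothetical interface (including a stand-in detector useful for wiring up the pipeline before a real algorithm is plugged in) could look like:

```python
from typing import Callable, List, Tuple

# A face detector here is simply a callable that takes an image and
# returns a list of (x, y) feature-point coordinates relative to it.
FaceDetector = Callable[[object], List[Tuple[float, float]]]

def make_dummy_detector(n_points: int = 106) -> FaceDetector:
    """Build a stand-in detector that returns n_points placeholder
    coordinates, following the 106-point convention described above."""
    def detect(image: object) -> List[Tuple[float, float]]:
        # Placeholder coordinates; a real detector would locate eyes,
        # eyebrows, nose, mouth and facial contour points in the image.
        return [(float(i), float(i)) for i in range(n_points)]
    return detect
```

Keeping the detector behind such a callable makes it easy to swap in any of the algorithm families listed above without changing the surrounding steps.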
In step 105, converting coordinates of the face feature points in the rotated image into coordinates of the face feature points in the original image;
In some examples, the specific process of this step is as follows. Fig. 4 is a schematic diagram of the method for converting the coordinates of a face feature point into coordinates in the original image according to the present embodiment. Assume that a reference coordinate system is established with a certain point in the original image as the origin, and that the coordinates of a certain face feature point in the rotated image are (a, b); the coordinates of that feature point in the original image can then be calculated from (a, b) and the current placement pose of the camera. The rotated image is obtained by rotating the original image; the rotation angle is denoted by θ, where a positive value of θ represents clockwise rotation and a negative value represents counterclockwise rotation. Let r be the distance from the feature point to the origin, and let α be the angle between the abscissa axis and the line connecting the feature point at (a, b) to the origin; then formula (1) is obtained:

a = r·cos α,  b = r·sin α  (1)

For the coordinates (x, y) of the face feature point in the original image, the angle between the abscissa axis and the line connecting that point to the origin is (α + θ), which gives formula (2):

x = r·cos(α + θ),  y = r·sin(α + θ)  (2)

Combining formula (1) and formula (2) with formula (3):

a² + b² = x² + y² = r²  (3)

the formula for calculating the coordinates (x, y) of the face feature point in the original image can be derived, specifically:

x = a·cos θ - b·sin θ,  y = a·sin θ + b·cos θ
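The derived conversion formula can be checked with a short sketch (the function name and the use of radians are choices made here, not mandated by the specification):

```python
import math

def to_original(a: float, b: float, theta: float):
    """Map feature-point coordinates (a, b) in the rotated image back to
    coordinates (x, y) in the original image.

    theta is the rotation angle in radians; a positive value denotes
    clockwise rotation, matching the convention in the derivation above.
    """
    x = a * math.cos(theta) - b * math.sin(theta)
    y = a * math.sin(theta) + b * math.cos(theta)
    return x, y
```

For theta = 0 the mapping is the identity, and a² + b² = x² + y² holds for any theta, consistent with formula (3).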
In step 106, image beautification processing is carried out on the image according to the coordinates of the face feature points in the original image;
in some examples, the image beautifying process referred to in this step may refer to one or more of the following: special effect of expression, make-up and lens rhythm. According to the obtained coordinates of the characteristic points of each face relative to the original image, the system can detect and position the face, so that the face recognition-based functions including expression special effects, makeup, lens rhythms and the like are realized based on the coordinates.
In step 107, the beautified image is sent to a live push-streaming tool, so that the live push-streaming tool rotates the beautified image back to the first angle and push-streams it.
In some examples, the push-streaming mentioned in this step refers to the process of transmitting the content packaged in the capture stage to the server, and includes: packaging the image rotated back to the first angle to form stream data, and distributing the stream data to each client through a server.
In some examples, the live push-streaming tool mentioned in this step refers to a tool for transmitting live content to a server so that it is displayed on a live platform, and may include OBS (Open Broadcaster Software). OBS is software for producing live streaming-media content; the beautified image is sent to OBS, and after OBS rotates the image into a fully frontal orientation, the image is transmitted to the network, thereby realizing push streaming. In other embodiments, other live push-streaming tools may be used to perform the corresponding operations.
According to the embodiments of the description, by acquiring the current placement pose of the camera, the image is correctly rotated, and the coordinates of the face feature points in the rotated image are converted into correct face feature-point coordinates, so that the face recognition module can support beautification of the image output by the camera. This solves the problem that the face cannot be beautified in the scenario where a vertically placed USB camera is used for portrait live broadcasting.
The following describes the live-broadcast processing method of this specification with an application example.
The anchor Xiaomei uses a USB camera for live broadcasting on a PC, which displays a landscape effect. One prior approach is to crop away the left and right sides of the landscape video and keep only the middle portrait region; however, the resolution and definition are reduced, which is unacceptable. Another is to place the USB camera vertically, open the camera in the live beautification software, add various special effects, and then rotate the picture 90 degrees in the live push-streaming tool to achieve the portrait effect; but the face recognition module of the live beautification software cannot correct and recognize the 'lying' figure in the captured picture, and face feature-point output is missing, so many face-recognition-based functions in the live beautification software, such as expression special effects and makeup, cannot be used.
The client used by Xiaomei applies the live-broadcast processing method of this specification, and the specific process includes the following steps:
S401, acquiring an original image captured by the camera, and acquiring the current placement pose of the camera input by the user through a UI interaction component, where the UI interaction component provides options convenient for user input. In this embodiment, the camera placement pose input by Xiaomei is "rotated 90 degrees clockwise";
s402, determining to rotate an original image to a first angle according to the current placing gesture of the camera, wherein the first angle is an angle capable of identifying characteristic points of a human face, the rotation angle is 90 degrees clockwise in the embodiment, the original image is an image with a horizontal screen effect, and the rotated image is an image with a vertical screen effect;
s403, obtaining 106 point face feature points in the rotated image and coordinates of each face feature point in the rotated image based on a face recognition algorithm;
s404, converting the coordinates of each face feature point in the rotated image into the coordinates of each face feature point in the original image, in this embodiment, assuming that the coordinates of a certain face feature point in the rotated image are (a, b), calculating the coordinates (x, y) of the face feature point in the original image according to the following formula:
where θ is the rotation angle, a positive value of θ indicating clockwise rotation and a negative value indicating counterclockwise rotation; in this embodiment, since the rotation angle is 90 degrees clockwise, θ = 0.5π;
s405, carrying out image beautifying processing on the image according to the coordinates of each face feature point in the original image, wherein the image beautifying processing comprises one or more of the following steps: special expression effect, makeup and lens rhythm;
s406, sending the image after the image beautifying treatment to a live streaming tool so that the live streaming tool rotates the image after the image beautifying treatment back to a first angle and pushes the image to a server.
Corresponding to the embodiment of the method, the specification also provides an embodiment of a processing device for live broadcast and a terminal applied by the processing device.
The embodiments of the live-broadcast processing apparatus in this specification may be applied to a computer device, such as a server or a terminal device. The apparatus embodiments may be implemented by software, or by hardware, or by a combination of hardware and software. Taking software implementation as an example, the apparatus, in a logical sense, is formed by the processor of the device in which it is located reading corresponding computer program instructions from nonvolatile memory into memory. In terms of hardware, fig. 5 is a hardware structure diagram of the computer device in which the live-broadcast processing apparatus of this embodiment is located. Besides the processor 510, memory 530, network interface 520, and nonvolatile memory 540 shown in fig. 5, the server or electronic device in which the apparatus 531 is located may generally include other hardware according to the actual function of the computer device, which is not described here again.
Accordingly, the present specification embodiment also provides a computer storage medium having a program stored therein, which when executed by a processor, implements the method in any of the above embodiments.
Accordingly, the present specification also provides a computer device comprising a processor; a memory for storing processor-executable instructions; wherein the processor is configured to: the instructions, when executed, perform the method described in the method embodiments of any one of the embodiments of the present specification.
Embodiments of this specification may take the form of a computer program product embodied on one or more storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having program code embodied therein. Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to: phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
As shown in fig. 6, which is a block diagram of a processing apparatus for starting a live broadcast according to an exemplary embodiment of this specification, the apparatus includes:
the acquiring module 61 is configured to acquire user input information on a live broadcast start interface through a UI interaction component, so as to determine a current placement posture of a camera, and to acquire an original image captured by the camera;
a determining module 62, configured to determine whether to rotate the original image according to a current placement gesture of the camera;
a rotation module 63, configured to rotate the original image to a first angle when the determination result of the determination module 62 is yes, where the first angle is an angle at which face feature points can be identified;
the identifying module 64 is configured to obtain face feature points in the rotated image and coordinates of each face feature point in the rotated image;
a calculation module 65, configured to convert coordinates of the face feature points in the rotated image into coordinates of the face feature points in the original image;
the beautifying module 66 is configured to perform image beautifying processing on the original image according to coordinates of each face feature point in the original image;
and the pushing module 67 is configured to send the beautified image to a live push tool, so that the live push tool rotates the beautified image back to the first angle and pushes it.
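The coordinate conversion performed by the calculation module 65 can be made concrete. The sketch below is an editorial illustration, not taken from the patent: it assumes the detection rotation was applied in 90-degree counterclockwise steps (numpy's `rot90` convention), and `rotated_to_original` with its parameters is a name introduced here.

```python
import numpy as np

def rotated_to_original(x, y, k, w, h):
    """Map a face feature point (x, y) found in a frame rotated k*90 degrees
    counterclockwise back to pixel coordinates in the original w-by-h frame.
    x is the column index, y the row index."""
    k %= 4
    if k == 0:
        return x, y
    if k == 1:                      # detection frame was rotated 90 deg ccw
        return w - 1 - y, x
    if k == 2:                      # 180 deg
        return w - 1 - x, h - 1 - y
    return y, h - 1 - x             # k == 3, i.e. 90 deg cw

# Round-trip check: mark a pixel, rotate, and map its coordinates back.
img = np.zeros((3, 5))
img[1, 4] = 1                       # original point: x=4, y=1
for k in range(4):
    y_r, x_r = np.argwhere(np.rot90(img, k) == 1)[0]
    assert rotated_to_original(x_r, y_r, k, 5, 3) == (4, 1)
```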
In some examples, options are provided in the UI interaction component for displaying parameters related to the placement posture of the camera; the user input information is one of the options selected by the user. The parameters related to the camera placement posture include: a normal posture, rotation by 90 degrees clockwise, rotation by 180 degrees, and rotation by 90 degrees counterclockwise.
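As a concrete illustration, each UI option can be reduced to the number of 90-degree rotation steps needed to bring the frame upright. The dictionary keys and the counterclockwise convention below are assumptions of this sketch, not taken from the patent.

```python
# Hypothetical mapping from the UI pose option to the number of 90-degree
# counterclockwise rotation steps that bring the captured frame to the
# first angle (the orientation in which face feature points are detectable).
POSE_TO_STEPS = {
    "normal": 0,        # normal posture: no rotation needed
    "cw_90": 1,         # camera rotated 90 deg clockwise -> rotate frame 90 deg ccw
    "rot_180": 2,       # camera upside down -> rotate frame 180 deg
    "ccw_90": 3,        # camera rotated 90 deg ccw -> rotate frame 90 deg cw
}

def steps_for_pose(option):
    # Unknown options fall back to no rotation.
    return POSE_TO_STEPS.get(option, 0)
```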
In some examples, the image beautification process includes one or more of: expression special effects, make-up, and lens rhythm.
In some examples, the step of pushing includes: packaging the image rotated back to the first angle into stream data, and distributing the stream data to each client through a server.
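The packaging step can be illustrated with a minimal, purely hypothetical framing scheme; a real push tool would emit packets of a streaming protocol such as RTMP, and the header layout below is invented for illustration only.

```python
import struct

def package_frame(encoded_frame: bytes, timestamp_ms: int) -> bytes:
    """Hypothetical stream-data framing: a fixed 12-byte header carrying the
    payload length and a millisecond timestamp, followed by the encoded frame
    bytes. Illustrative only; not the patent's or any real protocol's format."""
    header = struct.pack(">IQ", len(encoded_frame), timestamp_ms)
    return header + encoded_frame

packet = package_frame(b"\x00\x01\x02", 1234)
length, ts = struct.unpack(">IQ", packet[:12])
print(length, ts)  # 3 1234
```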
In some examples, the original image is a landscape-orientation image and the rotated image is a portrait-orientation image.
In some examples, the number of face feature points is 106.
The implementation processes of the functions and roles of the modules in the above apparatus are detailed in the implementation processes of the corresponding steps in the above method, and are not repeated here.
Since the apparatus embodiments substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant details. The apparatus embodiments described above are merely illustrative; the modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules, i.e., they may be located in one place or distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of this specification. Those of ordinary skill in the art can understand and implement them without creative effort.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
Other embodiments of the present description will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This specification is intended to cover any variations, uses, or adaptations of the specification following, in general, the principles of the specification and including such departures from the present disclosure as come within known or customary practice within the art to which the specification pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the specification being indicated by the following claims.
It is to be understood that the present description is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present description is limited only by the appended claims.
The foregoing description of the preferred embodiments is provided for the purpose of illustration only, and is not intended to limit the scope of the disclosure, since any modifications, equivalents, improvements, etc. that fall within the spirit and principles of the disclosure are intended to be included within the scope of the disclosure.

Claims (9)

1. A method for processing live broadcast, the method comprising:
acquiring user input information through a UI interaction component on a live broadcast start interface, so as to determine a current placement posture of a camera;
acquiring an original image captured by the camera;
determining, according to the current placement posture of the camera, whether to rotate the original image, and if so, rotating the original image to a first angle, wherein the first angle is an angle at which face feature points can be identified;
obtaining face feature points in the rotated image and coordinates of the face feature points in the rotated image;
converting the coordinates of the face feature points in the rotated image into coordinates of the face feature points in the original image;
carrying out image beautifying treatment on the original image according to the coordinates of the face feature points in the original image;
and sending the beautified image to a live push tool, so that the live push tool rotates the beautified image back to the first angle and pushes it.
2. The method according to claim 1, wherein options are provided in the UI interaction component for displaying parameters related to the placement posture of the camera; the user input information is one of the options selected by the user; and the parameters related to the camera placement posture comprise: a normal posture, rotation by 90 degrees clockwise, rotation by 180 degrees, and rotation by 90 degrees counterclockwise.
3. The method of claim 1, wherein the image beautification process comprises one or more of:
special effect of expression, make-up and lens rhythm.
4. The method of claim 1, wherein the step of pushing comprises:
and packaging the image rotated back to the first angle to form stream data, and distributing the stream data to each client through a server.
5. The method of claim 1, wherein the original image is a landscape-orientation image and the rotated image is a portrait-orientation image.
6. The method according to claim 1, wherein the number of points of the face feature points is 106 points.
7. A processing apparatus for starting a live broadcast, the apparatus comprising:
the acquisition module is configured to acquire user input information on the live broadcast start interface through the UI interaction component, so as to determine the current placement posture of the camera, and to acquire the original image captured by the camera;
the determining module is used for determining whether the original image is rotated or not according to the current placing gesture of the camera;
the rotating module is configured to rotate the original image to a first angle when the determination result is yes, wherein the first angle is an angle at which face feature points can be identified;
the identification module is used for obtaining the face characteristic points in the rotated image and the coordinates of each face characteristic point in the rotated image;
the computing module is used for converting the coordinates of the face feature points in the rotated image into the coordinates of the face feature points in the original image;
the beautifying module is used for carrying out image beautifying treatment on the original image according to the coordinates of each face characteristic point in the original image;
and the pushing module is configured to send the beautified image to a live push tool, so that the live push tool rotates the beautified image back to the first angle and pushes it.
8. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 6 when executing the program.
9. A computer readable storage medium, characterized in that a computer program is stored thereon, which program, when being executed by a processor, implements the method of any of claims 1-6.
CN201911144390.9A 2019-11-20 2019-11-20 Live broadcast on-demand processing method, device, storage medium and equipment Active CN110956106B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911144390.9A CN110956106B (en) 2019-11-20 2019-11-20 Live broadcast on-demand processing method, device, storage medium and equipment


Publications (2)

Publication Number Publication Date
CN110956106A CN110956106A (en) 2020-04-03
CN110956106B 2023-10-10

Family

ID=69978122

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911144390.9A Active CN110956106B (en) 2019-11-20 2019-11-20 Live broadcast on-demand processing method, device, storage medium and equipment

Country Status (1)

Country Link
CN (1) CN110956106B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111669611B (en) * 2020-06-19 2022-02-22 广州繁星互娱信息科技有限公司 Image processing method, device, terminal and storage medium
CN111988671B (en) * 2020-09-07 2022-06-03 北京达佳互联信息技术有限公司 Image processing method and image processing apparatus
CN116325762A (en) * 2020-11-27 2023-06-23 海信视像科技股份有限公司 Display device and display method

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105979342A (en) * 2016-06-24 2016-09-28 武汉斗鱼网络科技有限公司 Horizontal-vertical screen live broadcast switching method and system in live broadcast website
CN106484349A (en) * 2016-09-26 2017-03-08 腾讯科技(深圳)有限公司 The treating method and apparatus of live information
CN106603912A (en) * 2016-12-05 2017-04-26 科大讯飞股份有限公司 Video live broadcast control method and device
CN107316319A (en) * 2017-05-27 2017-11-03 北京小鸟看看科技有限公司 The methods, devices and systems that a kind of rigid body is followed the trail of
CN107820071A (en) * 2017-11-24 2018-03-20 深圳超多维科技有限公司 Mobile terminal and its stereoscopic imaging method, device and computer-readable recording medium
CN109948397A (en) * 2017-12-20 2019-06-28 Tcl集团股份有限公司 A kind of face image correcting method, system and terminal device
CN109961055A (en) * 2019-03-29 2019-07-02 广州市百果园信息技术有限公司 Face critical point detection method, apparatus, equipment and storage medium



Similar Documents

Publication Publication Date Title
US20210192188A1 (en) Facial Signature Methods, Systems and Software
US11960639B2 (en) Virtual 3D methods, systems and software
CN110956106B (en) Live broadcast on-demand processing method, device, storage medium and equipment
US11108972B2 (en) Virtual three dimensional video creation and management system and method
CN110363133B (en) Method, device, equipment and storage medium for sight line detection and video processing
US8903139B2 (en) Method of reconstructing three-dimensional facial shape
US9961334B2 (en) Simulated 3D image display method and display device
CN111311756B (en) Augmented reality AR display method and related device
CN113973190A (en) Video virtual background image processing method and device and computer equipment
WO2019237745A1 (en) Facial image processing method and apparatus, electronic device and computer readable storage medium
EP3295372A1 (en) Facial signature methods, systems and software
US10453244B2 (en) Multi-layer UV map based texture rendering for free-running FVV applications
CN111836058B (en) Method, device and equipment for playing real-time video and storage medium
US20230152883A1 (en) Scene processing for holographic displays
CN107248138B (en) Method for predicting human visual saliency in virtual reality environment
CN105631938B (en) Image processing method and electronic equipment
CN115086625A (en) Correction method, device and system of projection picture, correction equipment and projection equipment
US10237614B2 (en) Content viewing verification system
CN111161426A (en) Three-dimensional display method and system based on panoramic image
CN111047680A (en) Mobile equipment end three-dimensional model reconstruction system, method and storage medium
CN109348132B (en) Panoramic shooting method and device
CN115294273A (en) Shooting method and device
CN116152046A (en) Image processing method, device, electronic equipment and storage medium
CN117557768A (en) Video image correction method and device and electronic equipment
CN117455754A (en) Image conversion method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210113

Address after: 511442 3108, 79 Wanbo 2nd Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Applicant after: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 511442 24 floors, B-1 Building, Wanda Commercial Square North District, Wanbo Business District, 79 Wanbo Second Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Applicant before: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.

GR01 Patent grant