CN113286186B - Image display method, device and storage medium in live broadcast

Image display method, device and storage medium in live broadcast

Info

Publication number
CN113286186B
Authority
CN
China
Prior art keywords
anchor
action
host
virtual
virtual character
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110615701.6A
Other languages
Chinese (zh)
Other versions
CN113286186A (en)
Inventor
蓝永峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Huya Information Technology Co Ltd
Original Assignee
Guangzhou Huya Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huya Information Technology Co Ltd filed Critical Guangzhou Huya Information Technology Co Ltd
Priority to CN202110615701.6A
Publication of CN113286186A
Application granted
Publication of CN113286186B

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present application relates to the field of network live broadcast technologies, and in particular to a method, an apparatus and a storage medium for displaying an image in live broadcast. The image display method in live broadcast includes the following steps: acquiring control parameters corresponding to the image display of an anchor; controlling the execution action of the virtual character corresponding to the anchor according to the control parameters; and sending the execution action of the virtual character to a client interface of the audience for display. With the scheme provided by the application, the anchor can conveniently interact with the audience by using the virtual character.

Description

Image display method, device and storage medium in live broadcast
The application is a divisional application of the invention patent application with application number 201811185967.6, titled "Image display method, device and storage medium in live broadcast".
Technical Field
The application relates to the technical field of network live broadcasting, in particular to an image display method, an image display device and a storage medium in live broadcasting.
Background
Webcast live streaming absorbs and extends the advantages of the Internet. For users with live streaming needs, an enterprise or an individual can conduct comprehensive online communication and interaction with voice, video and data by building, over the Internet and advanced multimedia communication technologies, a multifunctional live streaming platform that integrates audio, video, desktop sharing, document sharing and interactive features.
In existing Internet live broadcast, in order to interact better with the audience, most anchors choose to appear on camera directly. When viewers send virtual gifts or otherwise expect a response from the anchor, an on-camera anchor can express thanks through voice, facial expressions or body language. Anchors who do not want to appear on camera, or whose environment is not suitable for it, are often handled by covering the video with another image or blurring it. After such processing, the anchor's feedback to the audience cannot be conveyed accurately, the interaction effect is poor, and the audience experience is greatly diminished.
Summary of the application
Aiming at the defect that the prior art cannot interact well with the audience, a method, a device and a storage medium for displaying the image in live broadcast are provided, so that the anchor can conveniently interact with the audience by using a virtual character.
The embodiment of the application first provides an image display method in live broadcast, which includes the following steps:
acquiring control parameters corresponding to the image display of an anchor;
controlling the execution action of the virtual character corresponding to the anchor according to the control parameters;
and sending the execution action of the virtual character to a client interface of the audience for display.
Preferably, the step of obtaining the control parameters corresponding to the image display of the anchor includes:
identifying a face image of the anchor;
acquiring gesture feature information of the anchor according to the face image;
and converting the gesture feature information into control parameters for controlling the image display of the anchor.
Preferably, the step of obtaining the control parameters corresponding to the image display of the anchor includes:
acquiring a control instruction input by the anchor, the control instruction being an instruction associated with the virtual character in advance and provided for the anchor to input;
and converting the control instruction into a control parameter for the image display of the anchor.
Preferably, before the step of controlling the execution action of the virtual character selected by the anchor according to the control parameters, the method includes:
displaying a plurality of virtual characters on a client interface of the anchor;
and determining the virtual character selected by the anchor as the virtual character to be displayed according to the selection operation of the anchor on the client interface.
Preferably, before the step of controlling the execution action of the virtual character selected by the anchor according to the control parameters, the method includes:
displaying a plurality of background pictures on a client interface of the anchor;
and replacing the background of the anchor's current live broadcast room with the background picture selected by the anchor according to the selection operation of the anchor on the client interface.
Preferably, the step of sending the execution action of the virtual character to a client interface of a viewer for presentation includes:
detecting whether the virtual character has executed the execution action;
if not, identifying the action currently being executed by the virtual character, performing gradual change processing on the current action, splicing the current action and the execution action, and controlling the virtual character to execute the execution action.
Preferably, after the step of sending the execution action of the virtual character to the client interface of the audience for presentation, the method further includes:
after the virtual character completes the execution action, detecting whether a next execution action instruction is received at the current moment;
and if the next execution action instruction is not received, controlling the virtual character to execute the preset action.
Further, the embodiment of the application also provides an image display device in live broadcast, which comprises:
the acquisition module is used for acquiring control parameters corresponding to the image display of the anchor;
the control module is used for controlling the execution action of the virtual character corresponding to the anchor according to the control parameters;
and the display module is used for sending the execution action of the virtual character to a client interface of the audience for display.
Further, the embodiment of the present application further provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of the image presentation method in live broadcast according to any one of the foregoing.
Still further, embodiments of the present application also provide a computer device, including:
one or more processors;
storage means for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors implement the steps of the image display method in live broadcast according to any one of the above technical solutions.
Compared with the prior art, the application has the following beneficial effects:
the embodiment of the application provides a live image display method, which controls the execution action of a virtual character through the image display corresponding to a host, displays the execution action of the virtual character on a client interface of a spectator, displays the virtual character representing the host image on the client interface of the spectator, changes according to the change of the host control parameter, not only meets the requirement that the host cannot go out of the mirror, but also can interact with the spectator by using the flexibly changed virtual character, improves the interaction effect and improves the user experience.
Additional aspects and advantages of the application will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
fig. 1 is a flow chart of a method for displaying images in live broadcast according to an embodiment of the present application;
fig. 2 is a flow chart of acquiring control parameters corresponding to the image display of an anchor according to an embodiment of the present application;
FIG. 3 is a flowchart of obtaining control parameters corresponding to the image display of an anchor according to another embodiment of the present application;
FIG. 4 is a schematic diagram of a scenario provided by the embodiment of FIG. 3 of the present application;
fig. 5 is a schematic flow chart of sending the execution action of the virtual character to the client interface of the audience for display according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an image display device in live broadcast according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein the same or similar reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of illustrating the present application and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It will be understood that the terms first, second, etc. as used herein may be used to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another element. For example, a first live video image may be referred to as a second live video image, and similarly, a second live video image may be referred to as a first live video image, without departing from the scope of the invention. Both the first live video image and the second live video image are live video images, but they are not the same live video image.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As will be appreciated by those skilled in the art, a client as used herein includes both devices having only a wireless signal receiver without transmitting capability and devices having receiving and transmitting hardware capable of two-way communication over a two-way communication link. Such a device may include: a cellular or other communication device with or without a multi-line display; a PCS (Personal Communications Service) device that may combine voice, data processing, facsimile and/or data communication capabilities; a PDA (Personal Digital Assistant) that may include a radio frequency receiver, pager, Internet/intranet access, web browser, notepad, calendar and/or GPS (Global Positioning System) receiver; and a conventional laptop and/or palmtop computer or other appliance that has and/or includes a radio frequency receiver. As used herein, a client may be portable, transportable, installed in a vehicle (aeronautical, maritime and/or land-based), or adapted and/or configured to operate locally and/or in a distributed fashion at any other location(s) on earth and/or in space.
The invention first provides an image display method in live broadcast, which is suitable for being executed at an anchor client. In one embodiment, the method includes steps S11, S12 and S13, a flow diagram of which is shown in FIG. 1. The method specifically includes the following steps:
s11, obtaining control parameters corresponding to image display of the anchor.
The image display includes information such as the anchor's expressions, actions, form and clothing. The anchor is a user who presents his or her image to other viewers; this includes not only the anchor who performs for the audience in the live broadcast room, but can also include viewers who enter the live broadcast room and are seen by other viewers. For example, if a traditional anchor sets the background of the live broadcast room to a game scene, the traditional anchor and the viewers each have their own virtual characters entering the scene, and in that scene the virtual characters of the traditional anchor and of the viewers all need to react in real time; at that moment, both the traditional anchor and the viewers can be regarded as the anchor in this application.
In one embodiment, the step of obtaining the control parameters corresponding to the image display of the anchor, a flow chart of which is shown in fig. 2, includes the following sub-steps:
S21, recognizing a face image of the anchor.
In one embodiment, before the step of identifying the face image of the anchor, the method further includes: acquiring a face image of the anchor. The face image is identified using image recognition techniques. To identify the anchor's face image accurately, a plurality of face images of the anchor can be obtained and used as a large number of training samples to establish an anchor recognition model, improving the accuracy and speed of anchor recognition.
S22, acquiring gesture feature information of the anchor according to the face image.
The face image is analyzed with an image recognition algorithm to extract feature information of the anchor in various postures, for example the curvature of the eyes and eyebrows and the shape of the mouth when the anchor smiles or laughs, as well as feature information such as the number or area of exposed teeth, so that control parameters such as the expression, form and action of the anchor's image can be controlled according to this feature information.
S23, converting the gesture feature information into control parameters for controlling the image display of the anchor.
Using the gesture feature information of the anchor obtained in step S22, the current gesture feature information is converted into control parameters for controlling the image display of the anchor. For example, if the current posture of the anchor is a smile, with gesture feature information such as an eye curvature of 10 degrees, mouth corners raised by 15 degrees and 6 teeth exposed, the control parameters for the image display are determined according to the association relationship between the anchor's gesture feature information and the control parameters for the anchor's image display. If the virtual character selected by the anchor is a kitten, the control parameters of the virtual character corresponding to an eye curvature of 10 degrees, mouth corners raised by 15 degrees and 6 exposed teeth may be: the kitten's eyes bent by 45 degrees and its mouth corners raised by 30 degrees.
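A minimal sketch of the conversion described above, assuming hand-picked feature fields and simple scaling factors; the names and numbers are illustrative assumptions rather than values prescribed by the application.

```python
# Illustrative sketch: convert the anchor's gesture feature information into
# control parameters for the selected virtual character (here, a kitten).
# Field names and scaling factors are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class GestureFeatures:
    eye_curvature_deg: float   # e.g. 10 when the anchor smiles
    mouth_corner_deg: float    # e.g. 15 when the anchor smiles
    teeth_exposed: int         # e.g. 6

@dataclass
class CharacterControlParams:
    eye_bend_deg: float
    mouth_corner_deg: float

def to_control_params(features: GestureFeatures) -> CharacterControlParams:
    # An assumed association: the kitten exaggerates the anchor's expression,
    # e.g. 10 deg eye curvature -> 45 deg eye bend, 15 deg -> 30 deg mouth corners.
    return CharacterControlParams(
        eye_bend_deg=features.eye_curvature_deg * 4.5,
        mouth_corner_deg=features.mouth_corner_deg * 2.0,
    )

params = to_control_params(GestureFeatures(10.0, 15.0, 6))
print(params)  # CharacterControlParams(eye_bend_deg=45.0, mouth_corner_deg=30.0)
```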
According to the scheme provided by this embodiment, the face image of the anchor is identified intelligently and the control parameters for the image display of the anchor are adjusted automatically according to the anchor's gesture feature information, so that the actions of the virtual character are controlled by these control parameters without the anchor having to adjust the image display control parameters manually, improving the user experience.
In one embodiment, obtaining the control parameters corresponding to the image display of the anchor may also be implemented by the following sub-steps, as shown in fig. 3:
S31, acquiring a control instruction input by the anchor; the control instruction is an instruction associated with the virtual character in advance and provided for the anchor to input.
A virtual character is a fictional figure, puppet or cartoon character customized according to the anchor's image. The control instructions input by the anchor are associated with the virtual character selected by the anchor. Each action of the virtual character corresponds to an independent control element and is triggered by the anchor. Each action is provided with a control panel, which can take the form of a knob or a progress bar. Taking the case where each action of the virtual character corresponds to a progress bar as an example, the setting value ranges from 0 to 1, where 0 indicates that the action is not triggered and 1 indicates that the action is in a trigger state. When an action is in the trigger state, sliding the progress bar changes the amplitude of the action. A schematic diagram of the scenario provided by this embodiment is shown in fig. 4.
Preferably, when the progress bar is detected to indicate 0, no recognition operation is needed for that action, which improves efficiency and saves resources.
In this application scenario, when the progress bar is detected to point to 1, that is, the action is in the trigger state, a control instruction from the user can be accepted or a recognition operation can be performed, and the gesture feature information of the anchor is recognized. The specific algorithm is combined with the product requirements to find the corresponding action that needs to be triggered; for example, the anchor may make a "hello" sound or gesture, and when "hello" is detected, the corresponding action is considered to need triggering. If that action is already triggered, no further processing is performed; otherwise, recognition of the action that is no longer triggered is stopped, the currently triggered action is found, and a gradual animation is applied to it, so that the two actions switch with a natural transition.
When a progress bar is in the 0 state and has not started to slide, the action represented by that progress bar has not started at this moment. The background action controller is then repositioned to the action that is currently recognized, its size is kept consistent with the currently recognized layout, and the animation corresponding to the virtual character is displayed; that is, the currently started action is recognized and displayed on the audience side.
S32, converting the control instruction into control parameters of the image display of the anchor.
Before step S32, in which the control instruction sent by the user is converted into a control parameter for the image display of the anchor, the method further includes: pre-establishing a mapping relationship between control instructions and the control parameters of the anchor's image display, so that the control parameter of the corresponding image display can later be obtained quickly according to the currently received control instruction.
In this embodiment of the application, the control instruction input by the anchor is converted into a control parameter of the anchor's image display, so that the anchor image displayed on the audience side can be controlled manually by the anchor without acquiring the anchor's face image in real time, reducing the processing complexity and energy consumption of the system.
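A minimal sketch of such a pre-established mapping, assuming illustrative action names and amplitude ranges; none of these values come from the application itself.

```python
# Illustrative sketch: a pre-established mapping from control instructions
# (progress-bar values per action) to image-display control parameters.
# Action names and amplitude ranges are assumptions for illustration only.

# Each action of the virtual character has a progress bar in [0, 1]:
# 0 means not triggered, 1 means fully triggered; intermediate values scale amplitude.
INSTRUCTION_TO_PARAM = {
    "wave":  {"param": "arm_raise_deg",    "max_amplitude": 90.0},
    "smile": {"param": "mouth_corner_deg", "max_amplitude": 30.0},
    "nod":   {"param": "head_pitch_deg",   "max_amplitude": 20.0},
}

def instruction_to_control_params(progress_bars: dict) -> dict:
    """Convert {action: progress in [0, 1]} into {control parameter: value}."""
    params = {}
    for action, progress in progress_bars.items():
        if progress <= 0:            # progress bar at 0: action not triggered,
            continue                 # so no recognition or processing is needed
        entry = INSTRUCTION_TO_PARAM[action]
        params[entry["param"]] = entry["max_amplitude"] * progress
    return params

print(instruction_to_control_params({"wave": 1.0, "smile": 0.5, "nod": 0.0}))
# {'arm_raise_deg': 90.0, 'mouth_corner_deg': 15.0}
```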
And S12, controlling the execution action of the virtual character selected by the anchor according to the control parameters.
Before the step of controlling, according to the control parameters, the execution action of the virtual character selected by the anchor, the method further includes: displaying a plurality of virtual characters on a client interface of the anchor; and determining the virtual character to be displayed according to the selection operation of the anchor on the client interface.
Specifically, when there are multiple virtual characters, they are displayed on the anchor's client so that the anchor can select, according to personal preference, the virtual character used to present his or her image; for example, the anchor may select a kitten or a puppy as the virtual character for the image display.
In one embodiment, before the step of controlling the execution action of the virtual character selected by the anchor according to the control parameters, the method includes: displaying a plurality of background pictures on a client interface of the anchor; and replacing the background picture of the current live broadcast room with the selected background picture according to the selection operation of the anchor on the client interface.
The anchor may select an appropriate background based on the current live topic; for example, for a music topic a music hall may be selected. Backgrounds include, but are not limited to, game scenes, star shows and outdoor scenes, bringing the user an immersive experience.
In one embodiment, before step S12, the method further includes: establishing an association relationship between the control parameters of the image display of the anchor and the control parameters of the virtual character.
In one embodiment, historical behavior data of the anchor and the virtual character is obtained, and the association relationship between the control parameters of the anchor's image display and the control parameters of the virtual character is determined according to the historical behavior data. Following the previous example, the anchor determines that the virtual character representing the anchor's image display is a kitten, and the detected gesture feature information of the anchor is an eye curvature of 10 degrees, mouth corners raised by 15 degrees and 6 teeth exposed, that is, the anchor's current posture is a smile. Limited by the design of the virtual character, the kitten may not be designed with teeth, so the kitten's smiling posture differs considerably from the anchor's. The control parameters of the kitten can nevertheless be determined from the association relationship established from the historical data; in this example the control parameters of the kitten are an eye bend of 45 degrees and mouth corners raised by 30 degrees.
The advantage of establishing in advance the association relationship between the control parameters of the anchor's image display and the control parameters of the virtual character is that the control parameters of the virtual character can be obtained quickly from the control parameters of the current anchor's image display. This association relationship is particularly important when there are multiple virtual characters to choose from, because the virtual character can then be controlled quickly, according to the current gesture feature information of the anchor, to give corresponding feedback.
In one embodiment, when the association relationship between the control parameters of the anchor's image display and the control parameters of the virtual character is established, a recognition model between the two sets of control parameters can be built from a large number of training samples, so that the control parameters of the selected virtual character can be obtained accurately and quickly from the currently obtained gesture feature information of the anchor.
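A minimal sketch of how such an association relationship could be derived from historical behavior data, assuming a simple least-squares fit over a handful of invented sample pairs; the data and model choice are illustrative assumptions only.

```python
# Illustrative sketch: learn the association between the anchor's image-display
# control parameters and the virtual character's control parameters from
# historical behavior data, using an ordinary least-squares fit.
# The sample values below are assumptions for illustration only.
import numpy as np

# Historical pairs: anchor params (eye curvature, mouth corner) ->
# kitten params (eye bend, mouth corner).
anchor_history = np.array([[10.0, 15.0], [5.0, 8.0], [0.0, 0.0], [12.0, 20.0]])
kitten_history = np.array([[45.0, 30.0], [22.0, 16.0], [0.0, 0.0], [54.0, 40.0]])

# Fit one linear map for all output parameters: kitten = anchor @ W.
W, *_ = np.linalg.lstsq(anchor_history, kitten_history, rcond=None)

def anchor_to_kitten(anchor_params: np.ndarray) -> np.ndarray:
    """Map current anchor control parameters to kitten control parameters."""
    return anchor_params @ W

print(anchor_to_kitten(np.array([10.0, 15.0])).round(1))  # approx. [45., 30.]
```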
And S13, sending the execution action of the virtual character to a client interface of the audience for display.
The execution action of the virtual character selected by the anchor is shown on the client of the audience. In one embodiment, step S13 includes the following sub-steps, a flow chart of which is shown in fig. 5. The specific process is as follows:
S51, detecting whether the virtual character has executed the execution action.
It is detected whether the virtual character is responding to the instruction related to the execution action. If so, the action continues to be executed. If the execution action is a series of continuous actions, it is determined whether the current action of the virtual character matches the action corresponding to the current time progress; if it matches, the virtual character is determined to be executing the execution action, and if not, the virtual character is determined not to have executed the execution action.
S52, if not, identifying the action currently being executed by the virtual character, performing gradual change processing on the currently executed action, splicing the current action and the execution action, and controlling the virtual character to execute the execution action.
If the virtual character selected by the user has not executed the current execution action, the action currently being executed by the virtual character is identified to obtain the current action, feature information of the current action and of the execution action is obtained, the two actions are spliced according to this feature information, gradual change processing is performed on the action currently being executed, and after the current action is completed the virtual character is controlled to execute the execution action, so that the two actions transition naturally.
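A minimal sketch of the gradual-change splicing described above, assuming each action is a short sequence of pose frames and that a linear cross-fade joins the currently executing action to the new execution action; the frame format and blend length are illustrative assumptions.

```python
# Illustrative sketch: splice the currently executing action and the new
# execution action with a gradual (linear) transition so the switch looks natural.
# Poses are simplified to dicts of joint angles; the blend length is an assumption.

def blend_pose(pose_a: dict, pose_b: dict, t: float) -> dict:
    """Linearly interpolate between two poses, t in [0, 1]."""
    return {k: (1 - t) * pose_a[k] + t * pose_b[k] for k in pose_a}

def splice_actions(current_action: list, new_action: list, blend_frames: int = 10) -> list:
    """Fade the last pose of the current action into the new action, then play it."""
    transition = [
        blend_pose(current_action[-1], new_action[0], i / (blend_frames - 1))
        for i in range(blend_frames)
    ]
    return transition + new_action[1:]

wave = [{"arm": 0.0}, {"arm": 45.0}, {"arm": 90.0}]
bow = [{"arm": 10.0}, {"arm": 5.0}, {"arm": 0.0}]
print(len(splice_actions(wave, bow)))  # 10 transition frames + remaining bow frames
```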
In one embodiment, after step S13, the method further includes: after the virtual character completes the execution action, detecting whether a next execution action instruction is received at the current moment; and if the next execution action instruction is not received, controlling the virtual character to execute a preset action.
Specifically, after the virtual character performs the action, it is detected whether there is a next execution action instruction at the current moment. If no next execution action instruction is detected, the virtual character enters a standby action state, which can be a preset action such as a smiling action of the kitten. With this scheme, when there is no action to execute, the preset action is executed automatically and the virtual character remains lively.
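A minimal sketch of this fallback to a preset action, assuming a simple instruction queue; the queue and the idle-action name are illustrative assumptions.

```python
# Illustrative sketch: after an action completes, fall back to a preset idle
# action when no next instruction has arrived. Names are assumptions only.
import queue

IDLE_ACTION = "kitten_smile"

def next_action(instruction_queue: "queue.Queue[str]") -> str:
    """Return the next instruction if one is pending, otherwise the preset idle action."""
    try:
        return instruction_queue.get_nowait()
    except queue.Empty:
        return IDLE_ACTION

pending = queue.Queue()
print(next_action(pending))   # 'kitten_smile' (no instruction received)
pending.put("wave")
print(next_action(pending))   # 'wave'
```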
According to the image display scheme in live broadcast described above, the execution action of the virtual character is controlled through the control parameters related to the anchor's image display, and the execution action of the virtual character is displayed on the client interface of the audience. The virtual character representing the anchor's image is shown on the audience's client interface and changes as the anchor's control parameters change, which meets the needs of anchors who cannot appear on camera and allows the flexibly changing virtual character to interact with the audience, improving the user experience.
Further, the embodiment of the invention also provides an image display device in live broadcast, a schematic structural diagram of which is shown in fig. 6, comprising an acquisition module 61, a control module 62 and a display module 63, whose functions are as follows:
the acquisition module 61 is configured to acquire control parameters corresponding to the image display of the anchor;
the control module 62 is configured to control the execution action of the virtual character selected by the anchor according to the control parameters;
and the display module 63 is configured to send the execution action of the virtual character to a client interface of the audience for display.
The specific manner in which the respective modules and units perform the operations in the live presentation apparatus in the above embodiment has been described in detail in the embodiments related to the method, and will not be described in detail herein.
Further, an embodiment of the present invention further provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the method for displaying an image in live broadcast according to any one of the above. The storage medium includes, but is not limited to, any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs and magneto-optical disks), ROM (Read-Only Memory), RAM (Random Access Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory, magnetic cards or optical cards. That is, a storage medium includes any medium that stores or transmits information in a form readable by a device (e.g., a computer), and may be a read-only memory, a magnetic disk, an optical disk, or the like.
Still further, an embodiment of the present invention further provides a computer device, which may be a server, including:
one or more processors;
storage means for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors implement the method for displaying an image in live broadcast according to any one of the above technical solutions.
Fig. 7 is a schematic structural diagram of a computer device according to the present invention, which includes a processor 720, a storage device 730, an input unit 740 and a display unit 750. Those skilled in the art will appreciate that the structural elements illustrated in FIG. 7 do not constitute a limitation on all computer devices, which may include more or fewer elements than shown or combine certain elements. The storage device 730 may be used to store the application 710 and various functional modules, and the processor 720 runs the application 710 stored in the storage device 730 to perform various functional applications and data processing of the device. The storage device 730 may be or include internal memory, external memory, or both. The internal memory may include read-only memory, programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, or random access memory. The external memory may include a hard disk, floppy disk, ZIP disk, USB flash drive, magnetic tape, etc. The storage devices disclosed herein include, but are not limited to, these types. The storage device 730 disclosed herein is given by way of example only and not by way of limitation.
The input unit 740 is for receiving input of signals, and the input unit 740 may include a touch panel and other input devices. The touch panel may collect touch operations on or near the user (e.g., the user's operation on or near the touch panel using any suitable object or accessory such as a finger, stylus, etc.), and drive the corresponding connection device according to a preset program; other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., play control keys, switch keys, etc.), a trackball, mouse, joystick, etc. The display unit 750 may be used to display information input by a user or information provided to the user and various menus of the computer device. The display unit 750 may take the form of a liquid crystal display, an organic light emitting diode, or the like. Processor 720 is the control center of the computer device, connects the various parts of the overall computer using various interfaces and lines, performs various functions and processes data by running or executing software programs and/or modules stored in storage 730, and invoking data stored in the storage.
In an embodiment, a computer device includes one or more processors 720, and one or more storage devices 730, one or more application programs 710, wherein the one or more application programs 710 are stored in the storage devices 730 and configured to be executed by the one or more processors 720, the one or more application programs 710 are configured to perform the in-live avatar presentation method described in the above embodiments.
It should be understood that, although the steps in the flowcharts of the figures are shown in order as indicated by the arrows, these steps are not necessarily performed in order as indicated by the arrows. The steps are not strictly limited in order and may be performed in other orders, unless explicitly stated herein. Moreover, at least some of the steps in the flowcharts of the figures may include a plurality of sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times, the order of their execution not necessarily being sequential, but may be performed in turn or alternately with other steps or at least a portion of the other steps or stages.
It should be understood that each functional unit in the embodiments of the present invention may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules.
The foregoing is only a partial embodiment of the present application, and it should be noted that, for a person skilled in the art, several improvements and modifications can be made without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.

Claims (9)

1. A method for displaying an image in live broadcast, comprising:
determining that any execution action corresponding to the virtual character corresponding to the anchor is in a trigger state, and acquiring a control instruction input by the anchor and/or identifying a face image of the anchor;
obtaining control parameters corresponding to the image display of the anchor based on the control instruction input by the anchor and/or a result obtained by identifying the face image of the anchor;
acquiring historical behavior data of the anchor and a virtual character, and determining, according to the historical behavior data, an association relationship between the control parameters of the image display of the anchor and control parameters of the virtual character under the same gesture; obtaining the control parameters of the virtual character according to the control parameters of the image display of the anchor, and controlling the execution action of the virtual character corresponding to the anchor according to the control parameters of the virtual character; wherein the execution action of the virtual character is provided with a corresponding progress bar, and if the progress bar is in a trigger state, a control instruction input by the anchor is received, and when the control instruction indicates sliding the progress bar, the amplitude of the corresponding execution action changes;
and sending the execution action of the virtual character to a client of the audience so as to display the execution action on a client interface of the audience.
2. The method for displaying the image in the live broadcast according to claim 1, wherein the step of obtaining the control parameters corresponding to the image display of the anchor according to the result obtained by identifying the face image of the anchor comprises the following steps:
identifying a face image of a host;
acquiring gesture characteristic information of the anchor according to the face image;
converting the gesture characteristic information into control parameters for controlling the image display of the anchor;
the step of obtaining the control parameters corresponding to the image display of the anchor based on the control instruction input by the anchor comprises the following steps:
acquiring a control instruction input by a host; the control instruction is an instruction which is associated with the virtual role in advance and provided for the anchor to input;
converting the control instruction into a control parameter for the image display of the anchor;
the converting the control instruction into the control parameter of the image display of the anchor comprises the following steps:
determining amplitude variation information of an execution action corresponding to a control element in response to a trigger operation of the control element by an anchor; the triggering operation comprises a rotating operation and a sliding operation;
and converting the amplitude change information into control parameters for the image display of the anchor.
3. The method for displaying the image in live broadcast according to claim 1, wherein before the step of controlling the execution action of the virtual character corresponding to the anchor according to the control parameter of the virtual character, the method comprises:
displaying a plurality of virtual characters on a client interface of the anchor;
and determining the virtual role selected by the anchor as the virtual role to be displayed according to the selection operation of the anchor on the client interface.
4. The method for displaying the image in live broadcast according to claim 1, wherein before the step of controlling the execution action of the virtual character corresponding to the anchor according to the control parameter of the virtual character, the method comprises:
displaying a plurality of background pictures on a client interface of the anchor;
and replacing the background of the anchor's current live broadcast room with the background picture selected by the anchor according to the selection operation of the anchor on the client interface.
5. The method for displaying the image in live broadcast according to claim 1, wherein the step of sending the execution action of the virtual character to a client interface of a viewer for presentation comprises:
detecting whether the virtual character has executed the execution action;
if not, identifying the action currently being executed by the virtual character, performing gradual change processing on the current action, splicing the current action and the execution action, and controlling the virtual character to execute the execution action.
6. The method for displaying an image in live broadcast according to claim 1, wherein after the step of transmitting the execution action of the virtual character to the client interface of the viewer for displaying, the method further comprises:
after the virtual character completes the execution action, detecting whether a next execution action instruction is received at the current moment;
and if the next execution action instruction is not received, controlling the virtual character to execute the preset action.
7. An image display device in live broadcast, comprising:
the acquisition module is used for determining that any execution action corresponding to the virtual character corresponding to the anchor is in a trigger state, and acquiring a control instruction input by the anchor and/or identifying a face image of the anchor; and obtaining control parameters corresponding to the image display of the anchor based on the control instruction input by the anchor and/or a result obtained by identifying the face image of the anchor;
the control module is used for acquiring historical behavior data of the anchor and the virtual character, and determining, according to the historical behavior data, an association relationship between the control parameters of the image display of the anchor and control parameters of the virtual character under the same gesture; obtaining the control parameters of the virtual character according to the control parameters of the image display of the anchor, and controlling the execution action of the virtual character corresponding to the anchor according to the control parameters of the virtual character; wherein the execution action of the virtual character is provided with a corresponding progress bar, and if the progress bar is in a trigger state, a control instruction input by the anchor is received, and when the control instruction indicates sliding the progress bar, the amplitude of the corresponding execution action changes;
and the display module is used for sending the execution action of the virtual character to the client of the audience so as to display the execution action on the client interface of the audience.
8. A computer readable storage medium having stored thereon a computer program, characterized in that the program, when executed by a processor, implements the steps of the image display method in live broadcast as claimed in any one of claims 1 to 6.
9. A computer device, the computer device comprising:
one or more processors;
storage means for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors implement the steps of the image display method in live broadcast of any one of claims 1 to 6.
CN202110615701.6A 2018-10-11 2018-10-11 Image display method, device and storage medium in live broadcast Active CN113286186B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110615701.6A CN113286186B (en) 2018-10-11 2018-10-11 Image display method, device and storage medium in live broadcast

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811185967.6A CN109120985B (en) 2018-10-11 2018-10-11 Image display method and device in live broadcast and storage medium
CN202110615701.6A CN113286186B (en) 2018-10-11 2018-10-11 Image display method, device and storage medium in live broadcast

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201811185967.6A Division CN109120985B (en) 2018-10-11 2018-10-11 Image display method and device in live broadcast and storage medium

Publications (2)

Publication Number Publication Date
CN113286186A CN113286186A (en) 2021-08-20
CN113286186B true CN113286186B (en) 2023-07-18

Family

ID=64857918

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110615701.6A Active CN113286186B (en) 2018-10-11 2018-10-11 Image display method, device and storage medium in live broadcast
CN201811185967.6A Active CN109120985B (en) 2018-10-11 2018-10-11 Image display method and device in live broadcast and storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201811185967.6A Active CN109120985B (en) 2018-10-11 2018-10-11 Image display method and device in live broadcast and storage medium

Country Status (1)

Country Link
CN (2) CN113286186B (en)

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111641844B (en) * 2019-03-29 2022-08-19 广州虎牙信息科技有限公司 Live broadcast interaction method and device, live broadcast system and electronic equipment
CN109922355B (en) * 2019-03-29 2020-04-17 广州虎牙信息科技有限公司 Live virtual image broadcasting method, live virtual image broadcasting device and electronic equipment
CN109788345B (en) * 2019-03-29 2020-03-10 广州虎牙信息科技有限公司 Live broadcast control method and device, live broadcast equipment and readable storage medium
CN109905724A (en) * 2019-04-19 2019-06-18 广州虎牙信息科技有限公司 Live video processing method, device, electronic equipment and readable storage medium storing program for executing
CN110062267A (en) * 2019-05-05 2019-07-26 广州虎牙信息科技有限公司 Live data processing method, device, electronic equipment and readable storage medium storing program for executing
CN110072116A (en) * 2019-05-06 2019-07-30 广州虎牙信息科技有限公司 Virtual newscaster's recommended method, device and direct broadcast server
CN110308792B (en) * 2019-07-01 2023-12-12 北京百度网讯科技有限公司 Virtual character control method, device, equipment and readable storage medium
CN110784676B (en) * 2019-10-28 2023-10-03 深圳传音控股股份有限公司 Data processing method, terminal device and computer readable storage medium
CN111541908A (en) * 2020-02-27 2020-08-14 北京市商汤科技开发有限公司 Interaction method, device, equipment and storage medium
CN111432267B (en) * 2020-04-23 2021-05-21 深圳追一科技有限公司 Video adjusting method and device, electronic equipment and storage medium
CN111970522A (en) * 2020-07-31 2020-11-20 北京琳云信息科技有限责任公司 Processing method and device of virtual live broadcast data and storage medium
CN112135160A (en) * 2020-09-24 2020-12-25 广州博冠信息科技有限公司 Virtual object control method and device in live broadcast, storage medium and electronic equipment
CN112241203A (en) * 2020-10-21 2021-01-19 广州博冠信息科技有限公司 Control device and method for three-dimensional virtual character, storage medium and electronic device
CN112601098A (en) * 2020-11-09 2021-04-02 北京达佳互联信息技术有限公司 Live broadcast interaction method and content recommendation method and device
CN112535867B (en) * 2020-12-15 2024-05-10 网易(杭州)网络有限公司 Game progress information processing method and device and electronic equipment
CN112788359B (en) * 2020-12-30 2023-05-09 北京达佳互联信息技术有限公司 Live broadcast processing method and device, electronic equipment and storage medium
CN113289332B (en) * 2021-06-17 2023-08-01 广州虎牙科技有限公司 Game interaction method, game interaction device, electronic equipment and computer readable storage medium
CN113457171A (en) * 2021-06-24 2021-10-01 网易(杭州)网络有限公司 Live broadcast information processing method, electronic equipment and storage medium
CN113518239A (en) * 2021-07-09 2021-10-19 珠海云迈网络科技有限公司 Live broadcast interaction method and system, computer equipment and storage medium thereof
CN113435431B (en) * 2021-08-27 2021-12-07 北京市商汤科技开发有限公司 Posture detection method, training device and training equipment of neural network model
CN113824982A (en) * 2021-09-10 2021-12-21 网易(杭州)网络有限公司 Live broadcast method and device, computer equipment and storage medium
CN114245155A (en) * 2021-11-30 2022-03-25 北京百度网讯科技有限公司 Live broadcast method and device and electronic equipment
CN114168018A (en) * 2021-12-08 2022-03-11 北京字跳网络技术有限公司 Data interaction method, data interaction device, electronic equipment, storage medium and program product
CN114363685A (en) * 2021-12-20 2022-04-15 咪咕文化科技有限公司 Video interaction method and device, computing equipment and computer storage medium
CN114693294A (en) * 2022-03-04 2022-07-01 支付宝(杭州)信息技术有限公司 Interaction method and device based on electronic certificate and electronic equipment
WO2023178640A1 (en) * 2022-03-25 2023-09-28 云智联网络科技(北京)有限公司 Method and system for realizing live-streaming interaction between virtual characters
CN117289791A (en) * 2023-08-22 2023-12-26 杭州空介视觉科技有限公司 Meta universe artificial intelligence virtual equipment data generation method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106878820A (en) * 2016-12-09 2017-06-20 北京小米移动软件有限公司 Living broadcast interactive method and device
CN107154069A (en) * 2017-05-11 2017-09-12 上海微漫网络科技有限公司 A kind of data processing method and system based on virtual role
CN107180446A (en) * 2016-03-10 2017-09-19 腾讯科技(深圳)有限公司 The expression animation generation method and device of character face's model
CN107194979A (en) * 2017-05-11 2017-09-22 上海微漫网络科技有限公司 The Scene Composition methods and system of a kind of virtual role
CN107750005A (en) * 2017-09-18 2018-03-02 迈吉客科技(北京)有限公司 Virtual interactive method and terminal

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102478960B (en) * 2010-11-29 2015-11-18 国际商业机器公司 Human-computer interaction device and this equipment is used for the apparatus and method of virtual world
US10546406B2 (en) * 2016-05-09 2020-01-28 Activision Publishing, Inc. User generated character animation
CN106937154A (en) * 2017-03-17 2017-07-07 北京蜜枝科技有限公司 Process the method and device of virtual image
CN106993195A (en) * 2017-03-24 2017-07-28 广州创幻数码科技有限公司 Virtual portrait role live broadcasting method and system
CN107170030A (en) * 2017-05-31 2017-09-15 珠海金山网络游戏科技有限公司 A kind of virtual newscaster's live broadcasting method and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107180446A (en) * 2016-03-10 2017-09-19 腾讯科技(深圳)有限公司 The expression animation generation method and device of character face's model
CN106878820A (en) * 2016-12-09 2017-06-20 北京小米移动软件有限公司 Living broadcast interactive method and device
CN107154069A (en) * 2017-05-11 2017-09-12 上海微漫网络科技有限公司 A kind of data processing method and system based on virtual role
CN107194979A (en) * 2017-05-11 2017-09-22 上海微漫网络科技有限公司 The Scene Composition methods and system of a kind of virtual role
CN107750005A (en) * 2017-09-18 2018-03-02 迈吉客科技(北京)有限公司 Virtual interactive method and terminal

Also Published As

Publication number Publication date
CN109120985B (en) 2021-07-23
CN113286186A (en) 2021-08-20
CN109120985A (en) 2019-01-01

Similar Documents

Publication Publication Date Title
CN113286186B (en) Image display method, device and storage medium in live broadcast
US11450350B2 (en) Video recording method and apparatus, video playing method and apparatus, device, and storage medium
CN111556278B (en) Video processing method, video display device and storage medium
CN111541936A (en) Video and image processing method and device, electronic equipment and storage medium
US11924540B2 (en) Trimming video in association with multi-video clip capture
CN111984763B (en) Question answering processing method and intelligent device
CN112188267B (en) Video playing method, device and equipment and computer storage medium
US11516550B2 (en) Generating an interactive digital video content item
KR20230133404A (en) Displaying augmented reality content in messaging application
CN112905074A (en) Interactive interface display method, interactive interface generation method and device and electronic equipment
CN114245155A (en) Live broadcast method and device and electronic equipment
WO2022182660A1 (en) Whole body visual effects
CN113822972A (en) Video-based processing method, device and readable medium
EP4272209A1 (en) Trimming video for multi-video clip capture
CN113596574A (en) Video processing method, video processing apparatus, electronic device, and readable storage medium
CN113676734A (en) Image compression method and image compression device
WO2024051467A1 (en) Image processing method and apparatus, electronic device, and storage medium
CN113409431B (en) Content generation method and device based on movement data redirection and computer equipment
CN116708920B (en) Video processing method, device and storage medium applied to virtual image synthesis
CN111176451A (en) Control method and system for virtual reality multi-channel immersive environment
CN113658213B (en) Image presentation method, related device and computer program product
US11829834B2 (en) Extended QR code
US11995757B2 (en) Customized animation from video
CN110460719B (en) Voice communication method and mobile terminal
US20230138677A1 (en) Customized animation from video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant