CN113938336A - Conference control method and device and electronic equipment

Conference control method and device and electronic equipment

Info

Publication number
CN113938336A
Authority
CN
China
Prior art keywords: virtual, speaking, meeting place, conference, action
Legal status
Pending
Application number
CN202111350232.6A
Other languages
Chinese (zh)
Inventor
陈铭
刘柏
李均
Current Assignee
Netease Hangzhou Network Co Ltd
Original Assignee
Netease Hangzhou Network Co Ltd
Priority date
Filing date
Publication date
Application filed by Netease Hangzhou Network Co Ltd filed Critical Netease Hangzhou Network Co Ltd
Priority to CN202111350232.6A
Publication of CN113938336A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/02 Details
    • H04L 12/16 Arrangements for providing special services to substations
    • H04L 12/18 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L 12/1813 Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
    • H04L 12/1822 Conducting the conference, e.g. admission, detection, selection or grouping of participants, correlating users to one or more conference sessions, prioritising transmission
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 Animation
    • G06T 13/20 3D [Three Dimensional] animation
    • G06T 13/40 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings

Abstract

The invention provides a conference control method, a conference control device, and an electronic device. In response to a speaking trigger operation for a first virtual character among the virtual characters, a speaking request is sent to the host of the current conference in a three-dimensional virtual meeting place; in response to a confirmation message returned for the speaking request, the first virtual character is controlled to speak; and while the first virtual character is speaking, in response to an action trigger operation for the first virtual character, the first virtual character is controlled to perform, in the three-dimensional virtual meeting place, the action corresponding to the action trigger operation. With this method, a virtual character that is speaking in the three-dimensional virtual meeting place can also be controlled to perform various actions, which draws the attention of the participating users to the three-dimensional virtual meeting place, makes the conference more engaging, helps the participants stay focused on the conference, and thereby improves the user experience and the conference effect.

Description

Conference control method and device and electronic equipment
Technical Field
The invention relates to the technical field of immersive activity systems, in particular to a conference control method, a conference control device and electronic equipment.
Background
In the related art, an online conference is usually an online audio and video conference based on a 2D display interface. Each participant in the online conference can publish his or her own speech and hear the speech published by other participants, for example by clicking a speak button on the 2D display interface and then starting to speak; the video pictures of the participants can also be viewed in a fixed display area of the 2D display interface. However, an online conference based on a 2D display interface is limited to interaction through speech, and there is little feedback in the display interface, so the conference becomes boring, participants find it hard to stay focused on the conference, the experience is poor, and the conference effect suffers.
Disclosure of Invention
In view of the above, the present invention provides a conference control method, a conference control apparatus and an electronic device, so as to make the conference more engaging, help participants stay focused on the conference, and thereby improve the user experience and the conference effect.
In a first aspect, an embodiment of the present invention provides a conference control method, where a terminal device provides a graphical user interface, where the graphical user interface includes a three-dimensional virtual meeting place and a virtual character located in the three-dimensional virtual meeting place, and the method includes: responding to a speaking triggering operation aiming at a first virtual role in the virtual roles, and sending a speaking request to a host of a current conference in the three-dimensional virtual conference place; responding to a confirmation message returned aiming at the speaking request, and controlling the first virtual role to speak; and in the process of controlling the first virtual character to speak, responding to the action triggering operation aiming at the first virtual character, and controlling the first virtual character to execute the action corresponding to the action triggering operation in the three-dimensional virtual meeting place.
Further, controlling the first virtual character to speak includes: starting a voice call or video call function for the first virtual role; acquiring speech information of a user corresponding to a first virtual role; and sending the speaking information to a second virtual role in the three-dimensional virtual meeting place, wherein the second virtual role is other virtual roles except the first virtual role in the three-dimensional virtual meeting place.
Further, the method further comprises: receiving speech information sent by a terminal corresponding to a second virtual role in the three-dimensional virtual meeting place, wherein the second virtual role is other virtual roles except the first virtual role in the three-dimensional virtual meeting place; and in the process of receiving the speaking information sent by the terminal corresponding to the second virtual character, responding to the action triggering operation aiming at the first virtual character, and controlling the first virtual character to execute the action corresponding to the action triggering operation in the three-dimensional virtual meeting place.
Further, the method further comprises: and responding to the triggering operation aiming at the expression migration control, and migrating the expression of the user corresponding to the first virtual character to the face of the first virtual character in the process of controlling the speech of the first virtual character.
Further, the method further comprises: and displaying a participant list of the current conference in the three-dimensional virtual meeting place through a graphical user interface, wherein the participant list comprises the identity information and the speaking state information of each participant.
Further, the speaking state information includes: a speaking request state, a speaking state, and a screen sharing state.
Further, the method further comprises: displaying a stop control through a graphical user interface in the process of controlling the first virtual role to speak; and responding to the trigger operation aiming at the stop control, controlling the first virtual role to finish speaking, and updating the state of finishing speaking of the first virtual role to a terminal corresponding to a second virtual role, wherein the second virtual role is other virtual roles except the first virtual role in the three-dimensional virtual meeting place.
In a second aspect, an embodiment of the present invention provides a conference control apparatus, where a terminal device provides a graphical user interface, where the graphical user interface includes a three-dimensional virtual meeting place and a virtual character located in the three-dimensional virtual meeting place, and the apparatus includes: the request sending module is used for responding to a speaking triggering operation aiming at a first virtual role in the virtual roles and sending a speaking request to a host of a current conference in the three-dimensional virtual conference place; the speech control module is used for responding to a confirmation message returned by aiming at the speech request and controlling the speech of the first virtual role; and the action control module is used for responding to action trigger operation aiming at the first virtual character in the process of controlling the first virtual character to speak, and controlling the first virtual character to execute action corresponding to the action trigger operation in the three-dimensional virtual meeting place.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor and a memory, where the memory stores machine executable instructions capable of being executed by the processor, and the processor executes the machine executable instructions to implement the conference control method according to any one of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a machine-readable storage medium having stored thereon machine-executable instructions that, when invoked and executed by a processor, implement the conference control method according to any one of the first aspect.
The embodiment of the invention has the following beneficial effects:
the invention provides a conference control method, a conference control device and an electronic device. In response to a speaking trigger operation for a first virtual character among the virtual characters, a speaking request is sent to the host of the current conference in a three-dimensional virtual meeting place; in response to a confirmation message returned for the speaking request, the first virtual character is controlled to speak; and while the first virtual character is speaking, in response to an action trigger operation for the first virtual character, the first virtual character is controlled to perform, in the three-dimensional virtual meeting place, the action corresponding to the action trigger operation. With this method, a virtual character that is speaking in the three-dimensional virtual meeting place can also be controlled to perform various actions, which draws the attention of the participating users to the three-dimensional virtual meeting place, makes the conference more engaging, helps the participants stay focused on the conference, and thereby improves the user experience and the conference effect.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of a conference control method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a three-dimensional virtual meeting place according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a participant list in a three-dimensional virtual meeting place according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a participant list of another three-dimensional virtual meeting place according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of another three-dimensional virtual meeting place according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a conference control apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
At present, an online conference is usually an online audio and video conference based on a 2D display interface, each participant in the online conference can release own speech and also can hear the speech released by other participants, for example, clicking a speaking button on the 2D display interface and then starting to release the speech; the video pictures of the participants can also be viewed in a fixed display area of the 2D display interface. However, the online conference based on the 2D display interface is limited to the interaction between the speeches, and the feedback displayed in the display interface is less, so that the conference is boring, the spirit of the participants is not easily concentrated in the conference, the experience is poor, and the conference effect is affected.
To facilitate understanding of the embodiment, the conference control method disclosed in the embodiment of the present invention is first described in detail. A terminal device provides a graphical user interface that includes a three-dimensional virtual meeting place and virtual characters located in the three-dimensional virtual meeting place. The graphical user interface may be the graphical user interface of an immersive activity system, and the three-dimensional virtual meeting place contains virtual characters of multiple roles, including ordinary conference participants, a conference presenter, a conference host, and the like. The users corresponding to the conference presenter and the conference host can speak directly in the three-dimensional virtual meeting place, and the users corresponding to all virtual characters in the conference can hear their speech.
As shown in fig. 1, the method comprises the steps of:
step S102, responding to a speaking triggering operation aiming at a first virtual role in the virtual roles, and sending a speaking request to a host of a current conference in a three-dimensional virtual conference place;
the three-dimensional virtual meeting place usually comprises virtual roles corresponding to common meeting participation users, virtual roles corresponding to meeting lecturers and virtual roles corresponding to meeting presenters, wherein the lecturers and the presenters can speak directly in the three-dimensional virtual meeting place, and the common meeting participation users can only apply for speaking. Specifically, in response to a trigger operation for a speech request control in a graphical user interface, a speech interface of a server of the immersive activity system is called, and a speech request is sent to a terminal corresponding to a host of the current conference.
Referring to the schematic diagram of the three-dimensional virtual meeting place shown in fig. 2, a graphical user interface of the terminal device includes a talk request control, and a user can trigger the talk request control, so that the terminal device invokes a talk interface SpeakRequest of a server of the immersive activity system, sends a talk request to a target terminal corresponding to a host of a current meeting through the talk interface, and simultaneously displays a talk request of a first virtual character in a graphical user interface of the target terminal corresponding to the host.
In actual implementation, after the user of the first virtual character clicks the speak button (as shown in fig. 2), the terminal device invokes the SpeakRequest interface of the immersive activity system server, the server processes the speaking request logic, and the data of the speaking request is synchronized through the participant list to the target terminal of the host's virtual character; the speaking request of the first virtual character is then displayed on the graphical user interface of the target terminal (as shown in fig. 3). When the host chooses whether to approve the speech of the first virtual character, speaking controls are displayed on the graphical user interface of the target terminal, as shown in fig. 4: a Speak control, which approves the first virtual character to speak; a Ban control, which denies the first virtual character permission to speak; and a Shield control, which mutes the speech of the first virtual character.
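The speak-request step can be pictured with a minimal client-side sketch. Only the SpeakRequest interface name comes from the description above; the RPC wrapper, payload fields and function names are assumptions added for illustration.

```typescript
// Minimal sketch of the speak-request step (assumed names except "SpeakRequest").
interface SpeakRequestPayload {
  conferenceId: string; // current conference in the 3D virtual meeting place
  characterId: string;  // first virtual character applying to speak
}

// A generic RPC client standing in for the immersive activity system server.
interface ServerRpc {
  call(method: string, payload: unknown): Promise<void>;
}

async function onSpeakButtonClicked(rpc: ServerRpc, payload: SpeakRequestPayload): Promise<void> {
  // The server processes the request and synchronizes it, via the participant list,
  // to the target terminal of the host's virtual character (fig. 3).
  await rpc.call("SpeakRequest", payload);
}
```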
Step S104, responding to a confirmation message returned aiming at the speaking request, and controlling the speaking of the first virtual role;
in actual implementation, the host of the current conference can choose, according to the received speaking request, whether to allow the first virtual character to speak. If the host agrees, the target terminal invokes the AllowSpeak interface of the immersive activity system server, and the terminal device then receives the speaking-allowed message fed back by the target terminal, that is, the terminal device receives the Speak event sent by the server, which is the confirmation message described above. On receiving the Speak event sent by the server, the terminal device can control the first virtual character to speak. In addition, the speaking prompt can be displayed on the graphical user interface of the terminal device while the conference voice information of the user of the terminal device is collected (as shown in fig. 5).
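A hedged sketch of this approval path follows. The AllowSpeak interface and the Speak event are named in the text; the decision type and the handler wiring are illustrative assumptions.

```typescript
// Host side: translate the control clicked in fig. 4 into a server call (sketch).
type HostDecision = "Speak" | "Ban" | "Shield";

async function onHostDecision(
  rpc: { call(method: string, payload: unknown): Promise<void> },
  characterId: string,
  decision: HostDecision
): Promise<void> {
  if (decision === "Speak") {
    // The server then sends a Speak event (the confirmation message) to the requester.
    await rpc.call("AllowSpeak", { characterId });
  }
  // Ban / Shield handling is omitted in this sketch.
}

// Requesting terminal: the Speak event is the cue to start formal speaking.
function onSpeakEvent(startSpeaking: (characterId: string) => void, characterId: string): void {
  startSpeaking(characterId); // show the "speaking" prompt and begin collecting conference voice
}
```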
The first virtual character may be controlled to speak by voice or by video with voice, and all virtual characters in the three-dimensional virtual meeting place other than the first virtual character, including the conference presenter and the host, can hear the speech of the first virtual character. In actual implementation, the terminal device may obtain the conference voice information of the first virtual character and send it to the other terminals through the voice interaction channel, so that the users of the other terminals hear the speech of the user corresponding to the first virtual character. Similarly, the terminal device can receive the conference voice information of the users of other terminals through the voice interaction channel and play it locally, so that the user of the terminal device hears the speech of the users corresponding to the other virtual characters; in this way conference voice information is exchanged.
The voice interaction channel is an independent function in the three-dimensional virtual meeting place, can be a voice interaction interface in an immersive activity system, and specifically carries out interaction of meeting voice information through the voice interaction function of the voice interaction interface.
The virtual roles in the three-dimensional virtual meeting place can apply for speaking in the three-dimensional virtual meeting place through the speaking request, and the conference effect of the three-dimensional virtual meeting place is further improved. In addition, the speaking process completes the interactive communication between the terminal and the server side with the least communication traffic in a mode of calling the interface, and the service capability of the system is improved.
And step S106, responding to the action trigger operation aiming at the first virtual character in the process of controlling the first virtual character to speak, and controlling the first virtual character to execute the action corresponding to the action trigger operation in the three-dimensional virtual meeting place.
The first virtual character is controlled to perform the interactive action corresponding to the action trigger operation, and the picture of the first virtual character performing the action is synchronously displayed, through the action interaction channel, in the three-dimensional virtual meeting place on the terminals corresponding to the other virtual characters. The first virtual character is the character corresponding to the terminal device, and the interactive action is used to draw the attention of the participating users to the three-dimensional virtual meeting place.
The action corresponding to the action triggering operation can be a designated limb action, such as interesting interactive actions of waving hands, dancing, running, jumping and the like; facial expression actions such as smiling, mouth opening, crying, etc. may also be possible. Of course, the action corresponding to the action triggering operation may be that the first virtual character performs a certain limb action together with the nearby virtual character, such as two-person friendship dance, or the first virtual character performs a certain limb action together with other more virtual characters.
Specifically, when the first virtual character speaks in the three-dimensional virtual meeting place, an action control in the graphical user interface may be clicked, or a specified input is input through an external component to trigger the first virtual character to execute an action, so that the first virtual character may be controlled to execute an action corresponding to the action triggering operation, or the first virtual character and at least one other virtual character may be controlled to execute an action corresponding to the action triggering operation. When the first virtual character is controlled to execute the action corresponding to the action triggering operation, the picture of the first virtual character executing action is also displayed on the graphical user interface of the terminal equipment, so that the user corresponding to the first virtual character is attracted to pay attention to the three-dimensional virtual meeting place.
In addition, the action execution screen of the first virtual character can be synchronously displayed in the three-dimensional virtual meeting places of the terminals corresponding to other virtual characters through the action interaction channel, and other virtual characters can pay attention to the actions of other virtual characters in the three-dimensional virtual meeting places when listening to or speaking, so that users corresponding to other virtual characters can be attracted to pay attention to the three-dimensional virtual meeting places.
In an existing online conference, a participant cannot interact with other participants through actions, nor is voice interaction reflected in the graphical user interface; the participant can only view video pictures on the screen, including the head portraits of the participants and the shared conference content. The pictures displayed on the graphical user interface are simple 2D pictures, the conference quickly becomes boring, and participants cannot keep watching the graphical user interface all the time. In this embodiment, the user can control the virtual character to perform interactive actions in the meeting place, either alone or together with other virtual characters, and can watch the meeting place while the actions are performed, which draws the user's attention to the current conference.
The action interaction channel is an independent function in the three-dimensional virtual meeting place, can be an action interaction interface in the immersive activity system, and particularly carries out interesting action interaction through the action interaction function of the action interaction interface.
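As a rough illustration (not the patented implementation), triggering an action while speaking and broadcasting it over such an action interaction channel could look as follows; every name in this sketch is an assumption.

```typescript
// Sketch: play an action locally and sync it so other terminals render the same animation.
type AvatarAction = "wave" | "dance" | "run" | "jump" | "smile" | "cry";

interface ActionChannel {
  broadcast(msg: { characterId: string; action: AvatarAction }): void;
}

function onActionTrigger(
  channel: ActionChannel,
  playLocally: (characterId: string, action: AvatarAction) => void,
  characterId: string,
  action: AvatarAction
): void {
  playLocally(characterId, action);           // show the action on this terminal's meeting place
  channel.broadcast({ characterId, action }); // mirror it in the other participants' meeting places
}
```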
The embodiment of the invention provides a conference control method, which comprises the steps of responding to a speech triggering operation aiming at a first virtual role in virtual roles, and sending a speech request to a host of a current conference in a three-dimensional virtual conference place; responding to a confirmation message returned aiming at the speaking request, and controlling the first virtual role to speak; and in the process of controlling the first virtual character to speak, responding to the action triggering operation aiming at the first virtual character, and controlling the first virtual character to execute the action corresponding to the action triggering operation in the three-dimensional virtual meeting place. In the three-dimensional virtual meeting place with the method, when the virtual character speaks in the three-dimensional virtual meeting place, the virtual character can be controlled to execute various actions so as to attract the participating users to pay attention to the three-dimensional virtual meeting place, the interestingness of the meeting is improved, the spirit of the participants can be concentrated in the meeting, and the user experience and the meeting effect are further improved.
How to control the first avatar to speak, one possible implementation is described below: starting a voice call or video call function for the first virtual role; acquiring speech information of a user corresponding to a first virtual role; and sending the speaking information to a second virtual role in the three-dimensional virtual meeting place, wherein the second virtual role is other virtual roles except the first virtual role in the three-dimensional virtual meeting place.
After receiving the confirmation message returned for the speaking request, the terminal device starts the voice call or video call function for the first virtual character; specifically, it can call the audio and video SDK (the component responsible for low-level audio and video transmission during chat) to realize the formal speaking function. In the graphical user interface this is embodied by the "speaking" prompt shown in fig. 5. After the voice call or video call function is started, the speaking information of the user corresponding to the first virtual character, including voice information and video-with-voice information, is obtained or collected in real time. Finally, the speaking information is sent to the second virtual character in the three-dimensional virtual meeting place, so that the terminal of the second virtual character plays the speaking information of the first virtual character; the voice information alone can be played, or the video-with-voice information can be played.
Specifically, when a user of the terminal device starts speaking, the terminal device collects conference voice call or video call information of the user of the first virtual character, and then transmits the conference voice call or video call information of the user of the first virtual character to a terminal corresponding to the second virtual character through the voice interaction channel, so that a formal speaking function is realized.
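A compact sketch of this formal-speaking step is given below; the audio/video SDK mentioned above is represented by a hypothetical interface, and the channel and callback names are assumptions.

```typescript
// Sketch: open the call via an assumed audio/video SDK facade and forward collected
// speech frames to the second virtual character's terminal over the voice interaction channel.
interface AvSdk {
  openCall(mode: "voice" | "video"): void;
  onLocalFrame(cb: (frame: ArrayBuffer) => void): void; // collected speech / video-with-voice data
}

interface VoiceChannel {
  send(frame: ArrayBuffer): void; // delivered to the other participants' terminals
}

function startFormalSpeaking(sdk: AvSdk, channel: VoiceChannel, mode: "voice" | "video"): void {
  sdk.openCall(mode);
  sdk.onLocalFrame(frame => channel.send(frame));
}
```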
By the method, the voice call or video call function can be started, so that the user can realize the same real speaking process in the virtual conference as in reality, the conference participants can hear the voice of the conference participants and see the video pictures of the conference participants during speaking, and the conference immersion feeling of the user is further improved.
In order to further improve the immersion of a user in the conference room, when the first virtual character listens to the speech, that is, when the terminal device plays the speech information of the second virtual character, the first virtual character may also be controlled to perform the action, in a possible implementation manner, the speech information sent by the terminal device corresponding to the second virtual character in the three-dimensional virtual conference room is received, and the second virtual character is another virtual character in the three-dimensional virtual conference room except the first virtual character; and in the process of receiving the speaking information sent by the terminal corresponding to the second virtual character, responding to the action triggering operation aiming at the first virtual character, and controlling the first virtual character to execute the action corresponding to the action triggering operation in the three-dimensional virtual meeting place.
When a user corresponding to a second virtual character in the three-dimensional virtual meeting field speaks through the terminal, the terminal equipment receives the speaking information sent by the terminal corresponding to the second virtual character through the voice interaction channel, then the speaking information is transmitted to the playing device of the terminal equipment through the audio and video component, the meeting voice information of other terminals is played at the terminal equipment, and the function of listening to the speaking is achieved. When the first virtual character listens to the speech, that is, when the terminal device plays the voice information or the video information, the user of the first virtual character can also control the first virtual character to execute the action corresponding to the action triggering operation.
In the mode, the first virtual role can be controlled to execute some interesting actions in the three-dimensional virtual meeting place under the two states of speaking and listening to the speaking in the three-dimensional virtual meeting place, the application of the three-dimensional virtual meeting place is enriched, the three-dimensional virtual meeting place is closer to reality, the three-dimensional virtual meeting place cannot be boring, and meanwhile, the concentration of conference participants on a conference is improved.
In order to further improve the participating users' concentration on the conference, the three-dimensional virtual meeting place includes a correspondence between action trigger operations and actions. The actions include limb actions and facial expression actions. The correspondence means that different action trigger operations correspond to different actions: for example, clicking the right mouse button corresponds to jumping, touching and moving the first virtual character corresponds to walking or running, and clicking the expression migration control of the graphical user interface corresponds to expression migration. The limb actions include dancing, greeting, walking, running, jumping and the like. The facial expression actions may be designated facial actions such as smiling, laughing, crying or pouting, or facial expression actions produced by expression migration.
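The correspondence can be imagined as a simple lookup from trigger operation to action, as in the sketch below; the particular bindings echo the examples just given, while the structure and names are assumptions.

```typescript
// Sketch of a trigger-operation-to-action correspondence table (illustrative only).
type TriggerOperation = "mouseRightClick" | "dragCharacter" | "expressionMigrationControl";

const actionForTrigger: Record<TriggerOperation, string> = {
  mouseRightClick: "jump",                           // right mouse button -> jump
  dragCharacter: "walkOrRun",                        // touching / moving the character -> walk or run
  expressionMigrationControl: "expressionMigration", // expression migration control -> expression migration
};
```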
In practice there are many kinds of terminal products, so the action trigger operation usually differs between them; the specific action trigger operations for different kinds of terminal are described below.
In a possible implementation manner, the terminal device is a non-touch terminal, and the action triggering operation includes: and (3) operating on an input component of the terminal equipment, wherein the input component comprises keyboard keys and/or mouse keys.
For a non-touch terminal, such as a desktop computer or a notebook computer, the action trigger operation is an operation of the keyboard or the mouse. For example, pressing the "A" key on the keyboard triggers the dancing interaction and controls the first virtual character to dance; clicking an action control in the graphical user interface with the mouse triggers the greeting interaction and controls the first virtual character to perform the greeting action; and clicking the right mouse button triggers the jumping interaction and controls the first virtual character to jump. It can be understood that the action trigger operation may include operations on the keyboard keys and the mouse buttons of the terminal device, operations on the keyboard keys only, or operations on the mouse buttons only.
In another possible implementation manner, the terminal device is a touch terminal, and the action triggering operation includes: an operation on an action control acting on a graphical user interface; the motion controls include a limb motion control and a facial expression control.
For a touch terminal, such as a mobile phone, a tablet computer, and the like, the action triggering operation is an operation of a control in a touch graphical user interface, and a finger clicks the action control on the touch graphical user interface to control the first virtual character to execute an action corresponding to the action control. During actual implementation, the fingers click the limb action control on the touch graphical user interface to control the first virtual character to execute the limb action corresponding to the limb action control. Or clicking a facial expression control on the touch graphical user interface by a finger to control the first virtual character to execute a facial action corresponding to the facial expression control.
For example, when a finger clicks the limb action control on the touch graphical user interface, several selectable limb action identifiers can be displayed; when the control for the greeting identifier is selected, the first virtual character is controlled to perform the greeting action. For another example, when a finger clicks the facial expression control on the touch graphical user interface, several selectable facial expression identifiers can be displayed; when the control for the smile identifier is selected, the first virtual character is controlled to perform the smile action. Alternatively, clicking the facial expression control on the touch graphical user interface can start the expression migration function, after which the facial expression collected by the camera of the terminal device is mapped onto the face of the first virtual character, and the facial expression action corresponding to that expression is performed.
In the above manner, different terminal devices have different action triggering operation manners, the non-touch terminal can control the first virtual character to execute the interesting action through a keyboard or a mouse, and the touch terminal can control the first virtual character to execute different interesting actions by clicking different controls of the graphical user interface, so that the application of the three-dimensional virtual meeting place is richer, the interactive feedback of the virtual character in the virtual meeting scene is more, and a user can concentrate on the virtual meeting scene more.
In order to further increase the interest of the conference, one possible implementation: and responding to the triggering operation aiming at the expression migration control, and migrating the expression of the user corresponding to the first virtual character to the face of the first virtual character in the process of controlling the speech of the first virtual character.
In response to the trigger operation for the expression migration control, the camera of the terminal device is started while the first virtual character is speaking; the facial expression of the user corresponding to the first virtual character is collected through the camera, and the expression migration operation is performed on the first virtual character using the collected facial expression. Specifically, the user corresponding to the first virtual character touches the expression migration control with a finger, or clicks it with the mouse; the expression migration function is then started and the camera of the terminal device is turned on. The facial expression of the real user is collected through the camera and can be mapped in real time onto the face of the first virtual character in the three-dimensional virtual meeting place, so that the first virtual character makes the same expressions or facial actions as the real person.
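A minimal sketch of this mapping step, assuming placeholder camera and avatar-face APIs (none of these interfaces are the actual system interfaces):

```typescript
// Sketch: mirror captured facial-expression weights onto the first virtual character's face.
interface FaceCapture {
  onExpression(cb: (blendShapes: Record<string, number>) => void): void; // e.g. smile, mouthOpen weights
}

interface AvatarFace {
  applyBlendShapes(blendShapes: Record<string, number>): void;
}

function startExpressionMigration(camera: FaceCapture, avatarFace: AvatarFace): void {
  // Each captured expression frame is applied to the avatar in real time.
  camera.onExpression(blendShapes => avatarFace.applyBlendShapes(blendShapes));
}
```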
In the mode, the user can control the first virtual character to execute the limb action and can transfer the facial expression of the real user to the face of the first virtual character, so that the first virtual character can make the expression or facial action the same as that of a real character, the three-dimensional virtual meeting place can be closer to a real meeting scene, and the interestingness of the three-dimensional virtual meeting place is further improved.
In order to enable participants to see the speaking state of each virtual character in the three-dimensional virtual meeting place in real time through the graphical user interface of the corresponding terminal, in one possible implementation a participant list of the current conference in the three-dimensional virtual meeting place is displayed through the graphical user interface, and the participant list includes the identity information and speaking state information of each participant. The speaking state information includes a speaking application state, a speaking state, and a screen sharing state.
As shown in fig. 3, the participant list ("online personnel" in the figure) displays a "first virtual character", a "second virtual character" and a "conference host virtual character", that is, the identity information of the participants. The different speaking states may be represented by text or symbol identifiers. For example, in fig. 3, the raised-hand icon in the speaking state information of the first virtual character indicates the hand-raised state, that is, the speaking application state; the microphone icon indicates that the character is speaking, that is, the speaking state; and the display-screen icon indicates that the screen is being shared, that is, the screen sharing state.
In addition, the method further comprises: detecting the speaking state information of each virtual role in the three-dimensional virtual meeting place in real time; and updating the speaking state information of the virtual role displayed in the graphical user interface according to the detected speaking state information.
Since the speaking status information of the virtual characters in the three-dimensional virtual meeting place changes in real time, the speaking status information of the virtual characters needs to be updated in real time, so that a user of a terminal can know the current status of each virtual character in real time through the parameter status, for example, which virtual character the currently heard voice is explaining, and the like. Specifically, the speaking state information of each virtual character in the three-dimensional virtual meeting place can be detected in real time through the interface of the server, the speaking state information of the virtual character displayed in the graphical user interface is updated according to the detected speaking state information, and the state identification can be specifically switched to indicate the speaking state information of the virtual character.
Specifically, the speaking state information of the virtual characters is synchronized to all users and displayed through a designated function in the data layer, where participantStatus is the speaking state information, isShare indicates whether the screen is shared (bool, 0/1), isSpeak indicates whether the character is speaking (bool, 0/1), and isHand indicates whether the hand is raised (bool, 0/1), with "1" meaning true (yes) and "0" meaning false (no).
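The synchronized record can be pictured as follows; the field names isShare, isSpeak and isHand come from the description above, while the surrounding type and helper are assumptions.

```typescript
// Sketch of the participant-status record synchronized to every terminal.
interface ParticipantStatus {
  name: string;    // identity information shown in the participant list
  isShare: 0 | 1;  // 1 = sharing the screen
  isSpeak: 0 | 1;  // 1 = currently speaking
  isHand: 0 | 1;   // 1 = hand raised (speaking application state)
}

// Hypothetical helper choosing the icon shown next to a participant (fig. 3).
function statusIcon(s: ParticipantStatus): string {
  if (s.isShare) return "display-screen";
  if (s.isSpeak) return "microphone";
  return s.isHand ? "raised-hand" : "";
}
```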
The actual graphical user interface may display a participant list on the right or left side of the interface, with all participant information and status available for viewing.
In the mode, the speaking state information of each virtual role can be displayed in the three-dimensional virtual meeting place, and meanwhile, the speaking state information of the virtual roles displayed in the graphical user interface is updated according to the detected speaking state information, so that a user can know the state information of participants in real time, the three-dimensional virtual meeting place is enriched, and the meeting experience of the user is improved.
The process of stopping the speech of the first virtual character is as follows: while the first virtual character is speaking, a stop control is displayed through the graphical user interface; in response to the trigger operation for the stop control, the speech of the first virtual character is ended, and the finished-speaking state of the first virtual character is updated to the terminal corresponding to the second virtual character, where the second virtual character is the virtual characters in the three-dimensional virtual meeting place other than the first virtual character.
Specifically, as shown in fig. 5, while the first virtual character is speaking, the graphical user interface displays the "speaking" prompt and also displays a "stop" control. If the user corresponding to the first virtual character wants to stop speaking, the user can click or otherwise trigger the stop-speaking control; the terminal device then calls the PlayerStopSpeak interface of the immersive activity system server and sends a request to end the speech to the target terminal through that interface. The target terminal stops the speech of the first virtual character according to the stop-speaking request and sends a stop-speaking message to the terminal device, and finally the terminal device stops collecting the speaking information of the user of the first virtual character. In addition, so that each terminal can display the current state of each virtual character in real time, the finished-speaking state of the first virtual character can be updated to the terminal corresponding to the second virtual character.
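A minimal sketch of ending a speech follows; only the PlayerStopSpeak interface name is taken from the text, and the helper for stopping local capture is an assumption.

```typescript
// Sketch: request the end of the speech and stop collecting the user's speaking information.
async function onStopButtonClicked(
  rpc: { call(method: string, payload: unknown): Promise<void> },
  stopCollecting: () => void,
  characterId: string
): Promise<void> {
  await rpc.call("PlayerStopSpeak", { characterId }); // ask the server / host terminal to end the speech
  stopCollecting();                                   // stop capturing local speaking information
  // The finished-speaking state would then be synchronized to the other terminals,
  // e.g. by setting isSpeak back to 0 in the participant list.
}
```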
In the above manner, if the first virtual character wants to stop speaking, the speaking control can be stopped to apply for stopping speaking to the target terminal, so that the speaking process of the virtual character in the three-dimensional virtual meeting place is more complete. And meanwhile, the state of the first virtual role ending the speech is updated to the terminal corresponding to the second virtual role, so that the state of the virtual role displayed by each terminal is ensured to be consistent with the actual state.
In addition, in the above embodiment, the three-dimensional virtual meeting place uses a complete call chain from applying to speak to formally speaking, so that the interactive communication between the client and the server is completed with the least traffic. The chain is: the SpeakRequest -> AllowSpeak -> Speak -> PlayerStopSpeak interfaces.
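Putting the pieces together, the named call chain could be exercised roughly as below; the async wiring and the waitForSpeakEvent helper are assumptions for illustration.

```typescript
// Sketch of the end-to-end speaking lifecycle: SpeakRequest -> AllowSpeak -> Speak -> PlayerStopSpeak.
async function speakingLifecycle(
  rpc: { call(method: string, payload: unknown): Promise<void> },
  waitForSpeakEvent: () => Promise<void>, // resolves when the server's Speak event arrives
  characterId: string
): Promise<void> {
  await rpc.call("SpeakRequest", { characterId }); // apply to speak
  await waitForSpeakEvent();                       // host approved via AllowSpeak on the target terminal
  // ... formal speaking (voice/video call, actions, expression migration) happens here ...
  await rpc.call("PlayerStopSpeak", { characterId }); // end the speech
}
```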
Corresponding to the method embodiment, the embodiment of the invention provides a conference control device, which provides a graphical user interface of an immersive activity system through terminal equipment, wherein the graphical user interface displays a three-dimensional virtual meeting place; as shown in fig. 6, the apparatus includes:
a request sending module 61, configured to send a speech request to a host of a current conference in a three-dimensional virtual conference room in response to a speech trigger operation for a first virtual role in the virtual roles;
a speech control module 62, configured to control the first virtual role to speak in response to a confirmation message returned in response to the speech request;
and an action control module 63, configured to, in a process of controlling the first virtual character to speak, respond to an action trigger operation for the first virtual character, and control the first virtual character to execute an action corresponding to the action trigger operation in the three-dimensional virtual meeting place.
The invention provides a conference control device, which responds to a speech triggering operation aiming at a first virtual role in virtual roles and sends a speech request to a host of a current conference in a three-dimensional virtual conference room; responding to a confirmation message returned aiming at the speaking request, and controlling the first virtual role to speak; and in the process of controlling the first virtual character to speak, responding to the action triggering operation aiming at the first virtual character, and controlling the first virtual character to execute the action corresponding to the action triggering operation in the three-dimensional virtual meeting place. In the three-dimensional virtual meeting place with the method, when the virtual character speaks in the three-dimensional virtual meeting place, the virtual character can be controlled to execute various actions so as to attract the participating users to pay attention to the three-dimensional virtual meeting place, the interestingness of the meeting is improved, the spirit of the participants can be concentrated in the meeting, and the user experience and the meeting effect are further improved.
Further, the floor control module is further configured to: starting a voice call or video call function for the first virtual role; acquiring speech information of a user corresponding to a first virtual role; and sending the speaking information to a second virtual role in the three-dimensional virtual meeting place, wherein the second virtual role is other virtual roles except the first virtual role in the three-dimensional virtual meeting place.
Further, the apparatus further includes a second motion control module, configured to: receiving speech information sent by a terminal corresponding to a second virtual role in the three-dimensional virtual meeting place, wherein the second virtual role is other virtual roles except the first virtual role in the three-dimensional virtual meeting place; and in the process of receiving the speaking information sent by the terminal corresponding to the second virtual character, responding to the action triggering operation aiming at the first virtual character, and controlling the first virtual character to execute the action corresponding to the action triggering operation in the three-dimensional virtual meeting place.
Further, the device further comprises an expression migration module, configured to: and responding to the triggering operation aiming at the expression migration control, and migrating the expression of the user corresponding to the first virtual character to the face of the first virtual character in the process of controlling the speech of the first virtual character.
Further, the device further comprises a list display module for displaying a participant list of the current conference in the three-dimensional virtual conference place through a graphical user interface, wherein the participant list comprises identity information and speaking state information of each participant.
Further, the speaking state information includes: a speaking request state, a speaking state, and a screen sharing state.
Further, the apparatus further includes a status update module, configured to: displaying a stop control through a graphical user interface in the process of controlling the first virtual role to speak; and responding to the trigger operation aiming at the stop control, controlling the first virtual role to finish speaking, and updating the state of finishing speaking of the first virtual role to a terminal corresponding to a second virtual role, wherein the second virtual role is other virtual roles except the first virtual role in the three-dimensional virtual meeting place.
The conference control device provided by the embodiment of the invention has the same technical characteristics as the conference control method provided by the embodiment, so that the same technical problems can be solved, and the same technical effects can be achieved.
The embodiment also provides an electronic device, which comprises a processor and a memory, wherein the memory stores machine executable instructions capable of being executed by the processor, and the processor executes the machine executable instructions to realize the conference control method. The electronic device may be a server or a terminal device.
Referring to fig. 7, the electronic device includes a processor 100 and a memory 101, the memory 101 stores machine executable instructions capable of being executed by the processor 100, and the processor 100 executes the machine executable instructions to implement the conference control method.
Further, the electronic device shown in fig. 7 further includes a bus 102 and a communication interface 103, and the processor 100, the communication interface 103, and the memory 101 are connected through the bus 102.
The Memory 101 may include a high-speed Random Access Memory (RAM) and may also include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. The communication connection between the network element of the system and at least one other network element is realized through at least one communication interface 103 (which may be wired or wireless), and the internet, a wide area network, a local network, a metropolitan area network, and the like can be used. The bus 102 may be an ISA bus, PCI bus, EISA bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one double-headed arrow is shown in FIG. 7, but this does not indicate only one bus or one type of bus.
Processor 100 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 100. The Processor 100 may be a general-purpose Processor, and includes a Central Processing Unit (CPU), a Network Processor (NP), and the like; the device can also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a discrete Gate or transistor logic device, or a discrete hardware component. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in the memory 101, and the processor 100 reads the information in the memory 101 and completes the steps of the method of the foregoing embodiment in combination with the hardware thereof.
The present embodiments also provide a machine-readable storage medium having stored thereon machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement the above-described method of conference control.
The conference control method, apparatus, and computer program product of the system provided in the embodiments of the present invention include a computer-readable storage medium storing program codes, where instructions included in the program codes may be used to execute the method described in the foregoing method embodiments, and specific implementations may refer to the method embodiments and are not described herein again.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present invention can be understood in specific cases for those skilled in the art.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on those shown in the drawings, are used only for convenience and simplicity of description, and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation; they should therefore not be construed as limiting the present invention. Furthermore, the terms "first", "second", and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the foregoing embodiments are merely illustrative of, and not restrictive on, the technical solutions of the present invention, and the scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, within the technical scope of the present disclosure; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present invention and shall be covered thereby. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A conference control method, wherein a graphical user interface is provided through a terminal device, the graphical user interface comprising a three-dimensional virtual meeting place and virtual characters located in the three-dimensional virtual meeting place, and the method comprises:
in response to a speaking trigger operation for a first virtual character among the virtual characters, sending a speaking request to a host of a current conference in the three-dimensional virtual meeting place;
in response to a confirmation message returned for the speaking request, controlling the first virtual character to speak;
and in the process of controlling the first virtual character to speak, in response to an action trigger operation for the first virtual character, controlling the first virtual character to perform, in the three-dimensional virtual meeting place, an action corresponding to the action trigger operation.
2. The method of claim 1, wherein said controlling the first virtual character to speak comprises:
starting a voice call or video call function for the first virtual character;
obtaining speaking information of a user corresponding to the first virtual character;
and sending the speaking information to a second virtual character in the three-dimensional virtual meeting place, wherein the second virtual character is a virtual character other than the first virtual character in the three-dimensional virtual meeting place.
3. The method of claim 1, further comprising:
receiving speaking information sent by a terminal corresponding to a second virtual character in the three-dimensional virtual meeting place, wherein the second virtual character is a virtual character other than the first virtual character in the three-dimensional virtual meeting place;
and in the process of receiving the speaking information sent by the terminal corresponding to the second virtual character, in response to an action trigger operation for the first virtual character, controlling the first virtual character to perform, in the three-dimensional virtual meeting place, an action corresponding to the action trigger operation.
4. The method of claim 1, further comprising:
in response to a trigger operation for an expression migration control, migrating an expression of the user corresponding to the first virtual character onto the face of the first virtual character in the process of controlling the first virtual character to speak.
5. The method of claim 1, further comprising:
displaying, through the graphical user interface, a participant list of the current conference in the three-dimensional virtual meeting place, wherein the participant list comprises identity information and speaking state information of each participant.
6. The method of claim 5, wherein the speaking state information comprises: a speaking application state, a speaking state, and a screen sharing state.
7. The method of claim 1, further comprising:
displaying a stop control through the graphical user interface in the process of controlling the first virtual character to speak;
and in response to a trigger operation for the stop control, controlling the first virtual character to finish speaking, and synchronizing the finished-speaking state of the first virtual character to a terminal corresponding to a second virtual character, wherein the second virtual character is a virtual character other than the first virtual character in the three-dimensional virtual meeting place.
8. An apparatus for conference control, wherein a graphical user interface is provided through a terminal device, the graphical user interface comprising a three-dimensional virtual meeting place and virtual characters located in the three-dimensional virtual meeting place, the apparatus comprising:
a request sending module, configured to send a speaking request to a host of a current conference in the three-dimensional virtual meeting place in response to a speaking trigger operation for a first virtual character among the virtual characters;
a speaking control module, configured to control the first virtual character to speak in response to a confirmation message returned for the speaking request;
and an action control module, configured to, in the process of controlling the first virtual character to speak and in response to an action trigger operation for the first virtual character, control the first virtual character to perform, in the three-dimensional virtual meeting place, an action corresponding to the action trigger operation.
9. An electronic device comprising a processor and a memory, the memory storing machine-executable instructions executable by the processor, the processor executing the machine-executable instructions to implement the conference control method of any one of claims 1 to 7.
10. A machine-readable storage medium having stored thereon machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement the conference control method of any one of claims 1 to 7.
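
For readers who prefer a concrete illustration, the speaking-control flow recited in claims 1-3 and 5-7 could be sketched roughly as follows in Python. This is a minimal, non-authoritative sketch: every class, field, and function name here is hypothetical and chosen for illustration only; the claims do not prescribe any particular API, data model, or programming language.

from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List


class SpeakingState(Enum):
    # Speaking-state values corresponding to claim 6 (illustrative names).
    APPLYING = "speaking application"
    SPEAKING = "speaking"
    SHARING_SCREEN = "screen sharing"
    IDLE = "idle"


@dataclass
class Participant:
    # One entry of the participant list of claim 5: identity plus speaking state.
    identity: str
    state: SpeakingState = SpeakingState.IDLE


@dataclass
class VirtualConference:
    # A toy stand-in for the current conference in the 3D virtual meeting place.
    host_id: str
    participants: Dict[str, Participant] = field(default_factory=dict)

    def request_to_speak(self, character_id: str) -> None:
        # Claim 1: a speaking trigger on the first virtual character sends a
        # speaking request to the host of the current conference.
        self.participants[character_id].state = SpeakingState.APPLYING
        print(f"speaking request from {character_id} forwarded to host {self.host_id}")

    def confirm_request(self, character_id: str) -> None:
        # Claims 1 and 2: on the host's confirmation, the first virtual character
        # is controlled to speak, e.g. by starting a voice or video call for it.
        self.participants[character_id].state = SpeakingState.SPEAKING
        print(f"{character_id} may speak: voice/video call started")

    def broadcast_speech(self, character_id: str, speech: str) -> List[str]:
        # Claim 2: the speaking information is sent to every other virtual
        # character (the second virtual characters) in the meeting place.
        recipients = [pid for pid in self.participants if pid != character_id]
        print(f"sending {speech!r} to {recipients}")
        return recipients

    def perform_action(self, character_id: str, action: str) -> None:
        # Claim 1: while speaking, an action trigger makes the character perform
        # the corresponding action (e.g. a gesture animation) in the meeting place.
        print(f"{character_id} performs action '{action}' while speaking")

    def stop_speaking(self, character_id: str) -> None:
        # Claim 7: the stop control ends the speech, and the updated state is
        # synchronized to the terminals of the other virtual characters.
        self.participants[character_id].state = SpeakingState.IDLE
        print(f"{character_id} finished speaking; state pushed to other terminals")


if __name__ == "__main__":
    conf = VirtualConference(host_id="host")
    for pid in ("host", "alice", "bob"):
        conf.participants[pid] = Participant(identity=pid)
    conf.request_to_speak("alice")
    conf.confirm_request("alice")
    conf.perform_action("alice", "wave")
    conf.broadcast_speech("alice", "hello everyone")
    conf.stop_speaking("alice")

A real system would of course replace the print calls with network messages between the terminals and would drive the rendering of the three-dimensional virtual meeting place, but the request/confirm/act/stop sequence above is the part the claims describe.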
CN202111350232.6A 2021-11-15 2021-11-15 Conference control method and device and electronic equipment Pending CN113938336A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111350232.6A CN113938336A (en) 2021-11-15 2021-11-15 Conference control method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111350232.6A CN113938336A (en) 2021-11-15 2021-11-15 Conference control method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN113938336A (en) 2022-01-14

Family

ID=79286631

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111350232.6A Pending CN113938336A (en) 2021-11-15 2021-11-15 Conference control method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN113938336A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5491743A (en) * 1994-05-24 1996-02-13 International Business Machines Corporation Virtual conference system and terminal apparatus therefor
US20040128350A1 (en) * 2002-03-25 2004-07-01 Lou Topfl Methods and systems for real-time virtual conferencing
WO2009139903A1 (en) * 2008-05-15 2009-11-19 Upton Kevin S System and method for providing a virtual environment with shared video on demand
CN102263772A (en) * 2010-05-28 2011-11-30 经典时空科技(北京)有限公司 Virtual conference system based on three-dimensional technology
CN103814568A (en) * 2011-09-23 2014-05-21 坦戈迈公司 Augmenting a video conference
US20170302709A1 (en) * 2015-12-31 2017-10-19 Maria Francisca Jones Virtual meeting participant response indication method and system
CN111476903A (en) * 2019-01-23 2020-07-31 北京奇虎科技有限公司 Virtual interaction implementation control method and device, computer equipment and storage medium
CN111953922A (en) * 2019-05-16 2020-11-17 南宁富桂精密工业有限公司 Face identification method for video conference, server and computer readable storage medium
CN111641800A (en) * 2020-04-20 2020-09-08 视联动力信息技术股份有限公司 Method and device for realizing conference
CN113395597A (en) * 2020-10-26 2021-09-14 腾讯科技(深圳)有限公司 Video communication processing method, device and readable storage medium
CN112866619A (en) * 2021-01-05 2021-05-28 浙江大学 Teleconference control method and device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhang Maojun, Sun Lifeng, Li Yunhao, Yang Bing, Sun Lei, He Baoquan: "Research and Implementation of Virtual Conference Space", Computer Engineering, No. 01 *

Similar Documents

Publication Publication Date Title
US10873769B2 (en) Live broadcasting method, method for presenting live broadcasting data stream, and terminal
US10154232B2 (en) Communication event
US20140351720A1 (en) Method, user terminal and server for information exchange in communications
WO2017028424A1 (en) Method, apparatus and terminal device for game in chat interface
CN113055628A (en) Displaying video call data
CN113453029B (en) Live broadcast interaction method, server and storage medium
EP4047938A1 (en) Method for displaying interactive interface and apparatus thereof, method for generating interactive interface
CN109195003B (en) Interaction method, system, terminal and device for playing game based on live broadcast
JP7228338B2 (en) System, method and program for distributing videos
WO2021169432A1 (en) Data processing method and apparatus of live broadcast application, electronic device and storage medium
CN106105172A (en) Highlight the video messaging do not checked
EP4096223A1 (en) Live broadcast interaction method and apparatus, electronic device, and storage medium
CN106209396B (en) Matching process and relevant apparatus
WO2023098011A1 (en) Video playing method and electronic device
CN112000252A (en) Virtual article sending and displaying method, device, equipment and storage medium
CN113573092A (en) Live broadcast data processing method and device, electronic equipment and storage medium
CN109819341B (en) Video playing method and device, computing equipment and storage medium
JP2018515979A (en) Communication processing method and electronic apparatus
CN109039851B (en) Interactive data processing method and device, computer equipment and storage medium
WO2022001552A1 (en) Message sending method and apparatus, message receiving method and apparatus, device, and medium
JP2023524930A (en) CONFERENCE PROCESSING METHOD AND SYSTEM USING AVATARS
CN113938336A (en) Conference control method and device and electronic equipment
WO2018149170A1 (en) Cross-application control method and device
CN114430494B (en) Interface display method, device, equipment and storage medium
CN112717422B (en) Real-time information interaction method and device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination