CN116755549A - Conference development method and device, storage medium and computer equipment - Google Patents


Info

Publication number
CN116755549A
Authority
CN
China
Prior art keywords
target
conference
vocabulary
target virtual
virtual object
Prior art date
Legal status
Pending
Application number
CN202310556926.8A
Other languages
Chinese (zh)
Inventor
彭子娇
张伟彬
陈东鹏
李亚桐
Current Assignee
Voiceai Technologies Co ltd
Original Assignee
Voiceai Technologies Co ltd
Priority date
Filing date
Publication date
Application filed by Voiceai Technologies Co ltd
Priority to CN202310556926.8A
Publication of CN116755549A

Classifications

    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/165 Management of the audio stream, e.g. setting of volume, audio stream path
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G06F40/216 Parsing using statistical methods
    • G06F40/284 Lexical analysis, e.g. tokenisation or collocates
    • G06Q10/1095 Meeting or appointment

Abstract

The embodiments of the present application disclose a conference development method and device, a storage medium, and computer equipment. A target virtual object developing a conference in a virtual scene is determined, and the voice content corresponding to the target virtual object is acquired; the voice content corresponding to the target virtual object is analyzed to determine the corresponding target core vocabulary; and the target core vocabulary is displayed within a preset range around the position of the target virtual object in the virtual scene. The conference is thus developed in the virtual scene, the target core vocabulary of the conference is displayed within the preset range around the target virtual object, and the key content of the conference is prompted in the form of a word cloud, which increases the diversity of the conference's content and form and optimizes the overall effect of the conference.

Description

Conference development method and device, storage medium and computer equipment
Technical Field
The present application relates to the field of digital technology, and in particular to a conference development method and device, a storage medium, and computer equipment.
Background
In the related art, conferences are mainly held offline or online. With the continuous development of digital technology, however, metaverse virtual scenes have expanded, and holding conferences in metaverse virtual scenes has become a technical hotspot.
In existing offline and online conferences, the displayed content remains confined to the actual scene available to the participants; even in an online conference, the displayed content is limited to the real scenes of the two parties. In existing conference development modes, therefore, the displayed content and its form are monotonous, and the overall effect of the conference is poor.
Disclosure of Invention
The embodiment of the application provides a conference development method, a conference development device, a storage medium and computer equipment, which can increase the diversity of conference presentation and optimize the overall effect of a conference.
In order to solve the technical problems, the embodiment of the application provides the following technical scheme:
a conference development method, comprising:
determining a target virtual object for developing a conference in a virtual scene, and acquiring voice content corresponding to the target virtual object;
analyzing the voice content corresponding to the target virtual object, and determining a corresponding target core vocabulary;
and displaying the target core vocabulary in a preset range of the position of the target virtual object in the virtual scene.
A conference development apparatus, comprising:
the determining unit is used for determining a target virtual object for developing the conference in the virtual scene and acquiring voice content corresponding to the target virtual object;
The analysis unit is used for analyzing the voice content corresponding to the target virtual object and determining a target core vocabulary of the conference;
the display unit is used for displaying the target core vocabulary in a preset range of the position of the target virtual object in the virtual scene.
In some embodiments, the determining unit includes:
the login subunit is used for determining a target virtual object for carrying out the conference according to a participant account number of the virtual scene, wherein the participant account number has unique identification;
the acquisition subunit is used for acquiring real-time audio data of the target virtual object based on the target audio equipment corresponding to the target virtual object;
and the recognition subunit is used for recognizing the real-time audio data and acquiring the voice content corresponding to the target virtual object.
In some embodiments, the analysis unit comprises:
the screening subunit is used for screening target vocabularies belonging to the preset part-of-speech types in the voice content according to the preset part-of-speech types;
and the statistics subunit is used for carrying out word frequency statistics on the target vocabulary and determining the target vocabulary with the statistical word frequency larger than the preset word frequency as a target core vocabulary.
In some embodiments, the statistics subunit is configured to:
recording the target vocabulary of the target virtual object according to the participant account of the virtual scene within a preset time period, and generating a historical vocabulary set corresponding to the target virtual object;
performing word frequency statistics on target words with the same content in the history word set to obtain word frequency corresponding to each target word;
and determining target core words displayed in the conference of the virtual scene according to the word frequency of each target word in the historical word set.
In some embodiments, the display unit comprises:
a color subunit, configured to determine a corresponding target display color according to the part of speech of the target core vocabulary;
a position subunit, configured to determine an initial position of a target core vocabulary based on a position of a target virtual object in the virtual scene, and display the target core vocabulary in a target display color at the initial position;
the transparency subunit is used for recording the display time of the target core vocabulary and dynamically adjusting the display position of the displayed target core vocabulary and the transparency of the target display color according to the display time;
the display position of the target core vocabulary is always within a preset range of the position of the target virtual object.
In some embodiments, the display unit is further configured to:
and hiding the target core vocabulary when the display time of the target core vocabulary exceeds the preset display time.
In some embodiments, the conference development device further comprises:
the association calculation unit is used for calculating association according to the target core vocabulary of each target virtual object when the number of target virtual objects for developing the conference in the virtual scene reaches a first preset value, so as to obtain the vocabulary association between every two target virtual objects;
and the connection unit is used for determining the two target virtual objects as connectable target virtual objects when the vocabulary association degree between the two target virtual objects reaches a preset threshold value.
In some embodiments, the conference development device is further configured to:
when the number of connectable target virtual objects in the virtual scene reaches a second preset value, the target connection mode between every two target virtual objects is adjusted according to the vocabulary association degree between every two target virtual objects.
In some embodiments, the association calculation unit is configured to:
creating a vocabulary vector set corresponding to the target virtual object according to the target core vocabulary of the target virtual object;
And carrying out word vector distance calculation according to the word vector sets corresponding to each two target virtual objects to obtain the distance between the word vector sets corresponding to each two target virtual objects, and taking the distance between the word vector sets as the word association degree between each two target virtual objects.
In some embodiments, the conference development device is further configured to:
according to the vocabulary association degree between every two target virtual objects, carrying out association degree sequencing on every two target virtual objects to obtain a corresponding association degree sequencing result;
and adjusting the color depth of the connecting line between every two connectable target virtual objects according to the relevancy sorting result.
A computer storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps of the conference development method described above.
A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the conference development method provided above when executing the computer program.
A computer program product or computer program comprising computer instructions stored in a storage medium. A processor of a computer device reads the computer instructions from the storage medium and executes them, so that the computer device performs the steps of the conference development method provided above.
The embodiments of the present application determine a target virtual object developing a conference in a virtual scene and acquire the voice content corresponding to the target virtual object; analyze the voice content corresponding to the target virtual object and determine the corresponding target core vocabulary; and display the target core vocabulary within a preset range around the position of the target virtual object in the virtual scene. The conference is thus developed in the virtual scene, the target core vocabulary of the conference is displayed within the preset range around the target virtual object, and the key content of the conference is prompted in the form of a word cloud, which increases the diversity of the conference's content and form and optimizes the overall effect of the conference.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view of a virtual conference system according to an embodiment of the present application;
Fig. 2 is a schematic flow chart of a conference development method according to an embodiment of the present application;
fig. 3 is another schematic flow chart of a conference development method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a conference development device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
The embodiment of the application provides a conference development method, a conference development device, a storage medium and computer equipment.
Referring to fig. 1, fig. 1 is a schematic view of a virtual conference system according to an embodiment of the present application, comprising an interactive terminal A and a server B that can be connected through a communication network. The communication network comprises a wireless network and a wired network, and the wireless network comprises one or a combination of a wireless wide area network, a wireless local area network, a wireless metropolitan area network, and a wireless personal area network. The network includes network entities such as routers and gateways, which are not shown in the figure. The interactive terminal A can exchange information with the server B through the communication network: the interactive terminal A realizes the conference in the virtual scene by uploading the participants' conference features and conference data to the server B, and the server B records the data of the conference developed in the virtual scene.
The virtual conference system can comprise a conference development device, which can be integrated in a terminal that has a storage unit, is fitted with a microprocessor, and has computing capability, such as a tablet computer, a mobile phone, a notebook computer, or a desktop computer; the terminal can run an interactive client, such as a conference client. In fig. 1, the interactive terminal A may be configured to determine a target virtual object developing a conference in a virtual scene and obtain the voice content corresponding to the target virtual object; analyze the voice content corresponding to the target virtual object and determine the corresponding target core vocabulary; and display the target core vocabulary within a preset range around the position of the target virtual object in the virtual scene.
The virtual conference system may further include a server B, where conference data during a conference may be stored in the server B, and when a user enters a virtual conference in a virtual scene, the server B may record audio data after the user enters the virtual conference. The server B can also count the target core vocabulary corresponding to the audio data of the user for recording.
It should be noted that, the schematic view of the virtual conference system shown in fig. 1 is only an example, and the virtual conference system and the scene described in the embodiments of the present application are for more clearly describing the technical solutions of the embodiments of the present application, and do not constitute a limitation on the technical solutions provided by the embodiments of the present application, and those skilled in the art can know that, with the evolution of the virtual conference system and the appearance of a new service scenario, the technical solutions provided by the embodiments of the present application are equally applicable to similar technical problems.
The following will describe in detail.
In this embodiment, the description is given from the perspective of a conference development device, which may in particular be integrated in a client of a terminal.
Referring to fig. 2, fig. 2 is a flow chart of a conference developing method according to an embodiment of the application. The conference development method comprises the following steps:
in step 101, a target virtual object for developing a conference in a virtual scene is determined, and a voice content corresponding to the target virtual object is obtained.
The target virtual object is the representation of a participant displayed in the virtual scene; it carries the participant features of that participant during the conference, and these features can be adjusted in a user-defined manner. In addition, the virtual scene may be a virtual scene in the metaverse; when a conference is developed in the metaverse scene, the target virtual object conducts the simulated conference by displaying the conference features corresponding to the participant in the metaverse space.
Further, when the participants develop the conference in the metaverse virtual scene, the audio information of each target virtual object can be collected through a target audio device associated with that target virtual object. Understandably, each target virtual object (participant) corresponds to a target audio device, and the target audio device can collect the voice content of the target virtual object while the conference is being developed.
The metaverse is a virtual universe constructed on technologies such as virtual reality (VR) and augmented reality (AR); by applying digital technologies it offers strong interaction capability and can satisfy social needs in areas such as gaming, social interaction, and conferencing.
In the related art, conferences are mostly presented offline or online. An offline conference, as the traditional form, becomes ever more costly and less convenient as the span of the conference grows. In an online conference, the presenting party provides conference content visible to both parties through connected conference rooms, and the conference is developed according to that content; the conference stays within the content provided by the presenting party and is obtained only in a visible form, so the content and form of the conference are monotonous, the content that can be displayed during the conference is limited, and the conference effect is poor.
Therefore, the embodiments of the present application determine the target virtual object developing the conference in the virtual scene, show the participant features of the target virtual object while the conference is developed in the virtual scene, and realize virtual interaction among the virtual participants through the display of participant features between target virtual objects. On the premise that the target virtual objects can interact virtually, the target core vocabulary in the voice content of each target virtual object is displayed, assisting the development of a virtual conference that can prompt key content. This increases the diversity of the content and form of the conference, strengthens the effective interaction of the participants in the virtual scene, and optimizes the overall effect of the conference.
Further, each target virtual object collects, through its corresponding target audio device, real-time audio data of the conference being developed. The real-time audio data may be an independent audio stream collected by the target audio device or multi-person audio data collected by the target audio device; the independent audio streams and/or multi-person audio streams are analyzed, and the voice content corresponding to each target virtual object is determined while the conference is developed in the virtual space.
In some embodiments, determining a target virtual object for developing a conference in a virtual scene, and acquiring voice content corresponding to the target virtual object includes:
(1) Determining a target virtual object for developing a conference according to a participant account number of the virtual scene, wherein the participant account number has unique identification;
(2) Collecting real-time audio data of a target virtual object based on target audio equipment corresponding to the target virtual object;
(3) And identifying the real-time audio data to acquire the voice content corresponding to the target virtual object.
It should be noted that each target virtual object corresponds to a participant account, and the participant account has a unique identifier; that is, each target virtual object in the metaverse virtual scene is uniquely identified. A participant account may correspond to an independent individual or to a participating party as a whole.
Further, the real-time audio data of the conference developed by each target virtual object is collected through the target audio device corresponding to the participant account and used as the audio data of that target virtual object during the virtual conference, thereby realizing an audio conference in the metaverse virtual scene. In addition, the real-time audio data collected by the target audio device corresponding to each participant account is converted, through concurrent recognition of each target virtual object's real-time audio data, into the voice content corresponding to that target virtual object.
In the embodiment of the present application, the real-time audio data may be a single person's voice or multi-person voice. Specifically, for example, the real-time audio data collected by the target audio device corresponding to a target virtual object may be an independent audio stream containing the speech of a single person, on which subsequent uploading and recognition can be performed separately; alternatively, when the participating party comprises several individuals as a whole, the real-time audio data serves as the independent audio stream for the virtual conference developed by that target virtual object, and the corresponding per-person audio data must be obtained after processing.
Specifically, for example, in a conference in the virtual scene, the audio data collected by the target audio device may be a single participant explaining the conference content; the corresponding real-time audio data is then a single voice, which can be recognized directly to determine the corresponding target core vocabulary. The collected audio data may also be several participants debating a discussion point in the virtual conference; the corresponding real-time audio data is then multi-person voice, which needs audio processing before further recognition. The audio processing may be voiceprint recognition, audio energy recognition, or the like, and the embodiment of the present application does not specifically limit the audio processing method.
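The handling of single-voice and multi-person streams described above can be sketched roughly as follows (a minimal illustration in Python; the transcribe and separate_speakers helpers are hypothetical placeholders for a speech-recognition engine and a voiceprint- or energy-based separator, and are not part of this disclosure):

```python
from typing import Dict


def transcribe(audio: bytes) -> str:
    """Hypothetical placeholder for a speech-recognition call."""
    raise NotImplementedError


def separate_speakers(audio: bytes) -> Dict[str, bytes]:
    """Hypothetical placeholder for voiceprint- or energy-based separation
    of multi-person audio into per-participant chunks."""
    raise NotImplementedError


def speech_content_for(account_id: str, audio: bytes, multi_person: bool) -> Dict[str, str]:
    """Return {participant account: recognized voice content} for one capture."""
    if not multi_person:
        # Independent audio stream: recognize directly for the owning account.
        return {account_id: transcribe(audio)}
    # Multi-person stream: split per speaker first, then recognize each part.
    return {speaker: transcribe(chunk)
            for speaker, chunk in separate_speakers(audio).items()}
```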
In step 102, the voice content corresponding to the target virtual object is analyzed, and a corresponding target core vocabulary is determined.
The target core vocabulary consists of the key conference words obtained by screening the voice content corresponding to a target virtual object while that target virtual object develops the conference; the target core vocabulary represents the important content in the target virtual object's dialogue in the current conference.
Through structural analysis of the voice content corresponding to the target virtual object, the target words that conform to a preset structure in the voice content are determined, and the target core vocabulary of the target virtual object developing the conference is screened out. The voice content may be recognized by classifying the input speech according to a learned pattern and then finding the best match according to a decision criterion; text matching may also be performed on the real-time audio data through an acoustic model, and the recognition method for the voice content is not specifically limited in this embodiment. In this way, the key words of the target virtual object are extracted during the development of the conference in the virtual scene and used to prompt the key content of the conference currently being developed, which increases the variety of forms in which content is displayed during the conference and optimizes the display effect of the conference.
In the embodiment of the present application, speech analysis can be performed on the voice content corresponding to the target virtual object by screening out target words whose type matches a designated part of speech. Each target word carries a weight; a target word whose weight reaches a preset threshold is taken as a target core word, and the larger a target word's weight, the greater the probability that it becomes a target core word. The target core vocabulary can serve as the important content of the target virtual object during the virtual conference, and the greater the weight of a target core word, the higher its degree of importance.
In step 103, the target core vocabulary is displayed within a preset range of the position of the target virtual object in the virtual scene.
In the related art, during the development of a conference, the displayed content is limited to the conference content provided by the presenting party, and that content must be displayed in an actual scene by the providing participant before it can be presented in an offline or online conference. Specifically, for example, in an existing online conference, the text presentation provided by a participant is shown on a shared screen, and the presentation of the online conference is formed from the text visible to the participants together with the participants' voice information. A conference development mode based only on text presentation and voice content reduces the interactivity and interest of the conference; its form and content are monotonous and the effect is poor.
Therefore, the embodiment of the application displays the target core vocabulary in the conference process in the preset range of the position of the target virtual object in the virtual scene, wherein the target core vocabulary can be set by background personnel or selected by a user, and can also be obtained by recognition according to real-time explanation or dialogue of the target virtual object. And displaying the target core vocabulary in the voice content of the target virtual object in the virtual scene, assisting in developing the virtual conference capable of prompting the key content, increasing the diversity of the content and the form in the conference developing process, and optimizing the overall effect of conference developing.
As can be seen from the above, the embodiment of the present application determines the target virtual object developing the conference in the virtual scene and obtains the voice content corresponding to the target virtual object; analyzes the voice content corresponding to the target virtual object and determines the corresponding target core vocabulary; and displays the target core vocabulary within a preset range around the position of the target virtual object in the virtual scene. The conference is thus developed in the virtual scene, the target core vocabulary of the conference is displayed within the preset range around the target virtual object, and the key content of the conference is prompted in the form of a word cloud, which increases the diversity of the conference's content and form and optimizes the overall effect of the conference.
In this embodiment, the description is given from the perspective of a conference development device, which may in particular be integrated in a terminal that has a storage unit, is fitted with a microprocessor, and has computing capability, such as a tablet computer or a mobile phone; the terminal may start a live client, and in this embodiment the live client may be a viewer client.
Referring to fig. 3, fig. 3 is another flow chart of a conference developing method according to an embodiment of the application. The method flow may include:
in step 201, a target vocabulary belonging to a preset part-of-speech type in the speech content is screened according to the preset part-of-speech type.
The preset part-of-speech type may be adjectives, verbs, objects, complements, or the like, or any combination of parts of speech, and is not specifically limited here. It should be noted that each conference has one or more preset part-of-speech types; a preset part-of-speech type characterizes the target words that match it, so that the target core vocabulary the participants focus on during the virtual conference can be captured later. The preset part-of-speech type can be identified and selected automatically by an algorithm or set manually, and the criteria for setting it may include the topic of the conference, the type of conference, the content of the conference, and so on.
Further, when a corresponding preset part-of-speech type exists for the current conference, the recognized voice content of the target virtual object can be acquired periodically and preprocessed. The preprocessing may include word segmentation and stop-word removal: word segmentation breaks the voice content into sentences and determines the short phrases within it, improving the accuracy of subsequent recognition; stop words are the words automatically filtered out in natural language processing, and removing them before the subsequent part-of-speech matching saves retrieval time and storage space. Part-of-speech detection then finds, in the voice content of the target virtual object in the virtual conference, the word structures corresponding to the preset part-of-speech type, and whenever a word belonging to the preset part-of-speech type is detected, it is recorded as a target word.
Specifically, for example, if the conference is a sales policy conference, the preset part-of-speech type of the conference may be set to the verb+object type; the voice content of participant 1 is then processed, and whenever a verb+object grammatical structure appears in the voice content corresponding to the target virtual object (the participant), the target word matching that structure is screened out and recorded.
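A minimal sketch of this part-of-speech screening, assuming the jieba part-of-speech tagger (in jieba's tag set, tags starting with 'v' mark verbs and tags starting with 'n' mark nouns); any tagger with similar output could be substituted:

```python
import jieba.posseg as pseg  # Chinese word segmentation with part-of-speech tags


def screen_verb_object_words(speech_text: str) -> list:
    """Screen verb+object target words from recognized voice content,
    mirroring the preset part-of-speech type of the sales-policy example."""
    tagged = list(pseg.cut(speech_text))
    targets = []
    for cur, nxt in zip(tagged, tagged[1:]):
        # A verb immediately followed by a noun is recorded as one verb+object target word.
        if cur.flag.startswith('v') and nxt.flag.startswith('n'):
            targets.append(cur.word + nxt.word)
    return targets
```

The exact phrases returned depend on how the tagger segments the sentence; the point is only that words matching the preset part-of-speech type are screened out and recorded as target words.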
In step 202, word frequency statistics is performed on the target vocabulary, and the target vocabulary with the statistical word frequency greater than the preset word frequency is determined as the target core vocabulary.
It should be noted that, during the conference of each target virtual object in the virtual scene, any target word belonging to the preset part-of-speech type is recorded; the recorded word data include, but are not limited to, the part-of-speech type and the update time. When the same target word appears again, its word frequency is counted, and the more often the same target word appears, the higher its word frequency. The word frequency of a target word is associated with its weight, and the larger the weight of a target word, the higher the probability that it is a target core word.
In one embodiment, performing word frequency statistics on the target vocabulary, and determining the target vocabulary with the statistical word frequency greater than the preset word frequency as the target core vocabulary includes:
(1) Recording the target vocabulary of the target virtual object according to the participant account of the virtual scene within a preset time period, and generating a historical vocabulary set corresponding to the target virtual object;
(2) Performing word frequency statistics on target words with the same content in the history word set to obtain word frequency corresponding to each target word;
(3) And determining target core words displayed in the conference of the virtual scene according to the word frequency of each target word in the historical word set.
In the embodiment of the present application, the conference content within a preset time period is recorded. The recorded content comprises the target words in the voice content of each target virtual object (participant) while the conference is developed during the preset time period; the target words are stored under the participant account of each target virtual object, producing the historical vocabulary set of the target virtual object corresponding to that participant account. In other words, each participant has a record of all of its target words within the preset time, and the recorded content includes, but is not limited to, the target words, their parts of speech, and the update time.
Further, while the target words belonging to the preset part-of-speech type are recorded, word frequency statistics must also be performed on identical target words: statistics are run periodically on the target words accumulated in the history of a given target virtual object, and the target core words whose word frequency meets the preset threshold are calculated.
Specifically, word frequency statistics may be performed on the target words by using the TextRank algorithm or the YAKE algorithm, counting the on-screen vocabulary and calculating the top-K target core words by word frequency. It should be noted that TextRank is a graph-based ranking algorithm for keyword extraction and document summarization that extracts the keywords and key phrases of a given text by identifying co-occurrence information (semantics) among the words in a document. The keyword-extraction algorithm YAKE can also be used for word frequency statistics of the target words, generating the target core vocabulary through text preprocessing, feature extraction, and single-word weight calculation.
According to the embodiment of the present application, the target words belonging to the preset part-of-speech type are screened out of the voice content of the target virtual object, and the words whose word frequency is greater than the preset threshold are selected as the target core vocabulary. Specifically, for example, if the conference is a sales policy conference and the preset part-of-speech type is set to the verb+object type, then when participant 2 says "a discount activity should be done next week", the phrase "do discount activity" in the voice content corresponding to participant 2 is determined to be a target word matching the preset part-of-speech type. Further, over a conference lasting 30 minutes, all the target words of participant 2 are recorded: participant 2 mentions target word 1, "do discount activity", 3 times; target word 2, "do full-reduction activity", 4 times; and target word 3, "give gifts", 1 time. The word frequency of target word 3, "give gifts", is smaller than the preset threshold of 2, while target words 1 and 2 exceed the preset threshold of 2, so "do discount activity" and "do full-reduction activity" are taken as target core words.
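The word-frequency screening in this example can be sketched as follows (a simplified illustration; the threshold of 2 mirrors the example rather than being fixed by the method):

```python
from collections import Counter


def core_vocabulary(history_words: list, min_freq: int = 2) -> dict:
    """Count identical target words in one participant's historical vocabulary set
    and keep those whose word frequency exceeds the preset word frequency."""
    freq = Counter(history_words)
    return {word: n for word, n in freq.items() if n > min_freq}


# Mirroring the 30-minute sales-policy example for participant 2:
history = (["do discount activity"] * 3
           + ["do full-reduction activity"] * 4
           + ["give gifts"] * 1)
print(core_vocabulary(history))
# {'do discount activity': 3, 'do full-reduction activity': 4} -- 'give gifts' is dropped
```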
Further, after the target core vocabulary in the target vocabulary is determined, the target core vocabulary corresponding to the target virtual object is displayed on the screen, and the target core vocabulary is displayed in a preset range on the position of the target virtual object in the virtual scene, so that the conference key content of the target virtual object is prompted.
In step 203, a corresponding target display color is determined according to the part of speech of the target core vocabulary.
The part of speech of a target core word has a corresponding relationship with a target display color. Understandably, this is the same correspondence as for the preset part-of-speech types, and it ensures that when the target core vocabulary of a target virtual object is prompted in the virtual scene, different parts of speech are displayed in different colors. The target display color may be expressed in RGB; illustratively, the correspondence between part of speech and target display color may be: adjectives (255,240,245), the adjectives being displayed in purple.
Further, a plurality of preset part-of-speech types may be set, each of the preset part-of-speech types being associated with a respective target display color, for example, a target display color corresponding to a preset part-of-speech type of "subject+predicate" may be set to pink (255,192,203). Therefore, the corresponding target display color can be matched according to the preset part-of-speech type of the target core vocabulary.
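A minimal sketch of such a part-of-speech-to-color mapping (the RGB values follow the examples in the text where given; the verb+object value is a hypothetical addition):

```python
# Mapping from preset part-of-speech type to target display color (RGB).
POS_TO_COLOR = {
    "adjective": (255, 240, 245),          # value given in the example above
    "subject+predicate": (255, 192, 203),  # pink, per the example above
    "verb+object": (135, 206, 235),        # hypothetical value for a further type
}


def display_color(pos_type: str, default=(128, 128, 128)):
    """Look up the target display color for a core word's part-of-speech type."""
    return POS_TO_COLOR.get(pos_type, default)
```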
In step 204, an initial position of a target core vocabulary is determined based on the position of the target virtual object in the virtual scene, and the target core vocabulary is displayed in a target display color at the initial position.
When the target core vocabulary is displayed on the screen, an initial position of the target core vocabulary for displaying is required to be determined within a preset position range of the target virtual object in the virtual scene, the initial position of the target core vocabulary for displaying is determined by the weight of the target core vocabulary, and the weight of the target core vocabulary is in direct proportion to the word frequency of the target core vocabulary. The initial position of the target core vocabulary may be directly above the target virtual object, and the preset range of virtual object positions may be any range around the target virtual object and/or a portion of the target virtual object.
In step 205, recording the display time of the target core vocabulary, and dynamically adjusting the display position of the displayed target core vocabulary and the transparency of the target display color according to the display time;
the display position of the target core vocabulary is always within a preset range of the position of the target virtual object.
It should be noted that, as the virtual conference develops and time passes, the target core vocabulary is displayed dynamically; the dynamic display is realized by adjusting the initial display position of each target core word and its target display color. Over time, the target core word gradually diffuses outward from its initial position and fades toward the edge of the preset range around the target virtual object, and its target display color gradually becomes more transparent until the target core word is no longer visible within the preset range of the target virtual object.
Specifically, for example, in one embodiment the target display color may change over time, but the target display color for the same part of speech of the same target virtual object always stays consistent, so that the target display colors of different parts of speech remain distinguishable. For example, the color for the preset part-of-speech type "subject+predicate" changes from pink (255,192,203) to light pink (255,182,193) over time (the transparency changes); the target display color of the target core word is adjusted in real time as time passes, but the target core words of the same part of speech of the same target virtual object always share the same target display color. Further, as time passes, the display position of the target core word gradually spreads outward from directly above the target virtual object, moving progressively away from it.
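One possible realization of the time-driven drift and fade described above (a simplified sketch; the drift speed, fade rate, radius of the preset range, and the 10-minute preset display time are illustrative parameters):

```python
import math


def word_display_state(elapsed_s: float,
                       anchor_xy: tuple,
                       angle_rad: float,
                       drift_speed: float = 2.0,      # outward drift per second (illustrative)
                       max_radius: float = 120.0,     # preset range around the virtual object
                       fade_after_s: float = 600.0):  # 10-minute preset display time
    """Return (x, y, alpha) for one displayed core word, or None once it is hidden."""
    if elapsed_s > fade_after_s:
        return None  # hide the word after the preset display time
    # Drift outward from the initial position above the target virtual object,
    # clamped so the word always stays within the preset range.
    radius = min(drift_speed * elapsed_s, max_radius)
    x = anchor_xy[0] + radius * math.cos(angle_rad)
    y = anchor_xy[1] - radius * math.sin(angle_rad)
    # The target display color grows more transparent as display time passes.
    alpha = max(0.0, 1.0 - elapsed_s / fade_after_s)
    return x, y, alpha
```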
In one embodiment, while the target core vocabulary is continuously displayed as a word cloud, the target display color and the display position of each target core word change dynamically; when a target core word is recognized again in the current voice content of the target virtual object, that word is hidden, and the target display color and the initial display position of the word are re-determined.
In step 206, when the display time of the target core vocabulary exceeds the preset display time, the target core vocabulary is hidden.
In one embodiment, while the target core vocabulary is continuously displayed, if a target core word no longer appears in the current voice content of the target virtual object and has been displayed within the preset range around the target virtual object's position in the virtual scene for longer than a preset period, that target core word is hidden and no longer displayed.
The preset display time may be 5 minutes, 10 minutes, 20 minutes, or the like. For example, when it is detected that a target core word has remained within the preset range of the target virtual object's position for more than 10 minutes without appearing again, that target core word is hidden, which improves the timeliness of the target core vocabulary and strengthens the key-word prompting effect during the development of the conference.
In some embodiments, the method further comprises:
(1) When the number of target virtual objects for developing the conference in the virtual scene reaches a first preset value, performing association calculation according to the target core vocabulary of each target virtual object to obtain the vocabulary association between every two target virtual objects;
(2) And when the vocabulary association degree between the two target virtual objects reaches a preset threshold, determining the two target virtual objects as connectable target virtual objects.
(3) When the number of connectable target virtual objects in the virtual scene reaches a second preset value, the target connection mode between every two target virtual objects is adjusted according to the vocabulary association degree between every two target virtual objects.
The number of target virtual objects for developing the conference in the virtual scene may refer to the number of participant accounts for participating in the conference, and when the number of participant accounts (target virtual objects) reaches a first preset value, the relevance calculation is performed on target core vocabularies of each two target virtual objects, so as to determine the relevance between each two target virtual objects. It should be noted that, the first preset value of the number of the target virtual objects may be 2, that is, when the number of the target virtual objects in the virtual scene is two or more, the vocabulary association degree calculation between the different two target virtual objects is started, so as to obtain the association strength of the target core vocabulary between each two target virtual objects.
Because the requirements on association strength differ from presentation to presentation, the first preset value used for association-based connection of target virtual objects can be adjusted as a parameter; this adjusts how association is marked between target virtual objects in the conference under the virtual scene and improves the flexibility of the conference's presentation form.
Association degree calculation is performed for all target virtual objects developing the conference in the current virtual scene: the vocabulary association degree between two target virtual objects is calculated from the association among all the target core words displayed on screen for each of the two target virtual objects. Specifically, for example, the following scheme may be adopted for the vocabulary association degree of two target virtual objects:
in some embodiments, performing a relevancy calculation according to the target core vocabulary of each target virtual object, where obtaining the vocabulary relevancy between each two target virtual objects includes:
1. and creating a vocabulary vector set corresponding to the target virtual object according to the target core vocabulary of the target virtual object.
2. And carrying out word vector distance calculation according to the word vector sets corresponding to each two target virtual objects to obtain the distance between the word vector sets corresponding to each two target virtual objects, and taking the distance between the word vector sets as the word association degree between each two target virtual objects.
According to the historical vocabulary sets stored under the participant accounts of the target virtual objects, a vocabulary vector set corresponding to each target virtual object is established. The vocabulary vector sets corresponding to each pair of target virtual objects are placed into a preset spatial coordinate system for word-vector distance calculation, giving the distance between the two vocabulary vector sets, and this distance is taken as the vocabulary similarity of the two target virtual objects. The distance between the vocabulary vector sets corresponding to each pair of target virtual objects can be calculated in the spatial coordinate system by means of cosine similarity.
When the vocabulary similarity of two target virtual objects reaches a preset threshold, the two target virtual objects are taken as connectable target virtual objects and connected in the conference of the virtual scene; the preset threshold makes it possible to judge effectively whether the association between the target core vocabularies corresponding to the two target virtual objects is meaningful.
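A minimal sketch of this vector-set distance calculation, assuming each target core word already has an embedding vector from some pretrained word-embedding model (mean pooling is one simple choice of set vector; the disclosure does not fix the pooling):

```python
import numpy as np


def set_vector(word_vectors):
    """Collapse one participant's core-word vectors into a single vocabulary set vector."""
    return np.mean(np.stack(word_vectors), axis=0)


def vocabulary_association(vectors_a, vectors_b) -> float:
    """Cosine similarity between two vocabulary vector sets, used as the
    vocabulary association degree between the two target virtual objects."""
    a, b = set_vector(vectors_a), set_vector(vectors_b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Pairs whose association degree reaches the preset threshold (for instance the 80% used in the example below) would then be marked as connectable.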
Further, when the number of connectable target virtual objects reaches a second preset value, that is, when the number of target virtual objects with the association degree reaching a preset threshold reaches the second preset value in a conference under a virtual scene, it is indicated that the number of target virtual objects to be connected in the conference is increased.
Specifically, for example, the similarity of the target core vocabularies of target virtual objects can be expressed as a degree of similarity with 100% as the maximum; if the preset threshold of the vocabulary association degree between two target virtual objects is set to 80%, then whenever the vocabulary association degree of two target virtual objects exceeds 80%, they can be determined to be two connectable target virtual objects.
In some specific embodiments, when the number of connectable target virtual objects in the virtual scene reaches a second preset value, adjusting, according to the vocabulary association degree between each two target virtual objects, a target connection manner between each two target virtual objects, including:
(1) According to the vocabulary association degree between every two target virtual objects, carrying out association degree sequencing on every two target virtual objects to obtain a corresponding association degree sequencing result;
(2) And adjusting the color depth of the connecting line between every two connectable target virtual objects according to the relevancy sorting result.
In some embodiments, if more connectable target virtual objects than the second preset value appear in the virtual scene, the vocabulary association degrees of the target virtual object pairs exceeding the threshold are first sorted, and the color depth of the target display color is adjusted according to the sorted vocabulary association degrees to determine the target connection mode between the several pairs of connectable target virtual objects. For example, taking standard red as the base: the higher the vocabulary association degree between two target virtual objects, the darker the target display color of their connection; the lower the vocabulary association degree, the lower the color saturation of the connection.
Specifically, for example, if three or more target virtual objects in a conference in the virtual scene have vocabulary association degrees exceeding the preset threshold, the association range [80%, 100%] can be divided into two levels: the similarity interval [90%, 100%] may be set to "strong association" and the interval [80%, 90%) to "generally strong association". Different connecting lines are used for the vocabulary association degrees of the different similarity intervals: pairs in the "strong association" interval [90%, 100%] are connected with red lines, and pairs in the "generally strong association" interval [80%, 90%) are connected with yellow lines.
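The interval-based connection styling of this example can be sketched as follows (intervals and colors follow the example and are configurable):

```python
def connection_color(association: float):
    """Map a pairwise vocabulary association degree (0.0-1.0) to a connection
    line color, following the intervals of the example above."""
    if association >= 0.9:
        return "red"     # "strong association" interval [90%, 100%]
    if association >= 0.8:
        return "yellow"  # "generally strong association" interval [80%, 90%)
    return None          # below the preset threshold: no connection drawn
```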
Therefore, the target virtual objects with the association degree reaching the preset threshold value are connected in different modes under different vocabulary association degrees, and the conference development effect and conference development experience of participants under the virtual scene are further improved.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a conference development device according to an embodiment of the present application. The conference development device is applied to a terminal and may include a determining unit 301, an analyzing unit 302, a displaying unit 303, and the like.
A conference development apparatus, comprising:
a determining unit 301, configured to determine a target virtual object for performing a conference in a virtual scene, and obtain a voice content corresponding to the target virtual object;
the analysis unit 302 is configured to analyze the voice content corresponding to the target virtual object, and determine a target core vocabulary of the conference;
and the display unit 303 is configured to display the target core vocabulary within a preset range of a position of a target virtual object in the virtual scene.
In some embodiments, the determining unit includes:
the login subunit is used for determining a target virtual object for carrying out the conference according to a participant account number of the virtual scene, wherein the participant account number has unique identification;
the acquisition subunit is used for acquiring real-time audio data of the target virtual object based on the target audio equipment corresponding to the target virtual object;
and the recognition subunit is used for recognizing the real-time audio data and acquiring the voice content corresponding to the target virtual object.
In some embodiments, the analysis unit comprises:
the screening subunit is used for screening target vocabularies belonging to the preset part-of-speech types in the voice content according to the preset part-of-speech types;
and the statistics subunit is used for carrying out word frequency statistics on the target vocabulary and determining the target vocabulary with the statistical word frequency larger than the preset word frequency as a target core vocabulary.
In some embodiments, the statistics subunit is configured to:
recording the target vocabulary of the target virtual object according to the participant account of the virtual scene within a preset time period, and generating a historical vocabulary set corresponding to the target virtual object;
performing word frequency statistics on target words with the same content in the history word set to obtain word frequency corresponding to each target word;
and determining target core words displayed in the conference of the virtual scene according to the word frequency of each target word in the historical word set.
In some embodiments, the display unit comprises:
a color subunit, configured to determine a corresponding target display color according to the part of speech of the target core vocabulary;
a position subunit, configured to determine an initial position of a target core vocabulary based on a position of a target virtual object in the virtual scene, and display the target core vocabulary in a target display color at the initial position;
the transparency subunit is used for recording the display time of the target core vocabulary and dynamically adjusting the display position of the displayed target core vocabulary and the transparency of the target display color according to the display time;
The display position of the target core vocabulary is always within a preset range of the position of the target virtual object.
In some embodiments, the display unit is further configured to:
and hiding the target core vocabulary when the display time of the target core vocabulary exceeds the preset display time.
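The dynamic adjustment of display position and transparency over display time, including hiding after the preset display time, could be sketched as follows; the upward drift, the linear fade, and the 10-second preset display time are illustrative choices, not values fixed by the embodiment.

from typing import Optional, Tuple

def display_state(elapsed_s: float,
                  base_position: Tuple[float, float, float],
                  preset_display_s: float = 10.0,
                  rise_per_s: float = 0.05) -> Optional[dict]:
    # Returns None once the preset display time is exceeded (the word is hidden);
    # otherwise returns the adjusted display position and color transparency.
    if elapsed_s > preset_display_s:
        return None
    x, y, z = base_position
    alpha = 1.0 - elapsed_s / preset_display_s   # fade the target display color
    return {"position": (x, y + rise_per_s * elapsed_s, z), "alpha": alpha}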
In some embodiments, the conference development apparatus further comprises:
the association calculation unit is used for performing association degree calculation according to the target core vocabulary of each target virtual object when the number of target virtual objects for developing the conference in the virtual scene reaches a first preset value, so as to obtain the vocabulary association degree between every two target virtual objects;
and the connection unit is used for determining the two target virtual objects as connectable target virtual objects when the vocabulary association degree between the two target virtual objects reaches a preset threshold value.
In some embodiments, the conference development apparatus is further configured to:
when the number of connectable target virtual objects in the virtual scene reaches a second preset value, the target connection mode between every two target virtual objects is adjusted according to the vocabulary association degree between every two target virtual objects.
In some embodiments, the association calculation unit is configured to:
creating a vocabulary vector set corresponding to the target virtual object according to the target core vocabulary of the target virtual object;
and performing vector distance calculation according to the vocabulary vector sets corresponding to every two target virtual objects to obtain the distance between the vocabulary vector sets corresponding to every two target virtual objects, and taking the distance between the vocabulary vector sets as the vocabulary association degree between every two target virtual objects.
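One plausible realization of the vocabulary vector set and the distance calculation is sketched below. The averaging of word vectors and the use of cosine similarity as the vocabulary association degree are assumptions; the embodiment does not fix a specific embedding table or distance metric.

import math
from typing import Dict, Iterable, List

def vocabulary_vector(core_words: Iterable[str],
                      embeddings: Dict[str, List[float]]) -> List[float]:
    # Build the vocabulary vector set of one target virtual object by averaging
    # the word vectors of its target core vocabulary (embeddings is a stand-in
    # for any pretrained word-vector table).
    vectors = [embeddings[w] for w in core_words if w in embeddings]
    if not vectors:
        return []
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def vocabulary_association(vec_a: List[float], vec_b: List[float]) -> float:
    # Cosine similarity between two vocabulary vectors, read here as the
    # vocabulary association degree between the two target virtual objects.
    if not vec_a or not vec_b:
        return 0.0
    dot = sum(a * b for a, b in zip(vec_a, vec_b))
    norm = math.sqrt(sum(a * a for a in vec_a)) * math.sqrt(sum(b * b for b in vec_b))
    return dot / norm if norm else 0.0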
In some embodiments, the conference development apparatus is further configured to:
according to the vocabulary association degree between every two target virtual objects, carrying out association degree sequencing on every two target virtual objects to obtain a corresponding association degree sequencing result;
and adjusting the color depth of the connecting line between every two connectable target virtual objects according to the relevancy sorting result.
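The association-degree sorting and the color-depth adjustment of the connecting lines could be sketched as follows; the linear depth scale in (0, 1] is an illustrative choice rather than a mandated mapping.

from typing import Dict, List, Tuple

Pair = Tuple[str, str]  # a pair of connectable target virtual object identifiers

def line_color_depths(pair_association: Dict[Pair, float]) -> List[Tuple[Pair, float]]:
    # Sort pairs by vocabulary association degree (highest first) and assign a
    # color depth: the higher the ranking, the deeper the connecting-line color.
    ranked = sorted(pair_association.items(), key=lambda item: item[1], reverse=True)
    total = len(ranked)
    return [(pair, (total - rank) / total) for rank, (pair, _) in enumerate(ranked)]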
An embodiment of the present application further provides a computer device, which may be a terminal. FIG. 5 shows a schematic structural diagram of the computer device according to an embodiment of the present application. Specifically:
The computer device may include a processor 401 with one or more processing cores, a memory 402 of one or more computer-readable storage media, a power supply 403, an input unit 404, and other components. Those skilled in the art will appreciate that the computer device structure shown in FIG. 5 does not limit the computer device; the computer device may include more or fewer components than shown, combine certain components, or use a different arrangement of components. Wherein:
The processor 401 is a control center of the computer device, connects various parts of the entire computer device using various interfaces and lines, and performs various functions of the computer device and processes data by running or executing software programs and/or modules stored in the memory 402, and calling data stored in the memory 402, thereby performing overall monitoring of the computer device. Optionally, processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor and a modem processor, wherein the application processor mainly processes an operating system, a user interface, an application program, etc., and the modem processor mainly processes wireless communication. It will be appreciated that the modem processor described above may not be integrated into the processor 401.
The memory 402 may be used to store software programs and modules, and the processor 401 runs the software programs and modules stored in the memory 402 to execute various functional applications and data processing. The memory 402 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to the use of the computer device, etc. In addition, memory 402 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 with access to the memory 402.
The computer device further comprises a power supply 403 for supplying power to the various components. Preferably, the power supply 403 may be logically connected to the processor 401 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system. The power supply 403 may also include one or more direct current or alternating current power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and other components.
The computer device may also include an input unit 404, which input unit 404 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
Although not shown, the computer device may further include a display unit or the like, which is not described herein. In particular, in this embodiment, the processor 401 in the computer device loads executable files corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and the processor 401 executes the application programs stored in the memory 402, so as to implement various functions as follows:
Determining a target virtual object for developing a conference in a virtual scene, and acquiring voice content corresponding to the target virtual object;
analyzing the voice content corresponding to the target virtual object, and determining a corresponding target core vocabulary;
and displaying the target core vocabulary in a preset range of the position of the target virtual object in the virtual scene.
In the foregoing embodiments, the description of each embodiment has its own emphasis. For the portions of an embodiment that are not described in detail, reference may be made to the detailed description of the conference development method above, which is not repeated herein.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be completed by instructions, or by instructions controlling related hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer storage medium having stored therein a plurality of instructions capable of being loaded by a processor to perform the steps of any of the conference development methods provided by the embodiments of the present application.
According to one aspect of the present application, there is provided a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device performs the methods provided in the various alternative implementations provided in the above embodiments.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
Wherein the computer storage medium may include: Read-Only Memory (ROM), Random Access Memory (RAM), a magnetic disk, an optical disc, and the like.
Because the instructions stored in the computer storage medium can execute the steps in any conference development method provided by the embodiments of the present application, they can achieve the beneficial effects achievable by any conference development method provided by the embodiments of the present application; for details, refer to the foregoing embodiments, which are not repeated herein.
The conference development method, apparatus, storage medium and computer device provided by the embodiments of the present application have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, those skilled in the art may make changes to the specific implementations and application scope according to the idea of the present application. In view of the foregoing, the content of this specification should not be construed as limiting the present application.

Claims (13)

1. A conference development method, comprising:
determining a target virtual object for developing a conference in a virtual scene, and acquiring voice content corresponding to the target virtual object;
analyzing the voice content corresponding to the target virtual object, and determining a corresponding target core vocabulary;
and displaying the target core vocabulary in a preset range of the position of the target virtual object in the virtual scene.
2. The conference development method according to claim 1, wherein determining a target virtual object for developing a conference in a virtual scene and acquiring voice content corresponding to the target virtual object comprises:
determining a target virtual object for developing a conference according to a participant account number of the virtual scene, wherein the participant account number has a unique identifier;
collecting real-time audio data of a target virtual object based on target audio equipment corresponding to the target virtual object;
and identifying the real-time audio data to acquire the voice content corresponding to the target virtual object.
3. The conference development method of claim 2, wherein analyzing the voice content corresponding to the target virtual object to determine the corresponding target core vocabulary includes:
Screening target words belonging to the preset part-of-speech type in the voice content according to the preset part-of-speech type;
and counting word frequency of the target word, and determining the target word with the counted word frequency being greater than the preset word frequency as a target core word.
4. The conference development method of claim 3, wherein the performing word frequency statistics on the target vocabulary, determining the target vocabulary with the statistical word frequency greater than the preset word frequency as the target core vocabulary, includes:
recording, within a preset time period, the target vocabulary of each target virtual object according to the reference account number of the virtual scene, and generating a history vocabulary set corresponding to the target virtual object;
performing word frequency statistics on target words with the same content in the history word set to obtain word frequency corresponding to each target word;
and determining target core words displayed in the conference of the virtual scene according to the word frequency of each target word in the historical word set.
5. The conference development method of claim 4, wherein displaying the target core vocabulary within a preset range of a position of a target virtual object in the virtual scene comprises:
determining a corresponding target display color according to the part of speech of the target core vocabulary;
Determining an initial position of a target core vocabulary based on the position of a target virtual object in the virtual scene, and displaying the target core vocabulary in a target display color at the initial position;
recording the display time of the target core vocabulary, and dynamically adjusting the display position of the displayed target core vocabulary and the transparency of the target display color according to the display time;
the display position of the target core vocabulary is always within a preset range of the position of the target virtual object.
6. The conference development method of claim 5, wherein after said recording the display time of the target core vocabulary, the method further comprises:
and hiding the target core vocabulary when the display time of the target core vocabulary exceeds the preset display time.
7. The conference development method of claim 1, wherein after displaying the target core vocabulary within a preset range of the position of the target virtual object in the virtual scene, the method further comprises:
when the number of target virtual objects for developing the conference in the virtual scene reaches a first preset value, performing association degree calculation according to the target core vocabulary of each target virtual object to obtain the vocabulary association degree between every two target virtual objects;
And when the vocabulary association degree between the two target virtual objects reaches a preset threshold, determining the two target virtual objects as connectable target virtual objects.
8. The conference development method of claim 7, wherein after determining the connectable target virtual objects in the virtual scene based on the vocabulary association degree, the method further comprises:
when the number of connectable target virtual objects in the virtual scene reaches a second preset value, the target connection mode between every two target virtual objects is adjusted according to the vocabulary association degree between every two target virtual objects.
9. The method of claim 7, wherein the performing the association calculation according to the target core vocabulary of each target virtual object to obtain the vocabulary association between each two target virtual objects comprises:
creating a vocabulary vector set corresponding to the target virtual object according to the target core vocabulary of the target virtual object;
and performing vector distance calculation according to the vocabulary vector sets corresponding to every two target virtual objects to obtain the distance between the vocabulary vector sets corresponding to every two target virtual objects, and taking the distance between the vocabulary vector sets as the vocabulary association degree between every two target virtual objects.
10. The conference development method of claim 8, wherein when the number of connectable target virtual objects in the virtual scene reaches a second preset value, adjusting the target connection mode between each two target virtual objects according to the vocabulary association degree between each two target virtual objects comprises:
according to the vocabulary association degree between every two target virtual objects, carrying out association degree sequencing on every two target virtual objects to obtain a corresponding association degree sequencing result;
and adjusting the color depth of the connecting line between every two connectable target virtual objects according to the relevancy sorting result.
11. A conference development apparatus, comprising:
the determining unit is used for determining a target virtual object for developing the conference in the virtual scene and acquiring voice content corresponding to the target virtual object;
the analysis unit is used for analyzing the voice content corresponding to the target virtual object and determining a target core vocabulary of the conference;
the display unit is used for displaying the target core vocabulary in a preset range of the position of the target virtual object in the virtual scene.
12. A computer readable storage medium, characterized in that the storage medium stores a plurality of instructions adapted to be loaded by a processor for performing the steps in the conference development method of any one of claims 1 to 10.
13. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps in the conference development method of any one of claims 1 to 10 when the computer program is executed.
CN202310556926.8A 2023-05-17 2023-05-17 Conference development method and device, storage medium and computer equipment Pending CN116755549A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310556926.8A CN116755549A (en) 2023-05-17 2023-05-17 Conference development method and device, storage medium and computer equipment

Publications (1)

Publication Number Publication Date
CN116755549A true CN116755549A (en) 2023-09-15

Family

ID=87959858

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310556926.8A Pending CN116755549A (en) 2023-05-17 2023-05-17 Conference development method and device, storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN116755549A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination